Sunday, June 17, 2012

The ISM band: A review of the essentials.


In 1985 the Federal Communications Commission issued rules permitting "intentional radiators" to use the Industrial, Scientific and Medical (ISM) bands (902-928, 2400-2483.5 and 5725-5850 MHz) at power levels of up to one watt without end-user licenses. Originally these bands had been reserved for unwanted but unavoidable emissions from industrial and other processes, although they also supported a few (often military) communications users. The new rules led to the development of a large number of consumer and professional products and are considered an important step towards wireless computing and multimedia applications. Applications in the ISM bands include wireless LANs, short range links for advanced traveller systems (electronic toll collection), garage door openers, home audio distribution, cordless phones, private point-to-point links, remote control, and wireless telemetry systems (e.g. electrical power consumption monitoring). Applications seem to be limited by imagination rather than technology. A drawback of the ISM bands is the lack of any protection against interference; in particular, microwave ovens limit the useful range of communications devices operating there. The techteam at Signal Processing Group Inc. recently released a brief whitepaper on the ISM band. Interested readers can find it under the "Engineer's Corner" menu item on the SPG website located at http://www.signalpro.biz.

Saturday, June 16, 2012

Thermal modeling of devices and subsystems


After a number of mishaps in the design of power ICs and modules, with devices blowing up, it was decided that we would go back to first principles, understand thermal effects, and use thermal modeling to design devices and modules that are better, safer and more robust with respect to thermal operation. In our attempt to do this we came across many different types of information in the literature on thermal design, from the very simplistic thermal resistance and power relationships to fairly complicated thermal models, as well as information on thermal modeling software. This was in 2008. We took this information and wrote a brief thermal modeling technical note in the hope that we would have fewer incidents of thermally caused destructive events. Another interesting result was that we were able to set up thermal models of MCMs and devices that did not yet exist and study thermal effects on these to-be devices. These models were built in MATLAB/SIMULINK and were still fairly simple; we used commercially available thermal modeling software for more complex models. All this effort did help, and in the end we were able to meet our thermal design goals in a large number of projects. The initial note was released for publication and now resides at www.signalpro.biz >> engineer's corner for those interested in thermal modeling or thermal effects. We acknowledge the contribution made to thermal modeling by a number of authors, both on and off the web.
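As a minimal sketch of the "very simplistic thermal resistance and power relationships" mentioned above, a first-order thermal RC model (one thermal resistance, one thermal capacitance) predicts how the junction temperature rises after a power step. The values below are illustrative assumptions, not figures from the SPG note.

```python
import math

def junction_temp(power_w, r_th, c_th, t_ambient, t_s):
    """First-order thermal RC step response:
    Tj(t) = Ta + P * Rth * (1 - exp(-t / (Rth * Cth)))."""
    tau = r_th * c_th  # thermal time constant, seconds
    return t_ambient + power_w * r_th * (1.0 - math.exp(-t_s / tau))

# Example: 2 W device, Rth = 40 C/W, Cth = 0.5 J/C, 25 C ambient.
# Steady state is Ta + P*Rth = 25 + 2*40 = 105 C.
print(junction_temp(2.0, 40.0, 0.5, 25.0, 1e6))  # prints 105.0 (steady state)
```

The same one-pole model also gives a feel for transient survival: a power pulse much shorter than the time constant barely moves the junction temperature.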

Wireless design: substrates and laminates


Yesterday we spent an absolutely intense two hours discussing substrates for RF and high frequency design with a couple of experts. Frequencies from about 1 GHz to 77 GHz were in play. The amazing part of the discussion was the number of parameters to be considered, not only in the manufacture of the laminates but also in the layout of the interconnect, filters, transmission lines, and heat sinking. For high speed digital design, control of the impedance and a constant line width are the larger factors, unlike RF, where multiple line widths and shapes are in common use and a multitude of transmission line types appear in a bewildering array of combinations. Other parameters, such as the glass weave and its impact on impedance, were also well worth discussing. Three laminates emerged as winners for a large number of design applications; the venerable FR4 was buried under the new requirements at 77 GHz, and even at 24 GHz. The impact of Df and Dk (buzzwords, of course, to be treated in some detail in subsequent posts) and the trade-offs between materials were fascinating. The size of the material sold has also gone through revisions, and large sizes are now common; gone are the limits of 18 X 24. The other very interesting issue that surfaced was the role of, and difficulty in, testing not only devices but also the substrates themselves. The relationships between the thickness and the width of lines depart from the simple expressions we all knew. The difficulty of modeling has increased, and very few CAD tools appear to have the capability to do what is needed; only one CAD tool was mentioned several times as recommended for design and modeling at these performance levels. Some very interesting numbers for insertion loss and actual measured values of permittivity and loss tangents were presented and argued over, along with very interesting empirical design equations and data.
In this discussion the effect of the conductor roughness factor was presented and emphasized. Finally, a detailed discussion of the materials of construction, such as resins, fillers and reinforcements, ended the presentations. In short, a very interesting couple of hours. Interested parties may contact us about these subjects through our website at www.signalpro.biz>>contact.
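As a concrete instance of "the simple expressions we all knew": the familiar first-order microstrip impedance approximation (the IPC-D-317 form) relates line width, dielectric height and copper thickness to Z0. It is exactly the kind of low-frequency formula the discussion above says breaks down at 24-77 GHz, and the dimensions below are illustrative assumptions.

```python
import math

def microstrip_z0(w, h, t, er):
    """Approximate microstrip characteristic impedance (IPC-D-317 form).
    w = trace width, h = dielectric height, t = copper thickness (same units).
    Valid roughly for 0.1 < w/h < 2.0; a first-order estimate only."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))

# FR4-like example: er = 4.3, h = 0.2 mm, w = 0.35 mm, t = 0.035 mm (1 oz copper).
print(round(microstrip_z0(0.35, 0.2, 0.035, 4.3), 1))  # close to 50 ohms
```

At millimeter-wave frequencies, dispersion, roughness and weave effects make a field solver necessary; this formula is only the starting point.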

The role of the heatsink in higher power devices


As power circuit designs and devices proliferate in products such as LED drivers, HID lighting, motor control and electric vehicles, it is becoming important to understand thermal effects in active devices. All active devices dissipate power, and power active devices dissipate lots of power. This dissipation creates heat, which must be removed by some means to prevent excessive buildup inside a package or module, which would ultimately destroy the appliance, circuit or device. One of the ways devices can be made safer, thermally speaking, is the use of a passive heat sink. The role of the heat sink in active device thermal management is explored in a recent report released by Signal Processing Group Inc.'s technical staff and may be found at: http://www.signalpro.biz>engineer's corner>heatsink.pdf.
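The underlying arithmetic is a series thermal stack: junction-to-case, case-to-sink and sink-to-ambient resistances add, and the heat sink is sized so the junction stays below its rated maximum. The thermal resistances below are illustrative assumptions, not values from the SPG report.

```python
def junction_temperature(p_diss, r_jc, r_cs, r_sa, t_amb):
    """Junction temperature through the junction->case->sink->ambient stack."""
    return t_amb + p_diss * (r_jc + r_cs + r_sa)

def required_sink_resistance(p_diss, t_j_max, t_amb, r_jc, r_cs):
    """Largest heatsink-to-ambient resistance that keeps Tj at or below t_j_max."""
    return (t_j_max - t_amb) / p_diss - r_jc - r_cs

# 10 W device, Rjc = 1.5 C/W, Rcs = 0.5 C/W, Tj(max) = 125 C, Ta = 50 C.
print(required_sink_resistance(10.0, 125.0, 50.0, 1.5, 0.5))  # prints 5.5 (C/W)
```

A heatsink with a sink-to-ambient resistance at or below this number keeps the junction within its rating at the stated dissipation and ambient.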

Decimation filters for oversampled digital signals


A typical filter used as the pre-decimation filter for an oversampled A/D is the Hogenauer filter, also called the CIC filter. These filters have some advantages which make them particularly suitable for use as decimation filters. In general the output stream from an OSR A/D is a 1-bit, high frequency digital signal. The 1-bit signal has to be downconverted in frequency and increased in bit width; this is the fundamental decimation operation. Hogenauer filters offer the following advantages: (1) No multipliers are needed. (2) No storage is needed for filter coefficients. (3) Intermediate storage is reduced by integrating at the high sampling rate and comb filtering at the low rate. (4) The structure of the CIC filter is very uniform, using only two basic building blocks. (5) Little external control or complicated local timing is required. (6) The same design can easily be used for a wide range of rate change factors with the addition of a scaling unit. As a result of these advantages Hogenauer filters have been, and continue to be, used in oversampled systems. A technical report prepared by the technical staff at Signal Processing Group Inc. is now available in a series of posts that deal with the Hogenauer filter as well as OSR A/D converters. Since the CIC filter is an important component at the back end of an OSR ADC, understanding its design parameters is essential to the design of the overall converter. Subsequent posts deal with the details of decimation filter design. The paper may be found at http://www.signalpro.biz>engineer's corner.
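The structure described above (N integrators at the high rate, decimation by R, then N comb stages at the low rate, with no multipliers) can be sketched in a few lines. This is a plain illustrative model, not the implementation from the SPG report.

```python
def cic_decimate(x, R, N=3, M=1):
    """Hogenauer (CIC) decimator: N integrators at the input rate,
    decimation by R, then N comb (differencing) stages with differential
    delay M at the low rate. The DC gain is (R*M)**N."""
    # Integrator stages (running sums) at the high sample rate.
    for _ in range(N):
        acc, y = 0, []
        for s in x:
            acc += s
            y.append(acc)
        x = y
    # Decimate by R.
    x = x[::R]
    # Comb stages at the low rate: y[n] = x[n] - x[n - M].
    for _ in range(N):
        x = [s - (x[i - M] if i >= M else 0) for i, s in enumerate(x)]
    return x

# A constant input of 1 settles to the DC gain (R*M)**N = 8**3 = 512.
out = cic_decimate([1] * 200, R=8)
print(out[-1])  # prints 512
```

Note that only adders and registers are used, which is exactly why the structure maps so well onto hardware behind a sigma-delta modulator.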

Random signal generation for PSPICE/SPICE


The SPICE programs we use for circuit simulation do not have a direct way to generate random waveforms; i.e., there is no voltage or current source which can be attached to a circuit node to produce a random signal for analysis. As a result we developed code in MATLAB and C++ to generate a piecewise linear (PWL) random waveform of whatever length is required, which can then be used as a PWL source in the simulator. Please contact us through our website located at http://www.signalpro.biz for more information about this circuit simulation tool.
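A tiny sketch of the idea in Python (the MATLAB/C++ tool itself is not reproduced here; the source name, node names and number formatting below are illustrative): emit a SPICE PWL voltage-source line whose breakpoints are uniformly distributed random levels.

```python
import random

def random_pwl(n_points, t_step, v_lo, v_hi, seed=1):
    """Build a SPICE PWL voltage-source specification with uniformly
    distributed random levels, one breakpoint every t_step seconds.
    A fixed seed makes the waveform reproducible between runs."""
    rng = random.Random(seed)
    pairs = []
    for i in range(n_points):
        pairs.append(f"{i * t_step:.4g} {rng.uniform(v_lo, v_hi):.4f}")
    return "V1 node 0 PWL(" + " ".join(pairs) + ")"

# Five random breakpoints, 1 us apart, between 0 V and 1 V.
print(random_pwl(5, 1e-6, 0.0, 1.0))
```

The resulting line can be pasted into a netlist; SPICE interpolates linearly between the breakpoints, which is usually adequate for noise-like stimulus.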

Adjacent channel power ratio


In multicarrier systems, the carriers can be spaced quite close to each other. When this is the case, a quantity referred to as the adjacent channel power ratio, or ACPR, becomes important. Multicarrier systems have a number of carriers which may generate signals whose power can add in phase; as more tones or signals interact, the peak additive power increases. The average power of these signals may well be within the dynamic range of the system, but the power peaks may exceed it, causing odd-order nonlinear distortion in the system. When this happens it results in adjacent channel power, or ACP: the transmitter or PA generates unwanted sidebands at offset frequencies that lie within the passband of an adjacent channel. The ACPR is the ratio of the system output power at an offset frequency to the power in the channel of interest, and can be considered one measure of the linearity of a transmitter (or RF PA). For a given modulation scheme, the relationship between third order intermodulation products and the ACPR at a given power level is: ACPR = IMR(2-tone) + 10*log[ n**3/(16X + 4Y)]. Here X and Y are given by:

X = (2n**3 – 3n**2 – 2n)/24 + [mod(n/2)]/8.0

And

Y = n**3 – {[mod(n/2)]/4.0}

All ratios here are in dBc; i.e., both the two-tone intermodulation ratio IMR and the ACPR are referenced to the carrier. Check out our website and engineer's corner at http://www.signalpro.biz.
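The relation above can be evaluated directly. Note that this sketch embodies two of our interpretations of the formula as quoted: "mod(n/2)" is read as n modulo 2 (1 for odd n, 0 for even), and n is taken to be the number of tones.

```python
import math

def acpr_db(imr_two_tone_dbc, n):
    """ACPR estimate (dBc) from two-tone IMR (dBc) for an n-tone system,
    using the empirical relation quoted above:
    ACPR = IMR + 10*log10(n**3 / (16*X + 4*Y))."""
    m = n % 2  # our reading of "mod(n/2)"
    x = (2 * n**3 - 3 * n**2 - 2 * n) / 24.0 + m / 8.0
    y = n**3 - m / 4.0
    return imr_two_tone_dbc + 10.0 * math.log10(n**3 / (16.0 * x + 4.0 * y))

# With n = 2, X = 0 and Y = 8, so the correction is 10*log10(8/32) ~ -6 dB.
print(round(acpr_db(-30.0, 2), 2))
```

As more tones are added, the correction term grows more negative, reflecting the worsening peak addition described in the post.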

Dot rule for transformers


The dot rule for transformers is a convention used to represent voltage and current relationships and phase. It is a simple rule, and therefore sometimes easy to forget if not used every day. In order to use this rule we need to know two things: (1) The right hand rule for currents and fluxes, i.e. if the fingers of the right hand are wrapped around the core in the direction of current flow, the thumb points in the direction of the flux. (2) If a current enters a dotted terminal, it induces a positive voltage at the other dotted terminal; if a current leaves a dotted terminal, it induces a negative voltage at the other dotted terminal. For more technical articles and items of interest please visit our website at http://www.signalpro.biz.

Input impedance of a common emitter differential amplifier with emitter degeneration


Use of the emitter coupled bipolar differential amplifier is prolific. In addition, a good way to stabilize gain and bias is the use of an emitter degeneration resistor. This post simply presents, without proof, what happens to the input impedance of the differential pair when degeneration is used. First one has to know rpi of the bipolar small signal model, calculated as Beta0/gm, where Beta0 is the dc current gain of the bipolar. If no degeneration is used, this is the input impedance of the transistor. When a degeneration resistor is used the impedance rises significantly, by (Beta + 1)*Re, where Beta is the current gain at the particular bias point and frequency and Re is the degeneration resistor. Therefore the total input impedance rises to rpi + (Beta + 1)*Re. For other items of interest please visit our website at http://www.signalpro.biz.
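A quick numerical check of these expressions, using the standard identity gm = Ic/Vt (the bias values below are illustrative, not from the post):

```python
def input_impedance_ohms(beta0, ic_amps, re_ohms, vt=0.02585):
    """Small-signal input impedance of a BJT stage with emitter degeneration:
    rpi + (beta + 1) * Re, where rpi = beta0 / gm and gm = Ic / Vt."""
    gm = ic_amps / vt          # transconductance, A/V
    r_pi = beta0 / gm          # base input resistance without degeneration
    return r_pi + (beta0 + 1) * re_ohms

# Beta0 = 100, Ic = 1 mA, Re = 100 ohm:
# rpi ~ 2.59 k, plus 101 * 100 ohm = 10.1 k, for ~12.7 k total.
print(round(input_impedance_ohms(100, 1e-3, 100)))  # prints 12685
```

The example shows why even a modest Re dominates the input impedance: the degeneration term is multiplied by (Beta + 1).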

PRBS power calculations using a sinc squared function integration


The power in a PRBS NRZ signal is expressed as a sinc squared function of the independent variable x. In order to calculate the power in this signal from 0 to some arbitrary x, a definite integral of the sinc squared function has to be found, which is not an easy task: a web search for ready solutions to this problem turned up very few relevant references. Therefore a technique was developed from series expansions of the sinc and sinc squared functions. The accuracy of the estimate depends entirely on how many terms the engineer retains; we found that using just four or five terms in the expansion gave accuracies adequate for our purposes. The technical report can be found in the engineer's corner on the SPG website located at http://www.signalpro.biz.
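One way such a series can be set up (our own sketch, not necessarily the expansion used in the SPG report) is to integrate the Taylor series of sin²(t)/t² term by term, which gives an alternating series in odd powers of x:

```python
import math

def sinc2_integral(x, terms=8):
    """Definite integral of sin(t)^2 / t^2 from 0 to x via term-by-term
    integration of the Taylor series:
    sum over k >= 1 of (-1)**(k+1) * 2**(2k-1) * x**(2k-1) / ((2k-1)*(2k)!).
    Accurate for moderate x; more terms are needed as x grows."""
    total = 0.0
    for k in range(1, terms + 1):
        term = ((-1) ** (k + 1)) * (2 ** (2 * k - 1)) * x ** (2 * k - 1)
        total += term / ((2 * k - 1) * math.factorial(2 * k))
    return total

# Cross-check against brute-force midpoint-rule integration at x = 1.
steps = 100000
approx = sum((math.sin((i + 0.5) / steps) / ((i + 0.5) / steps)) ** 2
             for i in range(steps)) / steps
print(round(sinc2_integral(1.0), 5), round(approx, 5))
```

Consistent with the post, four or five terms already agree with the numerical integral to several decimal places for x around 1; for large x the factorial growth of the terms makes the truncated series unusable, so the expansion point or term count must be chosen with care.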

Why 50 Ohms?


Has anyone wondered why we use 50 Ohms as the reference resistance in so many of our designs? Why does 50 Ohms seem to be a de facto standard? We normalize to 50 Ohms; we use 50 Ohms in our oscilloscopes; we pick 50 Ohms as a good convenient reference resistor. But how did this happen? Where did this 50 Ohm factor come from? We ran across an explanation which sounds reasonable enough and decided to post it to this blog. Standard coaxial lines in England in the 1930's used a commonly available center conductor which turned out to give 50 Ohms! Others say that for minimum signal attenuation the transmission line characteristic impedance is 77 Ohms, while for maximum power handling it is around 30 Ohms; 50 Ohms is a good compromise between the two performance parameters. So this is how 50 Ohms became a convenient impedance level!?

Lumped and distributed elements


How does one determine whether to treat a component as lumped or distributed? The answer is that if the element size is greater than lambda/20, where lambda is the effective wavelength of the signal associated with the element, then it should be treated as a distributed component or element. This means that for typical discrete designs, the lumped approximations are valid for frequencies in the 500 to 1000 MHz range. For ICs the valid frequency range is much larger because of the small size of the elements encountered there; it may extend up to 10 GHz. One has to ask: where did the 5% of lambda come from? Like most other things in practical engineering, it is an approximation and a rule of thumb, and should be considered a guideline. A distributed model is usually more accurate at any frequency above DC, but experience says that the 5% guideline is a good transition value. Note: the effective lambda is the lambda in free space divided by the square root of the effective dielectric constant. In homogeneous media the effective dielectric constant is simply the relative permittivity; in non-homogeneous media it is not. Usually, for non-homogeneous systems such as microstrip, the effective dielectric constant is less than the relative permittivity.
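The lambda/20 rule converts directly into a highest usable "lumped" frequency for a given element size. A quick sketch (the 3 mm part and the effective dielectric constant below are illustrative assumptions):

```python
import math

def max_lumped_frequency_hz(element_size_m, eps_eff=1.0, fraction=0.05):
    """Highest frequency at which an element of the given physical size can
    still be treated as lumped, per the lambda/20 (5%) rule of thumb.
    The effective wavelength is the free-space wavelength / sqrt(eps_eff)."""
    c = 299_792_458.0  # free-space speed of light, m/s
    wavelength_needed = element_size_m / fraction  # element must be < lambda/20
    return c / (math.sqrt(eps_eff) * wavelength_needed)

# A 3 mm surface-mount part on a board with eps_eff ~ 3:
f = max_lumped_frequency_hz(3e-3, eps_eff=3.0)
print(round(f / 1e9, 2), "GHz")
```

The result (a few GHz for a millimeter-scale part, a fraction of a GHz for centimeter-scale discretes) matches the ranges quoted in the post.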

Saturday, June 9, 2012

First order filter parameter calculation algorithm


A recurring problem in ac filter circuit design is the calculation of attenuation at a particular frequency, or conversely the calculation of the frequency at which a given attenuation is reached. Related calculations deal with estimates of time constants and filter parameters such as resistance and capacitance. These calculations play a crucial role in the design of anti-aliasing filters, low pass filters, phase locked loops, etc. A paper published recently by the techteam at Signal Processing Group Inc. documents these calculations, cookbook fashion, with examples for interested readers. The paper is located at http://www.signalpro.biz >engineer's corner.
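The first-order calculations described above are easy to sketch for a single-pole low-pass (an assumed form for illustration, since the paper itself is not reproduced here): attenuation at a frequency, and the inverse problem of the frequency for a given attenuation.

```python
import math

def attenuation_db(f, fc):
    """Attenuation of a first-order low-pass at frequency f, corner fc:
    A = 10*log10(1 + (f/fc)**2)."""
    return 10.0 * math.log10(1.0 + (f / fc) ** 2)

def frequency_for_attenuation(atten_db, fc):
    """Inverse problem: the frequency at which the attenuation is reached."""
    return fc * math.sqrt(10.0 ** (atten_db / 10.0) - 1.0)

# RC low-pass with R = 10 k, C = 1.59 nF -> fc ~ 10 kHz.
fc = 1.0 / (2.0 * math.pi * 10e3 * 1.59e-9)
print(round(attenuation_db(10 * fc, fc), 2))  # one decade above fc: ~20 dB
```

The two functions are exact inverses of one another, which makes them handy for anti-aliasing budgets: pick the required attenuation at the sampling image and solve for the corner.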

The MOS Varactor: An introduction


In many IC designs frequency based trimming or control is required. For instance, a filter may need to be trimmed for corner frequencies, a PLL VCO needs its frequency controlled by its feedback signal, or an adaptive equalizer needs to shift its pole-zero configuration. These and many other related applications need a voltage controllable device that offers a change of reactance. The varactor is a useful component frequently used to do this. In general varactors are assumed to be junction type devices in which the depletion capacitance is changed to vary the reactance. In CMOS or BiCMOS processes another type of varactor is available, almost as a byproduct of the MOSFET structure: the MOS varactor. It seems that every CMOS process has the capability to produce a MOS varactor. However, although the varactor is available, it may have some limitations of Q and sensitivity. In addition, CMOS technology vendors generally do not characterize or optimize their MOS varactors; this is left to the specialized vendors who offer high performance or RF type processes. A recent introductory report on the MOS varactor is available at http://www.signalpro.biz/ > engineer's corner for interested parties.

Useful identities for bipolar design


Bipolar design has been popular for a very long time, and the bipolar transistor is still used today in various forms: in standard bipolar processes, in combination with CMOS in BiCMOS processes, and in high current and high voltage designs. Technology and device vendors keep improving their technologies and processes, and the recent advent of SiGe technology provides a very high performance bipolar device. For the design engineer, a set of identities enabling simple hand calculations on a bipolar device in a circuit can be useful. Ultimately the circuit design can be breadboarded or simulated to evaluate performance, but hand calculations can and should be a first step. To facilitate this process, the technical team at Signal Processing Group Inc. has recently released a brief paper on some useful bipolar design identities. This is available on the SPG website in the "Engineer's corner". Please visit http://www.signalpro.biz.

De-embedding in high frequency measurements


High frequency measurements on circuits such as MMICs and high speed digital circuits are made using some kind of Vector Network Analyzer (VNA) or TDR instrument. In most cases the DUT (device under test) is mounted on a test fixture, which typically has an input connector and microstrip and an output connector and microstrip. The measurements, however, are meant to characterize the DUT alone, so the test fixture has to be de-embedded. This technique and its basics form the subject of the latest brief paper from the technical team at Signal Processing Group Inc. It can be found at http://www.signalpro.biz in the Engineer's Corner.

The wave number β, or the phase constant definition


β is an important quantity used in understanding transmission lines and waveguides. It is not intuitive, so this post presents a brief explanation of the quantity as it appears in the analysis of transmission lines, waveguides and other wave systems.

Sometimes β is referred to as the phase constant of the line or guide. If a Cartesian coordinate system is used and one coordinate, say "z", is taken as the direction of wave propagation, then βz measures the instantaneous phase at point z on the line with respect to z = 0.

In addition, voltage or current on the line is the same at any two points separated in z such that βz differs by multiples of 2π. Since the shortest distance between points where voltage or current is at the same phase is a wavelength, then:


βλ = 2π

( replacing z by λ),

β = 2π/λ
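Putting the definition into numbers (the 1 GHz example and the dielectric value are ours), with λ shortened by the square root of the effective dielectric constant as in a guided structure:

```python
import math

def beta_rad_per_m(freq_hz, eps_eff=1.0):
    """Phase constant beta = 2*pi/lambda, in rad/m, where the guided
    wavelength is the free-space wavelength divided by sqrt(eps_eff)."""
    c = 299_792_458.0  # free-space speed of light, m/s
    lam = c / (freq_hz * math.sqrt(eps_eff))
    return 2.0 * math.pi / lam

# 1 GHz in free space: lambda ~ 0.3 m, so beta ~ 2*pi/0.3 ~ 21 rad/m.
print(round(beta_rad_per_m(1e9), 2))
```

Note that β scales linearly with frequency and with the square root of the effective dielectric constant, which is why a line of fixed physical length accumulates more phase on a higher-permittivity substrate.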

Multi-chip in a package technology


When a designer has a system built from chips with differing voltages, currents, frequencies and special characteristics, it is difficult to integrate the system for cost or size reduction. In this case the usual (but not universal) approach is a motherboard-daughterboard combination. Recently, however, designers appear to be turning to multi-chip in a package technology, in which a package contains die assembled in a vertical or side by side configuration. Properly done, this can be a powerful way of getting the job done in less time and at lower cost than a difficult integration approach. The design of the multi-chip configuration is the key; some parameters to be considered seriously are temperature effects, parasitic connections, grounding, and frequency performance. Signal Processing Group Inc. is offering a multi-chip in a package design and assembly service for interested users. The SPG website is located at http://www.signalpro.biz.

Temperature independent resistor design.


In IC technology all resistor materials have an associated temperature coefficient. Most commonly, resistors are made from polysilicon, diffusions of various kinds, and metal, with poly and diffusion the most common. In certain applications a temperature independent resistor may be required. To achieve this one has to search the technology properties for resistor materials that provide (1) an appropriate sheet resistance and (2) opposite temperature coefficients. Almost all semiconductor technologies provide this. Once the materials are established, a first order temperature independent resistor may be synthesized as shown in a recent report released by Signal Processing Group Inc. This report may be found at:
http://www.signalpro.biz>engineer's corner.
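A minimal sketch of the first-order synthesis (the temperature coefficients below are hypothetical, not from the SPG report): put the two materials in series and split the total resistance so that the linear temperature terms cancel.

```python
def split_for_zero_tc(r_total, tc1, tc2):
    """Split a target resistance between two series materials with differing
    first-order temperature coefficients (per degree C) so that
    R1*tc1 + R2*tc2 = 0, cancelling the linear drift of the sum."""
    if tc1 == tc2:
        raise ValueError("temperature coefficients must differ")
    r1 = r_total * tc2 / (tc2 - tc1)
    r2 = r_total - r1
    return r1, r2

# Hypothetical materials: poly at -800 ppm/C, diffusion at +1500 ppm/C.
r1, r2 = split_for_zero_tc(10e3, -800e-6, 1500e-6)
print(round(r1), round(r2))  # the linear TC of the series sum cancels
```

This is first order only: the second-order coefficients of the two materials generally do not cancel, which is why the report's result is described as a first order temperature independent resistor.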

Ground loops in analog and wireless design

Ground loops are parasitic paths in a PCB or an IC that can cause a number of bad effects. They are caused primarily by poor layout, either accidentally or through lack of experience. In order to avoid ground loops one has to understand: (1) what they are, (2) how they are caused, and (3) how to eliminate them or minimize their effects. A good review paper on this subject was recently released for publication by the techteam at Signal Processing Group Inc. and is available for review in the "engineering pages" of the website located at http://www.signalpro.biz.

1.2V battery to 3.3v and 5.0V output power PCB design

Recent needs in low power, single battery (1.2V) design demand a power converter circuit. A recent requirement for a handheld multichannel FM receiver necessitated the design of a voltage supply converter using off the shelf devices. After a thorough search for devices which could be used to realize such a converter, a part was chosen from a popular manufacturer and the design was started. Here are some issues that we encountered (perhaps not new, but nonetheless worth mentioning). Our biggest challenges, surprisingly enough, had very little to do with the IC that we used, because the vendor had excellent application notes and a well organized technical support environment; our requests were handled within a 24 to 48 hour turnaround. The difficult challenges had to do with selecting, acquiring and using the passive components, which were harder to find and for which technical support left much to be desired. In spite of these issues we managed to finish the design in time and test the board. It worked quite well and provided 3.3V at close to 1.0A and 5.0V at close to 0.5A, as needed, with a 20% margin. Interested readers are invited to contact us through our website at http://www.signalpro.biz.

RF Design: Electrical length


Sooner or later, the design engineer working in microwave or high frequency electronics is going to come up against the concept of electrical length. In order to understand this concept, let's work out the following arithmetic:

1.0 The wave number or phase constant = β = 2π/λ

For those unfamiliar with this, we recommend looking up the description of this quantity in the SPG blog at (http://signalpro-ain.blogspot.com/).

2.0 The electrical length is defined by θ = βl where l = physical length

3.0 θ = βl = (l/ λ) *360 degrees

Here λ is the wavelength of the signal in the applicable dielectric ( or sometimes called the guide wavelength).

4.0 For frequencies in GHz, this becomes: θ = [360 * fGHz * l(cm) * √εeff]/30 degrees


In this case, frequency is in GHz and physical length is in centimeters.

For example:

Let frequency be 1 Ghz.
Let λ = 0.8 λ(air) or √εeff = 1.25
Let l = 0.1 meters = 0.1E2 centimeters

Then :

θ = [360* 1*0.1E2*1.25]/30 degrees

θ = 150 degrees
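The worked example above, in code form (note that √εeff = 1.25 corresponds to εeff = 1.5625):

```python
import math

def electrical_length_deg(f_ghz, length_cm, eps_eff):
    """Electrical length theta = 360 * f(GHz) * l(cm) * sqrt(eps_eff) / 30,
    in degrees, following the arithmetic worked out above."""
    return 360.0 * f_ghz * length_cm * math.sqrt(eps_eff) / 30.0

# 1 GHz, 10 cm line, sqrt(eps_eff) = 1.25:
print(electrical_length_deg(1.0, 10.0, 1.5625))  # prints 150.0
```

The same function answers the inverse question by inspection: a quarter-wave line (θ = 90 degrees) at 1 GHz on this dielectric is 10 * 90/150 = 6 cm long.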

Lumped element filters to transmission line equivalents


As frequencies increase, lumped elements no longer satisfy filter requirements, for various reasons (parasitics, accuracy, etc.). At this point the designer may choose to convert the lumped element filter to a distributed element filter. One of the techniques used in the conversion is the substitution of transmission line stubs for the lumped elements. This technique is described in a white paper recently released by Signal Processing Group Inc. The paper may be found at http://www.signalpro.biz >> engineer's corner by interested readers.

Definitions of the Q factor


1.0 Unloaded Q: energy stored in the component / energy dissipated in the component.

2.0 Loaded Q: energy stored in the component / energy dissipated in the component and the external circuit/load.

Thermal coefficients for the dielectric constant of PCB materials


In a recent group discussion on LinkedIn, a member asked about the thermal coefficients of the dielectric constant of FR-4 and other common PCB materials. For interested persons, a really good paper by John Coonrod in the September issue of www.onboard-technology.com tabulates these coefficients.

PCB electrical and thermal parameters for design


PCB design is not only a science but also an art, as every PCB designer knows. However, the art of design is predicated on the science to a certain extent, and in order to do a good job a designer has to know certain parametric effects of PCBs. A recent whitepaper released by Signal Processing Group Inc. offers a brief treatment of the first order electrical and thermal parameters that can be useful in PCB design, both for understanding and for robustness of design. The paper can be accessed via the "Engineer's corner" menu item on the SPG website located at http://www.signalpro.biz.

How many times did Thomas Edison fail?


A light hearted post. Having read a number of books, both on Thomas Edison's life and motivational ones, we have come to the conclusion that no one really knows how many times Edison failed before he got his first light bulb working. Here are the different numbers so far: 6000 times, 11000 times, 21000 times, 1000 times, 10000 times and an astronomical 56000 times!! Perhaps someone out there has other numbers they might want to share with us. We can be contacted by email at spg@signalpro.biz.

Thursday, June 7, 2012

Why are power transfer and power quantities used in RF design?

In high frequency circuits, power transfer and power quantities are used; typically dBm is the standard unit. The question is: why? The answer is found in the relative behavior of circuits at high and low frequencies. When frequencies are low, a voltage or current signal applied at the input of a circuit or chip is reproduced quite faithfully inside the chip or at the operating terminals of the circuit, and the same is true at the outputs, because parasitic elements do not play a large role at low frequencies. The situation is quite different at microwave frequencies: the voltage or current signal applied to the input terminal of a device package is not what the active device sees inside the package, because of the parasitics of the circuit. If instead of input current or voltage we use power delivered to the input port as the signal quantity, this problem goes away, since reactances do not dissipate power. At the output, if the true available power gain of the device is given, we can calculate accurately what to expect, assuming no power is dissipated in the parasitic elements. These reasons are why RF/MMIC circuits are almost always designed with power flow or power transfer considerations.
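The working unit mentioned above, dBm, is simply power referenced to one milliwatt, which makes gain and loss budgets additive:

```python
import math

def dbm_to_watts(p_dbm):
    """dBm is power referenced to 1 mW: P(W) = 1e-3 * 10**(dBm/10)."""
    return 1e-3 * 10.0 ** (p_dbm / 10.0)

def watts_to_dbm(p_watts):
    """Inverse conversion: dBm = 10 * log10(P / 1 mW)."""
    return 10.0 * math.log10(p_watts / 1e-3)

print(dbm_to_watts(30.0))            # 30 dBm = 1 W
print(round(watts_to_dbm(0.001)))    # 1 mW = 0 dBm
```

With powers in dBm and gains in dB, a cascade is just a sum: a 0 dBm source through a 12 dB amplifier and a 3 dB cable loss delivers 9 dBm, with no multiplication required.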

Super-beta or high current gain bipolar transistors

In certain analog ICs it is necessary to have very high input impedance and very low base currents. For such applications, the typical current gain of an integrated npn transistor is not high enough. It is possible to increase the current gain of an npn transistor significantly by improving the base transport efficiency; in this case the base is made very narrow (a few hundred angstroms or less). The collector to emitter breakdown of such a structure is relatively low (2V - 3V) because the collector-base depletion layer can punch through the active base region into the emitter. This is the punch-through or "super-beta" transistor. Current gains of 5000 are obtainable using this technique at currents of 20uA or so with a Vce of around 0.5V. Super-beta transistors can be fabricated in a standard process using one extra masking step and diffusion. After the base diffusion for the normal npn transistors, a special mask is used to open up the emitter diffusion for the super-beta transistors; at this stage the emitter of the super-beta transistor is only partially diffused. This step is then followed by the masking and n+ diffusion of the standard npn. Owing to the extra diffusion step, the emitter of the super-beta transistor is diffused slightly deeper than that of the normal npn, resulting in a narrow base width.

Wireless entrepreneurs alert

This post is not engineering-centric.It is addressed to engineers who may have the entrepreneurial bug. Times have changed so much that there are new opportunities for people who have a developed product and want to profit by their efforts with little or no money expended. The online/cyber world has had a tremendous impact that could not be imaginged only a decade ago. Then an engineer with a product had very few avenues open to him/her to gain from it financially . Today there are a myriad of ways that can be done. For more information about these techniques please contact Signal Processing Group Inc., via the "Contact" menu item in the website located at http://www.signalpro.biz.