ELET2100 - Microprocessors I

Analog to Digital Conversion

A/D and D/A chips allow us to interface with the analog (real) world. Most sensors and many output devices are analog in operation. What follows below is an attempt to fill in some gaps left over from the lectures.

Basic interface

To control an A/D from a microprocessor, the A/D converter must have, at least, the following three (3) signals:

Start Conversion (SC) – starts an A/D conversion

Output Enable (OE) – places the digital reading on the microprocessor’s data bus/port

End of Conversion (EOC) – tells the microprocessor when the conversion is complete, i.e. when the A/D is no longer busy (optional)

These signals may appear under other names, but their operation is similar.
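The three-signal handshake can be sketched in software. The following is a minimal illustration using a mock converter object; MockADC and its method names are invented for this example, not a real device driver, and the three-poll conversion delay is arbitrary:

```python
class MockADC:
    """Simulates an A/D converter with Start Conversion (SC),
    End of Conversion (EOC) and Output Enable (OE) signals."""
    def __init__(self, reading):
        self._reading = reading      # the value the A/D will report
        self._busy = False

    def start_conversion(self):      # pulse SC
        self._busy = True
        self._cycles_left = 3        # pretend conversion takes 3 polls

    def end_of_conversion(self):     # read the EOC line
        if self._busy:
            self._cycles_left -= 1
            if self._cycles_left == 0:
                self._busy = False
        return not self._busy        # EOC is high when conversion is done

    def output_enable(self):         # assert OE, read the data bus
        return self._reading

def read_adc(adc):
    adc.start_conversion()           # 1. pulse SC to begin conversion
    while not adc.end_of_conversion():
        pass                         # 2. poll EOC until conversion done
    return adc.output_enable()       # 3. assert OE and latch the result

print(read_adc(MockADC(0x7F)))       # -> 127
```

Real drivers follow the same SC / poll-EOC / OE order; only the register addresses and bit positions change from chip to chip.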


The resolution of an A/D conversion is defined as the smallest voltage difference that can be detected. The resolution is also referred to as the magnitude of the least significant bit (LSB) in the conversion. The fact that an A/D conversion has a finite (non-zero) resolution results in what is called quantization error. Quantization error results because a continuously varying analog signal is represented digitally as a series of discrete steps differing by the resolution of the conversion process. The resolution depends on two other quantities:

Number of digital bits in the conversion

This is the length of the digital word that the A/D conversion produces as its output. Typical values are 8, 12 and 16 bits. The higher the number of bits, the longer the conversion takes and the more accurate it is. This number is fixed for a given converter. The number of discrete values that can be represented (quantized) by a digital word of a given length is equal to 2 raised to the number of bits. For example, a 12-bit converter can represent 2^12 = 4096 values.

Input voltage range

This is the total range in volts of the A/D converter and depends on the amount of gain that the converter has. Typically the amount of gain is adjustable.

The resolution in volts is defined as:

Resolution = (Input Voltage Range) / (2^(Number of bits) - 1)

For example,

Input Voltage Range = ±10 V = 20 V
Number of Bits = 12

Resolution = 20 V / (2^12 - 1) = 20 V / 4095 = 0.0049 volts

Note that if the input voltage range is decreased to ±0.050 V (a 0.10 V span), the resolution = 0.10 / 4095 = 0.0000244 volts, or 2.44e-5 volts.
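The resolution formula above is easy to check numerically; here is a short sketch reproducing both worked examples (voltage ranges are the total spans, 20 V and 0.10 V):

```python
def resolution(v_range, bits):
    """Smallest detectable voltage step: range / (2**bits - 1)."""
    return v_range / (2 ** bits - 1)

# 12-bit converter over a +/-10 V (20 V) range:
print(round(resolution(20.0, 12), 4))       # -> 0.0049
# Same converter over a +/-0.050 V (0.10 V) range:
print(round(resolution(0.10, 12), 7))       # -> 2.44e-05
```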

The choice of the input range value is critical in ensuring that we obtain enough resolution to accurately measure the input signal.

In both cases above, the resolution expressed as a percentage of the Input Voltage Range is the same.

Resolution / Input Voltage Range * 100
0.0049 / 20 * 100 = 0.0245 %
0.0000244 / 0.10 * 100 = 0.0244 %

This is certainly low enough for almost any application.

However, unless our signals use the full input voltage range we do not obtain this percent resolution. Consider the case of measuring a signal of 0.1 volts using an input voltage range of ±10 volts. The percent error in our measurement of this signal using a 12-bit A/D is:

0.0049 / 0.1 * 100 = 4.9 %

which is probably not acceptable. In this case an input range of ±1.0 or ±0.1 volts should be used, giving either 0.49 or 0.049 percent resolution.
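The effect of range choice on percent error can be illustrated with a small sketch (ranges again given as total spans, so ±10 V is 20 V and ±1.0 V is 2 V):

```python
def percent_resolution(v_range, bits, signal):
    """Resolution as a percentage of the measured signal level."""
    step = v_range / (2 ** bits - 1)   # one LSB in volts
    return step / signal * 100

# 0.1 V signal on a 12-bit converter spanning 20 V (+/-10 V):
print(round(percent_resolution(20.0, 12, 0.1), 1))   # -> 4.9
# The same signal on a 2 V (+/-1.0 V) range:
print(round(percent_resolution(2.0, 12, 0.1), 2))    # -> 0.49
```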

An alternative to adjusting the gain of the A/D is to provide the A/D with signals that use the entire 10 volt range. This is generally preferable because of signal noise: each amplification stage introduces noise into the signal, and each succeeding stage amplifies the noise from the previous one. Thus it is generally preferable to have the first stage provide as much gain as possible, and the A/D, which is the last analog stage, should have the least gain necessary to resolve the signals. That is, the best procedure is to choose the gain range on the load cells, etc., so that a 10 volt range on the A/D results in acceptable resolution of the signals.

Sampling Frequency

The basic idea behind sampling-frequency considerations is that A/D conversions must occur quickly enough to capture the rate at which the measured signal is changing. The primary consideration is to obtain a reasonable amount of data: enough that the changes in the signal can be resolved (this topic is sometimes called the time resolution of the A/D process), but not so much that it is difficult to analyze. In fact, a lack of resolution in time can sometimes be put to advantage by sampling too slowly to resolve high-frequency noise in the signal.

The time resolution of an A/D conversion is usually expressed in terms of the maximum frequency that can be resolved. This frequency is called the Nyquist frequency, and it is equal to half of the sampling frequency. The basic idea is that at least two data points are required in each cycle of a waveform to just begin to resolve it (one data point at the maximum and one at the minimum of the waveform). Note that any shape of signal (sinusoidal, triangular, or square wave) at the Nyquist frequency will appear the same digitally. Thus in practice a signal must be significantly below the Nyquist frequency (in the area of biomechanics it is often up to ten times below) to be accurately measured and its waveform determined. For example, audio CDs are a digital recording of sound and use a sampling frequency of 44.1 kHz, roughly 2.5 times the maximum frequency of human hearing, which is roughly 18 kHz.

The minimum frequency that can be resolved is determined by the length of time data is collected. The total time of the data collection is equal to the number of scans collected divided by the number of scans per second. For example, if 100 scans are collected at 100 scans/second, then the total data collection time = 100 scans / (100 scans/sec) = 1 second. The lowest frequency that can be resolved has a period equal to the data collection time. For this example, a period of 1 second corresponds to a frequency of 1 Hz.
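Both frequency limits follow directly from the sampling rate and the record length, as this sketch of the worked example shows:

```python
def nyquist(fs):
    """Highest resolvable frequency: half the sampling rate (Hz)."""
    return fs / 2

def lowest_resolvable(n_scans, fs):
    """Lowest resolvable frequency: 1 / total collection time (Hz)."""
    return 1 / (n_scans / fs)

print(nyquist(100))                 # 100 scans/sec -> 50.0 Hz
print(lowest_resolvable(100, 100))  # 100 scans at 100/sec -> 1.0 Hz
```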

Multiple Channel A/D Conversion

Typically one wishes to measure several analog signals simultaneously. There are several methods by which this can be accomplished:

  • One A/D for each channel (expensive!)
  • One A/D multiplexed (switched) between the channels, using either
    • a sample and hold amplifier for each channel (also expensive)
    • a single sample and hold after the multiplexer (our case)
    • no sample and hold amplifiers

To understand this you first need the following definition:

Sample and Hold Amplifier

This is an amplifier (usually with a gain of 1.0) which can store an analog signal for a period of time, typically just longer than an A/D conversion takes. A capacitor is a crude sample and hold amplifier, in that it can store a voltage for an amount of time determined by the size of the capacitor and the resistance (load) across it.

Because an A/D conversion takes time, reading multiple channels simultaneously is a problem. It is possible with multiple A/D converters, but that is expensive. Instead, if truly simultaneous readings are required (say, for real-time control), a sample and hold amplifier per channel can capture all the signals at the same instant, and then a single A/D converter can convert the stored voltages one by one. However, this multiplexed approach reduces the maximum sampling frequency, because the conversions are done serially instead of at the same time.

A sample and hold amplifier will also tend to increase the accuracy of the conversion. If the input voltage to an A/D converter is changing while the conversion takes place errors can result. This is because the A/D conversion process reads the input voltage several times (typically once for each bit) during the conversion process. A sample and hold is used to keep the voltage the A/D converter sees during the conversion process from changing.

How does it work?

The typical A/D converter uses some variation of a successive-approximation scheme to determine the input voltage. This process uses a digital-to-analog converter (DAC, described below) and compares the output of the DAC to the input signal. The procedure works as follows. The A/D converter sets the highest-order bit of the DAC to 1 and all other bits to 0. It then compares the DAC output voltage to the input signal (using an analog comparator). If the input is higher than the DAC signal, the bit is left at 1; if the input is lower, the bit is set to 0. The procedure is then repeated with the next lower-order bit, leaving the higher-order bit(s) at the previously determined value(s). Thus the digital value of the input signal is successively approximated until finally the least significant bit (LSB) is determined and the conversion is complete. This process is controlled by a clock that must run at (at least) the number of bits times the maximum sampling frequency. This is why, for a given clock speed, a conversion with more bits takes longer.
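The bit-by-bit procedure above can be simulated directly. This is a sketch assuming an ideal DAC whose output for a code is code/2^bits × Vref; real converters differ in reference scaling and offset:

```python
def sar_convert(v_in, v_ref, bits):
    """Successive approximation: test each bit from MSB to LSB
    against an ideal DAC output; keep the bit if v_in is at least
    as high as the trial DAC voltage."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)               # tentatively set this bit
        dac_out = trial / (2 ** bits) * v_ref   # DAC output for trial code
        if v_in >= dac_out:
            code = trial                        # input higher: keep the bit
        # otherwise the bit stays 0
    return code

# 8-bit conversion of 2.5 V with a 10 V reference:
print(sar_convert(2.5, 10.0, 8))   # -> 64 (binary 01000000)
```

Note the loop runs exactly once per bit, which is why the conversion time grows with the word length.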

A DAC is a much simpler device, essentially a summing amplifier with an input for each bit. When a bit is set, the summing amplifier adds in the voltage corresponding to that bit. This process operates at the speed at which the summing amplifier can settle (reach a stable output value), which is extremely short compared with the time an A/D conversion takes with its iterative successive-approximation process. Here are some A/D types:

Flash Analog-to-Digital Converter

Flash analog-to-digital converters are used in systems that need the highest available speeds. Applications of flash ADCs include radar, high-speed test equipment, medical imaging and digital communication. The difference between this and other types of ADC is that the input signal is processed in parallel. Flash converters operate by simultaneously comparing the input signal with unique reference levels spaced one least significant bit apart. This requires many front-end comparators and a large digital encoding section. Each comparator generates an output to a priority encoder, which then produces the digital representation of the input signal level. For this to work, one comparator is needed for each least significant bit; therefore an 8-bit flash converter requires 255 comparators, along with high-speed logic to encode the comparator outputs.


As in the diagram, note that the input signal is simultaneously measured by each comparator, each of which has a unique reference generated by a resistor ladder. This produces a series of 1s and 0s: all comparators whose reference levels lie below the input output 1s. This pattern of comparator outputs is called a thermometer code. Following the comparators is the digital section, consisting of logic gates for encoding the thermometer code. The thermometer decoder determines the point where the series of 1s and 0s forms a boundary, and the priority encoder uses this boundary for conversion to the binary output. The output from the priority encoder is then available to system memory. It is important that the memory system be designed properly to prevent lost data, since every new conversion overwrites the previous result.
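The thermometer-code idea can be sketched in a few lines; this assumes evenly spaced ladder references at 1, 2, ... LSB and models the priority encoder simply as counting the 1s up to the boundary:

```python
def flash_convert(v_in, v_ref, bits):
    """Flash conversion: compare v_in against 2**bits - 1 references
    spaced one LSB apart, then encode the thermometer code."""
    n = 2 ** bits - 1                    # one comparator per reference
    lsb = v_ref / 2 ** bits
    thermometer = [1 if v_in > (i + 1) * lsb else 0 for i in range(n)]
    return sum(thermometer)              # position of the 1-to-0 boundary

# 3-bit flash converter with an 8 V reference (1 V per LSB):
print(flash_convert(4.5, 8.0, 3))   # -> 4
```

All 2^bits - 1 comparisons happen in the same list comprehension, mirroring the fully parallel hardware; nothing iterates bit by bit as in successive approximation.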


There are two types of flash ADC: bipolar and CMOS. The difference is in how the front ends of the comparators are built. CMOS is used for the ease of implementing analog switches and capacitors. CMOS flash converters can equal the speed of all except the bipolar designs with emitter-coupled logic. The advantage of CMOS is lower power consumption, thanks to its complementary N and P channels.

Bipolar Flash ADC

Using bipolar components gives a different frequency-response limitation due to the transistors. Buffers are used to prevent the input and reference signals from excessively loading the comparators; these buffers are responsible for the dynamic performance of the ADC. Although it is possible to use TTL or CMOS, ECL (emitter-coupled logic) is used for the highest speed. The high speed is achieved by using ECL for the encoding stage, which requires a negative supply voltage; this means the bipolar comparators also need a negative supply voltage. ECL is faster because it keeps the logic transistors from operating in the saturated state (they are restricted to either cutoff or the active region). This eliminates the charge-storage delays that occur when a transistor is driven into saturation.

Tracking Analog-to-Digital Converter

A tracking ADC uses an up/down counter and is faster than the digital-ramp single- or multi-slope ADC because the counter is not reset after each sample. It tracks the analog input, hence the name. For this to work, the output reference voltage should start lower than the analog input. While the comparator output is high, the counter counts up in binary, which increases the stair-step reference voltage until the ramp reaches the input voltage. When the reference voltage equals the input voltage, the comparator's output switches low and the counter starts counting down. If the analog input is decreasing, the counter will continue counting down to track the input. If the analog input is increasing, the counter will go down one count and then resume counting up to follow the curve, or until the comparison occurs again.
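The up/down behaviour, including the characteristic one-LSB bobble once the counter has caught the input, can be simulated. This is a sketch assuming an ideal DAC output of count × 1 LSB and one counter step per sample:

```python
def track(samples, v_ref, bits, start_count=0):
    """Tracking conversion: an up/down counter steps its DAC output
    toward the input once per clock, never resetting between samples."""
    count = start_count
    lsb = v_ref / 2 ** bits
    history = []
    for v_in in samples:
        if v_in > count * lsb:     # comparator high: count up
            count += 1
        else:                      # comparator low: count down
            count -= 1
        history.append(count)
    return history

# 4-bit tracker (10 V reference) following a steady 5 V input from zero:
print(track([5.0] * 12, 10.0, 4))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 7, 8, 7, 8]
```

The ramp-up to count 8 (= 5 V at 0.625 V/LSB) followed by the 7/8 oscillation shows both the tracking and the bobble around a constant input.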

An 8-bit tracking ADC

Single-Slope Analog-to-Digital Converter

Single-slope ADCs are appropriate for very high-accuracy, high-resolution measurements where the input signal bandwidth is relatively low. Besides accuracy, these converters offer a low-cost alternative to others such as the successive-approximation approach. Typical applications are digital voltmeters, weighing scales, and process control. They are also found in battery-powered instrumentation because of their capability for very low power consumption.

As the name implies, a single-slope ADC uses only one ramp cycle to measure each input signal. The single-slope ADC can be used for up to 14-bit accuracy. The reason accuracy is limited to 14 bits is that the single-slope ADC is more susceptible to noise: because this converter uses a fixed ramp for comparing the input signal, any noise present at the comparator input while the ramp is near the threshold crossing can cause errors.

The basic idea behind the single-slope converter is to time how long it takes for a ramp to equal an input signal at a comparator. Absolute measurements require an accurate reference (Vref), matching the desired accuracy, against whose crossing time the unknown input's crossing time is compared. The unknown input (Vun) can then be determined by:

Vun = Vref(Tun/Tref)

where the ratio of the times is directly proportional to the ratio of the magnitudes. The main requirement of the single-slope analog-to-digital converter is the ramp voltage used for comparison with the input signal. If the ramp function is highly linear, then the system errors will cancel: since each input is measured with the same ramp signal and hardware, the component tolerances are exactly the same for each measurement, so, regardless of the initial conditions or temperature drifts, no calibration or auto-zero function is required.
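The ratio formula above makes a quick worked example straightforward (the count values here are arbitrary, chosen only to illustrate the proportionality):

```python
def single_slope(t_unknown, t_ref, v_ref):
    """Vun = Vref * (Tun / Tref): the unknown voltage is proportional
    to how long the ramp takes to reach it."""
    return v_ref * (t_unknown / t_ref)

# The ramp crosses a 5 V reference after 200 clock counts and the
# unknown input after 120 counts:
print(single_slope(120, 200, 5.0))   # -> 3.0
```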

Dual-Slope Analog-to-Digital Converter


A dual-slope ADC operates on the principle of integrating the unknown input and then comparing the integration times with a reference cycle. The basic approach is to use two slopes (dual), as in this diagram:

This circuit operates by switching in the unknown input signal and integrating it for a full-scale number of counts. After this cycle the reference, which is of opposite polarity, is switched in, driving the ramp back toward ground. The time it takes for the ramp to again reach the comparator threshold of ground is directly proportional to the unknown input signal. Since the circuit uses the same integrator time constant for both phases, the component tolerances are the same for both the integration and de-integration cycles. Therefore the errors cancel, except for the offset voltage, which is additive during both cycles.
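The two-phase timing relationship implies Vun = Vref × (Tdeintegrate / Tintegrate), which a short sketch illustrates (the count values are invented for the example):

```python
def dual_slope(t_deintegrate, t_integrate, v_ref):
    """Vun = Vref * (Tdeint / Tint): integrate the unknown for a fixed
    time, then time the reference-driven ramp back to zero."""
    return v_ref * (t_deintegrate / t_integrate)

# Fixed integrate phase of 1000 counts; a 5 V reference drives the
# ramp back to ground in 400 counts:
print(dual_slope(400, 1000, 5.0))   # -> 2.0
```

Because both phases share the same integrator, the RC time constant divides out of the ratio, which is exactly why the component tolerances cancel.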

The main benefits of this type are increased range, increased accuracy and resolution, and increased speed.

Successive-Approximation Analog-to-Digital Converter

The successive-approximation ADC is also called a sampling ADC. The term successive approximation comes from testing each bit of resolution, from the most significant bit to the least. These are the most popular converters today because they come in a wide range of performances and levels of integration to fit many different tasks. Newer successive-approximation converters sample the input only once per conversion, whereas earlier ones sampled it as many times as there are bits. Sampled converters have an advantage over those that don't sample: they can tolerate input signals that change between bit tests. A non-sampled converter's performance is degraded if the input changes by more than 1/2 of a least significant bit, because the input is sampled n times for every conversion, where n is the number of bits of resolution.

The basic principle behind this device is to use a DAC approximation of the input and compare it with the input for each bit of resolution. The most significant bit is tested first by generating 1/2 Vref with the DAC and comparing it to the sampled input signal. The successive-approximation register (SAR) drives the DAC to produce estimates of the input signal, continuing this process down to the least significant bit. Each estimate closes in more accurately on the input level. For each bit test, the comparator output determines whether the estimate should stay as a 1 or a 0 in the result register: if the comparator indicates that the estimated value is under the input level, the bit stays set; otherwise, the bit is reset in the result register.

An Example of successive-approximation (sampling) ADC:

Acknowledgements: Ray Smith, Fardeen Haji, Eric Jeeboo, Richard Phagu
