
SENSORS AND ACTUATORS
By: Teguh Pudji Purwanto, Jurusan Teknik Mesin FT-UGM



Sensor Terminology
• Sensitivity
• Range
• Precision
• Resolution
• Accuracy
• Offset
• Linearity
• Hysteresis
• Response Time
• Dynamic Linearity



Sensitivity • The sensitivity of the sensor is defined as the slope of the output characteristic curve (ΔY/ΔX in Figure 1) or, more generally, the minimum input of the physical parameter that will create a detectable output change. In some sensors, the sensitivity is defined as the input parameter change required to produce a standardized output change. In others, it is defined as an output voltage change for a given change in input parameter. For example, a typical blood pressure transducer may have a sensitivity rating of 10 mV/V/mm Hg; that is, there will be a 10-mV output voltage for each volt of excitation potential and each mm Hg of applied pressure. • Sensitivity Error: The sensitivity error (shown as a dotted curve in Figure 1) is a departure from the ideal slope of the characteristic curve. For example, the pressure transducer discussed above may have an actual sensitivity of 7.8 mV/V/mm Hg instead of 10 mV/V/mm Hg.



Range • The range of the sensor is the maximum and minimum values of the applied parameter that can be measured. For example, a given pressure sensor may have a range of -400 to +400 mm Hg. Alternatively, the positive and negative ranges often are unequal. For example, a certain medical blood pressure transducer is specified to have a minimum (vacuum) limit of -50 mm Hg (Ymin in Figure 1) and a maximum (pressure) limit of +450 mm Hg (Ymax in Figure 1). This specification is common, incidentally, and is one reason doctors and nurses sometimes destroy blood pressure sensors when attempting to draw blood through an arterial line without being mindful of the position of the fluid stopcocks in the system. A small syringe can exert a tremendous vacuum on a closed system.



Precision • The concept of precision refers to the degree of reproducibility of a measurement. In other words, if exactly the same value were measured a number of times, an ideal sensor would output exactly the same value every time. But real sensors output a range of values distributed in some manner relative to the actual correct value. For example, suppose a pressure of exactly 150 mm Hg is applied to a sensor. Even if the applied pressure never changes, the output values from the sensor will vary considerably. Some subtle problems arise in the matter of precision when the true value and the sensor's mean value are not within a certain distance of each other (e.g., the 1-σ range of the normal distribution curve).



Resolution • This specification is the smallest detectable incremental change of input parameter that can be detected in the output signal. Resolution can be expressed either as a proportion of the reading (or the full-scale reading) or in absolute terms.



Accuracy • The accuracy of the sensor is the maximum difference that will exist between the actual value (which must be measured by a primary or good secondary standard) and the indicated value at the output of the sensor. Again, the accuracy can be expressed either as a percentage of full scale or in absolute terms.



Offset • The offset error of a transducer is defined as the output that will exist when it should be zero or, alternatively, the difference between the actual output value and the specified output value under some particular set of conditions. An example of the first situation in terms of Figure 1 would exist if the characteristic curve had the same sensitivity slope as the ideal but crossed the Y-axis (output) at b instead of zero. An example of the other form of offset is seen in the characteristic curve of a pH electrode shown in Figure 2. The ideal curve will exist only at one temperature (usually 25°C), while the actual curve will be between the minimum temperature and maximum temperature limits depending on the temperature of the sample and electrode.



Linearity •



The linearity of the transducer is an expression of the extent to which the actual measured curve of a sensor departs from the ideal curve. Figure 3 shows a somewhat exaggerated relationship between the ideal, or least-squares fit, line and the actual measured or calibration line. (Note that in most cases the static curve is used to determine linearity, and this may deviate somewhat from the dynamic linearity.) Linearity is often specified in terms of percentage of nonlinearity, which is defined as:

Nonlinearity (%) = 100 × [D_in(max) / IN_f.s.]   (6-1)

where:
Nonlinearity (%) is the percentage of nonlinearity
D_in(max) is the maximum input deviation
IN_f.s. is the maximum, full-scale input

The static nonlinearity defined by Equation 6-1 is often subject to environmental factors, including temperature, vibration, acoustic noise level, and humidity. It is important to know under what conditions the specification is valid; departures from those conditions may not yield linear changes of linearity.
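As a minimal sketch of Equation 6-1 (the calibration data, sensor, and least-squares fit below are hypothetical illustrations, not taken from the text), the percentage nonlinearity can be computed as follows:

# Illustrative sketch: percentage nonlinearity per Equation 6-1,
# computed from a hypothetical calibration data set.
import numpy as np

def percent_nonlinearity(inputs, outputs, full_scale_input):
    """Fit a least-squares line to the calibration points and report the
    maximum deviation, converted to input units, as a percentage of full scale."""
    slope, intercept = np.polyfit(inputs, outputs, 1)
    ideal = slope * np.asarray(inputs) + intercept
    # Convert the worst output deviation back to input units via the slope.
    d_in_max = np.max(np.abs(np.asarray(outputs) - ideal)) / slope
    return 100.0 * d_in_max / full_scale_input

# Hypothetical pressure-sensor calibration: mm Hg in, mV out.
pressure = [0, 100, 200, 300, 400]
voltage = [0.0, 9.8, 20.4, 29.9, 40.1]
print(f"Nonlinearity = {percent_nonlinearity(pressure, voltage, 400):.2f} %")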



Hysteresis • A transducer should be capable of following the changes of the input parameter regardless of which direction the change is made; hysteresis is the measure of this property. Figure 4 shows a typical hysteresis curve. Note that it matters from which direction the change is made. Approaching a fixed input value (point B in Figure 4) from a higher value (point P) will result in a different indication than approaching the same value from a lesser value (point Q or zero). Note that input value B can be represented by F(X)1, F(X)2, or F(X)3 depending on the immediate previous value—clearly an error due to hysteresis.



Response Time • Sensors do not change output state immediately when an input parameter change occurs. Rather, the output changes to the new state over a period of time, called the response time (Tr in Figure 5). The response time can be defined as the time required for a sensor output to change from its previous state to a final settled value within a tolerance band of the correct new value. This concept is somewhat different from the notion of the time constant (T) of the system. The time constant can be defined in a manner similar to that for a capacitor charging through a resistance and is usually less than the response time. The curves in Figure 5 show two types of response time. In Figure 5a the curve represents the response time following an abrupt positive-going step-function change of the input parameter. The form shown in Figure 5b is a decay time (Td, to distinguish it from Tr, for they are not always the same) in response to a negative-going step-function change of the input parameter.



Dynamic Linearity •







The dynamic linearity of the sensor is a measure of its ability to follow rapid changes in the input parameter. Amplitude distortion characteristics, phase distortion characteristics, and response time are important in determining dynamic linearity. Given a system of low hysteresis (always desirable), the amplitude response is represented by:
F(X) = aX + bX² + cX³ + dX⁴ + … + K   (6-2)







In Equation 6-2, the term F(X) is the output signal, while the X terms represent the input parameter and its harmonics, and K is an offset constant (if any). The harmonics become especially important when the error harmonics generated by the sensor action fall into the same frequency bands as the natural harmonics produced by the dynamic action of the input parameter. All continuous waveforms are represented by a Fourier series of a fundamental sine wave and its harmonics. Any nonsinusoidal waveform (including time-varying changes of a physical parameter) contains harmonics that can be affected by the action of the sensor.



Measurement Fundamentals
• Basic Analog Circuits
• Analog Sampling Basics
• Sampling Quality
• Windowing: Optimizing FFTs Using Window Functions
• Dithering, Layout, and High-Quality Components: Tools to Decrease the Noise Floor
• Ground Loops and Returns
• High-Voltage Measurements and Isolation
• Low Frequency and DC Measurements



Basic Analog Circuits
• Ohm's Law and Basic Analog Circuit Concepts
• Capacitance Calculations
• Inductance Calculations
• Analog Amplifier Circuits
• Analog RC Filters



Ohm's Law and Basic Analog Circuit Concepts



The current I through the resistor R is defined as:
I = V/R, so V = I * R or R = V/I
The power P dissipated in R is defined as:
P = I * V, P = V²/R, or P = I² * R



Voltage Divider Calculation: The formula used to calculate the applied voltage is:
E1 = I * R1 (E1 = voltage drop across R1)
E2 = I * R2 (E2 = voltage drop across R2)
I = E / Req, where Req = R1 + R2
E = E1 + E2
E = I * (R1 + R2)
To calculate the voltage across R2:
E2 = R2 * I
E2 = R2 * (E/Req)
E2 = R2 * [E/(R1 + R2)]
E2 = E * [R2 / (R1 + R2)]
Note: The voltage divider is described by the last equation above.



Current Divider Calculation: The figure depicts two resistors in a parallel configuration.
I = I1 + I2
E = I1 * R1 = I2 * R2
I = (E/R1) + (E/R2)
I = E * [(1/R1) + (1/R2)]
Since E = I * Req:
1/Req = (1/R1) + (1/R2), so Req = (R1 * R2)/(R1 + R2)
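A minimal sketch of the divider relations above (the component values are arbitrary examples, not from the text):

# Voltage and current dividers for two resistors.
def voltage_divider(E, R1, R2):
    """Voltage across R2 for two series resistors driven by source E."""
    return E * R2 / (R1 + R2)

def current_divider(I, R1, R2):
    """Currents through R1 and R2 for two parallel resistors fed by total current I."""
    E = I * (R1 * R2) / (R1 + R2)   # E = I * Req
    return E / R1, E / R2

print(voltage_divider(10.0, 1000.0, 2000.0))   # 6.67 V across R2
print(current_divider(0.01, 1000.0, 2000.0))   # (0.00667 A, 0.00333 A)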



Capacitance Calculations



Reading Capacitor Values: The unit of capacitance is the farad, represented by the letter F. The formula to calculate capacitance is:



C = Q/V
where:
C = capacitance in farads
Q = accumulated charge in coulombs
V = voltage difference between the plates



Series configuration:



(1/CT) = (1/C1) + (1/C2) + (1/C3) + ….



Parallel Configuration:



Q1 = C1 * V
Q2 = C2 * V
Q = Q1 + Q2
Q = V * (C1 + C2)
Ceq = C1 + C2



Inductance Calculations • Series Configuration:



E1 = L1 (dI/dt)
E2 = L2 (dI/dt)
LT = L1 + L2
where dI/dt is the rate of change of current over time



• Parallel Configuration:



(1/LT) = (1/L1) + (1/L2)
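A small sketch of the series/parallel combination rules for capacitors and inductors given above (component values are arbitrary examples, not from the text):

def series_caps(*caps):
    return 1.0 / sum(1.0 / c for c in caps)        # 1/CT = sum(1/Ci)

def parallel_caps(*caps):
    return sum(caps)                                # Ceq = C1 + C2 + ...

def series_inductors(*inds):
    return sum(inds)                                # LT = L1 + L2 + ...

def parallel_inductors(*inds):
    return 1.0 / sum(1.0 / l for l in inds)         # 1/LT = sum(1/Li)

print(series_caps(1e-6, 2e-6))         # ~0.667 µF for 1 µF and 2 µF in series
print(parallel_inductors(1e-3, 1e-3))  # 0.5 mH for two 1 mH inductors in parallel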



IMPEDANCE
• Impedance is the net opposition to current in a circuit, including the reactive components.
• Impedance: Z = R + jX
  – R = resistive component
  – X = reactive component
  – j = √(−1)
• Z becomes purely resistive when X = 0.
• Impedance of a resistor: ZR = R
• Impedance of a capacitor: ZC = 1/(jωC) = −j/(ωC)
• Impedance of an inductor: ZL = jωL

SERIES AND PARALLEL IMPEDANCE
• Series: ZT = Z1 + Z2 + … + Zn
• Parallel: 1/ZT = 1/Z1 + 1/Z2 + … + 1/Zn
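A brief sketch of the impedance relations above using Python complex numbers (the frequency and component values are arbitrary examples, not from the text):

import math

def z_resistor(R):
    return complex(R, 0)

def z_capacitor(C, f):
    return 1.0 / (1j * 2 * math.pi * f * C)    # ZC = 1/(jωC)

def z_inductor(L, f):
    return 1j * 2 * math.pi * f * L            # ZL = jωL

def z_series(*zs):
    return sum(zs)                             # ZT = sum of Zi

def z_parallel(*zs):
    return 1.0 / sum(1.0 / z for z in zs)      # 1/ZT = sum of 1/Zi

f = 1000.0                                     # example frequency: 1 kHz
zt = z_series(z_resistor(100), z_capacitor(1e-6, f))
print(abs(zt))                                 # magnitude of a series RC impedance at 1 kHz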



Analog Amplifier Circuits



1) Differential Amplifier: An amplifier whose output is proportional to the difference between the input signals.



2) Gain/Frequency Response: A filter changes the amplitude or phase characteristics of a signal with respect to frequency. The frequency-domain behavior of a filter is described mathematically in terms of a transfer function or a network function. The transfer function H(s) is the ratio between the output and input signals:
H(s) = Vout(s) / Vin(s)
where Vout(s) and Vin(s) are the output and input voltage signals and s is the complex frequency variable. The magnitude of the transfer function is called the amplitude response (or frequency response, especially in radio applications).
3) Output Buffer



Inverting Amplifier



Calculating the gain of an inverting amplifier:
(Vs – V1)/R1 = (V1 – Vo)/R2
Since V1 = 0 (virtual ground):
Vs/R1 = –Vo/R2
Gain = Vo/Vs = –R2/R1



Non-Inverting Amplifier



Calculating the gain of a non-inverting amplifier:
Vo * R1 = Vs * R1 + Vs * R2
(Vo – Vs) * R1 = Vs * R2
(Vo/Vs) – 1 = R2/R1
Gain – 1 = R2/R1
Gain = 1 + (R2/R1), where Gain = Vo/Vs
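A small sketch of the two ideal op-amp gain formulas derived above (resistor values are arbitrary examples, not from the text):

def inverting_gain(R1, R2):
    return -R2 / R1            # Vo/Vs = -R2/R1

def non_inverting_gain(R1, R2):
    return 1.0 + R2 / R1       # Vo/Vs = 1 + R2/R1

print(inverting_gain(1e3, 10e3))      # -10.0
print(non_inverting_gain(1e3, 10e3))  # 11.0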



Analog RC Filters



RC Low Pass Filter



RC High Pass Filter



Analog Sampling Basics
Table of Contents
• Bandwidth Definition and Calculations
• Sampling Rate
• Nyquist Theorem and Nyquist Frequency
• Aliasing and Anti-Aliasing Filters
• Quantization Error
• Dithering
• Relevant NI Products



Bandwidth Definition and Calculations • Bandwidth is a measure of the ability of a circuit or transmission channel to pass a signal without significant attenuation over a range of frequencies. Bandwidth is measured between the lower and upper frequency points where the signal amplitude falls 3 dB below the passband amplitude. The -3 dB points are referred to as the half-power points. • Units: hertz (Hz)



• Example: If you input a 1 V, 100 MHz sine wave into a high-speed digitizer with a bandwidth of 100 MHz, the signal will be attenuated by the digitizer's analog input path and the sampled waveform will have an amplitude of approximately 0.7 V. The value of ~0.7 V can be calculated using the following equation:
-3 dB = 20 LOG (Vppout / Vppin)
where:
Vppout = peak-to-peak voltage of the output waveform
Vppin = peak-to-peak voltage of the input waveform = 1 V (in this example)
-3 = 20 LOG (Vppout / 1)
Vppout = 0.7079 V ≈ 0.7 V
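A minimal sketch of the -3 dB calculation above, solving 20·log10(Vout/Vin) = -3 dB for Vout:

import math

def vout_at_attenuation(vin, attenuation_db):
    return vin * 10 ** (attenuation_db / 20.0)

print(vout_at_attenuation(1.0, -3.0))   # ~0.708 V, matching the example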



Typical 100 MHz Digitizer Input Response



Theoretical amplitude error of a measured signal • It is recommended that the bandwidth of your digitizer be 3 to 5 times the highest frequency component of interest in the measured signal to capture the signal with minimal amplitude error (bandwidth required = (3 to 5)*frequency of interest). The theoretical amplitude error of a measured signal can be calculated from the ratio (R) of the digitizer's bandwidth (B) in relation to the input signal frequency (fin).



Where R = B / fin



Using equation 1, the error in amplitude when measuring a 100 MHz sine wave with a 100 MHz high-speed digitizer, which yields a ratio R=1, is approximately 29.3%. Referring to figure 1, this would mean that if the input waveform has peak to peak amplitude of 1 V, then the output waveform would have peak to peak amplitude of approximately 0.707 V. As another example, if you input a 75 MHz sine wave to a National Instruments NI 5124 High-Speed Digitizer which has a bandwidth of 150 MHz, it yields a ratio R= 2. Using equation 1, this means that the theoretical error in amplitude would be approximately 10.6%
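The text's "equation 1" is not reproduced here; the sketch below assumes the digitizer input behaves as a single-pole (first-order) low-pass response, an assumption that reproduces the quoted figures (29.3% at R = 1 and about 10.6% at R = 2):

import math

def amplitude_error_pct(bandwidth_hz, f_in_hz):
    # Assumed single-pole model: error = 1 - 1/sqrt(1 + (fin/B)^2)
    R = bandwidth_hz / f_in_hz
    return 100.0 * (1.0 - 1.0 / math.sqrt(1.0 + (1.0 / R) ** 2))

print(amplitude_error_pct(100e6, 100e6))  # ~29.3 % for R = 1
print(amplitude_error_pct(150e6, 75e6))   # ~10.6 % for R = 2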



Rise Time • Another important topic related to the bandwidth is rise time. The rise time of an input signal is the time for a signal to transition from 10% to 90% of the maximum signal amplitude and is inversely related to bandwidth.



• It is recommended that the rise time of the digitizer input path be 1/3 to 1/5 the rise time of the measured signal to capture the signal with minimal rise time error. The theoretical rise time measured (Trm) can be calculated from the rise time of the digitizer (Trd) and the actual rise time of the input signal (Trs).



• For example, if a sinusoid signal with a rise time of 15 ns is passed through the NI 5122 High-Speed Digitizer which has a rise time of 3.5 ns, using equation 2 the theoretical measured rise time for the sinusoid signal would be approximately 15.4 ns.
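"Equation 2" is not reproduced in the text; a common approximation, assumed here, combines rise times as the root-sum-of-squares, which reproduces the quoted ~15.4 ns result for the NI 5122 example:

import math

def measured_rise_time(t_digitizer, t_signal):
    # Assumed RSS combination: Trm = sqrt(Trd^2 + Trs^2)
    return math.sqrt(t_digitizer ** 2 + t_signal ** 2)

print(measured_rise_time(3.5e-9, 15e-9))   # ~15.4 ns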



Sampling Rate • Sampling rate is not directly related to the bandwidth specifications of a high-speed digitizer. Sampling rate is the speed at which the digitizer’s ADC converts the input signal, after the signal has passed through the analog input path, to digital values.



Product                            Bandwidth   Sampling rate      Resolution
Digital Multimeters (DMM)          300 kHz     1.8 MS/s           10 bits to 23 bits
Dynamic Signal Acquisition (DSA)   45 kHz      Up to 204.8 kS/s   16 bits, 24 bits
M-series Data Acquisition          700 kHz     Up to 1.25 MS/s    16 bits, 18 bits
S-series Data Acquisition          1.3 MHz     Up to 10 MS/s      12 bits, 14 bits, 16 bits
High-Speed Digitizers              150 MHz     200 MS/s           8 bits to 21 bits



Example: Sampling of a sine wave using a 3-bit digitizer



Nyquist Theorem and Nyquist Frequency • Nyquist Theorem: Sampling rate (fs) > 2 × highest frequency component (of interest) in the measured signal. The Nyquist theorem states that a signal must be sampled at a rate greater than twice the highest frequency component of interest in the signal to capture that component correctly; otherwise, the high-frequency content will alias at a frequency inside the spectrum of interest (passband). • The Nyquist frequency is the highest frequency component that can be captured without aliasing for a given sampling frequency; it equals half the sampling rate.



Effects of various sampling rates while sampling a signal



Aliasing and Anti-Aliasing Filters: If a signal is sampled at a rate less than twice its highest frequency component, false lower-frequency component(s) appear in the sampled data. This phenomenon is called aliasing. The following figure shows a 5 MHz sine wave digitized by a 6 MS/s ADC. The dotted line indicates the aliased signal recorded by the ADC. The 5 MHz signal aliases back into the passband, falsely appearing as a 1 MHz sine wave.



SAMPLING RATE AND ALIASING



• Alias frequency The alias frequency is the absolute value of the difference between the frequency of the input signal and the closest integer multiple of the sampling rate. Alias Freq. = ABS (Closest Integer Multiple of Sampling Freq. – Input Freq.)



where ABS means the absolute value



• Real-world signals often contain frequency components that lie above the Nyquist frequency. These frequencies are erroneously aliased and added to the components of the signal that are sampled accurately, producing distorted sampled data. In systems where you want to perform accurate measurements using sampled data, the sampling rate must be set high enough (about 5 to 10 times the highest frequency component in the signal) to prevent aliasing, or an optional anti-aliasing filter (a low pass filter that attenuates any frequencies in the input signal that are greater than the Nyquist frequency) must be introduced before the ADC to restrict the bandwidth of the input signal to meet the sampling criteria. For example, in the NI 4461 Dynamic Signal Acquisition device, the analog inputs have both analog and digital filters implemented in hardware to prevent aliasing. Input signals are first passed through a fixed analog filter to remove any signals with frequency components beyond the range of the ADCs. Then digital anti-aliasing filters automatically adjust their cutoff frequency to remove any frequency components above half the programmed sampling rate.



Example Assume fs, the sampling frequency, is 100 Hz and that the input signal contains the following frequencies: 25 Hz, 70 Hz, 160 Hz, and 510 Hz. These frequencies are shown in the following figure.



Alias F2 = |100 – 70| = 30 Hz
Alias F3 = |(2)100 – 160| = 40 Hz
Alias F4 = |(5)100 – 510| = 10 Hz
F1 = 25 Hz lies below the Nyquist frequency (50 Hz) and is therefore sampled without aliasing.
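A small sketch of the alias-frequency rule above (distance from the input frequency to the nearest integer multiple of the sampling frequency), reproducing the example:

def alias_frequency(f_in, f_s):
    nearest_multiple = round(f_in / f_s) * f_s
    return abs(nearest_multiple - f_in)

fs = 100.0
for f in (25.0, 70.0, 160.0, 510.0):
    print(f, "Hz ->", alias_frequency(f, fs), "Hz")
# 25 -> 25 (below Nyquist, not aliased), 70 -> 30, 160 -> 40, 510 -> 10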



Quantization Error • Quantization is defined as the process of converting an analog signal to a digital representation. Quantization is performed by an analog-to-digital converter (A/D converter or ADC). If we can convert our analog signals to a stream of digital data, we can take advantage of the power of the personal computer and software to do any manipulation or calculation on the signals. To do this, we must sample our analog waveform at well-defined discrete (but limited) times so we can maintain a close relationship between time in the analog domain and time in the digital domain. If we do this, we can reconstruct the signal in the digital domain, do our processing on it, and later, reconstruct it into the analog domain if we need to.



When converting an analog signal to the digital domain, signal values are taken at discrete time instants.



• For example, a 3-bit ADC divides the range into 2³ or eight divisions. A binary or digital code between 000 and 111 represents each division. The ADC translates each measurement of the analog signal to one of the digital divisions. Figure 10 shows a digital image of a 5 kHz sine wave obtained by a 3-bit ADC. As shown in figure 11, the digital signal does not represent the original signal adequately, because the converter has too few digital divisions to represent the varying voltages of the analog signal. However, increasing the resolution to 16 bits increases the number of ADC divisions from eight (2³) to 65,536 (2¹⁶), allowing the 16-bit ADC to obtain an extremely accurate representation of the analog signal. This inherent uncertainty in digitizing an analog value is referred to as quantization error. The quantization error depends on the number of bits in the converter, along with its errors, noise, and nonlinearities.
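A minimal sketch of the quantization step size (and worst-case quantization error) for a given bit count; the 10 V input range below is an assumed example value:

def quantization_step(v_range, bits):
    return v_range / 2 ** bits

for bits in (3, 16):
    step = quantization_step(10.0, bits)   # assumed 10 V input range
    print(f"{bits}-bit ADC: {2**bits} levels, step = {step*1e3:.3f} mV,"
          f" max error = ±{step/2*1e3:.3f} mV")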



Quantization error when using a 3 bit ADC



Dithering • During Quantization, in the time domain, we could almost completely preserve the waveform information by sampling fast enough. In the amplitude domain we can preserve most of the waveform information by dithering. Dithering involves the deliberate addition of noise to our input signal. It helps by smearing out the little differences in amplitude resolution. The key is to add random noise in a way that makes the signal bounce back and forth between successive levels. Of course, this in itself just makes the signal noisier. But, the signal smoothes out by averaging this noise digitally once the signal is acquired. Note: Mathematically averaging the digital signals without dithering does not remove the quantization steps. It simply rounds them out a little, as shown in figure 13b.



Effects of dithering and averaging on a sine wave input



Decreasing quantization error on 12-bit devices using dithering



Sampling Quality
Table of Contents
• Resolution
• Measurement Sensitivity
• Accuracy and Example Accuracy Calculations
• Difference between Precision and Accuracy
• Noise and Noise Sources
• Noise Reduction Strategies



Resolution • Resolution is defined as the smallest amount of input signal change that an instrument or sensor can detect reliably. Resolution can be expressed as a percentage, as x parts out of y, or, most conveniently, as bits. Resolution is determined by the instrument noise (either circuit or quantization noise) and the smallest change that is detectable by the display system of the instrument. For example, if you have a noiseless digital multimeter with 5½ displayed digits set to the 20 V input range, the resolution of this digital multimeter is 0.1 mV. This can be determined by looking at the change associated with the least significant digit. Now, if this same digital multimeter had 10 counts of peak-to-peak noise, the effective resolution would decrease to 1 mV, because any signal change less than 1 mV is indistinguishable from the noise.



Measurement Sensitivity • Sensitivity is defined as a measure of the smallest signal the instrument can measure at the lowest range setting of the instrument. Sensitivity is not related to resolution. For example, an 8-bit analog meter could have more sensitivity than a 16-bit Data Acquisition board. As another example, a digital multimeter with a lowest measurement range of 10 V may be able to detect signals with 1 mV resolution but the smallest detectable voltage it can measure may be 15 mV. In this case, the digital multimeter has a resolution of 1 mV but a sensitivity of 15 mV.



Accuracy and Example Accuracy Calculations
A digital multimeter's accuracy is often specified as:
(% Reading) + Offset, or
(% Reading) + (% Range), or
±(ppm of reading + ppm of range)
For example, assume a digital multimeter set to the 10 V range is operating 90 days after calibration at 23°C ±5°C and is measuring a 7 V signal. The accuracy specification for these conditions is ±(20 ppm of reading + 6 ppm of range). To determine the accuracy of the digital multimeter under these conditions, use the following formula:



Accuracy = ±(20 ppm of reading + 6 ppm of range)
Accuracy = ±(20 ppm of 7 V + 6 ppm of 10 V)
Accuracy = ±((7 V × 20/1,000,000) + (10 V × 6/1,000,000))
Accuracy = ±(140 µV + 60 µV) = ±200 µV

Therefore, the reading should be within ±200 µV of the actual input voltage. Accuracy can also be defined in terms of the deviation from an ideal transfer function, as follows:



A data acquisition device is often specified as:
AbsoluteAccuracy = Reading × (GainError) + Range × (OffsetError) + NoiseUncertainty
GainError = ResidualAIGainError + GainTempco × (TempChangeFromLastInternalCal) + ReferenceTempco × (TempChangeFromLastExternalCal)
OffsetError = ResidualAIOffsetError + OffsetTempco × (TempChangeFromLastInternalCal) + INL_Error



For example, on the 10 V range, the absolute accuracy at full scale of an NI 628X M-series data acquisition device is as follows:



GainError = 40 ppm + 17 ppm × 1 + 1 ppm × 10 = 67 ppm
OffsetError = 8 ppm + 11 ppm × 1 + 10 ppm = 29 ppm
NoiseUncertainty = 18 µV
AbsoluteAccuracy = 10 V × (GainError) + 10 V × (OffsetError) + NoiseUncertainty
AbsoluteAccuracy = 670 µV + 290 µV + 18 µV ≈ 980 µV

It is important to note that the accuracy of an instrument depends not only on the instrument, but also on the type of signal being measured. If the signal being measured is noisy, the accuracy of the measurement is adversely affected.
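A small sketch of the absolute-accuracy calculation above, using the example ppm values quoted for the 10 V range:

def absolute_accuracy(reading, v_range, gain_error_ppm, offset_error_ppm,
                      noise_uncertainty):
    return (reading * gain_error_ppm * 1e-6
            + v_range * offset_error_ppm * 1e-6
            + noise_uncertainty)

gain_error = 40 + 17 * 1 + 1 * 10     # 67 ppm
offset_error = 8 + 11 * 1 + 10        # 29 ppm
print(absolute_accuracy(10.0, 10.0, gain_error, offset_error, 18e-6))
# ~9.78e-4 V, i.e. roughly the 980 µV quoted in the text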



Difference between Precision and Accuracy
Precision is defined as a measure of the stability of the instrument and its capability of giving the same measurement over and over again for the same input signal. It is given by:
Precision = 1 - |Xn - Av(Xn)| / |Av(Xn)|
where Xn = the value of the nth measurement and Av(Xn) = the average value of the set of n measurements.
For instance, if you are monitoring a constant voltage of 1 V and you notice that your measured value changes by 20 µV between measurements, then your measurement precision is:
Precision = (1 – 20 µV / 1 V) × 100 = 99.998 %



Noise and Noise Sources • Noise is any unwanted signal that interferes with the desired signal. Noise interferes with the measurement by inducing uncertainty that tends to be time-variant. It can be random or periodic. Noise may either be transient in nature, have fixed frequencies such as harmonic or mixer products, or be broadband random noise. Noise is sometimes considered separately from accuracy specifications, because averaging and other techniques can be used to reduce it in the measurement. However, other times it is included in the accuracy specifications. Footnotes in the specifications will tell you if it is included or not.



Sources of Noise: There are various sources of noise in instrumentation. Noise that is a result of the source (or device under test) itself is called intrinsic noise. These noise sources can be thermal, like the noise of a resistor, or can be 1/f in nature, as produced by semiconductor devices. Noise can also come from the outside world, such as from power lines, lights in the room, motors, and radio-frequency sources (radio transmitters, cell phones, radio stations, etc.).
Thermal Noise: An ideal electronic circuit produces no noise of its own, so the output signal from the ideal circuit contains only the noise that was in the original signal. But real electronic circuits and components do produce a certain level of inherent noise of their own. Even a simple fixed-value resistor is noisy.



Figure 3 Resistor noise, (a) Ideal, noise-free resistor. (b) Practical resistor has internal thermal noise source



Figure 3a shows the equivalent circuit for an ideal, noise-free resistor. The inherent noise is represented in Figure 3b by a noise voltage source, Vn, in series with the ideal, noise-free resistance, Ri. At any temperature above absolute zero (0°K or about -273°C), electrons in any material are in constant random motion. Because of the inherent randomness of that motion, however, there is no detectable current in any one direction. In other words, electron drift in any single direction is cancelled over short time periods by equal drift in the opposite direction. Electron motions are therefore statistically de-correlated. There is, however, a continuous series of random current pulses generated in the material, and those pulses are seen by the outside world as a noise signal. This signal is called by several names: Johnson noise, thermal agitation noise, or thermal noise.



The expression for Johnson noise is:
Vn² = 4kTRB, or Vn = √(4kTRB)
where:
Vn is the noise voltage (V)
k is Boltzmann's constant (1.38 × 10⁻²³ J/K)
T is the temperature in kelvin (K)
R is the resistance in ohms (Ω)
B is the bandwidth in hertz (Hz)
With the constants collected, and the expression normalized to 1 kΩ, the equation reduces to a simpler numerical form.
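A minimal sketch of the Johnson (thermal) noise formula Vn = √(4kTRB); the temperature, resistance, and bandwidth below are assumed example values:

import math

K_BOLTZMANN = 1.38e-23     # J/K

def johnson_noise_v(R_ohm, bandwidth_hz, temp_k=300.0):
    return math.sqrt(4 * K_BOLTZMANN * temp_k * R_ohm * bandwidth_hz)

# A 1 kΩ resistor over a 10 kHz bandwidth at room temperature:
print(johnson_noise_v(1e3, 10e3))   # ~4.1e-7 V, i.e. about 0.41 µV rms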



Flicker or 1/F noise Semiconductor devices tend to have noise that is not flat with frequency. It rises at the low end. This is called 1/F noise, Pink Noise, Excess Noise or Flicker Noise. 1/F noise also occurs in many physical systems other than electrical. Examples are proteins, reaction times of cognitive processes, and even earthquake activity.



Noise Reduction Strategies
1. Keep the source resistance and the amplifier input resistance as low as possible. Using high-value resistances will increase thermal noise proportionally.
2. Total thermal noise is also a function of the bandwidth of the circuit. Therefore, reducing the bandwidth of the circuit to a minimum will also minimize noise. But this job must be done mindfully, because signals have a Fourier spectrum that must be preserved for accurate measurement. The solution is to match the bandwidth to the frequency response required for the input signal.
3. Prevent external noise from affecting the performance of the system by appropriate use of grounding, shielding, cabling, careful physical placement of wires, and filtering.
4. Use a low-noise amplifier in the input stage of the system.
5. For some semiconductor circuits, use the lowest DC power supply potential that will do the job.



Ground Loops and Returns
Table of Contents
• Grounding and Measurements
• Signal Sources
• Measurement Systems
• Signal Source - Measurement System Configurations
• Conclusion



Grounding and Measurements



Typical data acquisition block diagram



• Proper ground configuration is essential for a good data acquisition system. Most measurement systems, such as data acquisition devices, allow many different ground configurations depending on the type of signal being acquired or measured. This flexibility can be a source of confusion when deciding which configuration to use in a specific situation. Figure 1 below depicts the typical data acquisition system discussed in this tutorial. The blocks converting the physical phenomena into voltage signals constitute our signal source. Our measurement system consists of the signal-conditioning and data acquisition blocks. In some cases, the signal-conditioning block is also considered a signal source to the data acquisition device; this usually happens when the signal conditioning and the data acquisition devices are from different manufacturers.



Signal Sources



• Grounded or Ground-Referenced Signal Sources A grounded signal source is one in which the voltage signals are referenced to a system ground, such as earth or building ground. Note that the negative terminal of the signal source shown above is referenced to ground. The most common examples of grounded signal sources are devices, such as power supplies, oscilloscopes, and signal generators that plug into the building ground through a wall outlet, as shown in figure 3. The grounds of two independently grounded signal sources generally will not be at the same potential. The difference in ground potential between two instruments connected to the same building ground system is typically 10mV to 200mV, or even more. The difference can be higher if power distribution circuits are not properly connected.



• Ungrounded or Floating Signal Sources A floating or ungrounded signal source is one in which the voltage signal is not referenced to a system ground, such as earth or building ground. Note on Figure 2 above that neither the positive nor the negative terminal is referenced to ground for the ungrounded source. Common examples of floating signal sources are digital multimeters, batteries, thermocouples, transformers, and isolation amplifiers.



Measurement Systems • Differential Measurement System In a differential measurement system neither input to the instrumentation amplifier is referenced to a system ground. In Figure 4, each channel of the measurement system has a negative and a positive lead, none of which is connected to the measurement system ground (AIGND). Measuring a signal on channel 0 in differential mode would require connecting one lead of the signal source to CH0+ and the other lead to CH0-.



• Measurements taken in differential mode require more channels, since each measurement requires two analog input channels. However, differential mode can deliver more accurate measurements, because it allows the amplifier to reject common-mode voltage and other common-mode noise present in the signal. Common-mode voltage is any voltage present at the instrumentation amplifier inputs with respect to the amplifier ground. The formula for calculating the common-mode voltage with respect to the DAQ device ground is: Vcm = (V+ + V-) / 2 • V+ and V- are the voltages at the positive and negative terminals of the amplifier, referenced to the amplifier ground. Figure 5 shows how to measure the common-mode voltage by shorting V+ and V-, thereby making the differential voltage zero. Any voltage measured at Vout is then equal to the common-mode voltage at the input of the amplifier.



Common Mode Voltage



• An ideal differential measurement system reads only the potential difference between the positive and negative terminals of the amplifier and thus it completely rejects common-mode voltages. However, practical devices are limited in their ability to reject common-mode voltage. You can calculate the amount of common-mode voltage your system can reject with the following formula: Vcm(max) = {MVW - [(Vdiff(max)) x (Amplification)]}/2 • MVW is the Maximum Working Voltage listed in the device specifications • Vdiff(max) is the maximum expected difference between the amplifier terminals • Amplification is device gain setting
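A small sketch of the two common-mode formulas above (the voltages and device limits below are arbitrary example values, not from any specific device specification):

def common_mode_voltage(v_plus, v_minus):
    return (v_plus + v_minus) / 2.0          # Vcm = (V+ + V-)/2

def max_common_mode(mvw, v_diff_max, amplification):
    return (mvw - v_diff_max * amplification) / 2.0

print(common_mode_voltage(2.3, 2.1))        # 2.2 V common-mode voltage
print(max_common_mode(10.0, 0.1, 10.0))     # 4.5 V of rejectable common-mode voltage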



Single-Ended Measurement System • Single-ended is the default configuration for most data acquisition devices, modular instruments, and stand-alone devices. Figure 6a shows an example of a single-ended configuration. In contrast to differential mode, each channel in single-ended mode requires only a single analog input channel; the second lead is a common lead shared by all channels. This configuration has the advantage that the available channel count is doubled, because all the channels that were used in differential mode as negative inputs are now available. However, single-ended systems are very susceptible to ground loops; for more information about removing ground loops from your system, please read the Measurement Configuration section below. There are essentially two main types of single-ended measurement systems: • Ground-Referenced Single-Ended (GRSE), or simply Referenced Single-Ended (RSE), refers to single-ended systems in which the common channel is connected to ground. Figure 6a below depicts a referenced single-ended system in which all the channels are referenced to AIGND, which represents the system ground.



Ground Referenced Single-Ended (GRSE) or Referenced Single Ended (RSE)



• In a Non-Referenced Single-Ended (NRSE) system, all the channels are still referenced to a common point. However, the common channel in this case is not grounded. Figure 6b below shows an example of a non-referenced single-ended system where all the channels are referenced to AISENSE, which is not connected to the system ground.



Signal Source - Measurement System Configurations



Measurement System with a Ground Loop











Measuring Grounded Signal Sources A grounded signal source is best measured with a differential or non-referenced measurement system. Figure 7 shows the pitfall of using a ground-referenced measurement system to measure a grounded signal source. In this case, the measured voltage, Vm, is the sum of the signal voltage, Vs, and the potential difference, ΔVg, that exists between the signal source ground and the measurement system ground. This potential difference is generally not a DC level; thus, the result is a noisy measurement system often revealing power-line frequency (60 Hz) components in the readings. As mentioned earlier, there can exist up to 200 mV difference between two ground connections. This difference causes a current called ground loop current to flow in the interconnection which can greatly affect measurements causing offset errors, especially when measuring low level signals from sensors. A ground-referenced system is an acceptable solution if the signal voltage levels are high and the interconnection wiring between the source and the measurement device has a low impedance. In this case, the signal voltage measurement is degraded by ground loops, but the degradation may be tolerable. The polarity of a grounded signal source must be carefully observed before connecting it to a ground-referenced measurement system because the signal source can be shorted to ground, thus possibly damaging the signal source.







Measuring Floating (Non-referenced) Sources Floating signal sources can be measured with both differential and single-ended measurement systems. In the case of the differential measurement system, however, care should be taken to ensure that the common-mode voltage level of the signal with respect to the measurement system ground remains in the common-mode input range of the measurement device. A variety of phenomena – for example, the instrumentation amplifier input bias currents – can move the voltage level of the floating source out of the valid range of the input stage of a data acquisition device. To anchor this voltage level to some reference, resistors are used. These resistors, called bias resistors, provide a DC path from the instrumentation amplifier inputs to the instrumentation amplifier ground. These resistors should be of a large enough value to allow the source to float with respect to the measurement reference (AIGND in the previously described measurement system) and not load the signal source, but small enough to keep the voltage in the range of the input stage of the device. Typically, values between 10 kΩ and 100 kΩ work well with low-impedance sources such as thermocouples and signal conditioning module outputs. These bias resistors are connected between each lead and the measurement system ground. Failure to use these resistors may result in erratic or saturated (positive full-scale or negative full-scale) readings.



• If the input signal is DC-coupled, only one resistor connected from the negative (-) input to the measurement system ground is required to satisfy the bias current path requirement, but this leads to an unbalanced system if the source impedance of the signal source is relatively high. Balanced systems are desirable from a noise-immunity point of view. Consequently, two resistors of equal value, one for the signal high (+) input and the other for the signal low (-) input to ground, should be used if the source impedance of the signal source is high. A single bias resistor is sufficient for low-impedance DC-coupled sources such as thermocouples. Balanced circuits are discussed further later in this application note. If the input signal is AC-coupled, two bias resistors are required to satisfy the bias current path requirement of the instrumentation amplifier. If the single-ended input mode is to be used, a GRSE input system (Figure 8a) can be used for a floating signal source. No ground loop is created in this case. The NRSE input system (Figure 12b) can also be used and is preferable from a noise-pickup point of view. Floating sources do require bias resistor(s) between the AISENSE input and the measurement system ground (AIGND) in the NRSE input configuration.



Floating Signal Source and Single-Ended Configurations



Taking Thermocouple Temperature Measurements
Table of Contents
• What Is Temperature?
• What Is a Thermocouple?
• Thermocouple Measurement and Signal Conditioning
• DAQ Systems for Thermocouple Measurements
• Relevant NI Products



What Is Temperature? • Qualitatively, the temperature of an object determines the sensation of warmth or coldness felt by touching it. More specifically, temperature is a measure of the average kinetic energy of the particles in a sample of matter, expressed in units of degrees on a standard scale.



Reference Temperatures We must rely upon temperatures established by physical phenomena which are easily observed and consistent in nature. The International Practical Temperature Scale (IPTS) is based on such phenomena. Revised in 1968, it establishes eleven reference temperatures.



Sources of temperature measurement error



What Is a Thermocouple? •



One of the most frequently used temperature sensors is the thermocouple. Thermocouples are very rugged, inexpensive devices that operate over a wide temperature range. A thermocouple is created whenever two dissimilar metals touch and the contact point produces a small open-circuit voltage as a function of temperature. This thermoelectric voltage is known as the Seebeck voltage, named after Thomas Seebeck, who discovered it in 1821. The voltage is nonlinear with respect to temperature. However, for small changes in temperature, the voltage is approximately linear, or



ΔV = α · ΔT
where ΔV is the change in voltage, α is the Seebeck coefficient, and ΔT is the change in temperature.



Measuring Thermocouple Voltage



The Reference Junction



RdF Surface Temperature Sensors



Hardware Compensation



Cold Junction Compensator (CJC)



Thermocouple Types

Thermocouple Type   Conductor – Positive             Conductor – Negative
B                   Platinum-30% rhodium             Platinum-6% rhodium
E                   Nickel-chromium alloy            Copper-nickel alloy
J                   Iron                             Copper-nickel alloy
K                   Nickel-chromium alloy            Nickel-aluminum alloy
N                   Nickel-chromium-silicon alloy    Nickel-silicon-magnesium alloy
R                   Platinum-13% rhodium             Platinum
S                   Platinum-10% rhodium             Platinum
T                   Copper                           Copper-nickel alloy



• Sealed and Isolated from Sheath: Good, relatively trouble-free arrangement. The principal reason for not using this arrangement for all applications is its sluggish response time; the typical time constant is 75 seconds.
• Sealed and Grounded to Sheath: Can cause ground loops and other noise injection, but provides a reasonable time constant (40 seconds) and a sealed enclosure.
• Exposed Bead: Faster response time constant (typically 15 seconds), but lacks mechanical and chemical protection and electrical isolation from the material being measured. The porous insulating mineral oxides must be sealed.
• Exposed Fast Response: Fastest response time constant, typically 2 seconds, but with fine-gauge junction wire the time constant can be 10-100 ms. In addition to the problems of the exposed bead type, the protruding and light construction makes the thermocouple more prone to physical damage.



Thermopile



• Resistance temperature detectors (RTDs) operate on the inherent propensity of metal to exhibit a change in electrical resistance as a result of a change in temperature. We are all aware that metals are conductive materials. It is actually the inverse of a metal's conductivity, its resistivity, that brought about the development of RTDs. Each metal has a specific and unique resistivity that can be determined experimentally. This resistance, R, is directly proportional to a metal wire's length, L, and inversely proportional to its cross-sectional area, A:

R = ρ L / A   (1)

where ρ = the constant of proportionality, or the resistivity of the material.



• RTDs are manufactured from metals whose resistance increases with temperature. Within a limited temperature range, this resistivity increases linearly with temperature:



Rt = R0 [1 + α (t – t0)]
where:
Rt = resistance at temperature t
R0 = resistance at reference temperature t0
α = temperature coefficient of resistance
Setting t0 to 0°C gives Rt = R0 (1 + α·t).



• Platinum RTDs are made of either IEC/DIN grade platinum or reference grade platinum. The difference lies in the purity of the platinum. The IEC/DIN standard is pure platinum that is intentionally contaminated with other platinum group metals. The reference grade platinum is made from 99.999+% pure platinum. Both probes will read 100 Ω at 0°C, but at 100°C the DIN grade platinum RTD will read 138.5 Ω and the reference grade will read 139.24 Ω in RdF's maximum-performance strain-free assemblies. International committees have been established to develop standard curves for RTDs. Only platinum RTDs have an international standard; standards for any other metal are local. The committees have adopted a mean temperature coefficient between the 0°C and 100°C resistance values as the α ("alpha") for industrial platinum RTDs conforming to the relationships below.



• IEC/DIN grade platinum: α = 0.00385 Ω/Ω/°C • Reference grade platinum: α = 0.003926 Ω/Ω/°C (max.) • The relationship between resistance and temperature can be approximated by the Callendar-Van Dusen equation:
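A minimal sketch of the linear approximation Rt = R0(1 + α·t) using the two alpha values quoted above; the full Callendar-Van Dusen polynomial is not reproduced here:

def rtd_resistance_linear(t_c, r0=100.0, alpha=0.00385):
    # Linear RTD approximation; alpha defaults to the IEC/DIN grade value.
    return r0 * (1.0 + alpha * t_c)

print(rtd_resistance_linear(100.0))                    # 138.5 Ω (IEC/DIN grade)
print(rtd_resistance_linear(100.0, alpha=0.003926))    # ~139.3 Ω (reference grade, linear approx.)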



RTD Materials



The coiled element sensor, made by inserting the helical sensing wires into a packed powder-filled insulating mandrel, provides a strain-free sensing element.



The thin film sensing element is made by depositing a thin layer of platinum in a resistance pattern on a ceramic substrate. A glassy layer is applied for seal and protection.



RTD Construction



RTD Measurement Configurations:



Cable Lead : Source of Error



Three Wire Configuration



Four Wire Configuration



Specifications
When discussing RTDs, several specifications must be considered:
• Wiring configuration (2-, 3-, or 4-wire)
• Self-heating
• Accuracy
• Stability
• Repeatability
• Response time



Wiring Configuration Two-wire RTDs are typically used only with very short lead wires, or with a 1000Ω element.



Resistances L1 and L3 in leads up to tens of feet long usually match well enough for 100 ohm three-wire RTDs. The worst case is a resistance offset equal to 10% of the single-lead resistance. The optimum form of connection for RTDs is a four-wire circuit. It removes the error caused by mismatched resistance of the lead wires. A constant current is passed through L1 and L4; L2 and L3 measure the voltage drop across the RTD. With a constant current, the voltage is strictly a function of the resistance and a true measurement is achieved.



Two-wire configuration



The simplest resistance thermometer configuration uses two wires. It is only used when high accuracy is not required, as the resistance of the connecting wires is added to that of the sensor, leading to errors of measurement. This configuration allows the use of up to 100 meters of cable. This applies equally to balanced bridge and fixed bridge systems.



Three-wire configuration



In order to minimize the effects of the lead resistances, a three-wire configuration can be used. Using this method the two leads to the sensor are on adjoining arms. There is a lead resistance in each arm of the bridge so that the resistance is cancelled out, so long as the two lead resistances are accurately the same. This configuration allows up to 600 meters of cable



Four-wire configuration



The four-wire resistance thermometer configuration increases the accuracy and reliability of the resistance being measured: the resistance error due to lead wire resistance is zero. In the diagram above a standard two-terminal RTD is used with another pair of wires to form an additional loop that cancels out the lead resistance



four-wire Kelvin connection



It provides full cancellation of spurious effects; cable resistance of up to 15 Ω can be handled.



• Self-Heating (I²R): The amount of self-heating also depends heavily on the medium in which the RTD is immersed. An RTD can self-heat up to 100× more in still air than in moving water.



Accuracy/Interchangeability, Stability & Repeatability •







These terms are often confused, but it is important to understand the difference.
Accuracy/Interchangeability: IEC standard 751 sets two tolerance classes for the interchangeability of platinum RTDs, Class A and Class B:
Class A: Δt (°C) = ±(0.15 + 0.002 · |t|)
Class B: Δt (°C) = ±(0.30 + 0.005 · |t|)
where |t| = absolute value of temperature in °C.
Class A applies to temperatures from –200°C to 650°C, and only for RTDs with three- or four-wire configurations. Class B covers the entire range from –200°C to 850°C.
A major advantage of platinum RTDs is that calibration at as few as two temperatures offers accuracy, preserved by high stability, much tighter than even Class A interchangeability. No other temperature sensor offers stability specifications (see following) that will preserve laboratory accuracy embedded in calibrations over long time periods, wide temperature ranges, and every configuration. Primary Standard Resistance Temperature Sensors (SPRTs) are platinum for good reason.
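A small sketch of the IEC 751 interchangeability tolerances quoted above:

def rtd_tolerance_c(t_c, tolerance_class="A"):
    # Class A: ±(0.15 + 0.002·|t|), Class B: ±(0.30 + 0.005·|t|), t in °C.
    if tolerance_class == "A":
        return 0.15 + 0.002 * abs(t_c)
    return 0.30 + 0.005 * abs(t_c)

print(rtd_tolerance_c(100.0, "A"))   # ±0.35 °C at 100 °C
print(rtd_tolerance_c(100.0, "B"))   # ±0.80 °C at 100 °C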



• Stability: This is the sensor's ability to maintain a consistent output when a constant input is applied. Physical or thermal shocks can cause small, one-time shifts. The material that the platinum is adhered to, when wound on a mandrel or deposited on a substrate, can expand and contract differentially; this causes strain that is incorporated in normal performance but does not cause shifts. Stability limits conservatively specified by RdF are typically 0.05°C/yr over wide temperature ranges or 0.05°C/5 yr over medium ranges.
• Repeatability: Repeatability is the sensor's ability to give the same output or reading under repeated, identical conditions. In platinum RTDs, cycle-to-cycle differences normally cannot be measured and are considered lumped into the stability specifications. Absolute accuracy is not necessary in most applications; the focus should be on the stability and repeatability of the sensor. If an RTD in a 100.00°C bath consistently reads 100.06°C, the electronics can easily compensate for this error. The stability of platinum RTDs is exceptional, with most experiencing very low drift rates.



Figure: flow and pressure example (Flow = F2 in m³/min; pressures P1 and P2 in kPa)