Analog vs digital signal / gain amplifiers

Most scale concepts I have seen so far use a digital amplifier like the HX711. Why is this? What are the benefits compared to an analog amplifier like the MCP3424?

The linked MCP3424 is very similar to the HX711 or ADS1231! It is not just a “normal” analog amplifier. The description shows the two functional parts of the IC:

  • Programmable Gain Amplifier (PGA)
  • 18 bit ADC
  • 4 channels (not so important for your question)

So it consists of (1) an “amplifier”, which has no fixed amplification factor but is programmable via code. In the spec you can read that the possible factors are x1, x2, x4, x8 (let’s note this for later!). And it has (2) an additional analog-to-digital converter (ADC) with 18 bits!

Similar for the HX711

  • PGA, but 32, 64 and 128
  • 24 bit ADC
  • 2 channels

So we have a much higher amplification on this chip and a deeper resolution, but only 2 channels, and by the way, the 2nd channel is only readable with a fixed gain of 32. An amplification of x32 is too low for our setup! So we can use only one channel. That also means that the maximum amplification factor of 8 on the MCP3424 is even lower, so the MCP3424 is no option for us!

In comparison the ADS1231:

  • fixed gain x128
  • 24 bit ADC
  • 1 channel

or the ADS1232 or ADS1234 with 2 or 4 channels.

About your initial question: Yes, you can use a normal instrumentation amplifier, e.g. the INA125, and read out the load with an Arduino, but in the end you will have a maximum resolution of 10 bits, which is the Arduino’s ADC. So in case we have a 200 kg maximum load and a 10 bit (1024-step) ADC, we can measure in steps of 200 kg / 1024 = 0.195 kg, or roughly 200 g! Our load cell can do better, so we should use a “better” ADC. Because the Arduino’s ADC is not “pimpable” (or at least not to our target resolution), we have to count on an additional external ADC, in our case in a single package with the amp.
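A quick Python sketch of the step-size arithmetic above, using the example numbers from this post (200 kg full scale, Arduino’s 10-bit ADC vs. a 24-bit converter):

```python
def step_size_g(max_load_kg, adc_bits):
    """Smallest distinguishable load change in grams (ideal, noise-free)."""
    return max_load_kg * 1000 / 2**adc_bits

print(step_size_g(200, 10))  # Arduino 10-bit ADC: ~195 g per count
print(step_size_g(200, 24))  # 24-bit ADC (HX711/ADS1231): ~0.012 g per count (ideal only!)
```

As @weef explains further down, the 24-bit figure is purely theoretical; noise and drift eat most of it.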

Great clear explanation. Thank you very much.

How can I compute the resolution of the load cell? What role does the gain factor play in these calculations?
I want to calculate the minimum resolution of the load cell we can measure.

Found some explanation:

Regarding accuracy I think we have to look at different sides:

Load Cell itself

On the Bosche datasheet for the H30a you can read “Genauigkeitsklasse C3, Y=7.500” (accuracy class C3). As far as I know this means the maximum load divided by Y, so it depends on the maximum load and differs per cell: a 50 kg load cell gives 50 kg / 7500 = 7 g vs. a 200 kg load cell 200 kg / 7500 = 27 g. I think it is not the resolution the cell can achieve but a more or less “guaranteed” reliable accuracy.
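The accuracy-class arithmetic from this paragraph as a tiny Python helper (Y = 7500 as on the Bosche datasheet):

```python
def class_accuracy_g(max_load_kg, y=7500):
    """'Guaranteed' accuracy of a load cell: maximum load divided by Y, in grams."""
    return max_load_kg * 1000 / y

print(round(class_accuracy_g(50)))   # 50 kg cell: ~7 g
print(round(class_accuracy_g(200)))  # 200 kg cell: ~27 g
```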

Input Voltage

If you have a higher input voltage you will get a higher output voltage (for the same load). So you can increase the resolution, or rather expand the output range in millivolts; the downside is that you have to provide a second voltage rail beside the 3.3 V or 5 V for the microcontroller, and you have higher power consumption. So this is not the way for me.

There are load cells with e.g. 1 mV output per V input, or 2 mV/V. This must fit the gain factor of the ADC.

Gain Factor

The amplifier must have a gain that fits the ADC and its input voltage in the end. If it is too low, you can only measure the lower range of the scale; if it is too high, it could exceed the maximum input voltage of the ADC.

Resolution of the ADC

As explained above, for our scenario and the wanted resolution a 10-bit ADC is not sufficient.

Do you want to know the “minimum resolution” or the maximum resolution?

tl;dr The analog front end features 24 bits (about 10 mg @ 150 kg fs), but real-world effects demand at least half the effective digital resolution or worse (50…200 g per count @ 150 kg fs). If not counteracted, especially the impact of temperature on various parameters of the measurement in fact dictates your usable resolution, erm, accuracy.

Let’s use a sensitivity many load cells offer: 2 mV/V (nom.). That means: per volt of excitation voltage, a load at the specified maximum (e.g. 150 kg) on the load cell produces a 2 mV output voltage change. Given an excitation voltage of 5 V, this computes to 10 mV - which is the analog dynamic range or full scale (fs) of the sensor.

An ADS1232/ADS1234 has a maximum differential input range of ±0.5VREF/gain, and with PGA gain=128 and ratiometric measurement (AVDD=REFP=5V, REFN=(A)GND, resulting in 5V VREF), you get almost 40mV differential input range (0.0390625V) - this maps to the digital fs of 24bit (a PGA gain of 32 would here be a perfect match).

With this scale configuration a 150 kg load means 10 mV output (change), and it also means we lose 2 bits: log₂(2²⁴/4) = 22 bits, because of the 40 mV ADC fs. 150 kg over 22 bits is 0.03576 g, or 36 mg - one third of a worker bee.
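The full-scale mapping above, sketched in Python (values taken from this example: 2 mV/V cell, 5 V excitation, ADS1232-style input range of ±0.5·VREF/gain with VREF = 5 V and PGA gain 128):

```python
import math

sensitivity_mV_per_V = 2.0
excitation_V = 5.0
vref_V = 5.0
gain = 128
adc_bits = 24

cell_fs_mV = sensitivity_mV_per_V * excitation_V   # 10 mV output at 150 kg
adc_range_mV = vref_V / gain * 1000                # ~39.06 mV ADC input span
bits_lost = math.log2(adc_range_mV / cell_fs_mV)   # ~2 bits: sensor uses 1/4 of the span
usable_bits = adc_bits - round(bits_lost)          # 22 bits
print(150_000 / 2**usable_bits)                    # ~0.0358 g per count (ideal)
```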

Now to the downsides: look in the datasheet for the effective number of bits (ENOB, ENB) and the noise-free bits (NFB) here (ADS1232/4): 21.1 resp. 18.4 bits (AVDD=VREF=5V, ADC data rate=10sps; datasheet, p. 5).

To compare this: the RMS noise in this case is 17 nV (datasheet value), and one LSB here is ‘worth’ 2.384 nV (10 mV / 2²²). That means we need about seven times the voltage one bit is worth to get a usable resolution out of the noise, and this is how more bits of resolution slip away: log₂(2²² · (2.384 nV / 17 nV)) = 19.1 bits are left for the desired signal here. Whether taken from the datasheet or calculated: in this configuration you can distinguish 18 bits (18.4) above the ADC-introduced noise.

150kg/2^18bit = 0.572g - a five worker bees resolution remains. (17bits here would be 1.2g resolution, 16bits 2.4g and so on.)
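The noise-floor arithmetic from the last two paragraphs, as a Python sketch (17 nV RMS datasheet noise, 10 mV sensor full scale spread over the 22 usable bits):

```python
import math

rms_noise_nV = 17.0
lsb_nV = 10e6 / 2**22               # one LSB: 10 mV in nV, divided by 2^22 -> ~2.384 nV
effective_bits = 22 + math.log2(lsb_nV / rms_noise_nV)
print(effective_bits)                # ~19.2 bits remain above the noise floor
print(150_000 / 2**18)               # 18 noise-free bits -> ~0.572 g per count
```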

This is all only true for one given moment, at a constant (known or unknown) temperature of all parts involved. But the actual sensor, a strain-gauge weight scale, suffers from many analog real-world effects: gain errors, offset errors, non-linearities in the transfer function (btw, your ADC front end does all this too, but less badly…), combined with hysteresis/repeatability problems - and they all vary with temperature, some even with aging (most underestimated: creep effects)! Any two-point ‘calibration’ is at first only valid for a given temperature. If our 150 kg scale has a temperature coefficient of 0.005% fs/K (or 50 ppm), then every degree Celsius skews the readout value by 7.5 g. When you first set up the scale at 25 °C, in winter at -10 °C any readout is at least about 260 g less (or more, it depends) than the same readout in summer - that means a 2000-bee error, and this was only the influence of offset drift; we didn’t even mention Seebeck and parasitic thermocouples (here, there…) yet…
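The offset-drift numbers above in one short Python sketch (assumed tempco of 50 ppm fs/K, 150 kg full scale, setup at 25 °C, readout at -10 °C):

```python
fs_g = 150_000                                   # full scale in grams
tempco_ppm_per_K = 50                            # 0.005 % fs/K
drift_per_K_g = fs_g * tempco_ppm_per_K / 1e6    # 7.5 g error per degree Celsius
print(drift_per_K_g)                             # 7.5
print(drift_per_K_g * (25 - (-10)))              # 35 K swing -> ~262.5 g offset error
```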

If our 150 kg fs cell has an accuracy class of Y=7500 (see @clemens’ posting above), every count is worth 20 g - but only at one and the same temperature! Of course this analog sensor is able to produce any analog voltage (in its output range), but as a digital scale, all the analog output values are put into 7500 bins: higher digital resolution does not deliver higher analog resolution. Only 13 bits (12.87) are needed to code 7500 counts.
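The binning argument in Python: 7500 accuracy-class counts need only about 13 bits, regardless of how many bits the ADC delivers.

```python
import math

counts = 7500
fs_g = 150_000
print(math.log2(counts))      # ~12.87 -> 13 bits suffice to encode 7500 bins
print(fs_g / counts)          # 20 g per count for a 150 kg cell
```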

So your usable resolution is in the best case 13 bits - here 20 g. If no temperature-compensation countermeasures took place, and the remaining errors are taken into consideration (also those of the ADC front end’s analog parts), and the creep influence is observed, over the year your scale might be off by 100 times that value - 2 kg and worse (more than 1% fs here), but still with the same relative precision…

Some commercial hive scales’ datasheets expose a resolution of 100 or 200 g (11 or 10 bits); this is also a way to cover a lot of other strain-gauge-scale-introduced errors. The linear slope-and-offset model works, but it is only one truth.

@juergen: you see, our analog front end comes with 24 bits, and we end up with 10 bits, or worse in the extremes. We waste half the ADC resolution to overcome those particular uncertainties of the analog weight sensor we chose. And even maintaining a real 13-bit resolution (here Y=7500, or 20 g) over a 50 K temperature range with a predictable absolute error is not easy. We could measure gravity ahead of every weight measurement, but tackling the huge complex of ‘drift of most parameters with temperature and their drift in time (creep)’ is much more promising! :)

Beekeepers with brood all year round have much smaller environmental temperature changes and hence ‘better’ accuracy (more reliable in terms of measurement uncertainty).


@weef many thanks for these great explanations on the topic. I haven’t fully understood the calculations yet, but I’ll take a closer look at them again in a quiet minute. I have one more request: could you briefly write something about my question here: HX712 24-Bit ADC? Sorry for being so demanding; I hope I can return the favor some day.