When performing any measurement, whether it be gauging the clearance on a component or performing a power run on a dyno, you want both accuracy and repeatability. But what do these terms actually mean? People often confuse the two and use one in place of the other, but from a testing standpoint the differences in their meaning, and their impact on test results, are considerable.
The dictionary definition of accuracy is: “The degree to which the result of a measurement, calculation, or specification conforms to the correct value or a standard”, whereas for repeatability it is: “The ability of a measuring instrument to repeat the same results during the act of measurement”.
From these definitions it is clear that if a machine's results are described as repeatable, it does not necessarily mean they are accurate, because no reference to accuracy is made. It is perfectly feasible to have a machine that reads 1 mm as 5 mm every single time; it is repeatable, but definitely not accurate.
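The distinction can be made concrete with a short simulation. The sketch below (illustrative figures, not real instrument specs) models two hypothetical gauges measuring a true value of 1 mm: one with a large fixed offset but tiny scatter, the other with no offset but large scatter. The first is repeatable but not accurate; the second is accurate on average but not repeatable.

```python
import random

random.seed(0)
TRUE_VALUE = 1.0  # mm, the quantity actually being measured

def biased_gauge(true_value):
    """Repeatable but not accurate: tiny scatter, large fixed offset."""
    return true_value + 4.0 + random.gauss(0, 0.001)

def noisy_gauge(true_value):
    """Accurate on average but not repeatable: no offset, large scatter."""
    return true_value + random.gauss(0, 0.5)

def summarise(gauge, n=1000):
    """Return the mean reading and its standard deviation over n measurements."""
    readings = [gauge(TRUE_VALUE) for _ in range(n)]
    mean = sum(readings) / n
    spread = (sum((r - mean) ** 2 for r in readings) / n) ** 0.5
    return mean, spread

for name, gauge in [("biased", biased_gauge), ("noisy", noisy_gauge)]:
    mean, spread = summarise(gauge)
    print(f"{name}: mean = {mean:.3f} mm, spread = {spread:.3f} mm")
```

The biased gauge reports roughly 5 mm every time with almost no spread, exactly the "reads 1 mm as 5 mm" case above, while the noisy gauge averages out near 1 mm but no two readings agree.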
Accuracy in engineering terms is gauged in terms of +/- tolerance. For example, a micrometer can be accurate to within +/- 0.005 mm, so measurements taken will never be more than 0.005 mm from the true value. The smaller the tolerance, the higher the accuracy, and the same principle can be applied to weights, power outputs and so on. From this, it is safe to presume that an accurate machine is also a repeatable one, otherwise it could not be classed as accurate.
As a general rule, accuracy is achieved by measuring something as directly as possible. For simple distance or clearance measurements, this is easily achieved by using a micrometer or verniers. For something like the power output of an engine, though, it is a little more complex. The most accurate method is to measure at the crankshaft, provided of course that the measuring equipment used operates to tight tolerances, as this is the most direct connection possible. However, if you want to measure the power output at the wheels, taking into account the parasitic losses present in the drivetrain, the most direct method of attachment would be to the hubs. If the drivetrain is already installed in a vehicle, removing the wheels from the equation increases the accuracy, as factors such as tyre carcass distortion and grip between the tyres and dyno rollers will not come into play. For every link in the measurement chain there will be an added tolerance, and accuracy will fall.
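That last point can be put into numbers. A common way to combine the tolerances of the links in a measurement chain is either a worst-case sum or a root-sum-square (RSS), which assumes the individual errors are independent. The per-link figures below are purely illustrative, not the specification of any real dyno.

```python
# Hypothetical per-link tolerances in a roller-dyno measurement chain,
# expressed as +/- percent of reading (illustrative values only).
chain = {
    "load cell": 0.25,
    "roller speed pickup": 0.10,
    "tyre/roller interface": 1.50,
    "signal conditioning": 0.20,
}

# Worst case: every link errs the same way at its tolerance limit.
worst_case = sum(chain.values())

# Root-sum-square: the usual estimate when errors are independent.
rss = sum(t ** 2 for t in chain.values()) ** 0.5

print(f"worst case: +/-{worst_case:.2f}%   RSS: +/-{rss:.2f}%")
```

Either way, notice how the tyre/roller interface dominates the total: removing that one link, as hub attachment does, shrinks the overall tolerance far more than improving any other component would.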
Both accuracy and repeatability can be affected by factors in the measurement chain – or those that are external to the chain but have an impact on the end result – and which vary depending on testing conditions. These can include, but are not limited to:
Environmental – inlet air temperature, measurement point, atmospheric pressure, weather/altitude, relative humidity and geographic location/season
Other – fuel type and quality, octane number/energy content, fuel temperature, drivetrain temperature, engine cooling fluids, gearbox oil and rear axle oil
While it is nigh-on impossible to control all these factors precisely, even given a state-of-the-art test cell, variations must be accounted for in order to achieve both accurate and repeatable results. This is where correction factors come into play, and most dynamometer systems will have these calculations built into their processing systems. However, this introduces yet another link in the measurement chain, with the accuracy of components such as barometric pressure sensors influencing final power output figures.
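As an example of the kind of calculation involved, one widely used atmospheric correction is the SAE J1349 factor, which normalises power to reference conditions of 99 kPa dry air pressure and 25 deg C intake air temperature. The sketch below uses the commonly published form of the formula; the measured power figure is made up for illustration.

```python
def sae_j1349_cf(dry_pressure_kpa, intake_temp_c):
    """Approximate SAE J1349 atmospheric correction factor.

    Reference conditions: 99 kPa dry air pressure, 25 deg C intake air,
    at which the factor is exactly 1.0.
    """
    return (1.18 * (99.0 / dry_pressure_kpa)
            * ((intake_temp_c + 273.0) / 298.0) ** 0.5
            - 0.18)

measured_kw = 185.0  # hypothetical observed power on the day
cf = sae_j1349_cf(dry_pressure_kpa=97.0, intake_temp_c=32.0)
print(f"CF = {cf:.3f}, corrected power = {measured_kw * cf:.1f} kW")
```

On a warm, low-pressure day the factor comes out a few percent above 1.0, scaling the observed figure up to what the engine would nominally produce at reference conditions. It also shows why sensor accuracy matters: an error in the barometric pressure or inlet temperature reading feeds straight through the correction into the quoted power figure.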
Written by Lawrence Butcher