Friday, January 9, 2009

Scale Calibration

The procedure for calibrating a scale depends heavily on the nature and structure of the scale. Often the instruction manual for the scale will suggest the best approach for that type of scale. In addition, the exact keystroke procedure on the indicator is completely dependent on the indicator type, so the indicator's manual must be consulted.

The general procedure is to empty the scale and tell the indicator that it is empty, at which time the indicator will acquire the current load cell output as its zero reference. Then a known, calibrated weight is applied to the scale and that amount is typed into the indicator keyboard. The indicator measures the change in millivolt output from no load to the test weight load and calculates the millivolts-per-pound ratio (span), which it will use from then on to convert the voltage output of the load cells into a displayed weight.
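To make the arithmetic concrete, here is a minimal sketch of the zero/span math in Python. The function names and the millivolt figures are illustrative assumptions, not values from any particular indicator.

```python
# Two-point (zero/span) calibration math, illustrative only.

def calibrate(mv_at_zero, mv_at_test, test_weight_lb):
    """Return a converter from a millivolt reading to pounds."""
    span = (mv_at_test - mv_at_zero) / test_weight_lb   # mV per pound
    def to_weight(mv_reading):
        return (mv_reading - mv_at_zero) / span
    return to_weight

# Example: 0.02 mV with the scale empty, 12.52 mV with a 5,000 lb test weight applied.
to_weight = calibrate(mv_at_zero=0.02, mv_at_test=12.52, test_weight_lb=5000)
print(to_weight(6.27))   # about 2,500 lb
```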

The calibration process requires certain selections and decisions to be made. Some of the more important ones, and the related terms, are as follows:

A. Capacity
This is the maximum amount of weight that is intended to be put on the scale, generally limited by the scale design. Specifying the capacity generally has an effect on two things:
1) Overload indication – The scale indicator will show an error when the weight exceeds this amount;
2) Full Scale Counts – the total number of different weights the scale can display (see below for further discussion).
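As a rough sketch, the overload indication is just a comparison against the specified capacity; the 10,000 lb figure below is an assumed example.

```python
# Illustrative overload check against the specified capacity.

def overload(weight_lb, capacity_lb):
    return weight_lb > capacity_lb

print(overload(10250, capacity_lb=10000))   # True -> indicator shows an overload error
```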

B. Graduation Size
This is also referred to as the Increment Size or Displayed Division (dd). It is the smallest amount of weight change that the indicator can display, and is always a 1, 2, or 5 multiplied or divided by a power of ten (for example 0.05, 0.1, 0.2, 0.5, 1, 2, 5, 10, 20). The smaller the graduation size, the more precise the scale, but also the more unstable it will be.
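A minimal sketch of how a reading is snapped to the nearest displayed division; the function name and values are assumptions for illustration.

```python
# Snap an internal weight to the nearest graduation (displayed division), illustrative only.

def display_weight(raw_lb, graduation_lb):
    return round(raw_lb / graduation_lb) * graduation_lb

print(display_weight(1234.37, 0.5))   # 1234.5
print(display_weight(1234.37, 2))     # 1234
```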

C. Test Weight Amount
This is the amount of known test weight applied to the scale to span it. There are several schools of thought on the proper test weight amount: some say it should equal the capacity of the scale, others that it should equal the amount the scale normally weighs. The answer depends on how the scale is used. If it typically weighs about the same amount each time, it should be calibrated at that level. If it is used to weigh a wide range of weights, then a test weight in the middle of the weighing range will provide the best linearity over the entire range. In all cases the scale should be tested at the maximum and minimum of its range after calibration.

D. Motion Detection
This is a setting used by the weight indicator to determine when the scale has settled. It is used mostly in legal for trade applications and prevents the scale from performing certain functions until it has settled. These functions are typically Tare, Push Button Zero, Printing, and Auto Zero Acquisition. A typical legal for trade setting is 3 displayed divisions.
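One simple way to picture motion detection is a band check over the last few readings. This sketch assumes a 3 dd band and a handful of recent samples; it is not any particular indicator's algorithm.

```python
# Illustrative motion detection: settled when recent readings stay within N displayed divisions.

def is_settled(recent_readings_lb, graduation_lb, motion_band_dd=3):
    band = motion_band_dd * graduation_lb
    return (max(recent_readings_lb) - min(recent_readings_lb)) <= band

readings = [1002.0, 1002.5, 1001.5]              # last few samples, lb
print(is_settled(readings, graduation_lb=0.5))   # True: 1.0 lb spread <= 1.5 lb band
```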

E. Zero Range
This specifies the percentage of the scale's capacity that can be zeroed using the front panel push button. For legal for trade applications this is typically limited to 2%; however, in process applications it can be set much higher, and 20% is not unusual.
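A sketch of the push button zero limit, assuming a 2% legal for trade zero range; the capacity figure is an example.

```python
# Illustrative push button zero check against the configured zero range.

def can_push_button_zero(current_lb, capacity_lb, zero_range_pct=2.0):
    return abs(current_lb) <= capacity_lb * zero_range_pct / 100.0

print(can_push_button_zero(150, capacity_lb=10000))   # True  (within +/- 200 lb)
print(can_push_button_zero(250, capacity_lb=10000))   # False (outside 2% of capacity)
```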

F. Auto Zero Maintenance Range (AZM)
Most scale indicators have a feature that allows them to automatically re-zero themselves (the same as pressing the zero button) when they are stable and close enough to zero. This setting determines how close to zero the scale has to be before it will automatically “push the zero button”. Typically this is set to less than 3 dd. In process weighing (such as batching) this feature is usually turned off, as it can cause inaccurate weighing.
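The AZM decision can be sketched as follows; the function name, the 3 dd window, and the enabled flag are illustrative assumptions rather than a specific indicator's settings.

```python
# Illustrative AZM logic: re-zero only when stable, enabled, and within the AZM window.

def azm_should_rezero(current_lb, graduation_lb, settled, azm_window_dd=3, enabled=True):
    if not enabled or not settled:        # usually disabled in batching/process weighing
        return False
    return abs(current_lb) < azm_window_dd * graduation_lb

print(azm_should_rezero(0.8, graduation_lb=0.5, settled=True))   # True  (< 1.5 lb window)
print(azm_should_rezero(2.0, graduation_lb=0.5, settled=True))   # False
```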

G. Full Scale Counts
This value is calculated internally by the indicator after calibration and is equal to the specified capacity divided by the graduation size. Basically it represents the total number of different weights the scale can display (similar to the number of ticks on a ruler). For legal for trade applications this is usually limited to either 3,000 or 5,000; however, in process applications it can exceed 20,000 with the right indicator. The larger this number, the more precise the scale, but also the more unstable it is likely to be.
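A worked example of the calculation; the capacity and graduation are assumed figures.

```python
# Full scale counts = capacity / graduation size.

capacity_lb = 10000
graduation_lb = 2
print(capacity_lb / graduation_lb)   # 5000 counts, a common legal for trade limit
```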

H. Micro Volt Build
Also termed the uV build, this relates to the signal sensitivity of the indicator. Since the indicator converts the load cell voltage to a number, it has to be able to distinguish between small voltage changes. The smallest voltage change the indicator can measure is called its Sensitivity, and is measured in microvolts. The Micro Volt Build is the voltage change represented by one graduation of the scale. It can be calculated by dividing the graduation size by the total load cell capacity of the scale (the sum of the nameplate ratings of all load cells) and multiplying by the full scale voltage output (calculated earlier). Generally a build of less than 1 microvolt may cause an unstable scale.
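A worked example of the uV build calculation. The load cell count, nameplate rating, 3 mV/V output, and 10 V excitation below are assumed example figures, not values from the article.

```python
# Microvolt build = (graduation / total load cell capacity) * full scale voltage output.

graduation_lb = 2
total_cell_capacity_lb = 4 * 5000            # four 5,000 lb load cells
full_scale_output_uv = 3.0 * 10 * 1000       # 3 mV/V * 10 V excitation * 1000 uV/mV = 30,000 uV

uv_build = graduation_lb / total_cell_capacity_lb * full_scale_output_uv
print(uv_build)   # 3.0 uV per graduation; below about 1 uV the scale may be unstable
```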

