Electricity meter sensor technologies
July 30, 2005
First, an assumption: I take it as read that solid state sensors are the future for electricity metering and that Ferraris (the old disc meter) is dead. I will go further – I would delete it from my spell checker if I didn’t still harbour expensive automotive ambitions.
The heart of any solid state electricity meter is the current sensor, and selection of the correct type of sensor for the metering application is more than a matter of engineering debate. In globalised meter markets, manufacturers need to differentiate their products. Some compete on price, some on reliability, some on performance and some on functionality. The choice of sensor type impacts on all these areas, and is the determining factor in some.
THE KEY SENSOR REQUIREMENTS
The sensors in the majority of meters measure instantaneous power. The rest of the meter accumulates the power over time to calculate the total energy (kWh) taken from the supply and presents this information to the utility for bill calculation. As the sensor dictates overall meter accuracy, it must meet national regulations for fiscal metering, such as OIML, which specify accuracy required under a wide range of environmental conditions, in the presence of interfering signals, and over a very long lifetime. In addition, in order to protect the utility’s revenue, current sensors must resist attacks and subterfuges of various kinds. Finally, to be competitive in a global market the sensor must be low cost – it must have low development and capital equipment costs, be made from low cost materials and be easy to assemble and integrate into the rest of the meter. If it needs expensive electronics then it’s no good for metering. Many types of meter have multiple sensors, so the direct and ‘knock-on’ costs of the sensor are especially critical.
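The accumulation step described above (instantaneous power integrated over time into kWh) can be sketched as follows. This is an illustrative assumption of how a meter's firmware might do it, not any particular product's design; the sample rate and names are invented for the sketch.

```python
# Hypothetical sketch: integrate sampled instantaneous power (W) over time
# into billable energy (kWh). Sample rate and names are illustrative.

SAMPLE_RATE_HZ = 100  # assumed metrology sample rate for this sketch

def accumulate_energy_kwh(power_samples_w, sample_rate_hz=SAMPLE_RATE_HZ):
    """Integrate instantaneous power samples (W) into energy (kWh)."""
    dt = 1.0 / sample_rate_hz                      # seconds per sample
    joules = sum(p * dt for p in power_samples_w)  # energy in J (W*s)
    return joules / 3600.0 / 1000.0                # J -> Wh -> kWh

# One hour of a constant 2.3 kW load accumulates 2.3 kWh:
one_hour = [2300.0] * (SAMPLE_RATE_HZ * 3600)
print(round(accumulate_energy_kwh(one_hour), 6))  # 2.3
```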
SOLID-STATE SENSOR TECHNOLOGIES – A BUSINESSMAN’S GUIDE
It is usual to calculate power by measuring current and voltage separately and multiplying them together digitally, although some meters (such as Itron’s Centron, which uses a Hall effect sensor) cleverly measure power in one device.
A typical ‘front end’ of a sensor signal processing chain consists of an analogue-to-digital converter (ADC) that measures voltage. The role of the current sensor is to provide a voltage signal proportional to the current to be measured, so that it can be read by the ADC (see Figure 1). Commonly used current sensors fall into two categories – low-resistance ‘shunts’ that sit in line with the current, and those that depend on the magnetic field created by the electric current (including current transformer (CT), Hall effect and Rogowski coil). CT and Hall effect use permeable magnetic cores to concentrate the magnetic field through the measurement coil or sensor, and this concentration of the magnetic field means that a large signal level can be achieved. Rogowski coils have no such core and the signal levels they produce are consequently lower. However, in physics as in business, there is no such thing as a free lunch. Using a magnetic core introduces some serious drawbacks, because the calibration of the sensor will always have some dependence on its magnetic properties. These magnetic properties may vary from device to device and manufacturer to manufacturer and, for each individual core, will vary with operating conditions.
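The digital multiplication mentioned above can be sketched simply: real (active) power is the average of the instantaneous v×i products over whole mains cycles. The sample count and test waveforms below are illustrative assumptions.

```python
# Illustrative sketch of the digital multiply: real power is the average
# of instantaneous v*i products taken over whole mains cycles.
import math

def real_power_w(v_samples, i_samples):
    """Average of instantaneous v*i products -> real power in watts."""
    return sum(v * i for v, i in zip(v_samples, i_samples)) / len(v_samples)

# 230 V RMS, 10 A RMS, in phase (unity power factor), one 50 Hz cycle
# sampled 1000 times:
n = 1000
v = [230 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]
i = [10 * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(real_power_w(v, i)))  # 2300
```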
For example, the core can saturate or become permanently magnetised if it is exposed to a large magnetic field – perhaps from a fault condition downstream of the meter producing excessive current flows, or from a fraudster with a powerful permanent magnet who is determined to make the meter under-read. Even under normal operating conditions, non-linear effects occur both at high currents (reducing permeability) and at low currents (through the energy required to excite the core), adding complexity to the calibration process. Despite their individual strengths and weaknesses, given sufficient engineering resources it is possible to make a functional meter from any of these sensor types.
A shunt is a small resistance through which the primary current flows, generating a small voltage (typically around 50mV full scale) that can be measured. For single phase meters a typical design would only measure the supply path (‘live’) using a single shunt. In markets where energy theft is an issue, one of the most effective ways of detecting it is to measure the current in both live and neutral and to use the higher of the two values to calculate power. However, shunts provide no isolation from the mains supply, so the metrology circuit of a 2-shunt meter would ‘see’ the whole mains voltage – not a sensible or safe design. So, when a second sensor is needed, isolation must be provided by using a different sensor type, or by adding another component such as a voltage isolation transformer. This adds extra cost and complexity to the design. Shunts generate heat proportional to the square of the current passing through them, and managing heat dissipation in the small unventilated meter enclosure can be a real issue, particularly for high-current installations. ‘Self-heating’ may also change the shunt’s resistance value, or affect the meter’s calibration by heating up the metrology circuitry. If the current is very large (perhaps due to a short-circuit in the load) then the shunt will heat dramatically: its calibration may permanently change, or it may even melt. Thus a shunt’s dynamic range – between the largest current it can measure before heating effects intrude and the smallest current it can resolve above electrical noise in the voltage measuring circuit – is relatively narrow. Shunts’ greatest advantage is that they are low cost for the level of performance they deliver, particularly for lower-current services. They are unaffected by DC magnetic fields, which makes them immune to some tampering techniques and to interference from nearby high-current conductors (e.g. meter input cables). Consequently, they are deservedly popular in non-critical, cost-sensitive electricity metering where meter tampering is uncommon.
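The shunt trade-off described above can be put into numbers. This is a hedged worked example, not a design rule: the resistance is chosen so that full-scale current gives roughly a 50mV signal, and that same resistance then fixes the I²R self-heating.

```python
# Hedged worked example of the shunt trade-off: full-scale signal level
# versus I^2*R self-heating. The 50 mV target and 100 A rating are
# illustrative assumptions.

def shunt_design(full_scale_current_a, full_scale_mv=50.0):
    """Return (resistance in ohms, heat in watts) for a given full scale."""
    r_ohm = (full_scale_mv / 1000.0) / full_scale_current_a  # V = I * R
    heat_w = full_scale_current_a ** 2 * r_ohm               # P = I^2 * R
    return r_ohm, heat_w

# A 100 A meter needs a 0.5 milliohm shunt, which dissipates 5 W at full
# load -- significant heat inside a small sealed meter enclosure.
r, p = shunt_design(100.0)
print(f"{r * 1000:.2f} mohm, {p:.1f} W")  # 0.50 mohm, 5.0 W
```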
Unlike a normal (voltage) transformer, a current transformer (CT) transforms current rather than voltage: the secondary current is the primary current scaled by the inverse of the ratio of turns in the primary and secondary windings. By inserting a small (‘burden’) resistor into the secondary circuit, the secondary current can be converted to a voltage suitable for an ADC, and hence the primary current can be calculated from the turns ratio and the burden resistor value. In practice, a CT in an electricity meter is implemented as a toroidal secondary winding, with the primary conductor simply passing through the central aperture. CTs are the dominant sensor technology for high current and multiple phase energy measurement because they provide isolation, produce good signal levels and have high dynamic range. However, they have a permeable magnetic core and so suffer from the ‘free lunch’ syndrome. They are relatively expensive devices, especially for high current ratings. The burden resistor can be a weakness: a determined tamperer may shunt it with another resistor so that the meter under-reads or, if it ever goes open circuit, the CT will create a high voltage output that may damage or destroy the delicate electronics in the meter. Additionally, a CT is a fairly heavy device that must be mounted rigidly with the primary conductor passing through the middle. Physical mounting is therefore slightly more difficult than for other technologies.
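The CT arithmetic described above is simple enough to show as a worked example. The turns count and burden value below are illustrative assumptions (the primary is the single pass-through conductor, i.e. one turn).

```python
# Hedged worked example of the CT calculation: the secondary carries
# I_primary / N, the burden resistor converts that to a voltage, and the
# meter inverts the calculation. Turns and burden value are illustrative.

TURNS = 2500       # assumed secondary turns (primary = 1 pass-through turn)
BURDEN_OHM = 10.0  # assumed burden resistor

def primary_current_a(v_burden_v, turns=TURNS, burden_ohm=BURDEN_OHM):
    i_secondary = v_burden_v / burden_ohm  # Ohm's law across the burden
    return i_secondary * turns             # I_primary = I_secondary * N

# 100 A primary -> 40 mA secondary -> 0.4 V across the 10 ohm burden:
print(primary_current_a(0.4))  # 100.0
```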
The Hall effect produces a voltage across a current-carrying conductor placed in a magnetic field, and is commonly used as a method of sensing the field level. Practical Hall sensors use a semiconducting element, as this leads to higher signal levels.
To make a suitable current sensor, the Hall effect device is placed in a small gap in a nearly closed magnetic core. The primary conductor is passed through the aperture of the core and its current generates an AC magnetic field that is concentrated through the Hall effect device by the core. A small current is passed through the Hall effect device, and the AC magnetic field through the device produces an AC voltage at the device’s outputs.
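The single-device power measurement mentioned earlier follows from the Hall relationship: the output is proportional to bias current times field, V_H = k × I_bias × B. The field tracks the load current, so if the bias current is derived from the mains voltage the output tracks instantaneous power directly. The sketch below illustrates this; all constants are invented for illustration.

```python
# Hedged sketch of the single-device power trick: V_H = k * I_bias * B.
# If bias current tracks mains voltage and the core field tracks load
# current, V_H is proportional to V * I. All constants are assumptions.

K_HALL = 2.0          # assumed device sensitivity, V per (A*T)
B_PER_AMP = 1e-4      # assumed gap field per amp of load current (T/A)
BIAS_PER_VOLT = 1e-5  # assumed bias current per volt of mains (A/V)

def hall_output_v(mains_v, load_i_a):
    i_bias = BIAS_PER_VOLT * mains_v  # bias derived from mains voltage
    b_gap = B_PER_AMP * load_i_a      # core concentrates the load field
    return K_HALL * i_bias * b_gap    # V_H proportional to V * I (power)

# Output scales with the V*I product: doubling the load current doubles V_H.
print(hall_output_v(230.0, 10.0) / hall_output_v(230.0, 5.0))  # 2.0
```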
Semiconductor materials have some drawbacks: they change significantly with temperature, and compensating for these changes adds complexity and cost. There are also significant variations between devices. The magnetic core has the usual ‘free lunch’ drawbacks, and it must be precisely mounted and rigidly held in relation to the Hall effect device. As with a CT, the whole sensor must be mounted rigidly in relation to the primary conductor.
A conventional Rogowski coil is a transformer, usually toroidal, with no magnetic core – the magnetic field produced by the primary current loosely couples into the secondary winding. It is highly predictable, has huge dynamic range and no saturation, magnetisation or DC field effects, but the signal level is low, requiring conditioning by an integrator.
Careful design and precision construction can produce a sensor highly tolerant of DC and AC magnetic interference and ideal for tamper-resistant metering. Surge currents and lightning strikes also have little effect – there is no mechanism for transferring significant
energy from the primary conductor to the metrology section, no magnetic core that can be magnetised, and no resistive element that generates heat.
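The integrator requirement mentioned above follows from the coil's physics: the output voltage is proportional to the rate of change of primary current, v(t) = M × dI/dt, so the current is recovered by integration. A minimal numerical sketch, with an assumed mutual inductance M and sample timing:

```python
# Hedged sketch of why a Rogowski coil needs an integrator: v = M * dI/dt,
# so primary current is recovered as I(t) = (1/M) * integral of v dt.
# The mutual inductance and sample timing are illustrative assumptions.

def integrate_rogowski(v_samples, mutual_h, dt_s):
    """Recover primary current history from coil voltage samples."""
    current = 0.0
    history = []
    for v in v_samples:
        current += v * dt_s / mutual_h  # rectangular-rule integration step
        history.append(current)
    return history

# A constant 1 mV coil output held for 1 ms with M = 1 uH implies the
# primary current ramped by (1e-3 V * 1e-3 s) / 1e-6 H = 1 A:
ramp = integrate_rogowski([1e-3] * 1000, mutual_h=1e-6, dt_s=1e-6)
print(round(ramp[-1], 6))  # 1.0
```

Note how small the signal is: a 1μH mutual inductance and mains-frequency currents give millivolt-level outputs, which is why the low-noise mixed signal electronics discussed later matter so much for this sensor type.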
A TECHNOLOGY TO WATCH
Until recently there was no practical, cost-effective implementation of this technology for metering. However, by making the coils in the PCB tracks (where precision comes for free) and using modern mixed signal metrology chips, the cost has been reduced to give a fully competitive sensor type.
Making the coil in the tracks of the PCB also controls and minimises the area enclosed by the connections between the sensor and the chip. In other sensor types that need flying leads, this area is large and variable, introducing unwanted susceptibility to magnetic fields. This is a technology to watch, especially for high accuracy, tamper-proof or high current meters.
OTHER REAL WORLD ISSUES
To summarise, there are key differences between different current sensors that make some much more suited to specific metering applications than others (see Figure 2).
Availability of easy-to-use, low-cost silicon
It used to be difficult to make accurate and stable electronics for the very low signal levels associated with some sensor technologies, but recent chips from, for example, Analog Devices, ST, TI, SAMES, Cirrus, AMS and TDK solve this problem. Thanks to the investment of these companies, it is no longer necessary to choose a sensor type simply because it has a high signal level. This investment has also reduced the development cost of a new electricity meter. It is now possible for smaller players, offering other differentiators such as a novel system concept or a strong regional presence, to jump this market entry barrier.
Supply chain risk
Bespoke components and long product lifetimes are uncomfortable bedfellows. If a supplier can no longer provide your component, or your volumes fall to the point where you cannot justify retaining multiple suppliers, then you have a problem. Introducing a replacement supplier will not be easy and will probably affect meter certification. Meters based on bespoke shunts, CTs or Hall effect devices are at risk from supply-chain events. Meters based on Rogowski coils (if they are made in the PCB) or off-the-shelf components (such as low-performance CTs) are not.
Whilst a manufacturer may sell any meter that meets the relevant standards at the time of sale, utilities need to meet specific requirements (such as accuracy) throughout the service life of the meter. Concerned that meter performance will degrade over life, they
may demand some performance ‘headroom’. In the US market, this has led to meter manufacturers differentiating their products on performance that far exceeds the
typical standards for residential metering in the rest of the world.
ADDRESSING A RANGE OF APPLICATIONS
Another reason for utilities buying better meters than the law demands is that they can address a range of meter applications with one (larger volume) supply contract. For example, certain net (or bi-directional) metering applications require large dynamic range. The utility can either buy a small number of a different type of meter for this application, or can choose a slightly more expensive meter that can cover both the standard and the special application, and recover the extra cost through economies of scale.
The choice of sensor for your electricity meter affects both the meter manufacturer and the utility. It is strange that most meter manufacturers do not even state the sensor technology in their literature, and that some utilities are happy to treat the meter as a
black box, despite their legal obligations to ensure accuracy throughout service life. However, that situation is changing. Meter manufacturers are delivering more functionality, and utilities are slowly changing their business models to take advantage. As the meter becomes more critical to the business model then, perhaps, the utilities will want to peek inside the box.