Legal Metrology in the Age of AI
Opportunities and Risks

Legal metrology is the system that keeps measurement fair and reliable in daily life. From how much electricity a household uses to the amount of fuel at the pump, it's all about making sure measurements are correct and trusted. Traditionally, this has meant strict rules, clear processes, and regular checks. Now, artificial intelligence (AI) is starting to play a role in this field. This change raises new questions: Can we trust a computer to make decisions about measurement? How do we keep the process safe and clear?

AI is not just another tool. It can change how we check, approve, and monitor measuring devices. Some see this as a way to make the work faster and more accurate. Others worry about mistakes, lack of clarity, and who is responsible if something goes wrong. The challenge is to use AI in a way that keeps trust at the centre of legal metrology.

This article looks at where AI is being used, what it can help with, and what problems it might cause. We'll also see how rules and standards are changing to keep up.

Where AI Meets Legal Metrology

AI is already being used in some parts of metrology. For example, smart meters can use AI to check for unusual readings that might mean tampering or faults. In the past, people had to look for these problems by hand or with simple rules. Now, AI can find patterns in large amounts of data that people might miss.
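The kind of pattern check described above can be illustrated with a minimal sketch. The example below is not a production algorithm, just a simple rolling statistical test standing in for what a trained model would do: each hourly reading is compared against the mean and spread of the readings before it, and values that deviate sharply are flagged for review.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=24, threshold=3.0):
    """Flag readings that deviate strongly from recent history.

    A reading is flagged when it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings --
    a simple stand-in for the pattern detection an AI model performs.
    """
    flags = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flags.append(i)  # index of the suspicious reading
    return flags

# Hourly consumption in kWh: steady usage, then a sudden drop to zero
# that could indicate tampering or a fault.
usage = [1.2, 1.1, 1.3, 1.2, 1.1, 1.2, 1.3, 1.1] * 4 + [0.0, 0.0]
print(flag_anomalies(usage))
```

A real deployment would use a trained model and account for legitimate variation (holidays, weather, vacancy), but the principle is the same: learn what "normal" looks like and flag departures from it.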

AI is also used for maintenance. By looking at past data, it can predict when a meter or test device might fail, so repairs can be planned before problems happen. Some companies are testing AI to help with type approval, using it to compare test results quickly and spot anything unusual.

Other uses include reading analogue meters with image recognition, or helping with customer service and reporting. These examples show that AI can help, but they also bring new challenges.

Opportunities

AI can help make testing, checking, and finding problems faster and more reliable. It can look at data in real time and spot issues early, which is useful as energy systems become more complex.

For reporting, AI can help regulators get information more quickly. Instead of waiting for audits, they can see problems as they happen. This could make the system more open and easier to manage.

AI might also speed up the approval of new meters and devices. By checking data faster, it could help new products reach the market sooner. For manufacturers, this means less waiting and more time to improve their products.

Risks and Challenges

Using AI also brings risks. One big problem is that some AI systems are hard to understand. If an AI says a meter is faulty, it might not be clear why. This makes it difficult for regulators to check and explain decisions.

AI systems can also make mistakes if the data they learn from is not complete or fair. In legal metrology, where fairness is key, this is a serious issue. If an AI system makes a mistake, it can affect many devices before anyone notices.

There are also questions about who is responsible if an AI system makes a wrong decision. How do you check if an AI is working as it should? And how do you keep data safe from hackers? As AI becomes more common, these questions need clear answers.

Regulatory Response and the Path Forward

Regulators are starting to update rules and standards for AI. Groups like OIML and WELMEC are actively discussing how to examine and approve AI-based systems; WELMEC's Working Group 7, which covers software in measuring instruments, is a natural home for such guidance. These efforts focus on ensuring that people remain involved in important decisions and that detailed records are kept on how AI systems are built and tested.

There is also a strong push for human oversight. The EU's AI Act requires human oversight and the ability to intervene for high-risk AI systems, and some AI applications in legal metrology could fall into that category. This approach aims to keep accountability clear and prevent automated decisions from undermining trust.

Certification rules for AI in metrology are still emerging. NIST, for example, has published principles for explainable AI (NISTIR 8312, Four Principles of Explainable Artificial Intelligence), but formal international standards for certifying AI in measuring instruments are still under development. The consensus is that AI should support, not replace, human expertise, especially for critical checks and approvals.

The aim is to use AI to help, without losing the trust and safety that legal metrology is built on. As these regulatory frameworks evolve, collaboration between manufacturers, regulators, and technical experts will be essential to address accountability, transparency, and security in AI-enabled metrology.

Takeaway

AI can help legal metrology by making checks faster and more accurate. But it also brings new risks, especially around mistakes and responsibility. To use AI safely, the industry needs new rules, good training data, and clear checks. By working together, manufacturers, regulators, and technology experts can make sure AI supports trust in measurement, not confusion.

Partnering for Reliable Measurement

At CLOU, we focus on proven, robust solutions for legal metrology—built on decades of engineering expertise and a commitment to transparency. While AI continues to develop, our products and systems prioritise reliability, security, and compliance with the latest standards. We believe that trust in measurement starts with clear processes and dependable technology. For utilities, regulators, and industry partners seeking solid performance without the uncertainties of emerging AI, CLOU remains your reliable partner in the field. Let's keep measurement safe, accurate, and accountable—together.
