As businesses produce and continuously monitor more and more data, whether it comes from customer actions, signals generated by industrial equipment, or environmental readings from sensors, intelligent software can watch that output and identify anything that doesn’t fit the normal pattern. Whether it’s suspicious activity, customers about to churn, or equipment about to fail, the ability to detect anomalies is critical.

You’ve most likely experienced anomaly detection personally when you’ve entered the wrong password too many times on a website, or gotten an alert from your credit card company protecting you from fraud. Similar approaches are applied to quality control in manufacturing. For many years, businesses have used statistical methods to manage overall quality control; anomaly detection takes that a step further by analyzing inputs (sensor readings, image analysis, or equipment readouts) in real time to identify which part of the process might be out of compliance or close to failing. Finally, inventory management is another example: it allows retail and other businesses to be alerted early to trends such as supplies running out quickly, piling up, or going missing or being stolen.

Machine learning is a powerful new tool that allows anomaly detection to work with more precision and accuracy, a must in today’s complex business environments, where data sets can be massive. For example, machine learning can create a validated model of the data that can then be used to predict the value of the next data point. If the actual data point differs substantially from the prediction, it can be flagged as an anomaly and acted upon appropriately.
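
As a minimal sketch of that idea, assuming nothing about any particular vendor’s implementation, the example below uses a rolling-mean forecast as the "model" and flags a reading when it deviates from the prediction by more than a few standard deviations. The window size and threshold are arbitrary illustrative choices.

```python
import numpy as np

def flag_anomalies(series, window=24, threshold=3.0):
    """Flag points that differ substantially from a simple model's prediction.

    Illustrative only: the "model" here is a rolling-mean forecast, and a point
    is flagged when its residual exceeds `threshold` standard deviations of the
    recent history.
    """
    flags = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        predicted = history.mean()                 # model's prediction for the next point
        residual = series[i] - predicted
        if abs(residual) > threshold * history.std(ddof=1):
            flags.append(i)
    return flags

# Simulated steady sensor readings with one injected spike
rng = np.random.default_rng(42)
readings = rng.normal(100.0, 1.0, size=200)
readings[150] = 115.0                              # anomalous reading
print(flag_anomalies(readings))                    # expected to include index 150
```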

With the assistance of machine learning, anomaly detection has dramatically improved in recent years, offering businesses unprecedented insights into their operations. Of all the benefits offered, the following stand out¹:

  1. Because insights are data-driven, they are now almost always delivered in real time, which gives businesses more time to act on them, often ahead of a major failure or financial impact.
  2. Accuracy has greatly improved, as the volume of data increases and software becomes smarter over time.
  3. With the advent of cloud storage, data volume is no longer an issue; larger historical data sets, combined with greater processing power, lead to better outcomes overall.
  4. It’s no longer ‘good enough’ to be notified when a certain piece of data is out of spec. Systems are now much more comprehensive and can handle multiple levels of decision making, often combining feedback from several data points before recommending or even performing an action. This level of analysis is impossible without full automation and a rich, carefully orchestrated data feed.
  5. Data models can be purely empirical, obviating the need to create high-fidelity physical models. That is, we do not necessarily need to know from first principles exactly why something is about to happen; we can deduce it from patterns in the data.

So how does it work? Well, first off, we have to start using the data. By some estimates, over 70% of business data sits idle and is never used to its fullest potential². In typical enterprises, critical data may be locked in individual silos, or managed by groups who aren’t incentivized to share it with other groups. Some data may also be locked in legacy systems and lack the modern data hooks that make it easier to harvest and reuse. Another common challenge is simply identifying the data that’s available: there is rarely a central warehouse or organization that audits and categorizes all of it.

‘Training’ the algorithm, as the process is known, is challenging because an anomaly is, by definition, rare. If it were a known or expected outcome, we could assign it a class and predict it with some degree of accuracy, all within the realm of traditional statistical analysis. Because it’s unknown, there is often no other data that can be used to understand what the anomaly is or why it occurred.
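
One common way to cope with this in practice is to train an unsupervised detector only on what "normal" looks like, so that no labeled anomalies are needed. The sketch below uses scikit-learn’s IsolationForest on made-up two-channel sensor data; it is a generic illustration, not a description of any specific production pipeline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Mostly "normal" operating data (two sensor channels) plus a handful of
# points drawn from a different regime.
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.1], size=(1000, 2))
unusual = rng.normal(loc=[70.0, 0.5], scale=[1.0, 0.05], size=(5, 2))
readings = np.vstack([normal, unusual])

# The forest learns the shape of "normal" without ever seeing labeled anomalies.
detector = IsolationForest(contamination=0.01, random_state=0).fit(readings)

# predict() returns -1 for points the model considers anomalous, +1 otherwise.
labels = detector.predict(readings)
print("flagged indices:", np.where(labels == -1)[0])
```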

Looking for the unknown might sound almost philosophical, but in anomaly detection, it’s the main objective. Every detection decision falls into one of four categories: true positive, true negative, false positive, or false negative. A common source of false positives is system noise: the data produces an outlier, but the process remains normal. With a false negative, the process becomes abnormal but doesn’t produce an outlier strong enough to be detected above the normal level of noise.
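
To make those four outcomes concrete, here is a toy example (all numbers invented) that scores a fixed-threshold detector against known ground truth: a noise spike it flags is a false positive, and a genuine but subtle fault it misses is a false negative.

```python
import numpy as np

readings = np.array([100.2, 99.8, 100.1, 107.5, 100.3, 101.1])        # noise spike at index 3
truly_abnormal = np.array([False, False, False, False, False, True])  # subtle fault at index 5

flagged = np.abs(readings - 100.0) > 5.0   # fixed threshold of 5 units around the nominal value

true_positives  = np.sum(flagged & truly_abnormal)     # real fault caught
false_positives = np.sum(flagged & ~truly_abnormal)    # noise spike flagged, process was fine
false_negatives = np.sum(~flagged & truly_abnormal)    # real fault too weak to cross the threshold
true_negatives  = np.sum(~flagged & ~truly_abnormal)   # normal readings left alone
print(true_positives, false_positives, false_negatives, true_negatives)   # 0 1 1 4
```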

More advanced implementations can take anomaly detection a step further by focusing on predictive maintenance outcomes. While predictive maintenance can encompass a variety of processes and disciplines, at a high level it can be broken down into three basic parts: sensing, analysis, and action. Let’s take a wind turbine as an example.  An inexpensive sensor is installed to monitor various outputs during the lifetime of the turbine, constantly gathering real-time data on its performance, often over a period of years. A sensor like this can monitor things such as temperature, pressure, vibration, sound levels or rotation speed. If the sensor is attached to a battery, it can measure battery voltage.
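
At a very high level, the sensing / analysis / action loop could be wired together as in the sketch below. The sensor channels, simulated values, thresholds, and alerting action are all placeholders for illustration, not a real turbine integration.

```python
import random
from statistics import mean, stdev

def sense():
    """Placeholder sensor read; a real system would poll the turbine's instrumentation."""
    return {"vibration_mm_s": random.gauss(2.0, 0.1),
            "temperature_c": random.gauss(60.0, 1.5)}

def analyze(reading, history, threshold=3.0):
    """Flag channels that drift well outside their historical range."""
    alerts = []
    for channel, value in reading.items():
        past = [h[channel] for h in history]
        if len(past) >= 2 and abs(value - mean(past)) > threshold * stdev(past):
            alerts.append(channel)
    return alerts

def act(alerts):
    """Placeholder action; in practice this might open a maintenance work order."""
    if alerts:
        print(f"Schedule inspection, abnormal channels: {alerts}")

history = []
for _ in range(1000):               # stand-in for a long-running monitoring loop
    reading = sense()
    act(analyze(reading, history))
    history.append(reading)
```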

While the benefits of anomaly detection are proven and clear, businesses often struggle to implement it on their own. So what solutions does Peaxy offer?

Peaxy offers predictive asset management for industrial equipment that turns operational data into insights that optimize productivity. Our Peaxy Lifecycle Intelligence (PLI) product is a modular, scalable, cloud-based asset management solution aligned with the needs of the value-driven enterprise. By rapidly turning operating data into financial insights, PLI lets operators minimize O&M costs and optimize performance, improving the lifetime value of industrial assets. PLI serves a wide range of use cases involving precision-engineered equipment, grid-scale battery installations, aviation components, gas turbines, steam turbines, wind turbines, compressors and propulsion systems.

Our newest offering, PLI for Batteries, works across a range of battery technologies.

Peaxy Lifecycle Intelligence (PLI) for Batteries is a complete predictive battery analytics platform that leverages machine learning to deliver dramatic performance improvements across R&D, manufacturing and field operations.

PLI for Batteries is the first cloud-based battery analytics software platform to deliver a unified data vision for battery development, manufacturing and deployment. Using a proprietary process, our enterprise-grade solution securely captures and stores the entire data value chain to create a single source of truth for serialized battery data, laying the foundation for high-fidelity digital twins and machine-learning driven insights.

Features include:

  • Degradation curves for each serial number, in real time
  • Optimization of charge/discharge regimes
  • Tracking of ambient and operating profiles to help tune optimal charge/discharge cycles
  • Support for operators working within the boundaries of complex battery warranty regimes
  • Support for lessors protecting the long-term residual value of battery assets


⁽¹⁾ https://dzone.com/articles/why-real-time-ai-based-anomaly-detection-is-a-no-b

⁽²⁾ https://go.forrester.com/blogs/hadoop-is-datas-darling-for-a-reason/