If you were asked to define predictive analytics, what would you say? The term can mean a variety of things, each with different possible outcomes. In this article, we’ll take a look at some of the ways predictive analytics can add value, and more closely at how its promise can be realized in the energy storage and battery spaces.

In heavy industry utilizing rotating equipment such as gas turbines, predictive analytics is often used in the context of predictive maintenance. With real-time data accompanied by specialized sensor data (including oil pressure, metal particle concentration and temperature), the condition of an asset can be continually assessed. As a result, maintenance schedules can be dynamically updated: either brought forward to avoid an anticipated failure, or pushed out due to lower-than-expected wear on components or an asset being used less than planned.
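The dynamic scheduling idea above can be sketched in a few lines. This is an illustrative example only: the sensor thresholds, the wear score and the 30-day adjustments are invented for the sketch, not taken from any real maintenance system.

```python
# Illustrative sketch: adjust a maintenance date from a simple condition
# index built on hypothetical sensor readings (all thresholds invented).
from datetime import date, timedelta

def adjust_maintenance(scheduled: date, oil_pressure_bar: float,
                       metal_ppm: float, temp_c: float) -> date:
    """Bring maintenance forward when wear indicators are high,
    push it out when the asset is in better shape than planned."""
    # Hypothetical normalized wear score: higher means more wear.
    score = (metal_ppm / 50.0) \
            + max(0.0, (temp_c - 80.0) / 40.0) \
            + max(0.0, (4.0 - oil_pressure_bar) / 4.0)
    if score > 1.0:   # anticipated failure: bring the date forward
        return scheduled - timedelta(days=30)
    if score < 0.3:   # lower-than-expected wear: push the date out
        return scheduled + timedelta(days=30)
    return scheduled
```

In a real deployment the score would come from a calibrated condition model rather than hand-tuned thresholds, but the decision structure is the same.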

With batteries, and other industrial equipment that requires infrequent maintenance, we often talk about predictive failure analysis as part of predictive analytics. Like predictive maintenance, this kind of analysis relies heavily on the availability of real-time data, coupled with specialized algorithms to predict when future failures are likely to occur. The lead time this kind of analysis gives operators, and the level of confidence in the predictions, are highly dependent on the amount, quality and richness of the data, as well as on the choice of the most effective algorithms.

Getting the data right can be a significant challenge. Field-deployed Energy Storage Systems (ESS) are often broken down into energy blocks, strings, modules and cells. A block is generally made up of a number of strings, with each string consisting of a number of modules or batteries. Depending on the battery chemistry used, the availability of data can differ greatly. For example, lithium-ion batteries may generate cell-level data, while other battery chemistries may stop at module-level data.

At the module level, it’s common to find just four or five data registers used to store distinct pieces of information, including voltage, temperature and state of charge (SoC). At the string and block level, data representing the aggregate of the underlying structures is calculated and stored. For example, at the block level (the aggregate of all strings and therefore modules) data registers often include, but are not limited to:

  • Minimum and maximum SoC

  • Total current

  • Total voltage

  • Alarms (warnings and faults)

  • String enabled state
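The hierarchy and the block-level registers above can be sketched as a small data model. The field names and aggregation rules here are illustrative assumptions (modules in series within a string, strings in parallel within a block), not a description of any particular ESS vendor’s schema.

```python
# Sketch of the block/string/module hierarchy. Assumes modules in series
# within a string and strings in parallel within a block; names illustrative.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Module:
    voltage: float      # V
    temperature: float  # °C
    soc: float          # state of charge, %

@dataclass
class String:
    modules: List[Module]
    current: float      # A
    enabled: bool = True

@dataclass
class Block:
    strings: List[String]

    def registers(self) -> Dict[str, float]:
        socs = [m.soc for s in self.strings for m in s.modules]
        # Series modules: a string's voltage is the sum of its modules.
        string_voltages = [sum(m.voltage for m in s.modules)
                           for s in self.strings]
        return {
            "soc_min": min(socs),
            "soc_max": max(socs),
            # Parallel strings: currents add, voltages are (roughly) equal.
            "total_current": sum(s.current for s in self.strings if s.enabled),
            "total_voltage": sum(string_voltages) / len(string_voltages),
            "strings_enabled": sum(s.enabled for s in self.strings),
        }
```

Alarms (warnings and faults) are omitted here for brevity; in practice they would be additional registers populated by the battery management system.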

How can predictive analytics be used to predict future battery failures?

There are many possible answers depending on the approach. For example, a basic approach may rely solely on system-generated events such as warnings and faults. When used in conjunction with real-time operational data, this approach can reliably predict and warn operators of impending failures and, compared with the effort of developing complex data models, save significant time and money. The rules used to decide when to alert operators are also semi-transferable between sites, resulting in further cost savings.
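A rule of this kind can be very simple. In the sketch below, the event names and the temperature threshold are invented for illustration; the point is the pattern of corroborating a system event with a live reading before alerting.

```python
# Illustrative rule-based alerting: a fault alerts on its own, while a
# warning alerts only when real-time data corroborates it. Event names
# and the 55 °C threshold are hypothetical.
from typing import Set

def should_alert(events: Set[str], cell_temp_c: float) -> bool:
    if "overvoltage_fault" in events:
        return True
    if "temperature_warning" in events and cell_temp_c > 55.0:
        return True
    return False
```

Rules like this are easy to write and audit, which is part of why the approach is cheap to deploy and partially transferable between sites.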

The main drawback, however, is that this approach provides only a short lead time between an alert and the predicted failure, often giving operators just enough time to react in haste, thus increasing the risk of downtime.

As ESSs grow in complexity, deciding which events out of several hundred should be considered indicative of a failure becomes even more difficult. As new events are incorporated into the management software, the rules need to be constantly updated and tested for effectiveness.

An alternative approach that can give operators days or even weeks to address potential problems is to calculate a “Remaining Useful Life” (RUL) value per battery. An RUL value could, for example, predict the number of days or the number of charge/discharge cycles until a fault is expected to occur.

How do you calculate RUL for an Energy Storage System?

The first step in creating an RUL model is a data science exercise, and this stage often receives too little attention. To be usable, historical field data needs to be validated, cleansed and analyzed. Depending on the attributes available in the original data set, additional features may need to be derived to increase the richness of the data. These may include state of health (SoH), remaining capacity and depth of discharge (DoD).
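Two of the derived features mentioned above have standard definitions and can be sketched directly. SoH is measured capacity as a fraction of rated capacity, and the DoD of a single discharge is the SoC swing; the function names and units here are our own choices for the sketch.

```python
# Feature derivation sketch using the standard definitions of SoH and DoD.
def state_of_health(measured_capacity_ah: float,
                    rated_capacity_ah: float) -> float:
    """SoH as a percentage of rated (nameplate) capacity."""
    return 100.0 * measured_capacity_ah / rated_capacity_ah

def depth_of_discharge(soc_start_pct: float, soc_end_pct: float) -> float:
    """DoD of one discharge event as the SoC swing, in percent."""
    return soc_start_pct - soc_end_pct
```

Derived columns like these are typically appended to each historical record before model training, so that the learning algorithm sees degradation-relevant features rather than only raw telemetry.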

Once a complete, optimized data set with the required attributes is created, an RUL model can be built using one or more algorithms, for example a random forest regressor or k-nearest neighbors. To predict a live RUL value, real-time operational data is fed through the same algorithm used to create the original model. When the predicted RUL value drops below a specified threshold, an alert is raised, giving operators advance notice of an impending failure. To increase confidence and accuracy, the RUL model should be continually updated as the amount of saved historical data increases.
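The workflow above can be sketched end to end with one of the algorithms mentioned, k-nearest neighbors, implemented here in plain Python to keep the example self-contained. The feature vectors, RUL values and the 14-day threshold are invented for illustration, not real battery data.

```python
# Minimal RUL sketch with k-nearest-neighbors regression: predict a
# sample's RUL as the mean RUL of its k closest training points, then
# compare against an alert threshold. All numbers are illustrative.
from typing import List, Sequence, Tuple

def knn_rul(train_x: List[Tuple[float, ...]], train_y: List[float],
            sample: Sequence[float], k: int = 3) -> float:
    """Mean RUL of the k nearest training points (squared Euclidean
    distance in feature space)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, sample)), y)
        for x, y in zip(train_x, train_y)
    )
    return sum(y for _, y in dists[:k]) / k

def rul_alert(rul_days: float, threshold_days: float = 14.0) -> bool:
    """Raise an alert when the predicted RUL drops below the threshold."""
    return rul_days < threshold_days
```

A production model would of course use a tested library implementation (for example scikit-learn) and far richer features, but the prediction-then-threshold loop is the same one described above.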

With the recent launch of new modules in Peaxy Lifecycle Intelligence, the process of creating, testing and deploying data models in support of predictive analytics is significantly streamlined. Our Machine Learning Manager handles the optimization and creation of data models. All live data is then routed through our Machine Learning Runner module. Every data sample passes through a pipeline that validates its quality and performs any pre-processing steps before it’s passed to the appropriate algorithm to calculate the RUL. These steps can all be configured using a point-and-click interface, which drastically reduces the time spent preparing the data.