I get the feeling that the answer to this question is “it’s up to the hardware manufacturer”. So be it, but I’m curious to know what has been done.
The LoRaWAN MAC battery level reporting capability (the DevStatusAns MAC command) reports battery level as a value from 1 to 254, which is typically “scaled” to 0-100% for use in device management displays. It’s generally up to the hardware/firmware of a given device to decide what value to write into this “register”. I could find nothing in the LoRaWAN specs that defines the meaning in any more detail.
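For concreteness, the interpretation I’m describing (how a server might turn that byte into a percentage) looks roughly like the sketch below; the function name and scaling are just illustrative:

```python
def battery_percent(dev_status_battery: int) -> str:
    """Interpret the DevStatusAns battery byte:
    0 = external power, 1..254 = battery level, 255 = unable to measure."""
    if dev_status_battery == 0:
        return "external power"
    if dev_status_battery == 255:
        return "unknown"
    # Scale 1..254 onto 0..100% - but what "0%" actually means is the question.
    return f"{(dev_status_battery - 1) * 100 / 253:.0f}%"

print(battery_percent(1))    # "0%"
print(battery_percent(254))  # "100%"
```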
So my question is: what is typically being measured here? Of course there are many ways to measure remaining battery life, often highly dependent on the battery chemistry being used. But my main question is whether 0% (or 1 in the MAC byte) is generally meant to be “the battery is completely drained” or “the battery is now at the lowest point at which it can support normal operation of the device”? I realize that a device with a completely drained battery can’t even send this message, but if “completely drained” is the intended meaning of 0% (1), then the end user essentially has to just “know” what battery percentage is the lowest operational value for every different device (of course, the software can be configured for this, too).
TTN doesn’t support this in v2, the current deployment, but the measurement of battery value is rather vital to many of our nodes, so hopefully it will be incorporated into v3 at some stage.
I’m not sure the detail of what was expected to be put into the battery level value was given much consideration, as it is rather generic - as you say, the chemistry and the device’s usage will all vary massively, so I suspect it’s up to the manufacturer of the device to decide based on what it is and what they recommend for power. And then we have a situation where, until more network servers support this functionality, device makers won’t include it. And for some devices, you’d like a whole lot of other information too.
I generally deploy with Energizer Lithium batteries - they have a long shelf life (they self-discharge more slowly than the node drains them), are less temperature sensitive, and won’t freeze (not an issue in the UK, global warming is real).
I transmit the battery voltage, to a couple of decimal places, with each test device payload, and I’ve got several algorithms looking for the blip at the halfway point and then the beginning of the end, where the reported voltage is about to drop off a cliff. I’m trying to factor in some consideration of usage based on the message count.
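For what it’s worth, one way that cliff-spotting could be sketched is below; the window size and slope threshold are invented and would need tuning for the actual chemistry and reporting interval:

```python
from collections import deque

def cliff_detector(window: int = 10, slope_alert: float = -0.01):
    """Return a function that flags when the average voltage slope over the
    last `window` readings drops below `slope_alert` (e.g. -10 mV/message)."""
    readings = deque(maxlen=window)
    def push(voltage: float) -> bool:
        readings.append(voltage)
        if len(readings) < window:
            return False
        slope = (readings[-1] - readings[0]) / (window - 1)
        return slope < slope_alert
    return push

detect = cliff_detector()
for v in [1.62, 1.61, 1.61, 1.60, 1.58, 1.55, 1.51, 1.46, 1.40, 1.33]:
    if detect(v):
        print(f"voltage falling fast at {v} V - nearing end of life")
```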
For deployed devices I schedule different payloads on different ports depending on use case / requirements. So generally a device status once a day; for battery it’s a single byte scaled over the working range (1.8 V to 1.4 V in the example above) to give some level of over-accuracy.
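A minimal sketch of that single-byte scaling, assuming the 1.4-1.8 V working range mentioned above (the encode/decode functions and the reservation of 0 and 255 are my own choices, not from any spec):

```python
V_MIN, V_MAX = 1.4, 1.8  # working range from the example above

def encode_battery_byte(voltage: float) -> int:
    """Device side: map the working voltage range onto 1..254."""
    v = min(max(voltage, V_MIN), V_MAX)
    return 1 + round((v - V_MIN) / (V_MAX - V_MIN) * 253)

def decode_battery_byte(byte: int) -> float:
    """Server side: recover an approximate voltage (~1.6 mV per step)."""
    return V_MIN + (byte - 1) / 253 * (V_MAX - V_MIN)

print(encode_battery_byte(1.65))           # 159
print(round(decode_battery_byte(159), 3))  # 1.65
```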
It only takes a few minutes on the bench with a variable power supply & a multimeter to determine lowest operational voltage for purchased devices.
I’ve not yet had a mass deployment reach end of battery life, but no one is talking to me about changing the batteries: the cost of many of the devices is sufficiently low, and they are adding new features, so the old device will be replaced with a new one rather than have its batteries changed. In some instances this replacement may occur well before end of battery life.
For solar, test nodes have an INA3221 on them so we can measure the solar output, the flow in or out of the battery, and the device usage. This gives more data than I can cope with, so I haven’t come to any particular conclusions, except that a 100 mAh LiPo battery on a simple device seems sufficient even in the UK, and the deployed devices with that size battery work OK unless foliage gets in the way.
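I won’t paste the driver code, but the post-processing of those current logs is essentially just integrating samples over time to see whether the solar input keeps up with usage; a rough sketch with a made-up sample rate and sign convention:

```python
SAMPLE_SECONDS = 60  # assume one battery-current reading per minute

def net_mah(battery_current_ma: list[float]) -> float:
    """Integrate logged battery current (mA) into charge (mAh).
    Positive samples = charging, negative = discharging (assumed)."""
    return sum(battery_current_ma) * SAMPLE_SECONDS / 3600.0

# Toy day: 8 hours of 5 mA solar charging, 16 hours of 0.8 mA drain.
day = [5.0] * 8 * 60 + [-0.8] * 16 * 60
print(f"net charge over the day: {net_mah(day):+.1f} mAh")
```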
So given time, as a small scale device maker, I could incorporate a 1-254 battery level based on rough lookup tables, but mostly I’d be inclined to know other things as well so would stick to the device status payload scheme, one of my favourites being the microswitch on the back of some cases that tells me if it’s been removed …
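If I did go down that route, the rough-lookup-table version would look something like the sketch below; the break points are purely illustrative and would really come from bench characterisation of the actual cells:

```python
import bisect

# (voltage, remaining fraction) break points - illustrative only.
DISCHARGE_CURVE = [(1.40, 0.0), (1.45, 0.10), (1.55, 0.50),
                   (1.65, 0.85), (1.80, 1.00)]

def battery_byte_from_voltage(voltage: float) -> int:
    """Map a measured voltage onto the 1..254 battery level via the table."""
    volts = [v for v, _ in DISCHARGE_CURVE]
    fracs = [f for _, f in DISCHARGE_CURVE]
    if voltage <= volts[0]:
        return 1
    if voltage >= volts[-1]:
        return 254
    i = bisect.bisect_right(volts, voltage)
    # Linear interpolation between the two surrounding break points.
    frac = fracs[i - 1] + (fracs[i] - fracs[i - 1]) * \
           (voltage - volts[i - 1]) / (volts[i] - volts[i - 1])
    return 1 + round(frac * 253)

print(battery_byte_from_voltage(1.60))  # 172, i.e. roughly two-thirds remaining
```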
@descartes : You bring up some excellent points (something you’ve clearly thought a lot about). Unfortunately, I think the LoRaWAN spec for this didn’t put much thought into it, instead leaving the implementation details up to the individual device makers.
As you touch on, the ideal situation would be for the device to “know” its battery chemistry/etc. and use that 1-254 value as an indicator of real useful remaining battery life, using whatever means it has available. But my experience so far is that many device makers are just measuring the voltage (under no specific load) and plugging in a ratio of the “fully charged” voltage vs the measured voltage. Of course that has little bearing on how much useful life the battery still has in it.
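That naive ratio approach, as far as I can tell, amounts to something like the sketch below (numbers invented); the problem is that a lithium cell sits near its nominal voltage for most of its life, so the “percentage” barely moves until the cell is nearly done:

```python
def naive_percent(measured_v: float, full_v: float = 1.8) -> float:
    """What many devices appear to do: a straight ratio of measured voltage
    to 'fully charged' voltage, measured under no particular load."""
    return max(0.0, min(100.0, measured_v / full_v * 100))

# A lithium AA reads about 1.5 V for most of its life, so this reports ~83%
# right up until the cell is nearly exhausted - not a useful gas gauge.
print(f"{naive_percent(1.5):.0f}%")
```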
What we’ve done is try to “normalize” this information, so that applications that use devices from various sources can present something to the end user that is useful (after all, the only question the user really has is “when do I need to replace the battery, or the device”?).
PS - I’d be curious to learn more about your devices.
I’d argue that leaving things up to the device manufacturers actually is putting thought into it, in comparison to areas where the spec mandates things that are so difficult to implement correctly that it is rare that any device actually does so.
Consider if they mandated a percentage of actual battery life: you’d have devices where that was well implemented and for consistent usage the percentage linearly decreased, but also devices where it wasn’t well implemented and the “percentage” had an exponential knee or the device failed with “30% remaining” or ran for months at “0%”, and you’d have to keep track.
Then even a percentage doesn’t tell you much about remaining service life in units of time, without considering the battery capacity and estimated or historic rate of usage.
But it’s probably better to go a step further and simply embed whatever you find useful in an application packet that you were going to send anyway; mac-level isn’t really where this belongs to begin with.
Sure. I understand how complex this is. And whether the “battery level” is reported in a device-specific packet or at the MAC level makes no difference. I’m just trying to collect information from various device makers and/or from the experience people have with various devices, so I can understand what the range of variants is with how battery level is reported in general.
The ideal objective is to end up with a “gas gauge” for the end user. There is absolutely no doubt what “E” means on a gas gauge, regardless of the kind of car you’re driving. And for both ICEs and EVs, many modern cars have “estimated miles remaining until empty” (which is calculated based on driving habits and past data). I think that should also be possible with IoT devices. Now maybe those calculations are better done “on the server” (to save calculation time/power on the device), but without consistent, usable data coming in, those calculations are near impossible.
That’s my lament here: regardless of how battery level is reported, it’s often not defined in any useful way and often has little to do with how much “time you have until empty” (I’ve seen devices that fail at 80% and ones that run fine at 40%, just as an example; knowing this requires a lot of deep knowledge not just about the battery but also about the hardware of the device, which is not something that can be easily figured out).
The end goal here is for the user’s benefit. So that an application with 3 or 30 different kinds of devices can all present a consistent “gas gauge”.
PS - What we’re in the process of doing is to use lab equipment to characterize a sampling of each device, so we can incorporate the “gas gauge” into the application. For now, it seems to be the only viable way to get this information.
One of the pitfalls of the MAC level command is that the system has to send a downlink to request it and then wait for a response. If you have many nodes in an area covered by a gateway, even with some judicious scheduling you could then miss device uplinks whilst the gateway is requesting a particular device to report in. Whereas if you have built it into the firmware, you know what to expect.
There are numerous spreadsheets of varying complexity that allow you to calculate alleged battery life, but they assume each battery pack is a perfect sphere. And the challenge of accelerating battery consumption is that it will actually reduce the battery capacity, but it then becomes tricky to know by how much. It all depends how long you want to wait: drain at 1 mA as per the graph above and you could be getting results in as little as four months’ time, but then all you are doing is confirming the manufacturer wasn’t telling fibs.
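Those spreadsheets mostly boil down to an energy budget along these lines (every figure below is a placeholder; real capacity derates with temperature, discharge rate and age, hence the perfect-sphere caveat):

```python
# Back-of-envelope battery-life estimate - the same sum the spreadsheets do.
CAPACITY_MAH    = 3000   # nominal cell capacity (placeholder)
SLEEP_UA        = 10     # sleep current in microamps (placeholder)
TX_MA           = 40     # average current while awake/transmitting (placeholder)
TX_SECONDS      = 1.5    # radio warm-up + airtime per uplink (placeholder)
UPLINKS_PER_DAY = 24

sleep_mah_per_day = SLEEP_UA / 1000 * 24
tx_mah_per_day    = TX_MA * TX_SECONDS * UPLINKS_PER_DAY / 3600
days = CAPACITY_MAH / (sleep_mah_per_day + tx_mah_per_day)
print(f"~{days / 365:.1f} years, before any derating")
```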
Then there is the tricky question of transmission durations due to deployment. If many devices are close to a gateway, your average power lifespan will be much higher than if you have devices at varying distances, or indeed all quite remote. And if gateways are added or removed, it will change the profile of the deployment. I think it is feasible to pick out from the metadata what the transmission times are and project the devices’ expected lifespans. If you need to, you could account for previous transmission durations.
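The airtime part of that is at least calculable from the metadata (spreading factor and payload size); a sketch using the standard LoRa time-on-air formula from Semtech’s AN1200.13, assuming 125 kHz bandwidth, explicit header and CRC on:

```python
import math

def lora_airtime_ms(payload_bytes: int, sf: int, bw_hz: int = 125_000,
                    cr: int = 1, preamble: int = 8) -> float:
    """Time on air for one uplink; cr=1 means coding rate 4/5,
    low-data-rate optimisation applies at SF11/SF12 on 125 kHz."""
    t_sym = (2 ** sf) / bw_hz * 1000  # symbol time in ms
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0
    payload_syms = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16) /
                  (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + payload_syms) * t_sym

# A 12-byte payload costs roughly 28x the airtime (and radio energy)
# at SF12 compared with SF7:
for sf in (7, 12):
    print(f"SF{sf}: {lora_airtime_ms(12, sf):.0f} ms")
```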
At present I’m just focusing on the battery voltage and, when we hit 1.5 V, using the proportion of how long it has lasted so far against the graph to get a prediction.
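In other words, something along these lines; the fraction of capacity used by the time the cell reaches 1.5 V is a figure you’d read off the discharge graph, and the 0.8 below is just a placeholder:

```python
def predict_remaining_days(days_elapsed: float,
                           fraction_used_at_threshold: float = 0.8) -> float:
    """If the cell hits the 1.5 V threshold after `days_elapsed` days, and the
    discharge graph says that point is ~80% of capacity (assumed figure),
    project the remaining life proportionally."""
    total_days = days_elapsed / fraction_used_at_threshold
    return total_days - days_elapsed

print(f"~{predict_remaining_days(400):.0f} days left")  # 400 days to 1.5 V -> ~100
```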
I’m also, after a few people ignored the project plan, flagging up when a device starts life at alkaline battery voltage levels. The lithium cells start out significantly higher at ~1.8 V, so it’s a reliable indicator, although they do settle down quite quickly to ~1.7 V.
Another consideration is that the battery voltage is affected by temperature:
This device is on the shelf next to my desk in an office that gets a bit warm when the sun shines. You can see that it’s only hundredths of a volt of variation, so I’m putting in a moving average and factoring out the days with a big temperature difference - if you get sun on a device mid-winter, you don’t want the system going into battery alert meltdown the next cloudy day because the voltage appears to have dropped dramatically.
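A minimal sketch of that moving average plus “ignore the sunny days” filtering; the 5 °C band and seven-day window are invented thresholds:

```python
from statistics import mean

def smoothed_voltage(samples, baseline_temp_c: float,
                     band_c: float = 5.0, window: int = 7):
    """samples: (voltage, temperature) pairs, one per day. Drop days whose
    temperature is more than `band_c` from the baseline, then average the
    most recent `window` surviving readings."""
    kept = [v for v, t in samples if abs(t - baseline_temp_c) <= band_c]
    return mean(kept[-window:]) if kept else None

week = [(1.61, 20), (1.60, 21), (1.63, 29),   # sunny spike - dropped
        (1.60, 20), (1.60, 19), (1.59, 21), (1.60, 20)]
print(smoothed_voltage(week, baseline_temp_c=20))  # ~1.60 V
```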
On the contrary, there is a large variation in what it means. But a driver will start to gain some experience with a particular car: subtract how much fuel they then end up putting in from the size of the tank listed in the owner’s manual and they start to know what “E” actually means on that vehicle.
The server really is the only plausible place to do it, if you hope for any accuracy.
Experience has shown over and over that when complex algorithms end up embedded in devices, they end up wrong. If you hope for accuracy, the only real choice is to report raw data and run the algorithm to interpret it on the server where you can tune and revise it, and even go back and look at historic data under a new evaluation.
It is not consistency you need, but rather documentation so that you can choose and refine the algorithm.
Closed algorithms all but inevitably end up wrong, too…