About the (specific) hybrid solution you described:
In my opinion, this “hybrid” approach is not the best idea, for the following reasons:
(1) ABP should only be used for specific use cases and comes with quite a few limitations. Of course that doesn’t really matter here, because the devices don’t implement LoRaWAN anyway.
(2) Sending an application payload on port 0 will cause really strange things to happen on LoRaWAN-compliant servers, because they will try to interpret it as MAC commands (see the sketch below).
(3) Removing the option of downlink means that you will never be able to send instructions to the devices.
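To make point (2) a bit more concrete, here is a minimal sketch of the dispatching a compliant server does on every uplink; it is not taken from any real network server, and the function names are just assumptions for the example:

```python
# Rough sketch (not any particular network server's code) of how a
# LoRaWAN-compliant server decides what an uplink payload means based on FPort.
# The function names are assumptions made up for this example.

def dispatch_uplink(f_port: int, frm_payload: bytes):
    """Route an uplink FRMPayload the way LoRaWAN prescribes, based on FPort."""
    if f_port == 0:
        # FPort 0 is reserved for MAC commands, so arbitrary sensor bytes sent
        # here get parsed as MAC commands and can trigger unexpected MAC-layer
        # behaviour or simply be rejected.
        return parse_mac_commands(frm_payload)
    # Any other FPort (1..223 in practice) carries application data.
    return deliver_to_application(f_port, frm_payload)

def parse_mac_commands(payload: bytes):
    # Placeholder: a real server walks the bytes as command ID + arguments.
    return ("mac", payload)

def deliver_to_application(f_port: int, payload: bytes):
    return ("app", f_port, payload)
```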
On point (3): no downlink means that you can no longer apply Adaptive Data Rate (ADR). No ADR means that you can’t tell the device that it can transmit at a faster data rate or lower power to save energy. No downlink also means that you can’t optimize the application itself: you won’t be able to tell the device to send more often if you want more detailed telemetry (assuming it’s a sensor), or less frequently if you want to save some more energy.
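As a rough illustration of what downlinks buy you on the device side (the command byte and handler names below are made up for this sketch, not taken from any real stack):

```python
# Sketch of the two kinds of downlink-driven adjustments mentioned above.
# All names and the 0x01 command byte are assumptions for illustration only.

class Device:
    def __init__(self):
        self.data_rate = 0          # e.g. SF12: slow but long range
        self.tx_power_dbm = 14
        self.report_interval_s = 600

    def on_adr_request(self, data_rate: int, tx_power_dbm: int):
        # Network-initiated ADR: the server has seen enough link margin and
        # tells the device to use a faster data rate and/or lower TX power,
        # which reduces airtime and energy per message.
        self.data_rate = data_rate
        self.tx_power_dbm = tx_power_dbm

    def on_app_downlink(self, payload: bytes):
        # Application-level downlink: e.g. first byte selects a new reporting
        # interval, trading telemetry detail against battery life.
        if payload and payload[0] == 0x01:
            self.report_interval_s = int.from_bytes(payload[1:3], "big")

dev = Device()
dev.on_adr_request(data_rate=5, tx_power_dbm=8)   # e.g. SF7, lower power
dev.on_app_downlink(bytes([0x01, 0x00, 0x3C]))    # report every 60 s
```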
Then on plain LoRa vs LoRaWAN:
LoRa is great and in some cases you don’t need LoRaWAN. Nothing stops you from deploying a solution that uses plain LoRa and drops the WAN. You don’t have to change anything in the gateways, because they simply catch LoRa packets from the air and forward them to a server without knowing what’s in them; gateways don’t even understand LoRaWAN. If you don’t need the benefits of LoRaWAN, you can simply use the LoRa radio protocol, build your own MAC layer on top of it, and implement that in your devices and your server. Your deployment will probably look exactly the same: a star-of-stars topology where many end devices send to a couple of gateways that all forward to one server.
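If you go that route, the “build your own MAC layer” part ends up looking something like the minimal sketch below: a homegrown frame format with addressing, a counter and an integrity check. Field sizes and names are arbitrary assumptions, and it deliberately leaves out everything else you would still need (encryption, deduplication across gateways, acknowledgements, and so on):

```python
# Minimal homegrown framing on top of raw LoRa; purely a sketch of the idea.
import struct
import zlib

def encode_frame(device_id: int, counter: int, payload: bytes) -> bytes:
    header = struct.pack(">IH", device_id, counter)   # 4-byte address, 2-byte counter
    crc = struct.pack(">I", zlib.crc32(header + payload))
    return header + payload + crc

def decode_frame(frame: bytes):
    header, payload, crc = frame[:6], frame[6:-4], frame[-4:]
    if struct.pack(">I", zlib.crc32(header + payload)) != crc:
        raise ValueError("bad CRC")
    device_id, counter = struct.unpack(">IH", header)
    return device_id, counter, payload

# The gateway just forwards raw LoRa frames; your own server decodes them:
frame = encode_frame(device_id=42, counter=7, payload=b"\x01\x23")
print(decode_frame(frame))
```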
That may work just fine for some use cases, but I think LoRaWAN gives you quite a few things that will make your solution a lot more scalable than plain LoRa would.
Most importantly, you don’t have to re-invent the wheel, but can instead build your solution on top of existing networks. That can be TTN’s free and open community network, or one of the paid networks deployed by telecom operators all over the world. If you want a private network, there are many commercial and open-source network servers around, so you won’t have to build something from the ground up. Being fully LoRaWAN-compliant (and using OTAA) means that you can switch to a different LoRaWAN network operator (or from/to your own private network) whenever you want. This also gives you more flexibility during development, because you can start building on top of TTN’s free public network, then switch to an operator that gives you better SLAs, and later move to a private server behind your client’s firewall.
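The reason OTAA keeps you portable is that the device itself only carries its identifiers and root key; moving to another network means registering those same values with the new network server, not changing the firmware. A sketch with dummy values:

```python
# The only network-related configuration an OTAA device carries.
# All values below are dummies for illustration.
DEV_EUI = "0000000000000001"                   # dummy device EUI
JOIN_EUI = "0000000000000002"                  # a.k.a. AppEUI in LoRaWAN 1.0.x (dummy)
APP_KEY = "00112233445566778899AABBCCDDEEFF"   # dummy root key
```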
And finally, on battery life:
I don’t think those receive windows have such a big impact on the energy consumption. The windows are just long enough to detect the presence of a preamble for a downlink message; if you’re close enough to the gateway (which you probably are in your use case), that shouldn’t be too hard to detect. Maybe @telkamp or @matthijs can give some more details about the energy consumption of the RX windows.
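To put a rough number on it, here is a back-of-the-envelope comparison of one uplink against the two Class A receive windows. All figures are assumptions (ballpark values for an SX127x-class radio), not measurements, and the outcome obviously shifts with data rate, TX power and the actual radio:

```python
# Back-of-the-envelope estimate only; every figure below is an assumption.
RX_CURRENT_MA = 11.0    # assumed receive current
TX_CURRENT_MA = 40.0    # assumed transmit current at ~14 dBm
RX_WINDOW_MS = 15.0     # assumed: a few preamble symbols, then back to sleep
TX_AIRTIME_MS = 60.0    # assumed airtime for a short uplink at SF7/125 kHz

def energy_mj(current_ma: float, duration_ms: float, volts: float = 3.3) -> float:
    return current_ma * volts * duration_ms / 1000.0  # millijoules

uplink = energy_mj(TX_CURRENT_MA, TX_AIRTIME_MS)
rx_windows = 2 * energy_mj(RX_CURRENT_MA, RX_WINDOW_MS)   # RX1 + RX2
print(f"uplink ~{uplink:.1f} mJ, both RX windows ~{rx_windows:.1f} mJ "
      f"({rx_windows / uplink:.0%} of the uplink)")
```

With these assumed numbers the two windows add something on the order of 10–15% of the uplink energy, and when no preamble is detected the radio goes back to sleep right after the window.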