This is also what I’m unsure of, and I’d love to know what the community thinks. Unfortunately there’s no “LoRaWAN stack” indicator in the join-request message; we have no statistics about this.
Yep. Even if we were routing traffic back from V3 to V2, we wouldn’t make the RX1 delay of 1 second that is used in V2. Gateways would really need to be connected to the cluster where the end device is, to make that short roundtrip.
Also we discussed this internally again today, and what’s also going to be super helpful is a gateway map that shows to which cluster (and version) the gateway is connected.
For now, it’s not necessary to migrate gateways from V2 to V3, so I would suggest waiting with that. New gateways can go to V3 though.
@Johan, thinking further on this today: if a device and its associated app are migrated to V3, will it break any associated integration(s) set up in V2? E.g. if a temp sensor has been running for, say, 2 years dumping data to something like Cayenne for presentation, showing historic trends/analysis, will the same device and application on V3 continue to show in the dashboard, or will a whole other set of migrations and restarts/reconfigs be needed on the associated integrations (possibly doubling the migration workload)? I know of users who are not so interested in hour-by-hour changes but look at long-term averaging and month-by-month, or even quarter-by-quarter or seasonally adjusted, changes to understand what is happening. A sudden break, starting again with no data and no historic context, would be an issue: think landslip monitoring, water-table or climate-change analysis, long-term performance of solar panel monitoring, etc.
Is the change of the RX delay also relevant for OTAA devices? I'm having problems with my first device/application in V3 and think it might be because the device gets no downlink.
As a quick and (very) dirty upgrade strategy for that case: export the devices from V2, import them into V3, and wait for the gateway to be migrated to V3. After the gateway is migrated, only the application side has changed, to the V3 MQTT servers. As long as the gateway is on V2, the devices can keep sending their data; after the gateway migration, the devices send their data via V3 to the application server.
Over the past year or so we have migrated all our private network customers from V2 to V3 backends, and we now run all those customer apps and gateways on a fault tolerant multi-tenant V3 cluster.
I have to say that I have a lot of confidence in V3 and I’m glad to be finally leaving the V2 world behind.
But this migration is going to be challenging for many of us for all the reasons people are mentioning in this thread. The key issue being that application owners and gateway owners are often different people, who may not know each other, and have different priorities.
I’d like to provide my migration learnings to application/device owners in case they’re of use. I’m assuming here that you want to avoid physically visiting every device to reboot it and want to avoid device downtime.
For the purposes of migration, devices fall into a few different categories:
1. Well-behaved OTAA devices with Network Link Check (or similar). These devices will soon realise that they no longer have a valid connection to the network, will reissue an OTAA join-request on their own, and will come up on the new network without any intervention and an outage of only a few packets. When you buy a device, this is the kind you want.
2. OTAA devices without any form of link check, but which can be rebooted or forced to rejoin via downlink. You will be able to get these devices migrated by reconfiguring them as “ABP” devices on V2 (so that they won’t receive a join-accept from the old network) and sending a reboot command to force them to rejoin on the new network.
3. OTAA devices without any link check and no way to remotely reboot them. In this case, you need to preserve the session keys when you register them in V3. By doing this, the device doesn’t even know that it has changed backends and continues sending uplinks as it did before. But now you have other issues to worry about (which is why Johan suggested not using this method). One issue we’ve found (on dynamic band plans like AS923) is that the new network will try to issue new channels to the device, as it doesn’t know the device already has them. This will lead to the device rejecting those MAC commands, causing potential issues. So these channels have to be “hard coded” when the device is registered. You will probably want to undo this hard coding at a later time and reboot the device. This is all time-consuming and complex if you aren’t familiar with V3 (the CLI is your friend!).
4. ABP devices. These are mostly like the previous category. However, if your DevAddrs are not in the TTN range (26xxxx) then they probably won’t traverse Packet Broker, so you may need your gateways to be on the same network as your devices. If you have this problem then you probably want to rethink your device strategy, as the road ahead will be a bumpy ride for you.
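For the DevAddr point in the last category, a quick check can be sketched in a few lines. This is an illustrative snippet, not an official tool; it assumes the TTN community NetID 0x000013, whose 7-bit NwkID occupies the top bits of the DevAddr, which is why community DevAddrs start with 0x26 or 0x27.

```python
def is_ttn_devaddr(devaddr: int) -> bool:
    """Check whether a 32-bit DevAddr falls in the TTN community range.

    TTN's NetID is 0x000013; its 7-bit NwkID (0x13) sits in the 7 MSBs
    of the DevAddr, so community DevAddrs look like 0x26xxxxxx/0x27xxxxxx.
    """
    return (devaddr >> 25) == 0x13

# A DevAddr shown in the console as "26 01 2E 45":
print(is_ttn_devaddr(0x26012E45))  # True: should traverse Packet Broker
print(is_ttn_devaddr(0x07012E45))  # False: outside the TTN range
```

If the function returns False for your ABP fleet, your uplinks will only reach your application via gateways on the same network, as described above.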
Edit: Below, Johan indicates that “the V2 issued DevAddr is not routed to V3”. This means that devices which fall into categories 3 & 4 above are going to need to communicate with gateways on the V3 network.
I may have missed something, and others have probably developed better migration strategies, so feel free to add your experiences too.
Cheers @Maj - still worried about this and the amount of ‘lift & shift’ required across my own and clients’ estates… and did the integrations & data continuity per my above post survive in your cases?
We tend to use external integrations, so other than accounting for the change in JSON structure and field names, everything continued as normal beyond that point. I.e., customers can’t tell when the migration happened.
You would need to configure the same integrations on V3 as you have on V2. Presumably if they are configured the same way the end result will be the same.
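The “change in JSON structure and field names” mentioned above can be handled in one small adapter, so downstream dashboards never notice the migration. The V2 field names (`dev_id`, `payload_fields`) and V3 field names (`end_device_ids`, `uplink_message.decoded_payload`) follow the publicly documented data formats; the normalized output shape itself is just an example.

```python
def normalize_uplink(msg: dict) -> dict:
    """Map a V2 or V3 uplink JSON message onto one common shape."""
    if "end_device_ids" in msg:  # V3 data format
        up = msg.get("uplink_message", {})
        return {
            "device_id": msg["end_device_ids"]["device_id"],
            "port": up.get("f_port"),
            "counter": up.get("f_cnt"),
            "fields": up.get("decoded_payload", {}),
        }
    # V2 data format
    return {
        "device_id": msg["dev_id"],
        "port": msg.get("port"),
        "counter": msg.get("counter"),
        "fields": msg.get("payload_fields", {}),
    }

v2 = {"dev_id": "sensor-1", "port": 1, "counter": 42,
      "payload_fields": {"temp": 21.5}}
v3 = {"end_device_ids": {"device_id": "sensor-1"},
      "uplink_message": {"f_port": 1, "f_cnt": 42,
                         "decoded_payload": {"temp": 21.5}}}
assert normalize_uplink(v2) == normalize_uplink(v3)
```

With an adapter like this in front of the integration, historic V2 data and new V3 data land in the same store in the same shape.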
I have brought about 10 gateways to life and run four applications. As far as the migration is concerned, I see a lot of confusion and uncertainty. As usual, my view might be wrong.
However, let me be very clear:
If the data from old applications and devices will not at least be migrated via the database, or if you don’t offer an easy migration path (I’d even pay 50 EUR), I think I am going to quit TTN altogether.
My applications and devices exist as a series of datapoints in the “old” system.
I expect that I can either transfer or copy these datapoints to the new console, or I expect a download (zip file) that I can then upload into the V3 console, simple as that.
I expect information about the steps I have to take (first copy the data and then delete the old data, or download the old data, delete it, and then upload it into the new system).
Honestly, I don’t like the idea of installing software to do this: these are just data points. Please provide format conversion etc. as a download/export from the V2 system.
I tried the “ttnctl-darwin-amd64.zip” for Mac; it unzips to a 19.5 MB binary. In my book, script files are much smaller. I have unzipped it four times now and only once did it look like a shell script.
Why don’t you install the shell script on your machines to provide downloadable JSON for everyone?
Hate to keep raising problems @johan - guess @Maj will think me a ‘whingeing Pom’, to borrow a cricketing term! - but I’m playing devil’s advocate here and trying to conceive solutions/mitigation strategies… The user may not have control of the GWs or own them; also, if they do have control, simply migrating the GWs to V3 switches off other V2 users’ nodes/applications covered by them - not very community-spirited.
How can we co-ordinate these activities and transitions? Some users - especially many end users - may not even be aware of the pending transition and behind-the-scenes activity; they just receive a service, or the output from a dashboard integration, perhaps on a shared URL link… then suddenly one day it stops working/updating…
In @esairq’s case above, clearly he didn’t get the telegram saying “do x before doing y”, and he is likely only the first of many. We need to quickly come up with a very simple (so people like me can do it!) step-by-step guide and publish it professionally and well signposted, or the community is going to get very fragmented & very messy very quickly (and the forum likely won’t be a nice place to be for a while!)
The moderators I’m sure will curate and clean up and guide where they can but this could get overwhelming I fear…
From what I am getting on back channels, three main issues look to be the biggest concerns. First, timing: with RO from April, even if the actual sunset is much later, everyone is dealing with the pandemic and thinking this is too much, too quickly. Second, the lack of V2-to-V3 dataflow, at least during the transition phase; it may not be technically possible, or may be a big lift to fix, but can something be done before chaos lets loose? Many are only now getting the transition message, and looking far out to sea, the tsunami is already building. That leads to the third concern: GWs that are relied on may switch before users are ready, have time, or can access nodes to effect change or update. Many indeed won’t even know which of the four categories of device you called out their kit falls into, whether what they have in their hands or deployed in the field was built or bought off the shelf, a screwdriver build, software they compiled themselves, etc. They simply may not know where to start!
Whilst we can’t cover all the self-builds and hacks people have put together & deployed on TTN, perhaps we can post a traffic-light system for popular off-the-shelf systems: Green - will transition with no issue; Amber - may need a reset (OTA or power cycle) or a minor intervention, some ttnctl command-line hacking or whatever; Red - oops, you’ve got a potential problem/may be stuffed and need to work hard! And apply that to the likes of, say (just for examples), Laird RS1xx T&H, Dragino LHT65, ERS-xyz, MCF88-abc, DecentLabs-ghi, etc… (Maybe start by labelling the items listed in the TTN/TTI Marketplace?!) That way we can focus efforts and manage by exception/scale of problem…? Perhaps this is something we can recruit the product manufacturers/suppliers into doing?
Instructions appear to be written for migration to thethingsindustries.com.
There is no mention of The Things Network or thethings.network URLs.
“Configure V2 CLI
…
3. Create a file called .ttnctl.yml in your home directory
…
For Windows users make sure to use the --config flag with the full path to the configuration file: ttnctl.exe --config <fullpath\>/config.yml
…”
The title says “configure V2 CLI” but the config file appears to contain V3 related settings.
Why use a different config file name for Windows? The different name is confusing (only the leading dot should be removed for Windows).
The contents specified for .ttnctl.yml will not work for Windows.
Windows requires a backslash (not a forward slash) as the path separator, and the default user folder of a Windows user is C:\Users\<user_name>.
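As an aside on the cross-platform path question: resolving “a file in the home directory” portably is a one-liner in most languages, which is what the instructions could lean on instead of giving two different filenames. A minimal Python sketch (the filename just mirrors the `.ttnctl.yml` from the quoted instructions):

```python
from pathlib import Path

# Resolve the config file in the user's home directory, regardless of OS.
# On Windows, Path.home() is typically C:\Users\<user_name>;
# on Linux it is /home/<user_name>. pathlib picks the separator itself.
config_path = Path.home() / ".ttnctl.yml"
print(config_path)
```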
What is folder /home/<user_name>/.ttnctl used for?
Storing data that is internally used by the ttnctl program?
I ran ttnctl on Windows (without first creating a config file, due to the incomplete instructions). As a result, a .ttnctl folder was automatically created in my current folder, which can actually be any folder. This is probably not the preferred way.
One needs to specify <domain_id> in several places.
Nowhere is it mentioned what value must be used for <domain_id>.
Can we simply use URLs like [*.]<domain_id>.thethings.industries and only need to use some <domain_id> specific to The Things Network V3, or do we need completely different URLs?
This is unclear because the (eu) URL for The Things Network V3 Console is eu1.cloud.thethings.network/console which does not match any of the <domain_id>.thethings.industries URLs specified for the config file.
The config file mentions “handler-id: <domain_id>-handler”.
Is this a V3 handler-id? “Configure V2 CLI” gives the impression it should be a V2 handler and the IDs generated by the command ttnctl discover handler are very different.
Hi @johan, I will be waiting for the nam1 deployment to be ready to start the migration of the devices and gateways here in Colombia. Could you explain the effect of the RX delay change on the operation of the devices?
I have managed to deploy the RAK7258 gateway on V3. It shows as connected, but no live data is shown, even though in the backend of the RAK7258 I can see incoming data.
This is indeed a real issue. The reason why all gateway coverage of V2 is available in V3, is really to remove the immediate need to reconfigure gateways. There is no reason now to migrate gateways.
If we need to clarify this further, I’m happy to do so, and I’d like to know in which way you suggest.
The increased RX1 delay will make the downlink path more reliable. If gateways have a high latency backhaul or if you want to have application-layer downlink responses in the RX1 or RX2 window, this will now most likely all work. When devices join The Things Network V3, they will automatically get the new RX1 delay.
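To illustrate why the longer RX1 delay helps: the network has to deliver the downlink to the gateway before the RX1 window opens, so whatever is left of the delay after the backhaul round trip is the budget for routing and application logic. The latencies below are hypothetical numbers for illustration, not measurements of any real deployment.

```python
def downlink_budget_ms(rx1_delay_s: float, backhaul_rtt_ms: float) -> float:
    """Time (ms) left for network/application logic after subtracting the
    gateway backhaul round trip from the RX1 window deadline."""
    return rx1_delay_s * 1000.0 - backhaul_rtt_ms

# V2-style 1 s RX1 delay over a hypothetical 700 ms cellular round trip:
print(downlink_budget_ms(1, 700))  # 300.0 ms left - very tight
# A 5 s RX1 delay over the same backhaul:
print(downlink_budget_ms(5, 700))  # 4300.0 ms left
```

This is also why gateways on high-latency backhaul, mentioned above, benefit the most: the budget that was nearly consumed by the round trip alone under a 1-second delay becomes comfortable under a longer one.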