TTN3 components on dedicated servers and in a cluster

Hello, everyone,

I would like to run TTN3 in production. I went through the instructions at https://thethingsstack.io/v3.8.6/getting-started/installation/ and managed to install TTN3. What I don’t like about this installation is that all components run on a single server. I would instead like a separate, dedicated server for each component, like this:

  • join server
  • application server
  • gateway server
  • network server
  • identity server
  • redis server
  • cockroach server

For the databases (Redis & CockroachDB) this is quite simple, as I can just install them on separate servers and specify the IP address and the port in the corresponding YAML file. However, I do not know how to do this for all the other components.
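For reference, a minimal sketch of what that external-database configuration could look like in the stack’s YAML config; the hostnames and credentials below are placeholders, not from this thread:

```yaml
# Point the Identity Server at an external CockroachDB and the stack at an external Redis.
is:
  database-uri: "postgres://root@cockroach.example.com:26257/ttn_lorawan?sslmode=disable"  # placeholder host
redis:
  address: "redis.example.com:6379"  # placeholder host
```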

Additionally, I could see in the documentation (https://thethingsstack.io/v3.8.6/getting-started/installation/) that it is possible to run TTN3 as a cluster. For that, keys have to be generated and shared, but I don’t know where or how to generate this key.
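For reference, a sketch of how such a cluster key might be generated and configured. The key is a hex-encoded value shared by all instances (I believe 32 bytes, but check the configuration reference); the `openssl` command and placeholder value are illustrative assumptions, not from this thread:

```yaml
# Generate the key once, e.g. with: openssl rand -hex 32
# Then configure the same value on every server that should join the cluster.
cluster:
  keys:
    - "<32-byte-hex-key>"   # placeholder; paste the generated hex string here
```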

Do any of you have experience with this and can give me tips on running TTN3 with dedicated servers?

Thank you very much.

Christoph

The Things Stack comes with basic support for running services on different servers. You can find the configuration options for that at https://thethingsstack.io/latest/reference/configuration/the-things-stack/#cluster-options.
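As a rough sketch of what those cluster options could look like in each instance’s YAML config: the hostnames below are placeholders, and port 8884 is an assumption based on the default gRPC-over-TLS port. If memory serves, you can also pass component names to `ttn-lw-stack start` (for example `ttn-lw-stack start ns`) so that each machine only runs its own component.

```yaml
cluster:
  # Where this instance can find its peers (placeholder hostnames; gRPC/TLS port assumed).
  identity-server: "is.example.com:8884"
  gateway-server: "gs.example.com:8884"
  network-server: "ns.example.com:8884"
  application-server: "as.example.com:8884"
  join-server: "js.example.com:8884"
  # Shared cluster key(s), generated as sketched earlier and identical on every instance.
  keys:
    - "<32-byte-hex-key>"
```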

High Availability (multiple instances of each component) is not included in the open source edition of The Things Stack. If you need this, you can take a look at The Things Enterprise Stack.


Hi @christophmerscher, were you able to make a setup as you described it?

Hi everyone, currently I have installed The Things Stack on 3 local VMs by following the instructions to install with Docker [here](https://www.thethingsindustries.com/docs/the-things-stack/host/docker/), with each machine having its own private IP address. I would like to set the 3 machines (let’s name them 107, 108 and 109) to work together in a cluster on-premises, with 107 being the NS, 108 the AS, and 109 the JS. I came across this post and was wondering how I could do it, either with the cluster options or by editing the ttn-lw-stack-docker YAML file.

What I have tried so far:

  • I have tried editing the Console component URLs section of the ttn-lw-stack-docker YAML config file, with 107 as the NS, 108 as the AS, and 109 as the JS (see the sketch after this list). I thought this worked initially and was able to add the gateway in the web console. However, when I tried to add an end device (a sensor), the console screamed a Network Server address mismatch error (this happened when I tried adding the device on the AS console), and likewise an Application Server address mismatch error (when I tried adding the device on the NS console).
    [screenshot: the address mismatch error]

  • I have also tried adding the Cluster options configuration into the ttn-lw-stack-docker YAML file, but I couldn’t really see it affecting anything.
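For reference, a sketch of the Console component URLs being described, following the structure of the example ttn-lw-stack-docker YAML file (the hostnames are placeholders; in the setup above, ns would point at 107, as at 108 and js at 109, with the remaining components on their own host). As far as I understand, the “address mismatch” errors mean that the server addresses stored on the device registration do not match the addresses the cluster is configured with, so the Console URLs, the cluster options and the per-device server addresses all have to agree.

```yaml
console:
  ui:
    # Each entry tells the Console which host serves that component's API (placeholder hostnames).
    is:
      base-url: "https://is.example.local/api/v3"
    gs:
      base-url: "https://gs.example.local/api/v3"
    ns:
      base-url: "https://ns.example.local/api/v3"
    as:
      base-url: "https://as.example.local/api/v3"
    js:
      base-url: "https://js.example.local/api/v3"
```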

Hence my reply here, hoping someone can guide me to use the configurations correctly. Examples of how to edit the configurations would be a great help! I have been stuck on this for the past 2 weeks and have tried to read up on everything I could, but to no avail.

P.S. I do have to say that some of the configuration documentation is a little lacking in detail; examples of how to edit or use the options would be great, but that’s not of the utmost importance here HAHA!

No, let’s name them NS, AS and JS - sooooo much easier.

So effectively you have three full copies of the stack with addresses pointing every which way - there is no world where you should have the console on more than one IP address for the whole setup.

A handful have tried, but I’ve not yet heard of anyone taking TTS OS and turning it into a cluster in this way, and architecturally speaking I’m not sure how this would work. Some have moved the big, highly cluster-able bits - like Postgres and Redis - onto different machines, but overall you are way, way into serious SysOps territory, which is a whole subject unto itself. Most peeps here use TTN/TTI and have their hands full with LoRaWAN alone.

And, as with all standard X-Y questions, you’ve not told us what you want to achieve, just asked how to fix your setup - why do you need to cluster the separate components?

Hi descartes, thanks for your prompt reply! I have seen your replies all around the forum, and am aware that you would be able to provide great answers to my question.

Of course, that wouldn’t be a problem - it was just for easier reference. Mistake on my part.

Yes, that is correct, I have 3 full copies of the stack running on 3 different machines. I, too, found it illogical to have the console on all 3 IP addresses for the whole setup, but I wanted to try whether I could set them up according to my expectations first, so here we are now. The 3 copies are technically pointing to each other (somewhat?): in each ttn-lw-stack-docker YAML file, my NS, AS and JS use the 3 different addresses, while the other servers such as the IS, GCS, etc. use their own machine’s IP address. (Do let me know if you would like to see the YAML files!)

Yes, my bad on this part. Eventually, I would like my entire setup to look like the image below, except that the JS, NS and AS are on 3 different machines while still working together as one setup. This is so that I would be able to see the key exchanges (would I be able to?) from the JS to both the NS and the AS, and technically achieve the separation of the 3 servers.
[image: target architecture diagram]

Once again, thank you for replying so promptly! Do let me know what other information I am missing, I’ll reply ASAP with the missing info.

Why?

Why?

Is this homework?

If you want to peer under the hood, just add some logging to the source and run it on one machine.

Unfortunately, it isn’t homework (homework shouldn’t be THIS difficult). My workplace is thinking of using TTS on-premises and splitting the 3 servers across 3 machines, so that the deployment matches the architecture diagram.

I have tried using DEBUG-level logging, but it doesn’t really show the key exchanges, or I might have missed them (more likely the latter, unfortunately).
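For reference, a minimal sketch of turning up the stack’s log verbosity in the YAML config (whether the join/key handling actually shows up at this level is a separate question):

```yaml
log:
  level: debug   # default is info; debug is the most verbose level
```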

If there is no way of doing this, then I guess I’ll just report as such. Thanks for your help!

ROFL - money saving does not compute with this.

Use TTS OS as documented for testing / very small scale where you don’t care if it goes boom = yes

Use commercially? Not a good idea at all. Once you price in sysops 24/7 support, renting a TTI instance for a mere $/€/£190 for 1,000 devices is insanely good value. No upgrade drama. Comes with an SLA. No hardware to rent, certainly not three machines.

If your setup has a glitch, will you be posting on here asking for help getting it running again?

My normal solution when I bork a TTS OS setup is to use the backups & the exported device data to rebuild. Downtime is measured in hours, not minutes. Or never ever upgrade and run it on mirrored disks.

Running a high-performing database or some such is enough work. Two databases and a complex piece of code? Nah, life is too short.

@rish1 can advise on getting going.

Just an observation on this sort of situation.

Setting up an incoming processor of uplinks from various sources, it just wouldn’t post the data into ThingsBoard.

I could do it just fine from every workstation in the office, but the processor VM wasn’t able to talk to the ThingsBoard VM.

But actually, they aren’t VMs, they are LXC containers, and by default they are configured not to allow cross-traffic. So after reading all the docs on the processor framework’s comms and ThingsBoard’s docs, and looking at logs a lot whilst triggering endless cURL tests, I stepped back a bit and had my Eureka moment: allow them to talk on the relevant port and it just worked.

This sort of stuff is a sweat - this config is fine in the office now that I have it working. But is it a good idea out in the real world? I’d recommend separate VMs on Linode or something.