tinc-boot - full-mesh network without pain







Automatic, secure, distributed, with transitive connections (that is, forwarding messages when there is no direct access between subscribers), without a single point of failure, peer-to-peer, time-tested, with low resource consumption, a full-mesh VPN network able to "punch" through NAT: is that possible?








Tinc description



Unfortunately, little has been published about Tinc VPN on Habr, but a couple of relevant articles can still be found:









Among the English-language articles, the following stand out:









The best primary source is the official Tinc man documentation.







So (freely retold from the official site): Tinc VPN is a service (the tincd daemon) that provides a private network by tunneling and encrypting traffic between nodes. The source code is open and available under the GPL2 license. Like the classic solution (OpenVPN), the virtual network it creates operates at the IP level (OSI layer 3), which means that, in general, no changes to applications are required.







Key Features:

  • encryption, authentication, and compression of all traffic;
  • automatic full-mesh routing: traffic between nodes goes directly whenever possible;
  • NAT traversal;
  • the ability to bridge Ethernet segments;
  • support for many operating systems (Linux, FreeBSD, macOS, Windows, and others).

There are two development branches of tinc: 1.0.x (packaged in almost all repositories) and 1.1 (an eternal beta). This article uses version 1.0.x throughout.







Tinc 1.1.x provides several key new features: perfect forward secrecy, simplified client connection (effectively replacing tinc-boot), and a generally more thought-out design.



However, the official website currently designates 1.0.x as the stable version, so before adopting the advantages of the 1.1 branch, you should weigh them against the drawbacks of running a non-final release.

From my point of view, one of tinc's strongest capabilities is forwarding messages when a direct connection is impossible. Routing tables are built automatically, and even nodes without a public address can relay traffic through themselves.













Consider the situation with three servers (China, Russia, Singapore) and three clients (Russia, China and the Philippines):









Using traffic exchange between Shanghai and Moscow as an example, tinc behaves approximately as follows:







  1. Normal situation: Moscow <-> russia-srv <-> china-srv <-> Shanghai
  2. The regulator blocks the connection to China: Moscow <-> russia-srv <-> Manila <-> Singapore <-> Shanghai
  3. (after 2) if the server in Singapore fails, traffic switches to the server in China, and vice versa


Whenever possible, tinc attempts to establish a direct connection between two nodes behind NAT by hole punching.







A brief introduction to tinc configuration



Tinc is positioned as an easy-to-configure service. However, something went wrong: to create a new node, at a minimum you have to:

  1. create the configuration directory structure (/etc/tinc/<network>/hosts);
  2. write the main configuration file tinc.conf (node name, interface, whom to connect to);
  3. write the node's own host file (its public address and subnet);
  4. create the tinc-up and tinc-down scripts that configure the virtual interface;
  5. generate a key pair.

In addition, when connecting to an existing network, you must obtain the host keys of the existing nodes and provide your own.
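For illustration, here is a minimal sketch of setting up a standalone node by hand (the network name mynet, node name node1, and all addresses are assumptions for the example; the layout follows standard tinc 1.0 conventions):

# configuration layout for network "mynet"
sudo mkdir -p /etc/tinc/mynet/hosts

# main daemon configuration
sudo tee /etc/tinc/mynet/tinc.conf >/dev/null <<'EOF'
Name = node1
Interface = tinc0
EOF

# this node's host file: public address and VPN subnet
sudo tee /etc/tinc/mynet/hosts/node1 >/dev/null <<'EOF'
Address = 1.2.3.4
Subnet = 172.16.0.1/32
EOF

# interface bring-up script
sudo tee /etc/tinc/mynet/tinc-up >/dev/null <<'EOF'
#!/bin/sh
ip addr add 172.16.0.1/16 dev "$INTERFACE"
ip link set "$INTERFACE" up
EOF
sudo chmod +x /etc/tinc/mynet/tinc-up

# generate the RSA key pair (the public key is appended to hosts/node1)
sudo tincd -n mynet -K

Joining an existing network adds the key exchange on top of this.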







That is: for the second node, one key exchange (with the first node) is required; for the third, exchanges with both existing nodes, and so on.

When using two-way synchronization (for example, unison), the number of additional operations grows to N, where N is the number of public nodes.







We must give the tinc developers their due: to join the network, it is enough to exchange keys with just one of the nodes (the bootnode). After starting the service and connecting to that participant, tinc obtains the network topology and can work with all subscribers.

However, if the boot host becomes unavailable and tinc is restarted, there is no way it will connect to the virtual network.

Moreover, tinc's enormous capabilities, combined with the academic style of its documentation (well written, but with few examples), leave a wide field for making mistakes.







Reasons to create tinc-boot



If we generalize the problems described above and formulate them as tasks, then we get:







  1. the ability to create a new node with minimal effort;

    • ideally, it should be possible to hand an average specialist (a junior admin) one short command line to create a new node and join the network;
  2. automatic distribution of keys between all active nodes;
  3. a simplified key-exchange procedure between the bootnode and a new client.


A bootnode is a node with a public address (see above).

Given requirement 2, we can state that once keys have been exchanged between the bootnode and the new node, and the node has connected to the network, distribution of the new key will happen automatically.







It is these tasks that tinc-boot performs.







tinc-boot is a self-contained (apart from tinc itself) open-source application that provides:

  • generation of a new node's configuration, including keys;
  • a simplified key exchange with a boot node;
  • automatic distribution of keys between active nodes.

Architecture



The tinc-boot executable consists of four components: a bootnode server, a key distribution server, RPC commands to manage it, and a node generation module.







Node Generation Module



The node generation module (tinc-boot gen) creates all the files necessary for tinc to run successfully.







Simplified, its algorithm can be described as follows:







  1. Determine the node name, network, IP parameters, port, subnet mask, etc.
  2. Normalize them (tinc places limits on some values) and generate any that are missing
  3. Validate the parameters
  4. If necessary, install tinc-boot into the system (can be disabled)
  5. Create the tinc-up, tinc-down, subnet-up, and subnet-down scripts
  6. Create the tinc.conf configuration file
  7. Create the host file in hosts/
  8. Generate the key pair
  9. Exchange keys with the bootnode:

    1. Encrypt and sign its own host file, together with a random initialization vector (nonce) and the host name, using xchacha20poly1305, where the encryption key is the SHA-256 hash of the token (see the note and example below)
    2. Send the data to the bootnode over HTTP
    3. Decrypt the received answer and the X-Node header (which contains the boot node's name) using the original nonce and the same algorithm
    4. On success, save the received key in hosts/ and add a ConnectTo entry to the configuration file (i.e., a recommendation on where to connect)
    5. Otherwise, take the next address in the boot-node list and repeat from step 2
  10. Print recommendations for starting the service


The SHA-256 transformation is used only to normalize the key to 32 bytes.
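For example, the 32-byte key for the token used in the examples below can be reproduced like this (assuming the token bytes are hashed as-is, without a salt):

# the 64 hex characters are the 32 bytes used as the xchacha20poly1305 key
echo -n "MY TOKEN" | sha256sum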

For the very first node (that is, when there is nothing to specify as the boot address), step 9 is skipped; use the --standalone flag.







Example 1 - creating the first public node







Suppose the public address is 1.2.3.4:









sudo tinc-boot gen --standalone -a 1.2.3.4











Example 2 - adding a non-public node to the network







The boot node from the example above is used. tinc-boot bootnode must be running on that host (described below).







sudo tinc-boot gen --token "MY TOKEN" http://1.2.3.4:8655











Bootstrap module



The tinc-boot bootnode command starts an HTTP server with an API for the primary key exchange with new clients.







By default, it uses port 8655.







Simplified, the algorithm can be described as follows:







  1. Accept a request from a client
  2. Decrypt and verify it with xchacha20poly1305, using the initialization vector passed along with the request, where the encryption key is the SHA-256 hash of the token
  3. Validate the name
  4. Save the host file, unless a file with the same name already exists
  5. Encrypt and sign its own host file and name using the algorithm described above, and return them
  6. Go back to step 1


Taken together, these two algorithms make up the primary key exchange process between a new node and the boot node.













Example 1 - starting the boot node







It is assumed that the node has already been initialized (tinc-boot gen).







tinc-boot bootnode --token "MY TOKEN"











Example 2 - starting the boot node as a service







tinc-boot bootnode --service --token "MY TOKEN"











Key distribution module



The key distribution module (tinc-boot monitor) starts an HTTP server with an API for exchanging keys with other nodes inside the VPN. It binds to the address assigned within the network (the default port is 1655; several networks do not conflict, since each network has, and must have, its own address).







The module starts and runs completely automatically: there is no need to interact with it manually.







This module starts automatically when the network comes up (from the tinc-up script) and stops automatically when it goes down (from the tinc-down script).







The following operations are supported:









In addition, every minute (by default), and whenever a new configuration file is received, the saved host files are re-indexed to look for new public nodes. When nodes with an Address directive are detected, an entry recommending a connection on restart is added to the tinc.conf configuration file.







Key Distribution Module (Management)



The commands for requesting (tinc-boot watch) and canceling the request for (tinc-boot forget) a configuration file from other nodes are executed automatically when a new node appears (the subnet-up script) and disappears (the subnet-down script), respectively.







When the service stops, the tinc-down script is executed, in which the tinc-boot kill command stops the key distribution module.







In lieu of a conclusion



This utility was born of the cognitive dissonance between the brilliance of the tinc developers and the linearly growing complexity of setting up new nodes.







The main ideas in the development process were:









A little chronology:









During development, I tested actively on real servers and clients (the routing example above is taken from real life). The system now works flawlessly, and all third-party VPN services have been switched off.







The application code is written in Go and is open under the MPL 2.0 license. The license (loosely retold) allows commercial use (if anyone suddenly needs it) without open-sourcing the resulting product. The only requirement is that changes to the project itself must be contributed back.







Pull requests are welcome.







Useful links





