An automatic, secure, distributed, peer-to-peer, time-tested, low-overhead full-mesh VPN with transitive connections (that is, messages are forwarded when there is no direct access between peers), no single point of failure, and the ability to "punch" through NAT - is that possible?
Correct answers:
Unfortunately, little has been published about Tinc VPN on Habr, but a couple of relevant articles can still be found:
Among the English-language articles, the following are worth noting:
The best primary source is the official Tinc man documentation.
So (loosely retelling the official site): Tinc VPN is a service (the tincd daemon) that provides a private network by tunneling and encrypting traffic between nodes. The source code is open and available under the GPL2 license. Like the classic solution (OpenVPN), the virtual network it creates operates at the IP level (OSI layer 3), which means that, in general, no changes to applications are required.
Key Features:
There are two branches of tinc development: 1.0.x (available in almost all repositories) and 1.1 (an eternal beta). This article uses version 1.0.x throughout.
Tinc 1.1.x provides several important new features: perfect forward secrecy, simplified client connectivity (effectively replacing tinc-boot), and a generally more thoughtful design.
However, at the moment the official website marks 1.0.x as the stable release, so before taking advantage of the 1.1 branch you should weigh the pros and cons of using a non-final version.
From my point of view, one of its strongest capabilities is forwarding messages when a direct connection is impossible. Routing tables are built automatically, and even nodes without a public address can relay traffic through themselves.
Consider the situation with three servers (China, Russia, Singapore) and three clients (Russia, China and the Philippines):
Using traffic exchange between Shanghai and Moscow as an example, here is (approximately) how Tinc behaves:
Whenever possible, Tinc tries to establish a direct connection between two nodes behind NAT via hole punching.
Tinc is positioned as an easy-to-configure service. However, something went wrong: to create a new node, at a minimum you have to:

- create the main daemon configuration tinc.conf;
- create the node's own file in the hosts/ directory (its subnet, public key and, for public nodes, the address);
- create the tinc-up interface start script;
- create the tinc-down interface stop script;
- generate the key pair.

A minimal manual setup is sketched below.
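For illustration, here is roughly what this manual setup looks like. The network name (dnet), node name (node1) and addresses are invented for the example and are not values required by tinc:

sudo mkdir -p /etc/tinc/dnet/hosts

# main daemon configuration
cat <<'EOF' | sudo tee /etc/tinc/dnet/tinc.conf
Name = node1
Interface = dnet
EOF

# the node's own host file: its subnet inside the VPN
cat <<'EOF' | sudo tee /etc/tinc/dnet/hosts/node1
Subnet = 172.16.0.1/32
EOF

# interface scripts; tinc passes the interface name in $INTERFACE
cat <<'EOF' | sudo tee /etc/tinc/dnet/tinc-up
#!/bin/sh
ip addr add 172.16.0.1/16 dev "$INTERFACE"
ip link set dev "$INTERFACE" up
EOF

cat <<'EOF' | sudo tee /etc/tinc/dnet/tinc-down
#!/bin/sh
ip link set dev "$INTERFACE" down
EOF

sudo chmod +x /etc/tinc/dnet/tinc-up /etc/tinc/dnet/tinc-down

# generate the RSA key pair; the public key is appended to hosts/node1
sudo tincd -n dnet -K4096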
In addition to this, when connecting to an existing network, you must obtain the existing host keys and provide your own.
That is, for the second node:
For the third:
When using two-way synchronization (for example, unison), the number of additional operations grows to N, where N is the number of public nodes.
We must give the Tinc developers credit: to join the network it is enough to exchange keys with just one of the nodes (the boot node). After the service starts and connects to that peer, tinc learns the network topology and can work with all participants.
However, if the boot node has become unavailable and tinc is then restarted, there will be no way to connect to the virtual network.
Moreover, tinc's enormous capabilities, combined with its academic documentation (well written, but with few examples), leave plenty of room for mistakes.
If we generalize the problems described above and formulate them as tasks, we get:
bootnode - a node with a public address (see above);
Because of point 2, it can be argued that after the key exchange between the boot node and the new node, and after the host connects to the network, distribution of the new key will happen automatically.
It is these tasks that tinc-boot performs.
tinc-boot is a self-contained (apart from tinc itself) open-source application that provides:
The tinc-boot executable consists of four components: a boot node server, a key distribution server, the RPC commands that manage it, and a node generation module.
The node generation module (tinc-boot gen) creates all the files necessary for tinc to start successfully.
Simplified, its algorithm can be described as follows:

1. Determine the node and network names (from the flags or defaults) and generate the key pair.
2. Create the tinc-up, tinc-down, subnet-up and subnet-down scripts.
3. Create the tinc.conf configuration file.
4. Create the node's own file in the hosts/ directory.
5. If a boot node address was given, exchange host files with it over HTTP: the file is encrypted with a key derived from the shared token, the node name is passed in the X-Node header, the received file is saved into hosts/, and a ConnectTo entry is added to tinc.conf.
SHA-256 is used here only to normalize the token into a 32-byte key.
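As a rough illustration of this normalization (the command below only demonstrates the idea and is not part of tinc-boot):

# a token of arbitrary length is reduced to a fixed 32-byte (256-bit) value
echo -n "MY TOKEN" | sha256sum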
For the very first node (that is, when there is no boot address to specify), the exchange with the boot node is skipped; the --standalone flag is used for this.
Example 1 - creating the first public node
The public address is 1.2.3.4
sudo tinc-boot gen --standalone -a 1.2.3.4
The -a flag specifies the public address that will be advertised to the other nodes.
Example 2 - adding a non-public node to the network
The boot node from the example above is used. tinc-boot bootnode (described later) must be running on that host.
sudo tinc-boot gen --token "MY TOKEN" http://1.2.3.4:8655
The --token flag sets the shared secret and must match the token configured on the boot node.
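After generation you still have to start the tinc daemon itself for this network (the network name dnet is assumed here; how exactly the daemon is launched depends on the distribution):

sudo systemctl start tinc@dnet
# or, without systemd:
sudo tincd -n dnet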
The tinc-boot bootnode command starts an HTTP server with an API for the initial key exchange with new clients.
By default it listens on port 8655.
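Since new clients connect to this port from outside the VPN, it has to be reachable on the public address. For example, with ufw (only an illustration, any firewall will do):

sudo ufw allow 8655/tcp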
Simplified, the algorithm can be described as follows:
Together, the primary key exchange process is as follows:
Example 1 - starting the boot node
It is assumed that the node has already been initialized (tinc-boot gen).
tinc-boot bootnode --token "MY TOKEN"
The --token flag sets the authorization token that new clients must present (the same value as in the join example above).
Example 2 - starting the boot node as a service
tinc-boot bootnode --service --token "MY TOKEN"
The --service flag creates and starts a systemd unit, in this case tinc-boot-dnet.service (dnet is the network name). The --token flag has the same meaning as before.
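Whether the unit created by --service is actually running can be checked with the standard systemd tooling (the unit name is taken from above):

sudo systemctl status tinc-boot-dnet.service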
The key distribution module (tinc-boot monitor) starts an HTTP server with an API for exchanging keys with other nodes inside the VPN. It binds to the address assigned by the network (the default port is 1655; several networks will not conflict, since each network has / must have its own address).
The module starts and runs completely automatically: no manual interaction with it is required.
It is started automatically when the network comes up (from the tinc-up script) and stopped automatically when the network goes down (from the tinc-down script).
It supports the following operations (usage is sketched below):

- GET / - returns the node's own host file;
- POST /rpc/watch?node=<>&subnet=<> - start requesting the configuration file of the given node;
- POST /rpc/forget?node=<> - stop requesting it;
- POST /rpc/kill - stop the key distribution module.
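Normally these endpoints are driven by the scripts and commands described below, but they can also be called manually; in this sketch 172.16.0.1 is an assumed VPN address of the node and 1655 is the default port:

# fetch the node's own host file
curl http://172.16.0.1:1655/

# ask the monitor to obtain the configuration of another node (name and subnet are examples)
curl -X POST "http://172.16.0.1:1655/rpc/watch?node=node2&subnet=172.16.0.2/32"

# cancel that request
curl -X POST "http://172.16.0.1:1655/rpc/forget?node=node2"

# stop the key distribution module
curl -X POST "http://172.16.0.1:1655/rpc/kill"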
In addition, every minute (by default) and whenever a new configuration file is received, the saved host files are re-indexed in search of new public nodes. When a node with an Address field is detected, a ConnectTo entry is added to the tinc.conf configuration file so that tinc will try to connect to it after a restart.
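Over time the configuration of a long-running node therefore accumulates entries of the following kind (the node names and the output shown as comments are invented for illustration):

grep '^ConnectTo' /etc/tinc/dnet/tinc.conf
# ConnectTo = gw_moscow
# ConnectTo = gw_singapore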
The commands for requesting (tinc-boot watch) and canceling the request for (tinc-boot forget) another node's configuration file are invoked automatically when a new node appears (from the subnet-up script) and when it disappears (from the subnet-down script), respectively.
When the service is being stopped, the tinc-down script is executed, in which the tinc-boot kill command stops the key distribution module.
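For concreteness, the idea behind this shutdown wiring can be sketched as follows (a simplified illustration, not the literal script generated by tinc-boot gen; INTERFACE is an environment variable provided by tinc):

#!/bin/sh
# tinc-down (sketch): stop the key distribution module, then bring the interface down
tinc-boot kill
ip link set dev "$INTERFACE" down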
This utility was born of the cognitive dissonance between the brilliance of the Tinc developers and the linearly growing complexity of setting up new nodes.
The main ideas in the development process were:
A little chronology:
Installing tinc-boot:
curl -L https://github.com/reddec/tinc-boot/releases/latest/download/tinc-boot_linux_amd64.tar.gz | sudo tar -xz -C /usr/local/bin/ tinc-boot
During development I actively tested it on real servers and clients (the diagram of tinc's operation above is taken from real life). The system now works flawlessly, and all third-party VPN services have been switched off.
The application is written in Go and the code is open under the MPL 2.0 license. The license (loosely translated) allows commercial use (should anyone ever need it) without open-sourcing the resulting product; the only requirement is that changes must be passed back to the project.
Pull requests are welcome.