SaltStack Topologies

The most basic and most common Salt topology consists of a single Master node controlling a group of Minion nodes. The Salt Master runs two network services. The first is the ZeroMQ PUB system, over which the Master publishes commands to its Minions. By default this service runs on port 4505 and can be configured via the publish_port option in the master configuration. The second is the ZeroMQ REP system, a separate interface used for all bidirectional communication with minions, such as returning job results. By default it binds to port 4506 and can be configured via the ret_port option in the master configuration.
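
For reference, both options can be set explicitly in the master configuration file; a minimal excerpt using the default values described above:

# /etc/salt/master
publish_port: 4505    # ZeroMQ PUB channel, master publishes jobs to minions
ret_port: 4506        # ZeroMQ REP channel, bidirectional minion communication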

Single master topology

Alternative topologies

Apart from running Salt in a single-Master setup, Salt supports several alternative topologies.

Standalone minions

A completely decentralised way of running Salt services. The minion does not contact a master at all and compiles all states and modules locally.

Standalone minions topology

Since the Salt minion contains such extensive functionality, it can be useful to run it standalone. In this mode, the formulas and pillars that would normally live on the master must be present on the minion node itself. A standalone minion can be used to do a number of things: stand up a master server via states (salting a Salt Master), run salt-call commands on a system without connectivity to a master, and run masterless states, i.e. states applied entirely from files local to the minion.
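
A minimal sketch of a masterless minion configuration, using standard minion options and assuming the conventional /srv paths:

# /etc/salt/minion
file_client: local          # read states and pillar from the local filesystem
file_roots:
    base:
        - /srv/salt
pillar_roots:
    base:
        - /srv/pillar

With this in place, states can be applied locally with salt-call --local state.apply.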

Active-active or failover multi-masters

Salt has the ability to connect minions to multiple masters. The multi-master system allows for redundancy of Salt masters and facilitates multiple points of communication out to minions. When using a multi-master setup, all masters are running hot, and any active master can be used to send commands out to the minions.
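
On the minion side, multi-master is configured by listing the masters; a sketch with two hypothetical master hostnames, where the optional failover settings switch the minion from connecting to all masters at once to connecting to one at a time:

# /etc/salt/minion
master:
    - master1.example.com
    - master2.example.com

# optional: failover instead of active-active
# master_type: failover
# master_alive_interval: 30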

Multi-masters topology

For handling minion metadata on multiple masters, a “Pluggable Minion Data Cache” was introduced. The minion data cache contains the Salt Mine data, minion grains, and minion pillar information cached on the Salt Master. By default, Salt uses the localfs cache module, but other external data stores can be used instead. Using a pluggable minion data cache module allows the data a Salt Master stores about its Minions to be replicated to the other Salt Masters the Minion is connected to.
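
The backend is selected with the cache option in the master configuration; consul here is one example of an available external module, and such backends typically need further backend-specific settings:

# /etc/salt/master
cache: consul    # default is localfs; an external store shares data between masters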

The syndic nodes

An intermediate node type called the Syndic offers greater structural flexibility and scalability when constructing Salt topologies than topologies built only out of Master and Minion node types.

Master of masters with 2 syndic nodes topology

A Syndic node can be thought of as a special passthrough Minion node. It consists of a salt-syndic daemon and a salt-master daemon running on the same system. The salt-master daemon running on the Syndic node controls a group of lower-level Minion nodes, while the salt-syndic daemon connects to a higher-level Master node, sometimes called a Master of Masters.
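
Wiring this up takes one option on each side; a minimal sketch, assuming a hypothetical hostname for the Master of Masters:

# /etc/salt/master on the Master of Masters
order_masters: True

# /etc/salt/master on the Syndic node
syndic_master: master-of-masters.example.com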

Multi-masters with multi-syndic nodes

The ultimate setup for an HA deployment, recommended for managing production infrastructures spanning multiple geographical regions.
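
In this arrangement each Syndic node connects to several Masters of Masters, and syndic_master accepts a list for this purpose; a sketch with hypothetical hostnames:

# /etc/salt/master on each Syndic node
syndic_master:
    - master-of-masters-1.example.com
    - master-of-masters-2.example.com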

Multi-masters of masters with 2 multi-syndic nodes topology

Minion-to-minion (peer) communication

Salt has the capability for Salt minions to publish commands. The intent of this feature is not for Salt minions to act as independent brokers for one another, but to allow Salt minions to pass commands to each other. Along with that, minions have the ability to execute runners on the master. This allows the master to return collective data from runners back to the minions via the peer interface.
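
Runner access is granted separately from execution-module access, under the peer_run setting in the master configuration; a sketch with a hypothetical minion ID:

peer_run:
    foo.example.com:
        - manage.up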

Minion-to-minion publishing is configured under the peer setting in the Salt master configuration file; there are a number of configuration possibilities. The simplest approach is to enable all communication for all minions; this is recommended only for very secure environments.

peer:
    .*:
        - .*

This configuration will allow minions with IDs ending in example.com access to the test, ps, and pkg module functions.

peer:
    .*example.com:
        - test.*
        - ps.*
        - pkg.*
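
Once enabled, a minion can publish calls to its peers with the publish execution module, and (when peer_run allows it) invoke runners on the master; a usage sketch from a permitted minion:

salt-call publish.publish '*' test.ping    # run test.ping on all peer minions
salt-call publish.runner manage.up         # execute the manage.up runner on the master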