Salt execution and architecture

SaltStack is a high-speed remote execution platform that uses a master and minion architecture for infrastructure command and control. While SaltStack also offers Salt SSH as a more lightweight, agentless alternative, the vast majority of SaltStack users choose the traditional master/minion architecture for fine-grained control of complex environments, often running at massive scale.

Salt execution flow

The Salt master sends commands and configurations to the Salt minions running on managed systems. The Salt minion is an efficient and self-aware service waiting for instructions. Asynchronous commands and data collection can be pushed or pulled between master and minions, and communication is handled via a persistent, encrypted, and authenticated connection.
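
For instance, a trivial test function can be pushed from the master to every connected minion (the '*' target pattern simply matches all minions):

salt '*' test.ping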

Execution modules

Salt execution modules are the functions called by the salt command. Execution modules are different from state modules and cannot be called directly within state files; to call an execution module within a state run, use the module.run function of the module state module.
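
As a minimal sketch of calling an execution module within a state run (the state ID update_mine is arbitrary, and mine.update is used only as an example execution function):

update_mine:
  module.run:
    - name: mine.update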

Virtual modules

Virtual modules let you override the name of a module in order to use the same name to refer to one of several similar modules. The specific module that is loaded for a virtual name is selected based on the current platform or environment.

For example, packages are managed across platforms using the pkg module. pkg is a virtual module name that is an alias for the specific package manager module loaded on a given system: for example, yumpkg on RHEL/CentOS systems and aptpkg on Debian/Ubuntu systems.
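
For example, the same command can target a mixed fleet, and pkg.version resolves to the matching backend module on each minion (the target pattern and package name are only illustrative):

salt '*' pkg.version bash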

Commonly used execution modules

The grains module controls aspects of the grains data. salt.modules.grains.items returns all of the minion's grains.

salt -L 'svc01.saltstack.local,svc02.saltstack.local' grains.items

The sys pseudo-module comes with a few functions that return data about the available functions on the minion or allow the minion modules to be refreshed. salt.modules.sys.doc displays the inline documentation for all available modules, or for the specified module or function.

salt 'svc0[1-3].saltstack.local' sys.doc pkg

The aptpkg module provides support for the Advanced Packaging Tool (APT) on Debian-based systems. salt.modules.aptpkg.install installs the passed package.

salt 'svc01.saltstack.local' pkg.install <package name>

The pillar module extracts the pillar data for this minion. salt.modules.pillar.items calls the master for a fresh pillar and generates the pillar data on the fly.

salt 'svc0*' pillar.items

Salt job management

Since Salt executes jobs on many systems, it needs to be able to manage job runs across many different systems and platforms.

Job functions

Salt has a few functions in the saltutil module for managing jobs. These functions are:

running

Returns the data of all running jobs that are found in the proc directory.

find_job

Returns specific data about a certain job based on job id.

signal_job

Allows for a given jid to be sent a signal.

term_job

Sends a termination signal (SIGTERM, 15) to the process controlling the specified job.

kill_job

Sends a kill signal (SIGKILL, 9) to the process controlling the specified job.

These functions make up the core of the back end used to manage jobs at the minion level.
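
These saltutil functions can also be called directly with the salt command, for example (the minion target and job ID placeholder are only illustrative):

salt 'svc01*' saltutil.running
salt 'svc01*' saltutil.find_job <JOB_ID>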

Job runners

A convenience runner front end and reporting system is also available. The jobs runner contains functions to make viewing job data easier and cleaner. For example:

salt-run jobs.active

Job scheduling

The scheduling system allows incremental executions on minions or the master. The schedule system exposes the execution of any execution function on minions or any runner on the master.

Scheduling is enabled via the schedule option in either the master or minion config files, or via a minion's pillar data. Schedules implemented via pillar data only require the minion's pillar data to be refreshed, for example by using saltutil.refresh_pillar.
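
For example, after publishing a schedule through pillar, the refresh can be pushed from the master (the target pattern is only illustrative):

salt '*' saltutil.refresh_pillar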

Schedules implemented in the master or minion config require a restart of the respective daemon before they take effect. The scheduler executes different functions on the master and minions: when running on the master, the functions reference runner functions; when running on the minion, the functions specify execution functions.

States are executed on the minion, as all states are. You can pass positional arguments and provide a YAML dict of named arguments, as shown in the following example:

schedule:
  log-load-avg:
    function: cmd.run
    seconds: 3660
    args:
      - 'logger -t salt < /proc/loadavg'
    kwargs:
      stateful: False
      shell: /bin/sh

To set up a highstate to run on a minion every 60 minutes, set this in the minion config or pillar:

schedule:
  highstate:
    function: state.highstate
    minutes: 60

The scheduler is also useful for tasks like gathering monitoring data about a minion. The following schedule option will gather status data and send it to a MySQL returner database:

schedule:
  uptime:
    function: status.uptime
    seconds: 60
    returner: mysql
  meminfo:
    function: status.meminfo
    minutes: 5
    returner: mysql

Since specifying the returner repeatedly can be tiresome, the schedule_returner option is available to specify one or a list of global returners to be used by the minions when scheduling.
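
As a minimal sketch, a single global returner can be set in the minion config (a YAML list can be supplied instead to use several returners):

schedule_returner: mysql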

The minion proc system

Salt minions maintain a proc directory in the Salt cache directory. The proc directory maintains files named after the executed job ID. These files contain information about the currently running jobs on the minion and allow jobs to be looked up. The proc directory is located under the cachedir; with a default configuration it is /var/cache/salt/proc.
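
The job files can be inspected directly on a minion, for example under the default path mentioned above (the svc01# prompt is only illustrative):

svc01# ls /var/cache/salt/proc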

Lab: Managing Salt jobs

Start a long-running task on the svc01 node; the installation of TeX packages is a suitable long-running job. First, check the active jobs on the master:

cfg01# salt-run jobs.active

Then run the following command from the master to start the installation on the svc01 node:

cfg01# salt 'svc01*' pkg.install texlive-latex-extra

Now run the active function on the Salt master node. The active function runs saltutil.running on all minions and formats the return data about all running jobs in a much more usable and compact format. The active function will also compare jobs that have returned against jobs that are still running, making it easier to see which systems have completed a job and which systems are still being waited on.

cfg01# salt-run jobs.active
20160211143447009667:
    ----------
    Arguments:
        - texlive-latex-extra
    Function:
        pkg.install
    Returned:
    Running:
        |_
          ----------
          svc01.saltstack.local:
              4473
    StartTime:
        2016, Feb 11 14:34:47.009667
    Target:
        svc01*
    Target-type:
        glob
    User:
        root

Before looking up a historic job, it may be necessary to find the job ID. list_jobs will parse the cached execution data and display all of the job data for jobs that have already returned, or partially returned.

cfg01# salt-run jobs.list_jobs

When jobs are executed, the return data is sent back to the master and cached. By default it is cached for 24 hours, but this can be configured via the keep_jobs option in the master configuration. Using the lookup_jid runner will display the same return data that the initial job invocation with the salt command would display.

cfg01# salt-run jobs.lookup_jid <JOB_ID>
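
For example, to retain job data for 48 hours rather than the default 24, keep_jobs can be raised in the master configuration (the value here is only illustrative):

keep_jobs: 48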

Lab: Set up job schedule

Add a schedule configuration file on the master:

cfg01# vim /etc/salt/master.d/schedule.conf

Add the following content:

schedule:
  uptime:
    function: status.uptime
    seconds: 60

And restart the Salt master to apply the changes.

cfg01# service salt-master restart

Wait a few minutes, then run the list_jobs command to view the list of jobs:

cfg01# salt-run jobs.list_jobs
    20160211151056045362:
        ----------
        Arguments:
        Function:
            status.uptime
        StartTime:
            2016, Feb 11 15:10:56.045362
        Target:
            cfg01.saltstack.local
        Target-type:
            glob
        User:
            root

Lab: Managing minions

The salt.runners.manage module provides general management functions for Salt, such as tools for seeing which hosts are up and which are down.

To print a list of all minions that are up according to Salt’s presence detection (no commands will be sent to minions):

cfg01# salt-run manage.present
- cfg01.saltstack.local
- svc01.saltstack.local
- svc02.saltstack.local
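
Correspondingly, minions that are not responding can be listed with manage.down (its output depends on the environment):

cfg01# salt-run manage.down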

Check the version of active minions:

cfg01# salt-run manage.versions
Master:
    2015.8.5
Up to date:
    ----------
    cfg01.saltstack.local:
        2015.8.5
    svc01.saltstack.local:
        2015.8.5
    svc02.saltstack.local:
        2015.8.5