Configuration Management

In production networks, we often need to know whether a specific command is present on a device and, if it is, on how many devices. Such checks are among the most common tasks in network operations.
To perform them, we can use the built-in ``net.load_config`` function from the NAPALM execution module.

For demonstration – we need to figure out the status (i.e. the presence) of the specific command "ntp server 1.1.1.1" on all the network routers running IOS.

To read the complete documentation of this function, we can use the below command, as discussed in the previous section.

root@mrcissp-master-1:/# salt 'R*' sys.doc net.load_config
net.load_config:

    Applies configuration changes on the device. It can be loaded from a file or from inline string.
    If you send both a filename and a string containing the configuration, the file has higher precedence.

    By default this function will commit the changes. If there are no changes, it does not commit and
    the flag ``already_configured`` will be set as ``True`` to point this out.

    To avoid committing the configuration, set the argument ``test`` to ``True`` and will discard (dry run).

    To keep the changes but not commit, set ``commit`` to ``False``.

    To replace the config, set ``replace`` to ``True``.

    filename
        Path to the file containing the desired configuration.
        This can be specified using the absolute path to the file,
        or using one of the following URL schemes:

        - ``salt://``, to fetch the template from the Salt fileserver.
        - ``http://`` or ``https://``
        - ``ftp://``
        - ``s3://``
        - ``swift://``

        Changed in version 2018.3.0

    text
        String containing the desired configuration.
        This argument is ignored when ``filename`` is specified.

    test: False
        Dry run? If set as ``True``, will apply the config, discard and return the changes. Default: ``False``
        and will commit the changes on the device.

    commit: True
        Commit? Default: ``True``.

    debug: False
        Debug mode. Will insert a new key under the output dictionary, as ``loaded_config`` containing the raw
        configuration loaded on the device.

        New in version 2016.11.2

    replace: False
        Load and replace the configuration. Default: ``False``.

        New in version 2016.11.2

    commit_in: ``None``
        Commit the changes in a specific number of minutes / hours. Example of
        accepted formats: ``5`` (commit in 5 minutes), ``2m`` (commit in 2
        minutes), ``1h`` (commit the changes in 1 hour)`, ``5h30m`` (commit
        the changes in 5 hours and 30 minutes).

        Note:
            This feature works on any platforms, as it does not rely on the
            native features of the network operating system.

        Note:
            After the command is executed and the ``diff`` is not satisfactory,
            or for any other reasons you have to discard the commit, you are
            able to do so using the
            :py:func:`net.cancel_commit <salt.modules.napalm_network.cancel_commit>`
            execution function, using the commit ID returned by this function.

        Warning:
            Using this feature, Salt will load the exact configuration you
            expect, however the diff may change in time (i.e., if an user
            applies a manual configuration change, or a different process or
            command changes the configuration in the meanwhile).

        New in version 2019.2.0

    commit_at: ``None``
        Commit the changes at a specific time. Example of accepted formats:
        ``1am`` (will commit the changes at the next 1AM), ``13:20`` (will
        commit at 13:20), ``1:20am``, etc.

        Note:
            This feature works on any platforms, as it does not rely on the
            native features of the network operating system.

        Note:
            After the command is executed and the ``diff`` is not satisfactory,
            or for any other reasons you have to discard the commit, you are
            able to do so using the
            :py:func:`net.cancel_commit <salt.modules.napalm_network.cancel_commit>`
            execution function, using the commit ID returned by this function.

        Warning:
            Using this feature, Salt will load the exact configuration you
            expect, however the diff may change in time (i.e., if an user
            applies a manual configuration change, or a different process or
            command changes the configuration in the meanwhile).

        New in version 2019.2.0

    revert_in: ``None``
        Commit and revert the changes in a specific number of minutes / hours.
        Example of accepted formats: ``5`` (revert in 5 minutes), ``2m`` (revert
        in 2 minutes), ``1h`` (revert the changes in 1 hour)`, ``5h30m`` (revert
        the changes in 5 hours and 30 minutes).

        Note:
            To confirm the commit, and prevent reverting the changes, you will
            have to execute the
            :mod:`net.confirm_commit <salt.modules.napalm_network.confirm_commit>`
            function, using the commit ID returned by this function.

        Warning:
            This works on any platform, regardless if they have or don't have
            native capabilities to confirming a commit. However, please be
            *very* cautious when using this feature: on Junos (as it is the only
            NAPALM core platform supporting this natively) it executes a commit
            confirmed as you would do from the command line.
            All the other platforms don't have this capability natively,
            therefore the revert is done via Salt. That means, your device needs
            to be reachable at the moment when Salt will attempt to revert your
            changes. Be cautious when pushing configuration changes that would
            prevent you reach the device.

            Similarly, if an user or a different process apply other
            configuration changes in the meanwhile (between the moment you
            commit and till the changes are reverted), these changes would be
            equally reverted, as Salt cannot be aware of them.

        New in version 2019.2.0

    revert_at: ``None``
        Commit and revert the changes at a specific time. Example of accepted
        formats: ``1am`` (will commit and revert the changes at the next 1AM),
        ``13:20`` (will commit and revert at 13:20), ``1:20am``, etc.

        Note:
            To confirm the commit, and prevent reverting the changes, you will
            have to execute the
            :mod:`net.confirm_commit <salt.modules.napalm_network.confirm_commit>`
            function, using the commit ID returned by this function.

        Warning:
            This works on any platform, regardless if they have or don't have
            native capabilities to confirming a commit. However, please be
            *very* cautious when using this feature: on Junos (as it is the only
            NAPALM core platform supporting this natively) it executes a commit
            confirmed as you would do from the command line.
            All the other platforms don't have this capability natively,
            therefore the revert is done via Salt. That means, your device needs
            to be reachable at the moment when Salt will attempt to revert your
            changes. Be cautious when pushing configuration changes that would
            prevent you reach the device.

            Similarly, if an user or a different process apply other
            configuration changes in the meanwhile (between the moment you
            commit and till the changes are reverted), these changes would be
            equally reverted, as Salt cannot be aware of them.

        New in version 2019.2.0

    saltenv: ``base``
        Specifies the Salt environment name.

        New in version 2018.3.0

    :return: a dictionary having the following keys:

    * result (bool): if the config was applied successfully. It is ``False`` only in case of failure. In case there are no changes to be applied and successfully performs all operations it is still ``True`` and so will be the ``already_configured`` flag (example below)
    * comment (str): a message for the user
    * already_configured (bool): flag to check if there were no changes applied
    * loaded_config (str): the configuration loaded on the device. Requires ``debug`` to be set as ``True``
    * diff (str): returns the config changes applied

    CLI Example:

        salt '*' net.load_config text='ntp peer 192.168.0.1'
        salt '*' net.load_config filename='/absolute/path/to/your/file'
        salt '*' net.load_config filename='/absolute/path/to/your/file' test=True
        salt '*' net.load_config filename='/absolute/path/to/your/file' commit=False

    Example output:

        {
            'comment': 'Configuration discarded.',
            'already_configured': False,
            'result': True,
            'diff': '[edit interfaces xe-0/0/5]+   description "Adding a description";'
        }
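As an aside, the ``commit_in``/``revert_in`` time formats described above (``5``, ``2m``, ``1h``, ``5h30m``) can be normalized to minutes with a small helper. A minimal Python sketch (illustrative only, not Salt's own parser):

```python
import re

def to_minutes(spec):
    """Convert a commit_in/revert_in style spec to minutes.

    Accepts a bare number (minutes) or a string such as
    '2m', '1h', '5h30m'. Illustrative sketch only.
    """
    if isinstance(spec, int) or str(spec).isdigit():
        return int(spec)
    match = re.fullmatch(r'(?:(\d+)h)?(?:(\d+)m)?', spec)
    if not match or not any(match.groups()):
        raise ValueError('unrecognized time spec: %r' % spec)
    hours, minutes = (int(g) if g else 0 for g in match.groups())
    return hours * 60 + minutes

print(to_minutes(5))        # 5
print(to_minutes('2m'))     # 2
print(to_minutes('5h30m'))  # 330
```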

We can observe that when executing in test mode (dry run), no changes are applied to the running configuration of the device.

root@mrcissp-master-1:/# salt -C 'G@os:ios' net.load_config text='ntp server 1.1.1.1' test='True'
Router1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +ntp server 1.1.1.1
    loaded_config:
    result:
        True

In the output, "already_configured" is set to "False", i.e. this specific command was not found in the running configuration of the router.
Comment: Configuration discarded – since we ran the above command in dry-run mode, the configuration was never applied to the network device's running config.
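To answer the original question, on how many devices the command is missing, the same dry run can be executed with ``--out=json --static`` and the combined output summarized. A minimal sketch with illustrative sample data (the result shape mirrors the output above):

```python
import json

# Sample combined output of:
#   salt -C 'G@os:ios' net.load_config text='ntp server 1.1.1.1' \
#       test=True --out=json --static
# (illustrative data; the real output comes from the command above)
raw = '''
{
  "Router1": {"already_configured": false, "result": true,
              "diff": "+ntp server 1.1.1.1"},
  "Router2": {"already_configured": true, "result": true, "diff": ""}
}
'''

results = json.loads(raw)

# A device already has the command when already_configured is True.
missing = [dev for dev, out in results.items()
           if not out.get('already_configured')]

print('%d of %d devices are missing the command: %s'
      % (len(missing), len(results), ', '.join(missing)))
```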

We can extend the same example to a set of configuration commands kept in a file. Let's assume we want to check whether the below commands are present on a router.

  • ntp server 1.1.1.1
  • ntp server 2.2.2.2
  • hostname Router_1
  • router ospf 2

To do this, we can create a check_config.cfg file in the directory referenced by the ``file_roots`` option in the master configuration file.
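For the ``salt://check_config.cfg`` URL to resolve, the directory must be listed under the master's ``file_roots``. A minimal example of the relevant section of ``/etc/salt/master``, assuming ``/etc/salt/templates`` is used as in this lab:

```yaml
file_roots:
  base:
    - /etc/salt/templates
```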

root@mrcissp-master-1:/etc/salt/templates# pwd
/etc/salt/templates
root@mrcissp-master-1:/etc/salt/templates# cat check_config.cfg
ntp server 1.1.1.1
ntp server 2.2.2.2
hostname Router_1
router ospf 2

We can reference this file in the load_config function as below:

root@mrcissp-master-1:/etc/salt/templates# salt 'Router*' net.load_config salt://check_config.cfg test='True'
Router1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +ntp server 1.1.1.1
        +ntp server 2.2.2.2
        +hostname Router_1
        +router ospf 2
    loaded_config:
    result:
        True

Let's get practical – a real use case of SaltStack

Hello everyone, today we're going to use SaltStack to deploy a typical campus network running Cisco IOS. A typical campus network is a three-layer architecture with an access layer, a distribution layer and a core layer. To start, we have prepared the below topology for demonstration purposes. Our objective is to automate the configuration of all the network devices.

In our lab, all nodes run a Cisco IOS image. We could have used NX-OS for the core layer, but due to the limited computing power of my machine I decided to use Cisco IOS throughout. The following nodes are shown in the above topology:

  1. Core nodes – “core1” & “core2”
  2. Aggregation nodes – “agg1” & “agg2”
  3. Access node – “access1”, “access2”
  4. Saltstack Master – “mrcissp-master-1”
  5. Saltstack Minion – “mrcissp-minion-1”
  6. Management switch – “mgmt-switch-1”, “mgmt-switch-2”

Initial required configuration

Before we can automate anything, we first need to configure the devices with the initial connectivity so that the SaltStack host is able to SSH in and push configuration. Refer to the minimal configuration required on the respective devices below.

“mgmt-switch-1” config
hostname mgmt-1
!
interface FastEthernet1/0
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/1
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/2
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/3
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/4
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/5
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/6
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/7
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet0/0
 ip address dhcp
 no shut
!
interface Vlan100
 ip address 192.168.100.1 255.255.255.0
!
exit
ip scp server enable
service password-encryption
ip domain name mrcissplab.com
ip ssh version 2
crypto key generate rsa
1024 
username mrcissp privilege 15 password 0 Nvidia@123
line vty 0 4
 login local
 transport input ssh
!
end
wr
! 
“access1” config
conf t
hostname access1
!
interface FastEthernet1/10
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface Vlan100
 ip address 192.168.100.11 255.255.255.0
!
exit
ip scp server enable
service password-encryption
ip domain name mrcissplab.com
ip ssh version 2
crypto key generate rsa
1024 
username mrcissp privilege 15 password 0 Nvidia@123
line vty 0 4
 login local
 transport input ssh
!
end
wr
! 

Similarly, the other devices have been configured with the below IP addressing details.

192.168.100.31 core1
192.168.100.32 core2
192.168.100.21 agg1
192.168.100.22 agg2
192.168.100.11 access1
192.168.100.12 access2
192.168.100.1 mgmt-switch-1
192.168.100.101 mrcissp-minion-1
192.168.100.100 mrcissp-master-1

Please note that the primary purpose of the management switches is to provide basic connectivity. Hence, we can remove these switches from the diagram for a better understanding of this lab. Refer to the updated logical topology of the lab below.

For the initial configuration of the salt-master and salt-minion, refer to the previous posts in this series:

  1. Installation & configuration of salt
  2. Salt Nomenclature

Let’s Automate

Requirement 1: SNMP configuration, i.e. the SNMP v2 community strings are the same across all devices.

Solution: This is a global configuration with no dependency on anything except the vendor and OS type. The sections below walk through a standard way of automating such configuration with SaltStack.

Command Set for “SNMP” configuration on Cisco IOS

Below is a standard set of commands to configure SNMP v2 on any Cisco IOS device.

snmp-server community private rw
snmp-server community public ro
snmp-server location Saltstack demonstration in GNS3
snmp-server contact Gaurav@mrcissp
snmp-server host 192.168.100.200 public
snmp-server trap link ietf
snmp-server enable traps ospf
snmp-server enable traps

Identify user input & create a new pillar file “snmp.sls”

root@mrcissp-master-1:/# cat /etc/salt/pillar/snmp.sls
snmp:
  rw_string: private
  ro_string: public
  location: 'Saltstack demonstration in GNS3'
  contact: 'Gaurav@mrcissp'
  host: '192.168.100.200'

Map the defined pillar file in the top.sls file

root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
  'agg2':
    - agg2
  'access1':
    - access1
  'access2':
    - access2

The ‘*’ tells Salt to provide the content of snmp.sls to all minions. Hence, this configuration will be applied to all the minions/proxy minions.
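Top-file matching is essentially glob matching on minion IDs. A rough Python illustration of the idea (not Salt's actual implementation; the dict below abbreviates our top.sls):

```python
from fnmatch import fnmatch

# The top.sls mapping expressed as a Python dict (illustrative only).
top = {
    '*':       ['snmp'],
    'core1':   ['core1'],
    'core2':   ['core2'],
    'access1': ['access1'],
}

def pillars_for(minion_id):
    """Collect every pillar file whose target pattern matches the minion."""
    files = []
    for target, names in top.items():
        if fnmatch(minion_id, target):
            files.extend(names)
    return files

print(pillars_for('core1'))    # ['snmp', 'core1']
print(pillars_for('agg1'))     # ['snmp']
```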

Refresh the updated pillar on the minions & verify

The next step is refreshing the pillar data by executing salt ‘*’ saltutil.refresh_pillar. We can then use the pillar.items (or pillar.get) execution function to check that the data has been loaded correctly. It can be observed that the “snmp” pillar is loaded on all minions as expected.

root@mrcissp-master-1:/etc/salt/pillar# salt '*' saltutil.refresh_pillar
access2:
    True
access1:
    True
agg2:
    True
core1:
    True
core2:
    True
agg1:
    True

root@mrcissp-master-1:/etc/salt/pillar# salt '*' pillar.items
core2:
    ----------
    proxy:
        ----------
        driver:
            ios
        host:
            core2
        passwd:
            Nvidia@123
        proxytype:
            napalm
        username:
            mrcissp
    snmp:
        ----------
        contact:
            Gaurav@mrcissp
        host:
            192.168.100.200
        location:
            Saltstack demonstration in GNS3
        ro_string:
            public
        rw_string:
            private
agg2:
    ----------
    proxy:
        ----------
        driver:
            ios
        host:
            agg2
        passwd:
            Nvidia@123
        proxytype:
            napalm
        username:
            mrcissp
    snmp:
        ----------
        contact:
            Gaurav@mrcissp
        host:
            192.168.100.200
        location:
            Saltstack demonstration in GNS3
        ro_string:
            public
        rw_string:
            private
access2:
    ----------
    proxy:
        ----------
        driver:
            ios
        host:
            access2
        passwd:
            Nvidia@123
        proxytype:
            napalm
        username:
            mrcissp
    snmp:
        ----------
        contact:
            Gaurav@mrcissp
        host:
            192.168.100.200
        location:
            Saltstack demonstration in GNS3
        ro_string:
            public
        rw_string:
            private
core1:
    ----------
    proxy:
        ----------
        driver:
            ios
        host:
            core1
        passwd:
            Nvidia@123
        proxytype:
            napalm
        username:
            mrcissp
    snmp:
        ----------
        contact:
            Gaurav@mrcissp
        host:
            192.168.100.200
        location:
            Saltstack demonstration in GNS3
        ro_string:
            public
        rw_string:
            private
access1:
    ----------
    proxy:
        ----------
        driver:
            ios
        host:
            access1
        passwd:
            Nvidia@123
        proxytype:
            napalm
        username:
            mrcissp
    snmp:
        ----------
        contact:
            Gaurav@mrcissp
        host:
            192.168.100.200
        location:
            Saltstack demonstration in GNS3
        ro_string:
            public
        rw_string:
            private
agg1:
    ----------
    proxy:
        ----------
        driver:
            ios
        host:
            agg1
        passwd:
            Nvidia@123
        proxytype:
            napalm
        username:
            mrcissp
    snmp:
        ----------
        contact:
            Gaurav@mrcissp
        host:
            192.168.100.200
        location:
            Saltstack demonstration in GNS3
        ro_string:
            public
        rw_string:
            private

Create “jinja2” template “snmp_config.jinja” for command structure

For more details on Jinja, refer to my previous posts below.

  • Understanding “jinja2” – 1
  • Understanding “jinja2” – 2

root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/snmp_config.jinja
snmp-server community {{ pillar['snmp']['rw_string'] }} rw
snmp-server community {{ pillar['snmp']['ro_string'] }} ro
snmp-server location {{ pillar['snmp']['location'] }}
snmp-server contact {{ pillar['snmp']['contact'] }}
snmp-server host {{ pillar['snmp']['host'] }} public
snmp-server trap link ietf
snmp-server enable traps ospf
snmp-server enable traps

Render this template on all of our network devices

To test or apply the template, we render it using the “net.load_template” execution function, as demonstrated below.

root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://snmp_config.jinja test='True'
core1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +snmp-server community private rw
        +snmp-server community public ro
        +snmp-server location Saltstack demonstration in GNS3
        +snmp-server contact Gaurav@mrcissp
        +snmp-server host 192.168.100.200 public
        +snmp-server trap link ietf
        +snmp-server enable traps ospf
        +snmp-server enable traps
    loaded_config:
    result:
        True

root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://snmp_config.jinja
core1:
    ----------
    already_configured:
        False
    comment:
    diff:
        +snmp-server community private rw
        +snmp-server community public ro
        +snmp-server location Saltstack demonstration in GNS3
        +snmp-server contact Gaurav@mrcissp
        +snmp-server host 192.168.100.200 public
        +snmp-server trap link ietf
        +snmp-server enable traps ospf
        +snmp-server enable traps
    loaded_config:
    result:
        True

In the first command, we ran salt ‘core1’ net.load_template salt://snmp_config.jinja with test=’True’ to see the configuration changes that this template is about to push to the “core1” network device. We can additionally observe that the “comment” field reads “Configuration discarded.”, which means the configuration was not applied to the network device.

In the second command, we did not set the test field, so the configuration was applied to the network device “core1”. To confirm this, we can look at the running configuration of “core1”.

<< Before applying template >> 
core1#
core1#sh run | in snmp
core1#

<< After Rendering template >> 
core1#sh run | in snmp
snmp-server community private RW
snmp-server community public RO
snmp-server trap link ietf
snmp-server location Saltstack demonstration in GNS3
snmp-server contact Gaurav@mrcissp
snmp-server enable traps snmp authentication linkdown linkup coldstart warmstart
snmp-server enable traps vrrp
snmp-server enable traps ds1
snmp-server enable traps tty
snmp-server enable traps eigrp
snmp-server enable traps xgcp
snmp-server enable traps flash insertion removal
snmp-server enable traps ds3
snmp-server enable traps envmon
snmp-server enable traps icsudsu
snmp-server enable traps isdn call-information
snmp-server enable traps isdn layer2
snmp-server enable traps isdn chan-not-avail
snmp-server enable traps isdn ietf
snmp-server enable traps ds0-busyout
snmp-server enable traps ds1-loopback
snmp-server enable traps atm subif
snmp-server enable traps bgp
snmp-server enable traps bstun
snmp-server enable traps bulkstat collection transfer
snmp-server enable traps cnpd
snmp-server enable traps config-copy
snmp-server enable traps config
snmp-server enable traps dial
snmp-server enable traps dlsw
snmp-server enable traps dsp card-status
snmp-server enable traps entity
snmp-server enable traps event-manager
snmp-server enable traps frame-relay
snmp-server enable traps frame-relay subif
snmp-server enable traps hsrp
snmp-server enable traps ipmobile
snmp-server enable traps ipmulticast
snmp-server enable traps mpls ldp
snmp-server enable traps mpls traffic-eng
snmp-server enable traps mpls vpn
snmp-server enable traps msdp
snmp-server enable traps mvpn
snmp-server enable traps ospf state-change
snmp-server enable traps ospf errors
snmp-server enable traps ospf retransmit
snmp-server enable traps ospf lsa
snmp-server enable traps ospf cisco-specific state-change nssa-trans-change
snmp-server enable traps ospf cisco-specific state-change shamlink interface-old
snmp-server enable traps ospf cisco-specific state-change shamlink neighbor
snmp-server enable traps ospf cisco-specific errors
snmp-server enable traps ospf cisco-specific retransmit
snmp-server enable traps ospf cisco-specific lsa
snmp-server enable traps pim neighbor-change rp-mapping-change invalid-pim-message
snmp-server enable traps pppoe
snmp-server enable traps cpu threshold
snmp-server enable traps rsvp
snmp-server enable traps rtr
snmp-server enable traps stun
snmp-server enable traps syslog
snmp-server enable traps l2tun session
snmp-server enable traps vsimaster
snmp-server enable traps vtp
snmp-server enable traps director server-up server-down
snmp-server enable traps isakmp policy add
snmp-server enable traps isakmp policy delete
snmp-server enable traps isakmp tunnel start
snmp-server enable traps isakmp tunnel stop
snmp-server enable traps ipsec cryptomap add
snmp-server enable traps ipsec cryptomap delete
snmp-server enable traps ipsec cryptomap attach
snmp-server enable traps ipsec cryptomap detach
snmp-server enable traps ipsec tunnel start
snmp-server enable traps ipsec tunnel stop
snmp-server enable traps ipsec too-many-sas
snmp-server enable traps rf
snmp-server enable traps voice poor-qov
snmp-server enable traps voice fallback
snmp-server enable traps dnis
snmp-server host 192.168.100.200 public

There is another useful argument that helps understand what happened with better clarity: debug=’True’.

root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://snmp_config.jinja test='True' debug='True'
core1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +snmp-server enable traps ospf
        +snmp-server enable traps
    loaded_config:
        snmp-server community private rw
        snmp-server community public ro
        snmp-server location Saltstack demonstration in GNS3
        snmp-server contact Gaurav@mrcissp
        snmp-server host 192.168.100.200 public
        snmp-server trap link ietf
        snmp-server enable traps ospf
        snmp-server enable traps
    result:
        True

We can observe that Salt loaded the complete configuration (shown under “loaded_config”) and worked out the actual difference against the running configuration of the device (shown under “diff”).

Requirement 2: Syslog configuration, i.e. the syslog servers must be configured as below:
For core devices – 192.168.100.101 & 192.168.100.102
For aggregation devices – 192.168.100.103 & 192.168.100.104
For access devices – 192.168.100.105 & 192.168.100.106

Solution: Here we add more complexity, because the expected configuration depends on the role of the switch. By default, Salt is not aware of this role, so we need to define custom roles for our proxy minions. The complete procedure follows.

Define & configure the appropriate role to network devices

To define the role, we use the “grains” module and its “set” function.

root@mrcissp-master-1:/etc/salt/pillar# salt 'core*' grains.set 'role' core_switch
core1:
    ----------
    changes:
        ----------
        role:
            core_switch
    comment:
    result:
        True
core2:
    ----------
    changes:
        ----------
        role:
            core_switch
    comment:
    result:
        True

root@mrcissp-master-1:/etc/salt/pillar# salt 'agg*' grains.set 'role' agg_switch
agg2:
    ----------
    changes:
        ----------
        role:
            agg_switch
    comment:
    result:
        True
agg1:
    ----------
    changes:
        ----------
        role:
            agg_switch
    comment:
    result:
        True

root@mrcissp-master-1:/etc/salt/pillar# salt 'access*' grains.set 'role' access_switch
access1:
    ----------
    changes:
        ----------
        role:
            access_switch
    comment:
    result:
        True
access2:
    ----------
    changes:
        ----------
        role:
            access_switch
    comment:
    result:
        True

Configuration set for logging server

logging host 192.168.100.101
logging facility local7
logging trap 4

Pillar file “syslog.sls”

Please go through this pillar file carefully; it is written differently from the previous “snmp.sls” file. Here we are leveraging Jinja again, this time inside a pillar file: the appropriate logging hosts are selected according to the role of the network device.

syslog:
  {% if grains['role'] == 'core_switch' %}
  host:
    - 192.168.100.101
    - 192.168.100.102
  {% elif grains['role'] == 'agg_switch' %}
  host:
    - 192.168.100.103
    - 192.168.100.104
  {% elif grains['role'] == 'access_switch' %}
  host:
    - 192.168.100.105
    - 192.168.100.106
  {% endif %}
  facility: local7
  logging_level: 4
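The conditional logic in the pillar is just a role-to-server lookup. The same selection can be sketched in plain Python to make the mapping explicit (names are illustrative, not part of Salt):

```python
# Role-to-syslog-server mapping implemented by the Jinja
# conditionals in syslog.sls (illustrative sketch).
SYSLOG_HOSTS = {
    'core_switch':   ['192.168.100.101', '192.168.100.102'],
    'agg_switch':    ['192.168.100.103', '192.168.100.104'],
    'access_switch': ['192.168.100.105', '192.168.100.106'],
}

def syslog_pillar(role):
    """Return the syslog pillar data for a device role."""
    return {
        'host': SYSLOG_HOSTS[role],
        'facility': 'local7',
        'logging_level': 4,
    }

print(syslog_pillar('core_switch')['host'])
# ['192.168.100.101', '192.168.100.102']
```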

Mapping the pillar file in the top.sls file

base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
  'agg2':
    - agg2
  'access1':
    - access1
  'access2':
    - access2
  'access3':
    - access3
  'access4':
    - access4

Please note that “syslog.sls” is again kept under ‘*’, because the syslog configuration should be applied to all network devices.

Refresh pillar & grains data

The below output demonstrates that the appropriate syslog servers are selected per the given requirement.

root@mrcissp-master-1:/etc/salt/pillar# salt '*' saltutil.refresh_pillar
access3:
    True
agg1:
    True
access2:
    True
core2:
    True
core1:
    True
access1:
    True
agg2:
    True

root@mrcissp-master-1:/etc/salt/pillar# salt '*' pillar.items syslog
agg1:
    ----------
    syslog:
        ----------
        facility:
            local7
        host:
            - 192.168.100.103
            - 192.168.100.104
        logging_level:
            4
core2:
    ----------
    syslog:
        ----------
        facility:
            local7
        host:
            - 192.168.100.101
            - 192.168.100.102
        logging_level:
            4
access2:
    ----------
    syslog:
        ----------
        facility:
            local7
        host:
            - 192.168.100.105
            - 192.168.100.106
        logging_level:
            4
core1:
    ----------
    syslog:
        ----------
        facility:
            local7
        host:
            - 192.168.100.101
            - 192.168.100.102
        logging_level:
            4
access1:
    ----------
    syslog:
        ----------
        facility:
            local7
        host:
            - 192.168.100.105
            - 192.168.100.106
        logging_level:
            4
agg2:
    ----------
    syslog:
        ----------
        facility:
            local7
        host:
            - 192.168.100.103
            - 192.168.100.104
        logging_level:
            4

Creating the Jinja template for syslog configuration, i.e. syslog_config.jinja

{%- for server in pillar['syslog']['host'] %}
logging host {{ server }}
{%- endfor %}
logging facility {{ pillar['syslog']['facility'] }}
logging trap {{ pillar['syslog']['logging_level'] }}

Rendering the Jinja template on a device

root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://syslog_config.jinja test='True'
core1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +logging host 192.168.100.101
        +logging host 192.168.100.102
        +logging facility local7
        +logging trap 4
    loaded_config:
    result:
        True

root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://syslog_config.jinja
core1:
    ----------
    already_configured:
        False
    comment:
    diff:
        +logging host 192.168.100.101
        +logging host 192.168.100.102
        +logging facility local7
        +logging trap 4
    loaded_config:
    result:
        True

core1#show run | in logging
logging trap warnings
logging 192.168.100.101
logging 192.168.100.102
 logging synchronous
 logging synchronous

Requirement 3: Access Switch configuration for
1) VLAN creation
2) Uplink switch port – Trunk
3) Downlink switch port – Access

Solution: Again, a similar procedure is followed. The following new files are created for this automation.

  1. access1_vlan.sls – VLAN information for access1 switch.
  2. access1_port.sls – Uplink and Downlink port information for access1 switch
  3. access2_vlan.sls – VLAN information for access2 switch.
  4. access2_port.sls – Uplink and Downlink port information for access2 switch
  5. Update top.sls
  6. access_sw_config.jinja – Jinja template for access switch configuration
root@mrcissp-master-1:/etc/salt/pillar# cat access1_vlan.sls
access_vlan:
  native_vlan: 100
  data_vlan:
    - 10
  voice_vlan:
    - 15

root@mrcissp-master-1:/etc/salt/pillar# cat access1_port.sls
access_port:
  uplink:
    - f1/0
    - f1/1
  downlink:
    f1/2: 10

root@mrcissp-master-1:/etc/salt/pillar# cat access2_vlan.sls
access_vlan:
  native_vlan: 100
  data_vlan:
    - 20
  voice_vlan:
    - 25

root@mrcissp-master-1:/etc/salt/pillar# cat access2_port.sls
access_port:
  uplink:
    - f1/0
    - f1/1
  downlink:
    f1/2: 20

root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
  'agg2':
    - agg2
  'access1':
    - access1
    - access1_vlan
    - access1_port
  'access2':
    - access2
    - access2_vlan
    - access2_port
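The top file’s targets are matched against each minion ID as shell-style globs (Salt’s default matcher), and every matching entry contributes its SLS files. The logic can be illustrated roughly in Python; this is an illustrative sketch, not Salt’s actual implementation.

```python
from fnmatch import fnmatch

# Targets from top.sls mapped to their pillar SLS files (trimmed copy).
top = {
    "*": ["snmp", "syslog"],
    "core1": ["core1"],
    "access1": ["access1", "access1_vlan", "access1_port"],
    "access2": ["access2", "access2_vlan", "access2_port"],
}

def pillar_files(minion_id):
    """Collect every SLS list whose target glob matches the minion ID."""
    files = []
    for target, sls_list in top.items():
        if fnmatch(minion_id, target):
            files.extend(sls_list)
    return files

print(pillar_files("access1"))
print(pillar_files("core1"))
```

This is why access1 receives the shared snmp and syslog pillars ('*' matches every minion) in addition to its own three files.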

Template for access switch configuration – “access_sw_config.jinja”

root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/access_sw_config.jinja
{%- for voice_vlan in pillar['access_vlan']['voice_vlan'] %}
vlan {{ voice_vlan }}
name VOICE_VLAN{{voice_vlan}}
exit
{%- endfor %}

{%- for data_vlan in pillar['access_vlan']['data_vlan'] %}
vlan {{ data_vlan }}
name DATA_VLAN{{data_vlan}}
exit
{%- endfor %}

{%- for interface in pillar['access_port']['uplink'] %}
interface {{ interface }}
switchport
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk native vlan {{ pillar['access_vlan']['native_vlan'] }}
switchport trunk allowed vlan 1,2,1002-1005
{%- for vlan in pillar['access_vlan']['data_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
{%- for vlan in pillar['access_vlan']['voice_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
exit
{%- endfor %}

{%- for interface in pillar['access_port']['downlink'].keys() %}
interface {{ interface }}
switchport
switchport mode access
switchport access vlan {{ pillar['access_port']['downlink'][interface] }}
exit
{%- endfor %}
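The downlink section iterates over a dict, so each interface name carries its access VLAN as the value. That loop can be exercised in isolation with jinja2, using access1’s pillar data; a sketch for illustration only.

```python
from jinja2 import Template

# The downlink (access-port) loop from the template above.
SNIPPET = (
    "{%- for interface in pillar['access_port']['downlink'].keys() %}\n"
    "interface {{ interface }}\n"
    "switchport\n"
    "switchport mode access\n"
    "switchport access vlan {{ pillar['access_port']['downlink'][interface] }}\n"
    "exit\n"
    "{%- endfor %}"
)

# Downlink data mirroring access1_port.sls: interface -> access VLAN.
pillar = {"access_port": {"downlink": {"f1/2": 10}}}

rendered = Template(SNIPPET).render(pillar=pillar).strip()
print(rendered)
```

Adding another downlink port is then just one more key/value pair in the pillar file; the template needs no change.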

Rendering this template on the access switches

root@mrcissp-master-1:/etc/salt/pillar# salt 'access*' net.load_template salt://access_sw_config.jinja test='true'
access1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +vlan 15
        +name VOICE_VLAN15
        +exit
        +vlan 10
        +name DATA_VLAN10
        +exit
        +interface f1/0
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 15
        +exit
        +interface f1/1
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 15
        +exit
        +interface f1/2
        +switchport
        +switchport mode access
        +switchport access vlan 10
        +exit
    loaded_config:
    result:
        True
access2:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +vlan 25
        +name VOICE_VLAN25
        +exit
        +vlan 20
        +name DATA_VLAN20
        +exit
        +interface f1/0
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/1
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/2
        +switchport
        +switchport mode access
        +switchport access vlan 20
        +exit
    loaded_config:
    result:
        True

Requirement 4: Aggregation Switch configuration for
1) VLAN creation
2) Uplink switch port – Routed Layer 3
3) Downlink switch port – Trunk
4) Spanning Tree Configuration
5) HSRP configuration
6) OSPF configuration

Solution: The following new files are created for aggregation switch configuration automation.

  1. agg1_data.sls – Minimum information required for agg1 switch.
  2. agg2_data.sls – Minimum information required for agg2 switch.
  3. Update top.sls
  4. agg_sw_config.jinja – Jinja template for aggregation switch configuration
root@mrcissp-master-1:/etc/salt/pillar# cat agg1_data.sls
agg_data:
  native_vlan: 100
  voice_vlan:
    - 15
    - 25
  data_vlan:
    - 10
    - 20
  downlink:
    - f1/0
    - f1/1
    - f1/2
  stp:
    primary:
      - 10
      - 20
    secondary:
      - 15
      - 25
  uplink:
    f1/3:
      ip: 192.168.40.1
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.50.1
      mask: 255.255.255.252
  svi:
    vlan10:
      ip: 192.168.10.2
      mask: 255.255.255.0
      vip: 192.168.10.1
      priority: 200
    vlan15:
      ip: 192.168.15.2
      mask: 255.255.255.0
      vip: 192.168.15.1
      priority: 200
    vlan20:
      ip: 192.168.20.2
      mask: 255.255.255.0
      vip: 192.168.20.1
      priority: 100
    vlan25:
      ip: 192.168.25.2
      mask: 255.255.255.0
      vip: 192.168.25.1
      priority: 100
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.10.2
        - 192.168.15.2
        - 192.168.20.2
        - 192.168.25.2
        - 192.168.40.1
        - 192.168.50.1
      wild_masks:
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.3
        - 0.0.0.3
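Note that the ospf block pairs subnets with wild_masks purely by position, so the two lists must stay the same length and in the same order. A quick Python check of that invariant, using a trimmed copy of the data (a sketch, not part of the Salt workflow):

```python
# Trimmed copy of the ospf block from agg1_data.sls.
ospf = {
    "process": 1,
    "area": 0,
    "networks": {
        "subnets": ["192.168.10.2", "192.168.40.1"],
        "wild_masks": ["0.0.0.255", "0.0.0.3"],
    },
}

nets = ospf["networks"]
# Positional pairing only works if both lists line up one-to-one.
assert len(nets["subnets"]) == len(nets["wild_masks"])
pairs = list(zip(nets["subnets"], nets["wild_masks"]))
print(pairs)
```

A mismatch in list lengths would silently drop or mis-assign masks at render time, so it is worth validating before templating.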

root@mrcissp-master-1:/etc/salt/pillar# cat agg2_data.sls
agg_data:
  native_vlan: 100
  voice_vlan:
    - 15
    - 25
  data_vlan:
    - 10
    - 20
  downlink:
    - f1/0
    - f1/1
    - f1/2
  stp:
    primary:
      - 15
      - 25
    secondary:
      - 10
      - 20
  uplink:
    f1/3:
      ip: 192.168.70.1
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.60.1
      mask: 255.255.255.252
  svi:
    vlan10:
      ip: 192.168.10.3
      mask: 255.255.255.0
      vip: 192.168.10.1
      priority: 100
    vlan15:
      ip: 192.168.15.3
      mask: 255.255.255.0
      vip: 192.168.15.1
      priority: 300
    vlan20:
      ip: 192.168.20.3
      mask: 255.255.255.0
      vip: 192.168.20.1
      priority: 200
    vlan25:
      ip: 192.168.25.3
      mask: 255.255.255.0
      vip: 192.168.25.1
      priority: 200
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.10.3
        - 192.168.15.3
        - 192.168.20.3
        - 192.168.25.3
        - 192.168.60.1
        - 192.168.70.1
      wild_masks:
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.3
        - 0.0.0.3
root@mrcissp-master-1:/etc/salt/pillar#

root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
    - agg1_data
  'agg2':
    - agg2
    - agg2_data
  'access1':
    - access1
    - access1_vlan
    - access1_port
  'access2':
    - access2
    - access2_vlan
    - access2_port

Jinja template for Aggregation switch configuration “agg_sw_config.jinja”

root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/agg_sw_config.jinja
{%- for voice_vlan in pillar['agg_data']['voice_vlan'] %}
vlan {{ voice_vlan }}
name VOICE_VLAN{{voice_vlan}}
exit
{%- endfor %}

{%- for data_vlan in pillar['agg_data']['data_vlan'] %}
vlan {{ data_vlan }}
name DATA_VLAN{{data_vlan}}
exit
{%- endfor %}

{%- for interface in pillar['agg_data']['downlink'] %}
interface {{ interface }}
switchport
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk native vlan {{ pillar['agg_data']['native_vlan'] }}
switchport trunk allowed vlan 1,2,1002-1005
{%- for vlan in pillar['agg_data']['data_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
{%- for vlan in pillar['agg_data']['voice_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
exit
{%- endfor %}

{%- for stp_role in pillar['agg_data']['stp'].keys() %}
{% if stp_role == 'primary' %}
{%- for vlan in pillar['agg_data']['stp'][stp_role] %}
spanning-tree vlan {{ vlan  }} root primary
{%- endfor %}
{% elif stp_role == 'secondary' %}
{%- for vlan in pillar['agg_data']['stp'][stp_role] %}
spanning-tree vlan {{ vlan  }} root secondary
{%- endfor %}
{%- endif %}
{%- endfor %}


{%- for svi_interface in pillar['agg_data']['svi'].keys() %}
interface {{ svi_interface }}
ip address {{ pillar['agg_data']['svi'][svi_interface]['ip'] }} {{ pillar['agg_data']['svi'][svi_interface]['mask'] }}
standby version 2
standby 1 ip {{ pillar['agg_data']['svi'][svi_interface]['vip'] }}
standby 1 priority {{ pillar['agg_data']['svi'][svi_interface]['priority'] }}
standby 1 preempt
no shutdown
{%- endfor %}

{%- for layer3_interface in pillar['agg_data']['uplink'].keys() %}
interface {{ layer3_interface }}
ip address {{ pillar['agg_data']['uplink'][layer3_interface]['ip'] }} {{ pillar['agg_data']['uplink'][layer3_interface]['mask'] }}
no shutdown
exit
{%- endfor %}

router ospf {{ pillar['agg_data']['ospf']['process'] }}
{%- set mask = pillar['agg_data']['ospf']['networks']['wild_masks'] %}
{%- for subnet in pillar['agg_data']['ospf']['networks']['subnets'] %}
network {{ subnet }} {{ mask[loop.index0] }} area {{ pillar['agg_data']['ospf']['area'] }}
{%- endfor %}
exit
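One Jinja pitfall in the OSPF section is worth calling out: a counter assigned with {% set %} inside a for loop does not persist across iterations, so a hand-incremented index silently stays at 0 and every network statement would get the first wildcard mask. loop.index0 is the reliable way to walk a parallel list. A minimal demonstration:

```python
from jinja2 import Template

# Hand-rolled counter: the inner {% set %} creates a per-iteration
# variable, so masks[i] always reads index 0.
broken = Template(
    "{% set i = 0 %}"
    "{% for s in subnets %}{{ masks[i] }} {% set i = i + 1 %}{% endfor %}"
)

# loop.index0 advances with the loop, pairing each subnet with its mask.
fixed = Template(
    "{% for s in subnets %}{{ masks[loop.index0] }} {% endfor %}"
)

subnets = ["192.168.10.0", "192.168.40.0"]
masks = ["0.0.0.255", "0.0.0.3"]

broken_out = broken.render(subnets=subnets, masks=masks).split()
fixed_out = fixed.render(subnets=subnets, masks=masks).split()
print(broken_out)  # the counter never advances past 0
print(fixed_out)
```

This scoping rule is documented Jinja behavior: assignments made inside a block do not leak outside it, and that includes each iteration of a for loop.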

Rendering the template

root@mrcissp-master-1:/etc/salt/pillar# salt 'agg1' net.load_template salt://agg_sw_config.jinja test='true'
agg1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +vlan 15
        +name VOICE_VLAN15
        +exit
        +vlan 25
        +name VOICE_VLAN25
        +exit
        +vlan 10
        +name DATA_VLAN10
        +exit
        +vlan 20
        +name DATA_VLAN20
        +exit
        +interface f1/0
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 15
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/1
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 15
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/2
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 15
        +switchport trunk allowed vlan add 25
        +exit
        +spanning-tree vlan 10 root primary
        +spanning-tree vlan 20 root primary
        +spanning-tree vlan 15 root secondary
        +spanning-tree vlan 25 root secondary
        +ip address 192.168.10.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.10.1
        +standby 1 priority 200
        +standby 1 preempt
        -no shutdown
        +ip address 192.168.20.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.20.1
        +standby 1 priority 100
        +standby 1 preempt
        -no shutdown
        +ip address 192.168.25.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.25.1
        +standby 1 priority 100
        +standby 1 preempt
        -no shutdown
        +ip address 192.168.15.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.15.1
        +standby 1 priority 200
        +standby 1 preempt
        -no shutdown
        +interface f1/4
        +ip address 192.168.50.1 255.255.255.252
        -no shutdown
        +exit
        +interface f1/3
        +ip address 192.168.40.1 255.255.255.252
        -no shutdown
        +exit
        +network 192.168.10.2 0.0.0.255 area 0
        +network 192.168.15.2 0.0.0.255 area 0
        +network 192.168.20.2 0.0.0.255 area 0
        +network 192.168.25.2 0.0.0.255 area 0
        +network 192.168.40.1 0.0.0.3 area 0
        +network 192.168.50.1 0.0.0.3 area 0
        +exit
    loaded_config:
    result:
        True
root@mrcissp-master-1:/etc/salt/pillar#

Requirement 5: Core Switch configuration for
1) Downlink ports – Routed Layer 3 ports
2) OSPF configuration

Solution: The core switch configuration in our lab is quite similar to what we configured on the aggregation switches. Therefore, the following new files are created for core switch configuration automation.

  1. core1_data.sls – Minimum information required for core1 switch.
  2. core2_data.sls – Minimum information required for core2 switch.
  3. Update top.sls
  4. core_sw_config.jinja – Jinja template for core switch configuration
root@mrcissp-master-1:/etc/salt/pillar# cat core1_data.sls
core_data:
  downlink:
    f1/3:
      ip: 192.168.40.2
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.60.2
      mask: 255.255.255.252
    f1/5:
      ip: 192.168.80.1
      mask: 255.255.255.252
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.60.2
        - 192.168.70.2
        - 192.168.80.1
      wild_masks:
        - 0.0.0.3
        - 0.0.0.3
        - 0.0.0.3

root@mrcissp-master-1:/etc/salt/pillar# cat core2_data.sls
core_data:
  downlink:
    f1/3:
      ip: 192.168.70.2
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.50.2
      mask: 255.255.255.252
    f1/5:
      ip: 192.168.80.2
      mask: 255.255.255.252
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.50.2
        - 192.168.70.2
        - 192.168.80.2
      wild_masks:
        - 0.0.0.3
        - 0.0.0.3
        - 0.0.0.3

root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
    - core1_data
  'core2':
    - core2
    - core2_data
  'agg1':
    - agg1
    - agg1_data
  'agg2':
    - agg2
    - agg2_data
  'access1':
    - access1
    - access1_vlan
    - access1_port
  'access2':
    - access2
    - access2_vlan
    - access2_port

Jinja template for core switch configuration “core_sw_config.jinja”

root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/core_sw_config.jinja
{%- for layer3_interface in pillar['core_data']['downlink'].keys() %}
interface {{ layer3_interface }}
ip address {{ pillar['core_data']['downlink'][layer3_interface]['ip'] }} {{ pillar['core_data']['downlink'][layer3_interface]['mask'] }}
no shutdown
exit
{%- endfor %}

router ospf {{ pillar['core_data']['ospf']['process'] }}
{%- set mask = pillar['core_data']['ospf']['networks']['wild_masks'] %}
{%- for subnet in pillar['core_data']['ospf']['networks']['subnets'] %}
network {{ subnet }} {{ mask[loop.index0] }} area {{ pillar['core_data']['ospf']['area'] }}
{%- endfor %}
exit

Rendering the template

root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://core_sw_config.jinja  test='True' debug='true'
core1:
    ----------
    already_configured:
        False
    comment:
        Configuration discarded.
    diff:
        +interface f1/4
        +ip address 192.168.60.2 255.255.255.252
        -no shutdown
        +exit
        +interface f1/5
        +ip address 192.168.80.1 255.255.255.252
        -no shutdown
        +exit
        +interface f1/3
        +ip address 192.168.40.2 255.255.255.252
        -no shutdown
        +exit
        +network 192.168.60.2 0.0.0.3 area 0
        +network 192.168.70.2 0.0.0.3 area 0
        +network 192.168.80.1 0.0.0.3 area 0
        +exit
    loaded_config:

        interface f1/4
        ip address 192.168.60.2 255.255.255.252
        no shutdown
        exit
        interface f1/5
        ip address 192.168.80.1 255.255.255.252
        no shutdown
        exit
        interface f1/3
        ip address 192.168.40.2 255.255.255.252
        no shutdown
        exit

        router ospf 1


        network 192.168.60.2 0.0.0.3 area 0

        network 192.168.70.2 0.0.0.3 area 0

        network 192.168.80.1 0.0.0.3 area 0

        exit
    result:
        True
root@mrcissp-master-1:/etc/salt/pillar#

Summary

Now we have created all the templates required for our organization’s campus deployment. Bringing up a new branch or building takes only a few minutes: the only remaining tasks are to create the required “input_files” for the respective network devices and to provide basic connectivity. This matters most when you maintain hundreds of switches and routers and do not want to spend precious time on such repetitive tasks.

SaltStack is a good tool for network automation (though still maturing), and it brings real advantages: it increases efficiency and effectiveness and reduces human error when deploying and maintaining large networks.


Understanding “Salt” CLI syntax

Salt is very well structured. A very important functionality is represented by the execution modules. They are the main entry point into the Salt world. The execution modules are Python modules, and are very easy to read (and eventually write) by anyone with basic Python programming knowledge. Everything is linear, which makes them flexible and easy to understand; in general, they consist only of simple functions.

For a complete list of modules, refer to the Salt Module Index. The corresponding Python source code can be found on GitHub.

The general “salt” CLI syntax is: salt [options] '<target>' <module>.<function> [arguments]

In this post, we will look at the most frequently used modules.

Grains Module

Function: “items”
Purpose: To collect all grains from the managed system.  

root@mrcissp-master-1:/# salt '*' grains.items
Router1:
    ----------
    cpuarch:
        x86_64
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 8.8.8.8
        ip6_nameservers:
        nameservers:
            - 8.8.8.8
        options:
        search:
        sortlist:
    fqdns:
    gpus:
    host:
        192.168.200.1
    hostname:
        R1
    hwaddr_interfaces:
        ----------
        eth0:
            d2:32:87:c9:f2:8f
    id:
        Router1
    interfaces:
        - GigabitEthernet0/0
        - GigabitEthernet0/1
        - GigabitEthernet0/2
        - GigabitEthernet0/3
        - Loopback0
    kernel:
        proxy
    kernelrelease:
        proxy
    kernelversion:
        proxy
    locale_info:
        ----------
    machine_id:
        578962dbb63ae45b159330245dd26e77
    master:
        192.168.100.2
    mem_total:
        0
    model:
        IOSv
    nodename:
        mrcissp-minion-1
    num_gpus:
        0
    optional_args:
        ----------
        config_lock:
            False
        keepalive:
            5
    os:
        ios
    os_family:
        proxy
    osarch:
        x86_64
    osfinger:
        proxy-proxy
    osfullname:
        proxy
    osrelease:
        proxy
    osrelease_info:
        - proxy
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python
    pythonpath:
        - /usr/local/bin
        - /usr/lib/python2.7
        - /usr/lib/python2.7/plat-x86_64-linux-gnu
        - /usr/lib/python2.7/lib-tk
        - /usr/lib/python2.7/lib-old
        - /usr/lib/python2.7/lib-dynload
        - /usr/local/lib/python2.7/dist-packages
        - /usr/lib/python2.7/dist-packages
    pythonversion:
        - 2
        - 7
        - 15
        - final
        - 0
    saltpath:
        /usr/local/lib/python2.7/dist-packages/salt
    saltversion:
        2019.2.2
    saltversioninfo:
        - 2019
        - 2
        - 2
        - 0
    serial:
        97277GPG1FLKXDX5WL1G0
    shell:
        /bin/sh
    uptime:
        240
    username:
        mrcissp
    vendor:
        Cisco
    version:
        IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(2)T, RELEASE SOFTWARE (fc2)
    virtual:
        VMware
    zmqversion:
        4.3.2
wlc1:
    ----------
    cpuarch:
        x86_64
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 8.8.8.8
        ip6_nameservers:
        nameservers:
            - 8.8.8.8
        options:
        search:
        sortlist:
    fqdns:
    gpus:
    hwaddr_interfaces:
        ----------
        eth0:
            d2:32:87:c9:f2:8f
    id:
        wlc1
    kernel:
        proxy
    kernelrelease:
        proxy
    kernelversion:
        proxy
    locale_info:
        ----------
    machine_id:
        578962dbb63ae45b159330245dd26e77
    master:
        192.168.100.2
    mem_total:
        0
    nodename:
        mrcissp-minion-1
    num_gpus:
        0
    os:
        proxy
    os_family:
        proxy
    osarch:
        x86_64
    osfinger:
        proxy-proxy
    osfullname:
        proxy
    osrelease:
        proxy
    osrelease_info:
        - proxy
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python
    pythonpath:
        - /usr/local/bin
        - /usr/lib/python2.7
        - /usr/lib/python2.7/plat-x86_64-linux-gnu
        - /usr/lib/python2.7/lib-tk
        - /usr/lib/python2.7/lib-old
        - /usr/lib/python2.7/lib-dynload
        - /usr/local/lib/python2.7/dist-packages
        - /usr/lib/python2.7/dist-packages
    pythonversion:
        - 2
        - 7
        - 15
        - final
        - 0
    saltpath:
        /usr/local/lib/python2.7/dist-packages/salt
    saltversion:
        2019.2.2
    saltversioninfo:
        - 2019
        - 2
        - 2
        - 0
    shell:
        /bin/sh
    virtual:
        VMware
    zmqversion:
        4.3.2
mrcissp-minion-1:
    ----------
    SSDs:
    biosreleasedate:
        07/29/2019
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - ht
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - cpuid
        - pni
        - pclmulqdq
        - vmx
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - 3dnowprefetch
        - cpuid_fault
        - pti
        - ssbd
        - ibrs
        - ibpb
        - stibp
        - tpr_shadow
        - vnmi
        - ept
        - vpid
        - fsgsbase
        - smep
        - arat
        - flush_l1d
        - arch_capabilities
    cpu_model:
        Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
    cpuarch:
        x86_64
    disks:
        - loop1
        - sdb
        - loop6
        - loop4
        - sr0
        - loop2
        - loop0
        - loop7
        - sda
        - loop5
        - loop3
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 8.8.8.8
        ip6_nameservers:
        nameservers:
            - 8.8.8.8
        options:
        search:
        sortlist:
    domain:
    fqdn:
        mrcissp-minion-1
    fqdn_ip4:
        - 127.0.1.1
    fqdn_ip6:
    fqdns:
    gid:
        0
    gpus:
    groupname:
        root
    host:
        mrcissp-minion-1
    hwaddr_interfaces:
        ----------
        eth0:
            d2:32:87:c9:f2:8f
    id:
        mrcissp-minion-1
    init:
        unknown
    ip4_interfaces:
        ----------
        eth0:
            - 192.168.100.3
        lo:
            - 127.0.0.1
    ip6_interfaces:
        ----------
        eth0:
            - fe80::d032:87ff:fec9:f28f
        lo:
            - ::1
    ip_interfaces:
        ----------
        eth0:
            - 192.168.100.3
            - fe80::d032:87ff:fec9:f28f
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.100.3
    ipv6:
        - ::1
        - fe80::d032:87ff:fec9:f28f
    kernel:
        Linux
    kernelrelease:
        4.15.0-55-generic
    kernelversion:
        #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019
    locale_info:
        ----------
        defaultencoding:
            None
        defaultlanguage:
            None
        detectedencoding:
            ANSI_X3.4-1968
        timezone:
            unknown
    localhost:
        mrcissp-minion-1
    lsb_distrib_codename:
        bionic
    lsb_distrib_description:
        Ubuntu 18.04.3 LTS
    lsb_distrib_id:
        Ubuntu
    lsb_distrib_release:
        18.04
    machine_id:
        578962dbb63ae45b159330245dd26e77
    manufacturer:
        VMware, Inc.
    master:
        192.168.100.2
    mdadm:
    mem_total:
        3944
    nodename:
        mrcissp-minion-1
    num_cpus:
        4
    num_gpus:
        0
    os:
        Ubuntu
    os_family:
        Debian
    osarch:
        amd64
    oscodename:
        bionic
    osfinger:
        Ubuntu-18.04
    osfullname:
        Ubuntu
    osmajorrelease:
        18
    osrelease:
        18.04
    osrelease_info:
        - 18
        - 4
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    pid:
        4560
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python
    pythonpath:
        - /usr/local/bin
        - /usr/lib/python2.7
        - /usr/lib/python2.7/plat-x86_64-linux-gnu
        - /usr/lib/python2.7/lib-tk
        - /usr/lib/python2.7/lib-old
        - /usr/lib/python2.7/lib-dynload
        - /usr/local/lib/python2.7/dist-packages
        - /usr/lib/python2.7/dist-packages
    pythonversion:
        - 2
        - 7
        - 15
        - final
        - 0
    saltpath:
        /usr/local/lib/python2.7/dist-packages/salt
    saltversion:
        2019.2.2
    saltversioninfo:
        - 2019
        - 2
        - 2
        - 0
    serialnumber:
        VMware-56 4d e4 6c d3 e5 53 d5-0c 20 c1 55 a4 0e b9 4e
    server_id:
        822305722
    shell:
        /bin/sh
    swap_total:
        924
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid
        version:
            237
    uid:
        0
    username:
        root
    uuid:
        564de46c-d3e5-53d5-0c20-c155a40eb94e
    virtual:
        VMware
    virtual_subtype:
        Docker
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.3.2

Note: When a device is managed by NAPALM, as “Router1” is in this case, additional grains are collected. Details of these grains can be found in the SaltStack documentation.

We can also observe that far fewer grains are found on the WLC, i.e. “wlc1”, for the following reasons:

  1. It is managed by “netmiko”.
  2. By default, there is no grains-collection module for devices managed via “netmiko”.

Function: “get”
Purpose: To get the value of a given grain.
Argument: requested grain

root@mrcissp-master-1:/# salt '*' grains.get os
wlc1:
    proxy
Router1:
    ios
mrcissp-minion-1:
    Ubuntu
root@mrcissp-master-1:/# salt '*' grains.get master
wlc1:
    192.168.100.2
Router1:
    192.168.100.2
mrcissp-minion-1:
    192.168.100.2
root@mrcissp-master-1:/# salt '*' grains.get host
Router1:
    192.168.200.1
wlc1:
mrcissp-minion-1:
    mrcissp-minion-1
root@mrcissp-master-1:/#
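grains.get also supports colon-delimited keys for reaching into nested grains (e.g. grains.get dns:nameservers). The traversal behind that lookup is straightforward; below is a rough Python sketch of the idea, not Salt’s actual code (Salt returns a configurable default rather than None).

```python
def grain_get(grains, key, delimiter=":"):
    """Walk nested dicts using a colon-delimited key, like grains.get."""
    value = grains
    for part in key.split(delimiter):
        if not isinstance(value, dict) or part not in value:
            return None
        value = value[part]
    return value

# Grains shaped like Router1's output above (trimmed).
grains = {"os": "ios", "dns": {"nameservers": ["8.8.8.8"]}}

print(grain_get(grains, "dns:nameservers"))
print(grain_get(grains, "os"))
```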

Pillar Module

Function: “items”
Purpose: To collect all pillar data available to a Minion.

root@mrcissp-master-1:/# salt '*' pillar.items
Router1:
    ----------
    proxy:
        ----------
        driver:
            ios
        host:
            192.168.200.1
        passwd:
            Nvidia@123
        proxytype:
            napalm
        username:
            mrcissp
wlc1:
    ----------
    proxy:
        ----------
        device_type:
            cisco_wlc
        ip:
            192.168.241.2
        password:
            Nvidia@123
        proxytype:
            netmiko
        username:
            mrcissp
mrcissp-minion-1:
    ----------

Function: “get”
Purpose: To get the value of a given pillar key.

root@mrcissp-master-1:/# salt '*' pillar.get proxy
wlc1:
    ----------
    device_type:
        cisco_wlc
    ip:
        192.168.241.2
    password:
        Nvidia@123
    proxytype:
        netmiko
    username:
        mrcissp
Router1:
    ----------
    driver:
        ios
    host:
        192.168.200.1
    passwd:
        Nvidia@123
    proxytype:
        napalm
    username:
        mrcissp
mrcissp-minion-1:

Test Module

Function: “ping”
Purpose: To verify connectivity from the Master to a Minion and to check that the Minion is configured properly.

root@mrcissp-master-1:/# salt '*' test.ping
wlc1:
    True
Router1:
    True
mrcissp-minion-1:
    True
root@mrcissp-master-1:/#

Netmiko Module

Function: “send_command”
Purpose: Execute command_string on the SSH channel using a pattern-based mechanism, generally used for show commands. By default, this method keeps reading data until the network device prompt is detected; the current prompt is determined automatically.

root@mrcissp-master-1:/# salt wlc1 netmiko.send_command 'show sysinfo'
wlc1:

    Manufacturer's Name.............................. Cisco Systems Inc.
    Product Name..................................... Cisco Controller
    Product Version.................................. 8.9.111.0
    RTOS Version..................................... 8.9.111.0
    Bootloader Version............................... 8.5.1.85
    Emergency Image Version.......................... 8.9.111.0

    OUI File Last Update Time........................ Tue Feb 06 10:44:07 UTC 2018
    r,aes192-ctr,aes256-ctr,aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,3des-cbc
    Build Type....................................... DATA + WPS

    System Name...................................... Cisco-0c0c.9da2.b501
    System Location..................................
    System Contact...................................
    System ObjectID.................................. 1.3.6.1.4.1.9.1.1631
    IP Address....................................... 192.168.241.2
    IPv6 Address..................................... ::
    System Up Time................................... 0 days 0 hrs 34 mins 4 secs
    System Timezone Location.........................
    System Stats Realtime Interval................... 5
    System Stats Normal Interval..................... 180

    Configured Country............................... US  - United States

    State of 802.11b Network......................... Enabled
    State of 802.11a Network......................... Enabled
    Number of WLANs.................................. 1
    Number of Active Clients......................... 0

    OUI Classification Failure Count................. 0

    Memory Current Usage............................. 52
    Memory Average Usage............................. 52
    CPU Current Usage................................ 0
    CPU Average Usage................................ 0

    Flash Type....................................... Compact Flash Card
    Flash Size....................................... 1073741824

    Burned-in MAC Address............................ 0C:0C:9D:A2:B5:01
    Maximum number of APs supported.................. 200
    System Nas-Id....................................
    WLC MIC Certificate Types........................ SHA1
    Licensing Type................................... RTU
    vWLC config...................................... Small

Net Module (aka napalm_network)

“net” is the virtual name of the “napalm_network” module.
Function: “connected”
Purpose: Verify connectivity from the Master to the network devices managed by the “napalm” proxy.

root@mrcissp-master-1:/# salt Router1 net.connected
Router1:
    ----------
    out:
        True
root@mrcissp-master-1:/#

Function: “arp”
Purpose: To get the ARP entries on all interfaces.

root@mrcissp-master-1:/# salt Router1 net.arp
Router1:
    ----------
    comment:
    out:
        |_
          ----------
          age:
              0.0
          interface:
              GigabitEthernet0/0
          ip:
              192.168.100.1
          mac:
              0C:0C:9D:A4:BF:00
        |_
          ----------
          age:
              0.0
          interface:
              GigabitEthernet0/0
          ip:
              192.168.100.2
          mac:
              56:EC:C7:5B:E8:9C
        |_
          ----------
          age:
              65.0
          interface:
              GigabitEthernet0/0
          ip:
              192.168.100.3
          mac:
              D2:32:87:C9:F2:8F
        |_
          ----------
          age:
              1.0
          interface:
              GigabitEthernet0/1
          ip:
              192.168.108.2
          mac:
              00:50:56:E5:45:56
        |_
          ----------
          age:
              0.0
          interface:
              GigabitEthernet0/1
          ip:
              192.168.108.131
          mac:
              0C:0C:9D:A4:BF:01
        |_
          ----------
          age:
              53.0
          interface:
              GigabitEthernet0/1
          ip:
              192.168.108.254
          mac:
              00:50:56:ED:B5:FA
        |_
          ----------
          age:
              0.0
          interface:
              GigabitEthernet0/2
          ip:
              192.168.240.1
          mac:
              0C:0C:9D:A4:BF:02
        |_
          ----------
          age:
              68.0
          interface:
              GigabitEthernet0/2
          ip:
              192.168.240.2
          mac:
              0C:0C:9D:77:06:00
    result:
        True

Napalm Module (aka napalm_mod)

“napalm” is the virtual name of the “napalm_mod” module.
Function: “call”
Purpose: To invoke NAPALM library methods on the device; with the “cli” method, it executes raw show commands remotely.

root@mrcissp-master-1:/# salt Router1 napalm.call 'cli' ['show version','show ip int br']
Router1:
    ----------
    comment:
    out:
        ----------
        show ip int br:
            Interface                  IP-Address      OK? Method Status                Protocol
            GigabitEthernet0/0         192.168.100.1   YES NVRAM  up                    up
            GigabitEthernet0/1         192.168.108.131 YES DHCP   up                    up
            GigabitEthernet0/2         192.168.240.1   YES NVRAM  up                    up
            GigabitEthernet0/3         unassigned      YES NVRAM  administratively down down
            Loopback0                  192.168.200.1   YES NVRAM  up                    up
        show version:
            Cisco IOS Software, IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(2)T, RELEASE SOFTWARE (fc2)
            Technical Support: http://www.cisco.com/techsupport
            Copyright (c) 1986-2016 by Cisco Systems, Inc.
            Compiled Tue 22-Mar-16 16:19 by prod_rel_team


            ROM: Bootstrap program is IOSv

            R1 uptime is 1 hour, 10 minutes
            System returned to ROM by reload
            System restarted at 18:45:09 UTC Sun Nov 24 2019
            System image file is "flash0:/vios-adventerprisek9-m"
            Last reload reason: Unknown reason



            This product contains cryptographic features and is subject to United
            States and local country laws governing import, export, transfer and
            use. Delivery of Cisco cryptographic products does not imply
            third-party authority to import, export, distribute or use encryption.
            Importers, exporters, distributors and users are responsible for
            compliance with U.S. and local country laws. By using this product you
            agree to comply with applicable laws and regulations. If you are unable
            to comply with U.S. and local laws, return this product immediately.

            A summary of U.S. laws governing Cisco cryptographic products may be found at:
            http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

            If you require further assistance please contact us by sending email to
            export@cisco.com.

            Cisco IOSv (revision 1.0) with  with 460017K/62464K bytes of memory.
            Processor board ID 97277GPG1FLKXDX5WL1G0
            4 Gigabit Ethernet interfaces
            DRAM configuration is 72 bits wide with parity disabled.
            256K bytes of non-volatile configuration memory.
            2097152K bytes of ATA System CompactFlash 0 (Read/Write)
            0K bytes of ATA CompactFlash 1 (Read/Write)
            1024K bytes of ATA CompactFlash 2 (Read/Write)
            0K bytes of ATA CompactFlash 3 (Read/Write)



            Configuration register is 0x0
    result:
        True
root@mrcissp-master-1:/#

Sys Module

Function: “list_modules”
Purpose: To get the list of loaded modules, shown under their virtual names.

root@mrcissp-master-1:/# salt '*' sys.list_modules
Router1:
    - aliases
    - alternatives
    - ansible
    - archive
    - artifactory
    - beacons
    - bgp
    - bigip
    - buildout
    - chassis
    - chronos
    - ciscoconfparse
    - cisconso
    - cloud
    - cmd
    - composer
    - config
    - consul
    - container_resource
    - cp
    - cpan
    - cryptdev
    - data
    - ddns
    - defaults
    - devmap
    - disk
    - django
    - dnsmasq
    - dnsutil
    - drbd
    - environ
    - esxcluster
    - esxdatacenter
    - esxi
    - esxvm
    - etcd
    - ethtool
    - event
    - extfs
    - file
    - gem
    - genesis
    - git
    - glassfish
    - gnome
    - google_chat
    - grafana4
    - grains
    - hashutil
    - highstate_doc
    - hipchat
    - hosts
    - http
    - hue
    - incron
    - ini
    - inspector
    - introspect
    - iosconfig
    - jboss7
    - jboss7_cli
    - k8s
    - key
    - keyboard
    - locale
    - locate
    - log
    - logrotate
    - mandrill
    - marathon
    - match
    - mattermost
    - mine
    - minion
    - modjk
    - mount
    - msteams
    - nagios_rpc
    - namecheap_domains
    - namecheap_domains_dns
    - namecheap_domains_ns
    - namecheap_ssl
    - namecheap_users
    - napalm
    - napalm_bgp
    - napalm_formula
    - napalm_net
    - napalm_ntp
    - napalm_route
    - napalm_snmp
    - napalm_users
    - net
    - netaddress
    - netmiko
    - network
    - nexus
    - nova
    - ntp
    - nxos
    - nxos_api
    - openscap
    - openstack_config
    - opsgenie
    - out
    - pagerduty
    - pagerduty_util
    - pam
    - parallels
    - peeringdb
    - pillar
    - pip
    - pkg_resource
    - probes
    - publish
    - pushover
    - pyeapi
    - pyenv
    - random
    - random_org
    - rbenv
    - rest_sample_utils
    - ret
    - route
    - rvm
    - s3
    - s6
    - salt_proxy
    - saltcheck
    - saltutil
    - schedule
    - scp
    - scsi
    - sdb
    - seed
    - serverdensity_device
    - slack
    - slsutil
    - smbios
    - smtp
    - snmp
    - solrcloud
    - sqlite3
    - ssh
    - state
    - status
    - statuspage
    - supervisord
    - sys
    - sysfs
    - syslog_ng
    - system
    - telegram
    - telemetry
    - temp
    - test
    - textfsm
    - timezone
    - uptime
    - users
    - vault
    - vcenter
    - virtualenv
    - vsphere
    - zabbix
    - zenoss
wlc1:
    - aliases
    - alternatives
    - ansible
    - archive
    - artifactory
    - beacons
    - bigip
    - buildout
    - chassis
    - chronos
    - ciscoconfparse
    - cisconso
    - cloud
    - cmd
    - composer
    - config
    - consul
    - container_resource
    - cp
    - cpan
    - cryptdev
    - data
    - ddns
    - defaults
    - devmap
    - disk
    - django
    - dnsmasq
    - dnsutil
    - drbd
    - environ
    - esxcluster
    - esxdatacenter
    - esxi
    - esxvm
    - etcd
    - ethtool
    - event
    - extfs
    - file
    - gem
    - genesis
    - git
    - glassfish
    - gnome
    - google_chat
    - grafana4
    - grains
    - hashutil
    - highstate_doc
    - hipchat
    - hosts
    - http
    - hue
    - incron
    - ini
    - inspector
    - introspect
    - iosconfig
    - jboss7
    - jboss7_cli
    - k8s
    - key
    - keyboard
    - locale
    - locate
    - log
    - logrotate
    - mandrill
    - marathon
    - match
    - mattermost
    - mine
    - minion
    - modjk
    - mount
    - msteams
    - nagios_rpc
    - namecheap_domains
    - namecheap_domains_dns
    - namecheap_domains_ns
    - namecheap_ssl
    - namecheap_users
    - netaddress
    - netmiko
    - network
    - nexus
    - nova
    - nxos
    - nxos_api
    - openscap
    - openstack_config
    - opsgenie
    - out
    - pagerduty
    - pagerduty_util
    - pam
    - parallels
    - peeringdb
    - pillar
    - pip
    - pkg_resource
    - publish
    - pushover
    - pyeapi
    - pyenv
    - random
    - random_org
    - rbenv
    - rest_sample_utils
    - ret
    - rvm
    - s3
    - s6
    - salt_proxy
    - saltcheck
    - saltutil
    - schedule
    - scp
    - scsi
    - sdb
    - seed
    - serverdensity_device
    - slack
    - slsutil
    - smbios
    - smtp
    - solrcloud
    - sqlite3
    - ssh
    - state
    - status
    - statuspage
    - supervisord
    - sys
    - sysfs
    - syslog_ng
    - system
    - telegram
    - telemetry
    - temp
    - test
    - textfsm
    - timezone
    - uptime
    - vault
    - vcenter
    - virtualenv
    - vsphere
    - zabbix
    - zenoss
mrcissp-minion-1:
    - aliases
    - alternatives
    - ansible
    - archive
    - artifactory
    - beacons
    - bigip
    - btrfs
    - buildout
    - ciscoconfparse
    - cloud
    - cmd
    - composer
    - config
    - consul
    - container_resource
    - cp
    - cpan
    - cryptdev
    - data
    - ddns
    - debconf
    - defaults
    - devmap
    - disk
    - django
    - dnsmasq
    - dnsutil
    - drbd
    - environ
    - etcd
    - ethtool
    - event
    - extfs
    - file
    - gem
    - genesis
    - git
    - glassfish
    - gnome
    - google_chat
    - grafana4
    - grains
    - group
    - hashutil
    - highstate_doc
    - hipchat
    - hosts
    - http
    - incron
    - ini
    - inspector
    - introspect
    - iosconfig
    - ip
    - jboss7
    - jboss7_cli
    - k8s
    - kernelpkg
    - key
    - keyboard
    - kmod
    - locale
    - locate
    - log
    - logrotate
    - lowpkg
    - mandrill
    - match
    - mattermost
    - mine
    - minion
    - modjk
    - mount
    - msteams
    - nagios_rpc
    - namecheap_domains
    - namecheap_domains_dns
    - namecheap_domains_ns
    - namecheap_ssl
    - namecheap_users
    - netaddress
    - netmiko
    - network
    - nexus
    - nova
    - nxos_api
    - openscap
    - openstack_config
    - opsgenie
    - out
    - pagerduty
    - pagerduty_util
    - pam
    - parallels
    - peeringdb
    - pillar
    - pip
    - pkg
    - pkg_resource
    - publish
    - pushover
    - pyeapi
    - pyenv
    - random
    - random_org
    - rbenv
    - rest_sample_utils
    - ret
    - rvm
    - s3
    - s6
    - salt_proxy
    - saltcheck
    - saltutil
    - schedule
    - scp
    - scsi
    - sdb
    - seed
    - serverdensity_device
    - service
    - shadow
    - slack
    - slsutil
    - smbios
    - smtp
    - solrcloud
    - sqlite3
    - ssh
    - state
    - status
    - statuspage
    - supervisord
    - sys
    - sysctl
    - sysfs
    - syslog_ng
    - system
    - telegram
    - telemetry
    - temp
    - test
    - textfsm
    - timezone
    - uptime
    - user
    - vault
    - vbox_guest
    - virtualenv
    - vsphere
    - xfs
    - zabbix
    - zenoss

Function: “list_functions”
Purpose: To get the list of loaded functions.

Function: “doc”
Argument: “module.function”
Purpose: To get the documentation of the specified function.

root@mrcissp-master-1:/# salt Router1 sys.doc net.arp
net.arp:

    NAPALM returns a list of dictionaries with details of the ARP entries.

    :param interface: interface name to filter on
    :param ipaddr: IP address to filter on
    :param macaddr: MAC address to filter on
    :return: List of the entries in the ARP table

    CLI Example:

        salt '*' net.arp
        salt '*' net.arp macaddr='5c:5e:ab:da:3c:f0'

    Example output:

        [
            {
                'interface' : 'MgmtEth0/RSP0/CPU0/0',
                'mac'       : '5c:5e:ab:da:3c:f0',
                'ip'        : '172.17.17.1',
                'age'       : 1454496274.84
            },
            {
                'interface': 'MgmtEth0/RSP0/CPU0/0',
                'mac'       : '66:0e:94:96:e0:ff',
                'ip'        : '172.17.17.2',
                'age'       : 1435641582.49
            }
        ]
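
Since net.arp returns a plain list of dictionaries, its output is easy to post-process. As a rough sketch (an illustration, not part of Salt itself), the interface/ipaddr/macaddr filters described in the docstring could be reproduced client-side like this, using the example entries above:

```python
# Sketch only: reproduce net.arp's interface/ipaddr/macaddr filters
# client-side on the list-of-dicts structure shown above.

def filter_arp(entries, interface=None, ipaddr=None, macaddr=None):
    """Return only the entries matching every supplied filter."""
    result = []
    for entry in entries:
        if interface and entry["interface"] != interface:
            continue
        if ipaddr and entry["ip"] != ipaddr:
            continue
        if macaddr and entry["mac"].lower() != macaddr.lower():
            continue
        result.append(entry)
    return result

# Sample entries taken from the example output above.
arp_table = [
    {"interface": "MgmtEth0/RSP0/CPU0/0", "mac": "5c:5e:ab:da:3c:f0",
     "ip": "172.17.17.1", "age": 1454496274.84},
    {"interface": "MgmtEth0/RSP0/CPU0/0", "mac": "66:0e:94:96:e0:ff",
     "ip": "172.17.17.2", "age": 1435641582.49},
]

print(filter_arp(arp_table, macaddr="5C:5E:AB:DA:3C:F0")[0]["ip"])  # 172.17.17.1
```

Note the case-insensitive MAC comparison, which matches how MAC addresses are usually handled.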

Targeting Minions

In this post, we will take a look at the common techniques that can be used to target Minions.

Targeting using Minion ID

root@mrcissp-master-1:/# salt Router1 test.ping
Router1:
    True
root@mrcissp-master-1:/# salt wlc1 test.ping
wlc1:
    True
root@mrcissp-master-1:/#

Targeting using List of Minion ID

root@mrcissp-master-1:/# salt -L wlc1,Router1 test.ping
Router1:
    True
wlc1:
    True
root@mrcissp-master-1:/#

Targeting using Grains

For example, suppose we need to know the software version of all IOS routers in our network.

root@mrcissp-master-1:/# salt -C 'G@os:ios' napalm.call 'cli' ['show version']
Router1:
    ----------
    comment:
    out:
        ----------
        show version:
            Cisco IOS Software, IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(2)T, RELEASE SOFTWARE (fc2)
            Technical Support: http://www.cisco.com/techsupport
            Copyright (c) 1986-2016 by Cisco Systems, Inc.
            Compiled Tue 22-Mar-16 16:19 by prod_rel_team


            ROM: Bootstrap program is IOSv

            R1 uptime is 1 minute
            System returned to ROM by reload
            System restarted at 05:47:13 UTC Mon Nov 25 2019
            System image file is "flash0:/vios-adventerprisek9-m"
            Last reload reason: Unknown reason



            This product contains cryptographic features and is subject to United
            States and local country laws governing import, export, transfer and
            use. Delivery of Cisco cryptographic products does not imply
            third-party authority to import, export, distribute or use encryption.
            Importers, exporters, distributors and users are responsible for
            compliance with U.S. and local country laws. By using this product you
            agree to comply with applicable laws and regulations. If you are unable
            to comply with U.S. and local laws, return this product immediately.

            A summary of U.S. laws governing Cisco cryptographic products may be found at:
            http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

            If you require further assistance please contact us by sending email to
            export@cisco.com.

            Cisco IOSv (revision 1.0) with  with 460017K/62464K bytes of memory.
            Processor board ID 97277GPG1FLKXDX5WL1G0
            4 Gigabit Ethernet interfaces
            DRAM configuration is 72 bits wide with parity disabled.
            256K bytes of non-volatile configuration memory.
            2097152K bytes of ATA System CompactFlash 0 (Read/Write)
            0K bytes of ATA CompactFlash 1 (Read/Write)
            1024K bytes of ATA CompactFlash 2 (Read/Write)
            0K bytes of ATA CompactFlash 3 (Read/Write)



            Configuration register is 0x0
    result:
        True
root@mrcissp-master-1:/#

Targeting using Pillars

For example, suppose we need to know the software version of all WLCs in our network. Since the WLCs are managed by the “netmiko” proxy rather than NAPALM, the OS-type grains are not collected for them; therefore, we cannot target the WLCs using grains as discussed above. However, we do know that every WLC in our network is managed through the “netmiko” proxy pillar; hence, we can target them using the pillar instead.

root@mrcissp-master-1:/# salt -I 'proxy:device_type:cisco_wlc' netmiko.send_command 'show sysinfo'
wlc1:

    Manufacturer's Name.............................. Cisco Systems Inc.
    Product Name..................................... Cisco Controller
    Product Version.................................. 8.9.111.0
    RTOS Version..................................... 8.9.111.0
    Bootloader Version............................... 8.5.1.85
    Emergency Image Version.......................... 8.9.111.0

    OUI File Last Update Time........................ Tue Feb 06 10:44:07 UTC 2018
    r,aes192-ctr,aes256-ctr,aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,3des-cbc
    Build Type....................................... DATA + WPS

    System Name...................................... Cisco-0c0c.9da2.b501
    System Location..................................
    System Contact...................................
    System ObjectID.................................. 1.3.6.1.4.1.9.1.1631
    IP Address....................................... 192.168.241.2
    IPv6 Address..................................... ::
    System Up Time................................... 0 days 2 hrs 0 mins 1 secs
    System Timezone Location.........................
    System Stats Realtime Interval................... 5
    System Stats Normal Interval..................... 180

    Configured Country............................... US  - United States

    State of 802.11b Network......................... Enabled
    State of 802.11a Network......................... Enabled
    Number of WLANs.................................. 1
    Number of Active Clients......................... 0

    OUI Classification Failure Count................. 0

    Memory Current Usage............................. 52
    Memory Average Usage............................. 52
    CPU Current Usage................................ 0
    CPU Average Usage................................ 1

    Flash Type....................................... Compact Flash Card
    Flash Size....................................... 1073741824

    Burned-in MAC Address............................ 0C:0C:9D:A2:B5:01
    Maximum number of APs supported.................. 200
    System Nas-Id....................................
    WLC MIC Certificate Types........................ SHA1
    Licensing Type................................... RTU
    vWLC config...................................... Small
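
Conceptually, a pillar target such as ‘proxy:device_type:cisco_wlc’ is a colon-delimited path where the last component is the expected value. A minimal sketch of that matching logic (an illustration only, not Salt’s actual implementation) looks like this:

```python
# Illustration only: evaluate a colon-delimited pillar expression such
# as 'proxy:device_type:cisco_wlc' against a minion's pillar data.

def pillar_match(pillar, expr, delimiter=":"):
    *path, expected = expr.split(delimiter)
    node = pillar
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return False
        node = node[key]
    return node == expected

# Pillar data mirroring wlc1 and Router1 from the pillar.get output.
wlc_pillar = {"proxy": {"proxytype": "netmiko", "device_type": "cisco_wlc"}}
router_pillar = {"proxy": {"proxytype": "napalm", "driver": "ios"}}

print(pillar_match(wlc_pillar, "proxy:device_type:cisco_wlc"))     # True
print(pillar_match(router_pillar, "proxy:device_type:cisco_wlc"))  # False
```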

Compound Targeting

For example, suppose we need to know the software version of all IOS routers of model IOSv.

root@mrcissp-master-1:/# salt -C 'G@os:ios and G@model:IOSv' napalm.call 'cli' ['show version']
Router1:
    ----------
    comment:
    out:
        ----------
        show version:
            Cisco IOS Software, IOSv Software (VIOS-ADVENTERPRISEK9-M), Version 15.6(2)T, RELEASE SOFTWARE (fc2)
            Technical Support: http://www.cisco.com/techsupport
            Copyright (c) 1986-2016 by Cisco Systems, Inc.
            Compiled Tue 22-Mar-16 16:19 by prod_rel_team


            ROM: Bootstrap program is IOSv

            R1 uptime is 15 minutes
            System returned to ROM by reload
            System restarted at 05:47:13 UTC Mon Nov 25 2019
            System image file is "flash0:/vios-adventerprisek9-m"
            Last reload reason: Unknown reason



            This product contains cryptographic features and is subject to United
            States and local country laws governing import, export, transfer and
            use. Delivery of Cisco cryptographic products does not imply
            third-party authority to import, export, distribute or use encryption.
            Importers, exporters, distributors and users are responsible for
            compliance with U.S. and local country laws. By using this product you
            agree to comply with applicable laws and regulations. If you are unable
            to comply with U.S. and local laws, return this product immediately.

            A summary of U.S. laws governing Cisco cryptographic products may be found at:
            http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

            If you require further assistance please contact us by sending email to
            export@cisco.com.

            Cisco IOSv (revision 1.0) with  with 460017K/62464K bytes of memory.
            Processor board ID 97277GPG1FLKXDX5WL1G0
            4 Gigabit Ethernet interfaces
            DRAM configuration is 72 bits wide with parity disabled.
            256K bytes of non-volatile configuration memory.
            2097152K bytes of ATA System CompactFlash 0 (Read/Write)
            0K bytes of ATA CompactFlash 1 (Read/Write)
            1024K bytes of ATA CompactFlash 2 (Read/Write)
            0K bytes of ATA CompactFlash 3 (Read/Write)



            Configuration register is 0x0
    result:
        True
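
Conceptually, a compound target such as ‘G@os:ios and G@model:IOSv’ is evaluated per minion against its grains. A simplified sketch (modelling only the G@ matcher and the “and” operator; the real compound matcher also supports or/not, other matcher prefixes, and nested grains) could look like:

```python
import fnmatch

# Simplified illustration: evaluate a compound target such as
# 'G@os:ios and G@model:IOSv' against one minion's grains. Only the
# G@ (grain) matcher and the 'and' operator are modelled here.

def grain_match(grains, matcher):
    # 'os:ios' -> key 'os', shell-style glob pattern 'ios'
    key, _, pattern = matcher.partition(":")
    return fnmatch.fnmatch(str(grains.get(key, "")), pattern)

def compound_match(grains, expr):
    parts = expr.split(" and ")
    return all(grain_match(grains, p[2:]) for p in parts if p.startswith("G@"))

router_grains = {"os": "ios", "model": "IOSv", "vendor": "Cisco"}
print(compound_match(router_grains, "G@os:ios and G@model:IOSv"))      # True
print(compound_match(router_grains, "G@os:ios and G@model:CSR1000v"))  # False
```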

Salt Nomenclature

Disclaimer: – I am not an expert in Saltstack. I have been spending some good time to understand and unwrap bits of it primarily focused on Network Automation use-cases. This note was written by me (Gaurav Agrawal) in my personal capacity. The opinions expressed in this article are solely my own and do not reflect the view of my employer or my preference towards any of the OEMs.

In the previous post we looked at the installation and basic configuration steps required to get started with Salt. In this post, we will try to unwrap some of the key Salt nomenclature; this is the natural next step in network automation using Salt.

Pillar

Pillar is used to organize configuration data, e.g. NTP server details, DNS server details, syslog server details, interface details, etc.

Since most network devices available today do not support running a native minion agent on the device itself, we need to create a proxy minion that can SSH to the device and retrieve the required information. Cisco IOS devices are supported by the “napalm” proxy, while other OS types, such as Cisco AireOS on the WLC, are supported by the “netmiko” proxy.

The examples below demonstrate how to configure the pillar for the “napalm” and “netmiko” proxies.

Configuring pillar for NAPALM proxy i.e. router.sls
root@mrcissp-master-1:/# cat /etc/salt/pillar/router.sls
proxy:
  proxytype: napalm
  driver: ios
  host: 192.168.200.1
  username: mrcissp
  passwd: Nvidia@123

Refer to NAPALM proxy module for more details.

Configuring NETMIKO pillar i.e. wlc.sls
root@mrcissp-master-1:/# cat /etc/salt/pillar/wlc.sls
proxy:
  proxytype: netmiko
  device_type: cisco_wlc
  username: mrcissp
  password: Nvidia@123
  ip: 192.168.241.2
root@mrcissp-master-1:/#

Refer to NETMIKO proxy module for more details.
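
Before starting a proxy minion, it can be handy to sanity-check that a proxy pillar carries the keys its proxy type expects. In the sketch below, the required-key sets are assumptions inferred from the two example pillars above (both proxy types accept additional optional settings):

```python
# Illustration only: sanity-check a proxy pillar before starting
# salt-proxy. The required-key sets are inferred from the two example
# pillars above; both proxy types accept further optional parameters.

REQUIRED_KEYS = {
    "napalm": {"proxytype", "driver", "host", "username", "passwd"},
    "netmiko": {"proxytype", "device_type", "ip", "username", "password"},
}

def missing_keys(proxy):
    """Return the required keys absent from a proxy pillar dict."""
    required = REQUIRED_KEYS.get(proxy.get("proxytype"), set())
    return sorted(required - proxy.keys())

wlc_proxy = {"proxytype": "netmiko", "device_type": "cisco_wlc",
             "username": "mrcissp", "password": "Nvidia@123",
             "ip": "192.168.241.2"}
print(missing_keys(wlc_proxy))                                         # []
print(missing_keys({"proxytype": "napalm", "host": "192.168.200.1"}))  # ['driver', 'passwd', 'username']
```

Note that the “napalm” proxy uses host/passwd while the “netmiko” proxy uses ip/password, so a check like this catches the easy-to-make key mix-ups between the two.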

Grains

Grains represent static data (i.e. information that is very unlikely to change, or that does not change often) collected from devices. To collect all the grains from Minions/proxy minions, use the command [salt ‘*’ grains.items]. Below are the grains discovered on the running Minion, i.e. “mrcissp-minion-1“.

root@mrcissp-master-1:/# salt '*' grains.items
mrcissp-minion-1:
    ----------
    SSDs:
    biosreleasedate:
        07/29/2019
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - ht
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - cpuid
        - pni
        - pclmulqdq
        - vmx
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - 3dnowprefetch
        - cpuid_fault
        - pti
        - ssbd
        - ibrs
        - ibpb
        - stibp
        - tpr_shadow
        - vnmi
        - ept
        - vpid
        - fsgsbase
        - smep
        - arat
        - flush_l1d
        - arch_capabilities
    cpu_model:
        Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
    cpuarch:
        x86_64
    disks:
        - loop1
        - sdb
        - loop6
        - loop4
        - sr0
        - loop2
        - loop0
        - loop7
        - sda
        - loop5
        - loop3
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 8.8.8.8
        ip6_nameservers:
        nameservers:
            - 8.8.8.8
        options:
        search:
        sortlist:
    domain:
    fqdn:
        mrcissp-minion-1
    fqdn_ip4:
        - 127.0.1.1
    fqdn_ip6:
    fqdns:
    gid:
        0
    gpus:
    groupname:
        root
    host:
        mrcissp-minion-1
    hwaddr_interfaces:
        ----------
        eth0:
            2a:ed:fc:79:7f:6f
    id:
        mrcissp-minion-1
    init:
        unknown
    ip4_interfaces:
        ----------
        eth0:
            - 192.168.100.3
        lo:
            - 127.0.0.1
    ip6_interfaces:
        ----------
        eth0:
            - fe80::28ed:fcff:fe79:7f6f
        lo:
            - ::1
    ip_interfaces:
        ----------
        eth0:
            - 192.168.100.3
            - fe80::28ed:fcff:fe79:7f6f
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.100.3
    ipv6:
        - ::1
        - fe80::28ed:fcff:fe79:7f6f
    kernel:
        Linux
    kernelrelease:
        4.15.0-55-generic
    kernelversion:
        #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019
    locale_info:
        ----------
        defaultencoding:
            None
        defaultlanguage:
            None
        detectedencoding:
            ANSI_X3.4-1968
        timezone:
            unknown
    localhost:
        mrcissp-minion-1
    lsb_distrib_codename:
        bionic
    lsb_distrib_description:
        Ubuntu 18.04.3 LTS
    lsb_distrib_id:
        Ubuntu
    lsb_distrib_release:
        18.04
    machine_id:
        578962dbb63ae45b159330245dd26e77
    manufacturer:
        VMware, Inc.
    master:
        192.168.100.2
    mdadm:
    mem_total:
        3944
    nodename:
        mrcissp-minion-1
    num_cpus:
        4
    num_gpus:
        0
    os:
        Ubuntu
    os_family:
        Debian
    osarch:
        amd64
    oscodename:
        bionic
    osfinger:
        Ubuntu-18.04
    osfullname:
        Ubuntu
    osmajorrelease:
        18
    osrelease:
        18.04
    osrelease_info:
        - 18
        - 4
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    pid:
        4538
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python
    pythonpath:
        - /usr/local/bin
        - /usr/lib/python2.7
        - /usr/lib/python2.7/plat-x86_64-linux-gnu
        - /usr/lib/python2.7/lib-tk
        - /usr/lib/python2.7/lib-old
        - /usr/lib/python2.7/lib-dynload
        - /usr/local/lib/python2.7/dist-packages
        - /usr/lib/python2.7/dist-packages
    pythonversion:
        - 2
        - 7
        - 15
        - final
        - 0
    saltpath:
        /usr/local/lib/python2.7/dist-packages/salt
    saltversion:
        2019.2.2
    saltversioninfo:
        - 2019
        - 2
        - 2
        - 0
    serialnumber:
        VMware-56 4d e4 6c d3 e5 53 d5-0c 20 c1 55 a4 0e b9 4e
    server_id:
        822305722
    shell:
        /bin/sh
    swap_total:
        924
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN -PCRE2 default-hierarchy=hybrid
        version:
            237
    uid:
        0
    username:
        root
    uuid:
        564de46c-d3e5-53d5-0c20-c155a40eb94e
    virtual:
        VMware
    virtual_subtype:
        Docker
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.3.2

Additional Master configuration

File Roots

File roots are primarily used to isolate environments, e.g. a test environment, a development environment, and a production environment served by a common Master.
Navigate to the “Master” configuration file, i.e. “nano /etc/salt/master”, and add the following details. In our test bed, we are referring to the “base” environment.

file_roots:
  base:
    - /etc/salt/pillar
    - /etc/salt/states
    - /etc/salt/reactors
    - /etc/salt/templates
Pillar Roots

Used to map each environment to the appropriate directories of pillar “sls” files.
Navigate to the “Master” configuration file, i.e. “nano /etc/salt/master”, and add the following details. In our test bed, we are referring to the “base” environment.

pillar_roots:
  base:
    - /etc/salt/pillar

Proxy configuration on a Minion

As the proxy minion is a subset of the regular minion, it inherits the same configuration options, as discussed in the minion configuration documentation. However, additional configuration is required for SSH-based proxies to work properly.
Navigate to the “Minion” proxy configuration file, i.e. “nano /etc/salt/proxy”, and add the following details.

master: 192.168.100.2
pki_dir: /etc/salt/pki/proxy
cachedir: /var/cache/salt/proxy
multiprocessing: False
mine_enabled: True

Note: multiprocessing is set to False because, in our example, we are using SSH-based proxies to connect to Router R1 and the WLC, and a single SSH connection cannot be shared across multiple processes. If we were instead using Salt against a REST-based API, e.g. on NX-OS, we would set this to True.

Pillar Top File

A very important configuration: the objective of the pillar “top.sls” file is to tell each Minion ID which of the SLS files defined on the Master it should use.

Note: The top file is another SLS file, named top.sls, found under one of the paths defined in pillar_roots.

  • “ntp_config.sls” could be assigned to all minion IDs
  • “syslog_config.sls” could be assigned to all minion IDs
  • However, “ap_config” must be assigned only to WLC-specific minion IDs
  • Similarly, “bgp_config” must be assigned only to Router-specific minion IDs
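
Put together, the four bullet points above would translate into a top file like the following sketch (the SLS names are illustrative; only “router” and “wlc” actually exist in our test bed):

```yaml
base:
  '*':                 # every minion ID
    - ntp_config
    - syslog_config
  'Router*':           # router-specific minion IDs only
    - bgp_config
  'wlc*':              # WLC-specific minion IDs only
    - ap_config
```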

Navigate to the “Master” top file, i.e. “nano /etc/salt/pillar/top.sls”, & add the following details. In our test bed, we are referring to the “base” environment; e.g. “Router*” matches minion IDs starting with the keyword “Router”.

base:
  Router*:
    - router
  wlc*:
    - wlc
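
The “router” and “wlc” entries above refer to /etc/salt/pillar/router.sls and /etc/salt/pillar/wlc.sls. A minimal sketch of what these pillar files could contain, assuming the standard napalm and netmiko proxy pillar keys (the WLC address, device_type value, and credentials shown are placeholders, not taken from our test bed):

```yaml
# /etc/salt/pillar/router.sls -- napalm proxy for R1
proxy:
  proxytype: napalm
  driver: ios
  host: 192.168.100.1
  username: mrcissp
  passwd: placeholder_password

# /etc/salt/pillar/wlc.sls -- netmiko proxy for the WLC
proxy:
  proxytype: netmiko
  device_type: cisco_wlc        # assumed netmiko platform name
  ip: 192.168.241.10            # placeholder management IP in VLAN 241
  username: mrcissp
  password: placeholder_password
```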

Starting “salt-proxy”

To start a salt-proxy, use the below command, where “-d” tells the process to daemonize (run in the background); for verbose troubleshooting, run it in the foreground with “-l debug” instead.

salt-proxy --proxyid=<proxy_minion_id> -d
root@mrcissp-minion-1:/# salt-proxy --proxyid=Router1 -d
root@mrcissp-minion-1:/# salt-proxy --proxyid=wlc1 -d
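
With more than a handful of devices, starting each proxy by hand becomes tedious. The two commands above can be wrapped in a small helper, sketched below; the DRYRUN switch is our own addition so the loop can be exercised without salt-proxy installed:

```shell
# start_proxies: launch one salt-proxy daemon per proxy minion ID.
# Set DRYRUN=1 to only print the commands instead of executing them.
start_proxies() {
  for id in "$@"; do
    cmd="salt-proxy --proxyid=$id -d"
    if [ -n "$DRYRUN" ]; then
      echo "$cmd"
    else
      $cmd
    fi
  done
}

# Usage: start_proxies Router1 wlc1
```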

Once the proxy minions are started, we are required to accept their respective keys.

root@mrcissp-master-1:/# salt-key
Accepted Keys:
mrcissp-minion-1
Denied Keys:
Unaccepted Keys:
Router1
wlc1
Rejected Keys:
root@mrcissp-master-1:/# salt-key -A
The following keys are going to be accepted:
Unaccepted Keys:
Router1
wlc1
Proceed? [n/Y] Y
Key for minion Router1 accepted.
Key for minion wlc1 accepted.
root@mrcissp-master-1:/# salt-key
Accepted Keys:
Router1
mrcissp-minion-1
wlc1
Denied Keys:
Unaccepted Keys:
Rejected Keys:

Verification

Verify the connectivity between the Master & the proxy Minions. To do this, use the below command.

root@mrcissp-master-1:/# salt '*' test.ping
Router1:
    True
wlc1:
    True
mrcissp-minion-1:
    True
root@mrcissp-master-1:/#
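
The '*' target above is Salt's default glob matching over minion IDs. The same semantics can be illustrated with plain shell patterns; this is a toy illustration of the matching rule only, not a Salt command:

```shell
# Toy illustration of glob targeting: does <pattern> match <minion_id>?
glob_match() {
  case "$2" in
    $1) echo yes ;;
    *)  echo no ;;
  esac
}

glob_match 'Router*' Router1          # prints: yes
glob_match 'Router*' wlc1             # prints: no
glob_match '*' mrcissp-minion-1       # prints: yes -- '*' matches every minion ID
```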

Please remember: in our case, the “Router1” minion manages R1 via the “napalm” proxy, while the “wlc1” minion manages the WLC via the “netmiko” proxy.

Installation and Configuration of Salt with Docker in GNS3

Disclaimer: – I am not an expert in Saltstack. I have been spending some good time to understand and unwrap bits of it primarily focused on Network Automation use-cases. This note was written by me (Gaurav Agrawal) in my personal capacity. The opinions expressed in this article are solely my own and do not reflect the view of my employer or my preference towards any of the OEMs.

This blog demonstrates our first step toward Network Automation using Salt. By the end of this section, one will be familiar with how to start a basic Salt environment.

Below topics will be discussed in this section.

  1. Crafting “Salt-Master” & “Salt-Minion” docker container.
  2. GNS3 topology preparation
  3. Master Configuration
  4. Minion Configuration
  5. Proxy configuration
  6. Verification

Crafting Docker image for Salt-master and Salt-minion

I am sure you must be thinking: why do I need to build a Docker container? Well, we will demonstrate this lab in GNS3, and by default the required containers are not available in the GNS3 marketplace. Hence, we need to create one to fulfill the following objectives, i.e. “faster, scalable, efficient”.

  1. Changes made within an “Ubuntu Host” are not persistent if the GNS3 application is reloaded. Hence, every time we would have to reinstall “salt-master”, “salt-minion”, and their respective dependencies. Therefore, it is a good idea to create an image that has all dependencies installed as soon as we create a container.
  2. The traditional methodology is not scalable; imagine a situation where we need to import 3, 5, 10, or more containers in one project. Adding the same dependencies to each container would be an inefficient use of resources and a time-consuming process.

Hence, we decided to build Docker containers for the “Salt-Master” and “Salt-Minion”.

The only prerequisite is to have the GNS3 VM installed & running on our local machine. Post-installation, it would look like this.

  1. Click on “OK” and select “shell” using the UP/DOWN arrow keys. This brings up the “GNS3 VM” shell.
  2. Enter “pwd” to determine the present working directory.
  3. Enter “sudo su -” to log in as the “root” user.
  4. Navigate to the above working directory; in our case, /home/gns3.
  5. Ensure that the GNS3 VM has internet access. Try “ping google.com”. If not, please check the VM network adapter settings and add the appropriate “NAT” adapter to the VM.
  6. Create a Dockerfile named Master_Dockerfile with the “nano Master_Dockerfile” command and add the below contents:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y nmap
RUN apt-get install -y net-tools
RUN apt-get install -y nano
RUN apt-get install -y yum
RUN apt-get install -y wget
RUN apt-get install -y iputils-ping
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata
ENV TZ=Europe/Minsk
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get install -y salt-master
RUN apt-get install -y salt-api
RUN apt-get install -y systemd
RUN apt-get install -y less
RUN apt-get install -y git
RUN apt-get install -y python3-pip
RUN apt-get install -y python-pip
RUN pip install --upgrade pip
RUN pip install napalm

7. Save and exit.
8. The next step is to build a Docker image from the respective Dockerfile. Please execute the below command:

docker build -f Master_Dockerfile -t mrcissp-master .

9. Similarly, create a Dockerfile named Minion_Dockerfile with the “nano Minion_Dockerfile” command and add the below contents:

FROM ubuntu:18.04
RUN apt-get update && apt-get install -y nmap
RUN apt-get install -y net-tools
RUN apt-get install -y nano
RUN apt-get install -y yum
RUN apt-get install -y wget
RUN apt-get install -y iputils-ping
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata
ENV TZ=Europe/Minsk
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get install -y salt-minion
RUN apt-get install -y salt-api
RUN apt-get install -y systemd
RUN apt-get install -y less
RUN apt-get install -y git
RUN apt-get install -y python3-pip
RUN apt-get install -y python-pip
RUN pip install --upgrade pip
RUN pip install napalm

10. Save and exit.
11. Again, build a Docker image from the respective Dockerfile. Refer to the below command:

docker build -f Minion_Dockerfile -t mrcissp-minion .

Importing these custom-built containers into GNS3

Follow the below steps to import these containers into the GNS3 application.

  1. Navigate to Edit -> Preferences -> Docker -> Docker Containers -> New

2. Click on “Next”.

3. Select the appropriate build, i.e. “mrcissp-master:latest” for the Master & “mrcissp-minion:latest” for the Minion, from the drop-down menu.

4. Click on “Next”.
5. Repeat this for the Minion build. This is how the application window looks:

6. Click on “Apply”.

Creating First GNS3 topology

To start our first project with Salt, the below GNS3 topology has been considered for demonstration.

Note: By default, changes made to the Docker container files are not persistent across GNS3 reloads. Hence, to maintain persistence, the below configuration change is required on our hosts, i.e. “mrcissp-master-1” and “mrcissp-minion-1”.

  1. Right-click on mrcissp-master-1/mrcissp-minion-1 and select “configure”.
  2. From the available tabs, select “Advanced” and add the below-mentioned directories.
/etc
/home
/var

3. Click on “Apply”.

R1 Configuration

!
service password-encryption
!
hostname R1
!
ip domain name mrcissplab.com
ip name-server 8.8.8.8
!
username mrcissp privilege 15 password 7 0525100625454F29485744
!
interface Loopback0
 ip address 192.168.200.1 255.255.255.0
!
interface GigabitEthernet0/0
 ip address 192.168.100.1 255.255.255.0
!
interface GigabitEthernet0/1
 ip address dhcp
!
interface GigabitEthernet0/2
 ip address 192.168.240.1 255.255.255.0
!
router ospf 1
 network 192.168.100.0 0.0.0.255 area 0
 network 192.168.200.0 0.0.0.255 area 0
 network 192.168.240.0 0.0.0.255 area 0
!
ip ssh version 2
!
line vty 0 4
 login local
 transport input ssh
!

Switch Configuration

hostname Switch
!
username mrcissp privilege 15 password 0 mrcissp@123
!
ip domain-name mrcissplab.com
ip name-server 8.8.8.8
!
interface GigabitEthernet0/1
 switchport trunk allowed vlan 241
 switchport trunk encapsulation dot1q
 switchport mode trunk
 no cdp enable
!
interface GigabitEthernet0/0
 no switchport
 ip address 192.168.240.2 255.255.255.0
 negotiation auto
!
vlan 241
 name MGMT_WLC
!
interface Vlan241
 ip address 192.168.241.1 255.255.255.0
!
router ospf 1
 network 192.168.240.0 0.0.0.255 area 0
 network 192.168.241.0 0.0.0.255 area 0
!
ip route 0.0.0.0 0.0.0.0 192.168.240.1
ip ssh version 2
!
line vty 0 4
 login local
 transport input ssh
!

Salt Master Configuration

The Salt system is amazingly simple and easy to configure; each of its two components has a respective configuration file. The salt-master is configured via the master configuration file, i.e. /etc/salt/master.
Identify the Salt-Master IP address, i.e. 192.168.100.2.

root@mrcissp-master-1:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.100.2  netmask 255.255.255.0  broadcast 0.0.0.0
        inet6 fe80::44d:85ff:fe50:8fa3  prefixlen 64  scopeid 0x20<link>
        ether 06:4d:85:50:8f:a3  txqueuelen 1000  (Ethernet)
        RX packets 46  bytes 7327 (7.3 KB)
        RX errors 0  dropped 1  overruns 0  frame 0
        TX packets 12  bytes 936 (936.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Execute “nano /etc/salt/master” & add the IP address on which the salt-master will listen.

# How often, in seconds, to send keepalives after the first one. Default -1 to
# use OS defaults, typically 75 seconds on Linux, see
# /proc/sys/net/ipv4/tcp_keepalive_intvl.
#tcp_keepalive_intvl: -1

interface: 192.168.100.2

Start the salt-master using the command “salt-master -d”, where “-d” tells the process to daemonize (run in the background). Also, execute the “salt-key” command to verify whether the Master can hear any minion. Since we don’t have any minion running as of now, we don’t see any minion key arriving at this Master for authentication.

root@mrcissp-master-1:/# salt-master -d
root@mrcissp-master-1:/# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
Rejected Keys:

Note: To see the fingerprints of the public and private keys on the Master, use the “salt-key -F” command.

root@mrcissp-master-1:/# salt-key -F
Local Keys:
master.pem:  1d:a2:06:00:47:c3:e8:93:dc:97:53:a8:07:bb:0d:c2:41:0b:d8:7d:70:ce:9f:32:62:9b:98:11:30:2e:23:cb
master.pub:  88:13:60:51:ef:ee:7a:58:1a:2c:63:a0:08:f4:06:82:9f:df:81:2e:75:7b:fb:96:43:be:6d:bf:bb:e9:6b:07

Salt Minion Configuration

The salt-minion is configured via the minion configuration file, i.e. /etc/salt/minion.

Execute “nano /etc/salt/minion” & add the IP address of the salt-master used by this Minion.

######    Miscellaneous  settings     ######
############################################
# Default match type for filtering events tags: startswith, endswith, find, regex, fnmatch
#event_match_type: startswith
master: 192.168.100.2
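
By default, the minion ID is derived from the hostname (here “mrcissp-minion-1”). It can also be pinned explicitly in the same file with the `id` option, e.g.:

```yaml
master: 192.168.100.2
id: mrcissp-minion-1   # optional; defaults to the hostname
```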

Start the salt-minion using the command “salt-minion -d”, where “-d” tells the process to daemonize (run in the background).

root@mrcissp-minion-1:/# salt-minion -d
/usr/local/lib/python2.7/dist-packages/salt/scripts.py:198: DeprecationWarning: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date.  Salt will drop support for Python 2.7 in the Sodium release or later.
root@mrcissp-minion-1:/#

Execute the “salt-key” command to verify whether the Master can hear any minion now. As we can see, the minion with minion ID “mrcissp-minion-1” is seen at the Master, but its key is not yet accepted.

root@mrcissp-master-1:/# salt-key
Accepted Keys:
Denied Keys:
Unaccepted Keys:
mrcissp-minion-1
Rejected Keys:
root@mrcissp-master-1:/#

To accept the key from a specific minion, execute “salt-key -a <minion_id>” (the capital “-A” variant accepts all pending keys at once). We can observe that the authentication key for mrcissp-minion-1 has been accepted.

root@mrcissp-master-1:/# salt-key -A mrcissp-minion-1
The following keys are going to be accepted:
Unaccepted Keys:
mrcissp-minion-1
Proceed? [n/Y] Y
Key for minion mrcissp-minion-1 accepted.
root@mrcissp-master-1:/#

Verify the connectivity between Master & Minion

The next step is to verify the connectivity between the Master & the Minion. To do this, use the below command.

root@mrcissp-master-1:/# salt '*' test.ping
mrcissp-minion-1:
    True
root@mrcissp-master-1:/#

True means the Master can communicate with the minion.
False means the Master cannot communicate with the minion.

Refer to my next blog to understand important nomenclature used in Salt.