Hello everyone, today we are going to use SaltStack to deploy a typical campus network running Cisco IOS. A typical campus network is a three-layer architecture with an Access layer, a Distribution (Aggregation) layer and a Core layer. For this demonstration we have prepared the topology below, and our objective is to automate the configuration of all the network devices.

In our lab, all nodes run a Cisco IOS image. We could have used NX-OS for the core layer, but due to limited computing power on my machine I decided to stick with Cisco IOS. The following nodes are shown in the topology above:
- Core nodes – “core1” & “core2”
- Aggregation nodes – “agg1” & “agg2”
- Access nodes – “access1” & “access2”
- SaltStack master – “mrcissp-master-1”
- SaltStack minion – “mrcissp-minion-1”
- Management switches – “mgmt-switch-1” & “mgmt-switch-2”
Initial required configuration
Before we can automate anything, we first need to configure the devices for initial connectivity so that the SaltStack host can SSH in and push configuration. The minimal configuration required on the respective devices is shown below.
“mgmt-switch-1” config
hostname mgmt-1
!
interface FastEthernet1/0
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/1
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/2
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/3
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/4
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/5
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/6
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet1/7
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface FastEthernet0/0
 ip address dhcp
 no shut
!
interface Vlan100
 ip address 192.168.100.1 255.255.255.0
!
exit
ip scp server enable
service password-encryption
ip domain name mrcissplab.com
ip ssh version 2
crypto key generate rsa 1024
username mrcissp privilege 15 password 0 Nvidia@123
line vty 0 4
 login local
 transport input ssh
!
end
wr
!
“access1” config
conf t
hostname access1
!
interface FastEthernet1/10
 switchport mode access
 switchport access vlan 100
 no shut
 duplex full
 speed 100
!
interface Vlan100
 ip address 192.168.100.11 255.255.255.0
!
exit
ip scp server enable
service password-encryption
ip domain name mrcissplab.com
ip ssh version 2
crypto key generate rsa 1024
username mrcissp privilege 15 password 0 Nvidia@123
line vty 0 4
 login local
 transport input ssh
!
end
wr
!
Similarly, the other devices have been configured with the following IP addressing:
192.168.100.31  core1
192.168.100.32  core2
192.168.100.21  agg1
192.168.100.22  agg2
192.168.100.11  access1
192.168.100.12  access2
192.168.100.1   mgmt-switch-1
192.168.100.101 mrcissp-minion-1
192.168.100.100 mrcissp-master-1
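Since the proxy pillar data shown later in this post references the devices by name (for example host: core1), whichever host runs the proxy minion processes must be able to resolve those names. In a lab without DNS, one simple option (my assumption, not shown in the original setup) is to reuse the mapping above as /etc/hosts entries on that host:

root@mrcissp-minion-1:~# cat >> /etc/hosts << 'EOF'
192.168.100.31  core1
192.168.100.32  core2
192.168.100.21  agg1
192.168.100.22  agg2
192.168.100.11  access1
192.168.100.12  access2
EOF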
Please note that the primary purpose of the management switches is to provide basic out-of-band connectivity. Hence, we can hide them in the diagram for a clearer view of this lab. Refer to the updated logical topology below.

For the initial configuration of the “salt-master” and “salt-minion”, refer to the previous post in this series.
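For reference, each network device in this lab is managed through a NAPALM proxy minion. A minimal per-device pillar, reconstructed here from the pillar.items output shown later in this post (the exact file name core1.sls is inferred from top.sls, so treat it as a sketch rather than the author's verbatim file), would look like:

root@mrcissp-master-1:/etc/salt/pillar# cat core1.sls
proxy:
  proxytype: napalm
  driver: ios
  host: core1
  username: mrcissp
  passwd: Nvidia@123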
Let’s Automate
Requirement 1: SNMP configuration, i.e. the SNMPv2 community strings are the same across all devices.
Solution: This is a global configuration and has no dependency on anything except the vendor and OS type. Refer to the flow below for a standard way of automating such configuration with SaltStack.

Command Set for “SNMP” configuration on Cisco IOS
Below is a standard set of commands to configure SNMPv2 on a Cisco IOS device.
snmp-server community private rw
snmp-server community public ro
snmp-server location Saltstack demonstration in GNS3
snmp-server contact Gaurav@mrcissp
snmp-server host 192.168.100.200 public
snmp-server trap link ietf
snmp-server enable traps ospf
snmp-server enable traps
Identify user input & create a new pillar file “snmp.sls”
root@mrcissp-master-1:/# cat /etc/salt/pillar/snmp.sls
snmp:
  rw_string: private
  ro_string: public
  location: 'Saltstack demonstration in GNS3'
  contact: 'Gaurav@mrcissp'
  host: '192.168.100.200'
Map the defined pillar file in the top.sls file
root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
  'agg2':
    - agg2
  'access1':
    - access1
  'access2':
    - access2
The ‘*’ tells Salt to provide the content of snmp.sls to all minions. Hence, this pillar data is available to all minions/proxy minions.
Refresh updated pillar to minions & verify
The next step is to refresh the pillar data by executing salt ‘*’ saltutil.refresh_pillar. We can then use the pillar.items (or pillar.get) execution function to check that the data has been loaded correctly. As shown below, the “snmp” pillar is loaded on all minions as expected.
root@mrcissp-master-1:/etc/salt/pillar# salt '*' saltutil.refresh_pillar
access2: True
access1: True
agg2: True
core1: True
core2: True
agg1: True
root@mrcissp-master-1:/etc/salt/pillar# salt '*' pillar.items
core2:
    ----------
    proxy:
        ----------
        driver: ios
        host: core2
        passwd: Nvidia@123
        proxytype: napalm
        username: mrcissp
    snmp:
        ----------
        contact: Gaurav@mrcissp
        host: 192.168.100.200
        location: Saltstack demonstration in GNS3
        ro_string: public
        rw_string: private
agg2:
    ----------
    proxy:
        ----------
        driver: ios
        host: agg2
        passwd: Nvidia@123
        proxytype: napalm
        username: mrcissp
    snmp:
        ----------
        contact: Gaurav@mrcissp
        host: 192.168.100.200
        location: Saltstack demonstration in GNS3
        ro_string: public
        rw_string: private
access2:
    ----------
    proxy:
        ----------
        driver: ios
        host: access2
        passwd: Nvidia@123
        proxytype: napalm
        username: mrcissp
    snmp:
        ----------
        contact: Gaurav@mrcissp
        host: 192.168.100.200
        location: Saltstack demonstration in GNS3
        ro_string: public
        rw_string: private
core1:
    ----------
    proxy:
        ----------
        driver: ios
        host: core1
        passwd: Nvidia@123
        proxytype: napalm
        username: mrcissp
    snmp:
        ----------
        contact: Gaurav@mrcissp
        host: 192.168.100.200
        location: Saltstack demonstration in GNS3
        ro_string: public
        rw_string: private
access1:
    ----------
    proxy:
        ----------
        driver: ios
        host: access1
        passwd: Nvidia@123
        proxytype: napalm
        username: mrcissp
    snmp:
        ----------
        contact: Gaurav@mrcissp
        host: 192.168.100.200
        location: Saltstack demonstration in GNS3
        ro_string: public
        rw_string: private
agg1:
    ----------
    proxy:
        ----------
        driver: ios
        host: agg1
        passwd: Nvidia@123
        proxytype: napalm
        username: mrcissp
    snmp:
        ----------
        contact: Gaurav@mrcissp
        host: 192.168.100.200
        location: Saltstack demonstration in GNS3
        ro_string: public
        rw_string: private
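If you only want to check a single key rather than dumping the full pillar, pillar.get with a colon-separated path also works, for example:

root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' pillar.get snmp:ro_string

which should simply return the configured value, public in this case.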
Create “jinja2” template “snmp_config.jinja” for command structure
For more details on “jinja”, refer to my previous posts below.
- Understanding “jinja2” – 1
- Understanding “jinja2” – 2
root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/snmp_config.jinja
snmp-server community {{ pillar['snmp']['rw_string'] }} rw
snmp-server community {{ pillar['snmp']['ro_string'] }} ro
snmp-server location {{ pillar['snmp']['location'] }}
snmp-server contact {{ pillar['snmp']['contact'] }}
snmp-server host {{ pillar['snmp']['host'] }} public
snmp-server trap link ietf
snmp-server enable traps ospf
snmp-server enable traps
Render this template on all of our network devices
To test or apply the configuration, we can render the defined template against a device. For this we can use the “net.load_template” execution function, as demonstrated below.
root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://snmp_config.jinja test='True'
core1:
    ----------
    already_configured: False
    comment: Configuration discarded.
    diff:
        +snmp-server community private rw
        +snmp-server community public ro
        +snmp-server location Saltstack demonstration in GNS3
        +snmp-server contact Gaurav@mrcissp
        +snmp-server host 192.168.100.200 public
        +snmp-server trap link ietf
        +snmp-server enable traps ospf
        +snmp-server enable traps
    loaded_config:
    result: True
root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://snmp_config.jinja
core1:
    ----------
    already_configured: False
    comment:
    diff:
        +snmp-server community private rw
        +snmp-server community public ro
        +snmp-server location Saltstack demonstration in GNS3
        +snmp-server contact Gaurav@mrcissp
        +snmp-server host 192.168.100.200 public
        +snmp-server trap link ietf
        +snmp-server enable traps ospf
        +snmp-server enable traps
    loaded_config:
    result: True
In the first command, we ran salt ‘core1’ net.load_template salt://snmp_config.jinja with test=’True’ to preview the configuration changes that this template would push to the “core1” device. Note that the “comment” field reads “Configuration discarded.”, which means the configuration was not applied to the device.
In the second command we did not set the test flag, so the configuration was committed to “core1”. To confirm this, we can look at its running configuration.
<< Before applying template >>
core1#
core1#sh run | in snmp
core1#

<< After Rendering template >>
core1#sh run | in snmp
snmp-server community private RW
snmp-server community public RO
snmp-server trap link ietf
snmp-server location Saltstack demonstration in GNS3
snmp-server contact Gaurav@mrcissp
snmp-server enable traps snmp authentication linkdown linkup coldstart warmstart
snmp-server enable traps vrrp
snmp-server enable traps ds1
snmp-server enable traps tty
snmp-server enable traps eigrp
snmp-server enable traps xgcp
snmp-server enable traps flash insertion removal
snmp-server enable traps ds3
snmp-server enable traps envmon
snmp-server enable traps icsudsu
snmp-server enable traps isdn call-information
snmp-server enable traps isdn layer2
snmp-server enable traps isdn chan-not-avail
snmp-server enable traps isdn ietf
snmp-server enable traps ds0-busyout
snmp-server enable traps ds1-loopback
snmp-server enable traps atm subif
snmp-server enable traps bgp
snmp-server enable traps bstun
snmp-server enable traps bulkstat collection transfer
snmp-server enable traps cnpd
snmp-server enable traps config-copy
snmp-server enable traps config
snmp-server enable traps dial
snmp-server enable traps dlsw
snmp-server enable traps dsp card-status
snmp-server enable traps entity
snmp-server enable traps event-manager
snmp-server enable traps frame-relay
snmp-server enable traps frame-relay subif
snmp-server enable traps hsrp
snmp-server enable traps ipmobile
snmp-server enable traps ipmulticast
snmp-server enable traps mpls ldp
snmp-server enable traps mpls traffic-eng
snmp-server enable traps mpls vpn
snmp-server enable traps msdp
snmp-server enable traps mvpn
snmp-server enable traps ospf state-change
snmp-server enable traps ospf errors
snmp-server enable traps ospf retransmit
snmp-server enable traps ospf lsa
snmp-server enable traps ospf cisco-specific state-change nssa-trans-change
snmp-server enable traps ospf cisco-specific state-change shamlink interface-old
snmp-server enable traps ospf cisco-specific state-change shamlink neighbor
snmp-server enable traps ospf cisco-specific errors
snmp-server enable traps ospf cisco-specific retransmit
snmp-server enable traps ospf cisco-specific lsa
snmp-server enable traps pim neighbor-change rp-mapping-change invalid-pim-message
snmp-server enable traps pppoe
snmp-server enable traps cpu threshold
snmp-server enable traps rsvp
snmp-server enable traps rtr
snmp-server enable traps stun
snmp-server enable traps syslog
snmp-server enable traps l2tun session
snmp-server enable traps vsimaster
snmp-server enable traps vtp
snmp-server enable traps director server-up server-down
snmp-server enable traps isakmp policy add
snmp-server enable traps isakmp policy delete
snmp-server enable traps isakmp tunnel start
snmp-server enable traps isakmp tunnel stop
snmp-server enable traps ipsec cryptomap add
snmp-server enable traps ipsec cryptomap delete
snmp-server enable traps ipsec cryptomap attach
snmp-server enable traps ipsec cryptomap detach
snmp-server enable traps ipsec tunnel start
snmp-server enable traps ipsec tunnel stop
snmp-server enable traps ipsec too-many-sas
snmp-server enable traps rf
snmp-server enable traps voice poor-qov
snmp-server enable traps voice fallback
snmp-server enable traps dnis
snmp-server host 192.168.100.200 public
There is another useful option that helps to understand what is happening with better clarity: debug=’True’.
root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://snmp_config.jinja test='True' debug='True'
core1:
    ----------
    already_configured: False
    comment: Configuration discarded.
    diff:
        +snmp-server enable traps ospf
        +snmp-server enable traps
    loaded_config:
        snmp-server community private rw
        snmp-server community public ro
        snmp-server location Saltstack demonstration in GNS3
        snmp-server contact Gaurav@mrcissp
        snmp-server host 192.168.100.200 public
        snmp-server trap link ietf
        snmp-server enable traps ospf
        snmp-server enable traps
    result: True
We can observe that Salt rendered the complete configuration (shown under loaded_config) and worked out the actual difference against the running configuration of the device (shown under diff).
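So far we have been calling net.load_template directly from the CLI. If you prefer to drive the same template through the state system (so it can be part of a highstate or a schedule), Salt's netconfig state can wrap it. A minimal sketch, assuming the state file is saved as snmp.sls under a directory included in the master's file_roots (the file name and location are my choice, not from the original lab):

configure_snmp:
  netconfig.managed:
    - template_name: salt://snmp_config.jinja

root@mrcissp-master-1:~# salt 'core1' state.apply snmp test=True

Running it with test=True should produce the same kind of dry-run diff we saw above.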
Requirement 2: Syslog configuration, i.e. the syslog servers must be configured as below:
- Core devices – 192.168.100.101 & 192.168.100.102
- Aggregation devices – 192.168.100.103 & 192.168.100.104
- Access devices – 192.168.100.105 & 192.168.100.106
Solution: This requirement adds more complexity because the expected configuration depends on the role of the switch. By default, Salt is not aware of this role, so we need to define custom roles on our proxy minions. Refer to the image below for the complete procedure.

Define & assign the appropriate role to the network devices
To define the role, we use the “grains” execution module and its “set” function.
root@mrcissp-master-1:/etc/salt/pillar# salt 'core*' grains.set 'role' core_switch
core1:
    ----------
    changes:
        ----------
        role: core_switch
    comment:
    result: True
core2:
    ----------
    changes:
        ----------
        role: core_switch
    comment:
    result: True
root@mrcissp-master-1:/etc/salt/pillar# salt 'agg*' grains.set 'role' agg_switch
agg2:
    ----------
    changes:
        ----------
        role: agg_switch
    comment:
    result: True
agg1:
    ----------
    changes:
        ----------
        role: agg_switch
    comment:
    result: True
root@mrcissp-master-1:/etc/salt/pillar# salt 'access*' grains.set 'role' access_switch
access1:
    ----------
    changes:
        ----------
        role: access_switch
    comment:
    result: True
access2:
    ----------
    changes:
        ----------
        role: access_switch
    comment:
    result: True
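To double-check that the grains were stored, we can query them back with grains.get; each proxy minion should report core_switch, agg_switch or access_switch as appropriate:

root@mrcissp-master-1:/etc/salt/pillar# salt '*' grains.get role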
Configuration set for logging server
logging host 192.168.100.101
logging facility local7
logging trap 4
Pillar file “syslog.sls”
Please go through this pillar file carefully. It is written differently from the previous “snmp.sls” file: here we leverage “jinja” inside the pillar file itself, so the appropriate logging hosts are selected according to the role grain of each network device.
syslog:
{% if grains['role'] == 'core_switch' %}
  host:
    - 192.168.100.101
    - 192.168.100.102
{% elif grains['role'] == 'agg_switch' %}
  host:
    - 192.168.100.103
    - 192.168.100.104
{% elif grains['role'] == 'access_switch' %}
  host:
    - 192.168.100.105
    - 192.168.100.106
{% endif %}
  facility: local7
  logging_level: 4
Mapping Pillar file to top.sls file
base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
  'agg2':
    - agg2
  'access1':
    - access1
  'access2':
    - access2
  'access3':
    - access3
  'access4':
    - access4
Please note that “syslog.sls” is again kept under ‘*’ because the syslog configuration should be applied to all network devices.
Refresh pillar & grains data
The output below demonstrates that the appropriate syslog servers are selected per the given requirement.
root@mrcissp-master-1:/etc/salt/pillar# salt '*' saltutil.refresh_pillar
access3: True
agg1: True
access2: True
core2: True
core1: True
access1: True
agg2: True
root@mrcissp-master-1:/etc/salt/pillar# salt '*' pillar.items syslog
agg1:
    ----------
    syslog:
        ----------
        facility: local7
        host:
            - 192.168.100.103
            - 192.168.100.104
        logging_level: 4
core2:
    ----------
    syslog:
        ----------
        facility: local7
        host:
            - 192.168.100.101
            - 192.168.100.102
        logging_level: 4
access2:
    ----------
    syslog:
        ----------
        facility: local7
        host:
            - 192.168.100.105
            - 192.168.100.106
        logging_level: 4
core1:
    ----------
    syslog:
        ----------
        facility: local7
        host:
            - 192.168.100.101
            - 192.168.100.102
        logging_level: 4
access1:
    ----------
    syslog:
        ----------
        facility: local7
        host:
            - 192.168.100.105
            - 192.168.100.106
        logging_level: 4
agg2:
    ----------
    syslog:
        ----------
        facility: local7
        host:
            - 192.168.100.103
            - 192.168.100.104
        logging_level: 4
Creating the Jinja template for syslog configuration, i.e. syslog_config.jinja
{%- for server in pillar['syslog']['host'] %}
logging host {{ server }}
{%- endfor %}
logging facility {{ pillar['syslog']['facility'] }}
logging trap {{ pillar['syslog']['logging_level'] }}
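As a design note, the role-based selection could also have lived in the template instead of the pillar. A hypothetical variant (not used in this lab, shown only for comparison, and assuming the role grain is always set) would reference the grain directly:

{%- set role_servers = {
      'core_switch':   ['192.168.100.101', '192.168.100.102'],
      'agg_switch':    ['192.168.100.103', '192.168.100.104'],
      'access_switch': ['192.168.100.105', '192.168.100.106'] } %}
{%- for server in role_servers[grains['role']] %}
logging host {{ server }}
{%- endfor %}
logging facility local7
logging trap 4

Keeping the selection in the pillar, as done here, keeps the template purely about Cisco IOS syntax, which is why the pillar approach is preferable.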
Rendering Jinja template to a device
root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://syslog_config.jinja test='True'
core1:
    ----------
    already_configured: False
    comment: Configuration discarded.
    diff:
        +logging host 192.168.100.101
        +logging host 192.168.100.102
        +logging facility local7
        +logging trap 4
    loaded_config:
    result: True
root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://syslog_config.jinja
core1:
    ----------
    already_configured: False
    comment:
    diff:
        +logging host 192.168.100.101
        +logging host 192.168.100.102
        +logging facility local7
        +logging trap 4
    loaded_config:
    result: True

core1#show run | in logging
logging trap warnings
logging 192.168.100.101
logging 192.168.100.102
logging synchronous
logging synchronous
Requirement 3: Access Switch configuration for
1) VLAN creation
2) Uplink switch port – Trunk
3) Downlink switch port – Access
Solution: Again, a similar procedure is followed. The following new files are created for this automation:
- access1_vlan.sls – VLAN information for access1 switch.
- access1_port.sls – Uplink and Downlink port information for access1 switch
- access2_vlan.sls – VLAN information for access2 switch.
- access2_port.sls – Uplink and Downlink port information for access2 switch
- Update top.sls
- access_sw_config.jinja – Jinja template for access switch configuration
root@mrcissp-master-1:/etc/salt/pillar# cat access1_vlan.sls
access_vlan:
  native_vlan: 100
  data_vlan:
    - 10
  voice_vlan:
    - 15
root@mrcissp-master-1:/etc/salt/pillar# cat access1_port.sls
access_port:
  uplink:
    - f1/0
    - f1/1
  downlink:
    f1/2: 10
root@mrcissp-master-1:/etc/salt/pillar# cat access2_vlan.sls
access_vlan:
  native_vlan: 100
  data_vlan:
    - 20
  voice_vlan:
    - 25
root@mrcissp-master-1:/etc/salt/pillar# cat access2_port.sls
access_port:
  uplink:
    - f1/0
    - f1/1
  downlink:
    f1/2: 20
root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
  'agg2':
    - agg2
  'access1':
    - access1
    - access1_vlan
    - access1_port
  'access2':
    - access2
    - access2_vlan
    - access2_port
Template for access switch configuration – “access_sw_config.jinja”
root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/access_sw_config.jinja
{%- for voice_vlan in pillar['access_vlan']['voice_vlan'] %}
vlan {{ voice_vlan }}
name VOICE_VLAN{{voice_vlan}}
exit
{%- endfor %}
{%- for data_vlan in pillar['access_vlan']['data_vlan'] %}
vlan {{ data_vlan }}
name DATA_VLAN{{data_vlan}}
exit
{%- endfor %}
{%- for interface in pillar['access_port']['uplink'] %}
interface {{ interface }}
switchport
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk native vlan {{ pillar['access_vlan']['native_vlan'] }}
switchport trunk allowed vlan 1,2,1002-1005
{%- for vlan in pillar['access_vlan']['data_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
{%- for vlan in pillar['access_vlan']['voice_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
exit
{%- endfor %}
{%- for interface in pillar['access_port']['downlink'].keys() %}
interface {{ interface }}
switchport
switchport mode access
switchport access vlan {{ pillar['access_port']['downlink'][interface] }}
exit
{%- endfor %}
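A small Jinja note: since 'downlink' is a mapping of interface to VLAN, the last loop could equally iterate over key/value pairs with .items(), which avoids the repeated pillar lookup. An equivalent sketch of just that block (an alternative I would suggest, not the template used above):

{%- for interface, vlan in pillar['access_port']['downlink'].items() %}
interface {{ interface }}
switchport
switchport mode access
switchport access vlan {{ vlan }}
exit
{%- endfor %}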
Rendering this template to access switches
root@mrcissp-master-1:/etc/salt/pillar# salt 'access*' net.load_template salt://access_sw_config.jinja test='true'
access1:
    ----------
    already_configured: False
    comment: Configuration discarded.
    diff:
        +vlan 15
        +name VOICE_VLAN15
        +exit
        +vlan 10
        +name DATA_VLAN10
        +exit
        +interface f1/0
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 15
        +exit
        +interface f1/1
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 15
        +exit
        +interface f1/2
        +switchport
        +switchport mode access
        +switchport access vlan 10
        +exit
    loaded_config:
    result: True
access2:
    ----------
    already_configured: False
    comment: Configuration discarded.
    diff:
        +vlan 25
        +name VOICE_VLAN25
        +exit
        +vlan 20
        +name DATA_VLAN20
        +exit
        +interface f1/0
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/1
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/2
        +switchport
        +switchport mode access
        +switchport access vlan 20
        +exit
    loaded_config:
    result: True
Requirement 4: Aggregation Switch configuration for
1) VLAN creation
2) Uplink switch port – Routed Layer 3
3) Downlink switch port – Trunk
4) Spanning Tree Configuration
5) HSRP configuration
6) OSPF configuration
Solution: The following new files are created to automate the aggregation switch configuration:
- agg1_data.sls – Minimum information required for agg1 switch.
- agg2_data.sls – Minimum information required for agg2 switch.
- Update top.sls
- agg_sw_config.jinja – Jinja template for aggregation switch configuration
root@mrcissp-master-1:/etc/salt/pillar# cat agg1_data.sls
agg_data:
  native_vlan: 100
  voice_vlan:
    - 15
    - 25
  data_vlan:
    - 10
    - 20
  downlink:
    - f1/0
    - f1/1
    - f1/2
  stp:
    primary:
      - 10
      - 20
    secondary:
      - 15
      - 25
  uplink:
    f1/3:
      ip: 192.168.40.1
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.50.1
      mask: 255.255.255.252
  svi:
    vlan10:
      ip: 192.168.10.2
      mask: 255.255.255.0
      vip: 192.168.10.1
      priority: 200
    vlan15:
      ip: 192.168.15.2
      mask: 255.255.255.0
      vip: 192.168.15.1
      priority: 200
    vlan20:
      ip: 192.168.20.2
      mask: 255.255.255.0
      vip: 192.168.20.1
      priority: 100
    vlan25:
      ip: 192.168.25.2
      mask: 255.255.255.0
      vip: 192.168.25.1
      priority: 100
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.10.2
        - 192.168.15.2
        - 192.168.20.2
        - 192.168.25.2
        - 192.168.40.1
        - 192.168.50.1
      wild_masks:
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.3
        - 0.0.0.3
root@mrcissp-master-1:/etc/salt/pillar# cat agg2_data.sls
agg_data:
  native_vlan: 100
  voice_vlan:
    - 15
    - 25
  data_vlan:
    - 10
    - 20
  downlink:
    - f1/0
    - f1/1
    - f1/2
  stp:
    primary:
      - 15
      - 25
    secondary:
      - 10
      - 20
  uplink:
    f1/3:
      ip: 192.168.70.1
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.60.1
      mask: 255.255.255.252
  svi:
    vlan10:
      ip: 192.168.10.3
      mask: 255.255.255.0
      vip: 192.168.10.1
      priority: 100
    vlan15:
      ip: 192.168.15.3
      mask: 255.255.255.0
      vip: 192.168.15.1
      priority: 300
    vlan20:
      ip: 192.168.20.3
      mask: 255.255.255.0
      vip: 192.168.20.1
      priority: 200
    vlan25:
      ip: 192.168.25.3
      mask: 255.255.255.0
      vip: 192.168.25.1
      priority: 200
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.10.3
        - 192.168.15.3
        - 192.168.20.3
        - 192.168.25.3
        - 192.168.60.1
        - 192.168.70.1
      wild_masks:
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.255
        - 0.0.0.3
        - 0.0.0.3
root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
  'core2':
    - core2
  'agg1':
    - agg1
    - agg1_data
  'agg2':
    - agg2
    - agg2_data
  'access1':
    - access1
    - access1_vlan
    - access1_port
  'access2':
    - access2
    - access2_vlan
    - access2_port
Jinja template for Aggregation switch configuration “agg_sw_config.jinja”
root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/agg_sw_config.jinja
{%- for voice_vlan in pillar['agg_data']['voice_vlan'] %}
vlan {{ voice_vlan }}
name VOICE_VLAN{{voice_vlan}}
exit
{%- endfor %}
{%- for data_vlan in pillar['agg_data']['data_vlan'] %}
vlan {{ data_vlan }}
name DATA_VLAN{{data_vlan}}
exit
{%- endfor %}
{%- for interface in pillar['agg_data']['downlink'] %}
interface {{ interface }}
switchport
switchport mode trunk
switchport trunk encapsulation dot1q
switchport trunk native vlan {{ pillar['agg_data']['native_vlan'] }}
switchport trunk allowed vlan 1,2,1002-1005
{%- for vlan in pillar['agg_data']['data_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
{%- for vlan in pillar['agg_data']['voice_vlan'] %}
switchport trunk allowed vlan add {{ vlan }}
{%- endfor %}
exit
{%- endfor %}
{%- for stp_role in pillar['agg_data']['stp'].keys() %}
{% if stp_role == 'primary' %}
{%- for vlan in pillar['agg_data']['stp'][stp_role] %}
spanning-tree vlan {{ vlan }} root primary
{%- endfor %}
{% elif stp_role == 'secondary' %}
{%- for vlan in pillar['agg_data']['stp'][stp_role] %}
spanning-tree vlan {{ vlan }} root secondary
{%- endfor %}
{%- endif %}
{%- endfor %}
{%- for svi_interface in pillar['agg_data']['svi'].keys() %}
interface {{ svi_interface }}
ip address {{ pillar['agg_data']['svi'][svi_interface]['ip'] }} {{ pillar['agg_data']['svi'][svi_interface]['mask'] }}
standby version 2
standby 1 ip {{ pillar['agg_data']['svi'][svi_interface]['vip'] }}
standby 1 priority {{ pillar['agg_data']['svi'][svi_interface]['priority'] }}
standby 1 preempt
no shutdown
{%- endfor %}
{%- for layer3_interface in pillar['agg_data']['uplink'].keys() %}
interface {{ layer3_interface }}
ip address {{ pillar['agg_data']['uplink'][layer3_interface]['ip'] }} {{ pillar['agg_data']['uplink'][layer3_interface]['mask'] }}
no shutdown
exit
{%- endfor %}
router ospf {{ pillar['agg_data']['ospf']['process'] }}
{% set i = 0 %}
{% set mask = pillar['agg_data']['ospf']['networks']['wild_masks'] %}
{%- for subnet in pillar['agg_data']['ospf']['networks']['subnets'] %}
network {{ subnet }} {{ mask[i] }} area {{ pillar['agg_data']['ospf']['area'] }}
{% set i = i+1 %}
{%- endfor %}
exit
Render template
root@mrcissp-master-1:/etc/salt/pillar# salt 'agg1' net.load_template salt://agg_sw_config.jinja test='true'
agg1:
    ----------
    already_configured: False
    comment: Configuration discarded.
    diff:
        +vlan 15
        +name VOICE_VLAN15
        +exit
        +vlan 25
        +name VOICE_VLAN25
        +exit
        +vlan 10
        +name DATA_VLAN10
        +exit
        +vlan 20
        +name DATA_VLAN20
        +exit
        +interface f1/0
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 15
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/1
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 15
        +switchport trunk allowed vlan add 25
        +exit
        +interface f1/2
        +switchport
        +switchport mode trunk
        +switchport trunk encapsulation dot1q
        +switchport trunk native vlan 100
        +switchport trunk allowed vlan 1,2,1002-1005
        +switchport trunk allowed vlan add 10
        +switchport trunk allowed vlan add 20
        +switchport trunk allowed vlan add 15
        +switchport trunk allowed vlan add 25
        +exit
        +spanning-tree vlan 10 root primary
        +spanning-tree vlan 20 root primary
        +spanning-tree vlan 15 root secondary
        +spanning-tree vlan 25 root secondary
        +ip address 192.168.10.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.10.1
        +standby 1 priority 200
        +standby 1 preempt
        -no shutdown
        +ip address 192.168.20.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.20.1
        +standby 1 priority 100
        +standby 1 preempt
        -no shutdown
        +ip address 192.168.25.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.25.1
        +standby 1 priority 100
        +standby 1 preempt
        -no shutdown
        +ip address 192.168.15.2 255.255.255.0
        +standby version 2
        +standby 1 ip 192.168.15.1
        +standby 1 priority 200
        +standby 1 preempt
        -no shutdown
        +interface f1/4
        +ip address 192.168.50.1 255.255.255.252
        -no shutdown
        +exit
        +interface f1/3
        +ip address 192.168.40.1 255.255.255.252
        -no shutdown
        +exit
        +network 192.168.10.2 0.0.0.255 area 0
        +network 192.168.15.2 0.0.0.255 area 0
        +network 192.168.20.2 0.0.0.255 area 0
        +network 192.168.25.2 0.0.0.255 area 0
        +network 192.168.40.1 0.0.0.255 area 0
        +network 192.168.50.1 0.0.0.255 area 0
        +exit
    loaded_config:
    result: True
root@mrcissp-master-1:/etc/salt/pillar#
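One detail worth pointing out in the diff above: the network statements for the /30 uplinks (192.168.40.1 and 192.168.50.1) were rendered with the wildcard mask 0.0.0.255 instead of the intended 0.0.0.3. This happens because in Jinja2 a {% set %} inside a for loop does not persist across iterations, so the counter i effectively stays at 0 and every subnet is paired with the first wildcard mask. The idiomatic fix is the built-in loop.index0 counter; a corrected sketch of just the OSPF block of the template would be:

router ospf {{ pillar['agg_data']['ospf']['process'] }}
{%- set mask = pillar['agg_data']['ospf']['networks']['wild_masks'] %}
{%- for subnet in pillar['agg_data']['ospf']['networks']['subnets'] %}
network {{ subnet }} {{ mask[loop.index0] }} area {{ pillar['agg_data']['ospf']['area'] }}
{%- endfor %}
exit

Alternatively, the pillar could pair each subnet with its mask in a single list of dictionaries, which avoids index bookkeeping entirely.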
Requirement 5: Core Switch configuration for
1) Downlink ports – Routed Layer 3 ports
2) OSPF configuration
Solution: The core configuration for our lab is quite similar to what we configured on the aggregation switches. The following new files are created to automate the core switch configuration:
- core1_data.sls – Minimum information required for core1 switch.
- core2_data.sls – Minimum information required for core2 switch.
- Update top.sls
- core_sw_config.jinja – Jinja template for core switch configuration
root@mrcissp-master-1:/etc/salt/pillar# cat core1_data.sls
core_data:
  downlink:
    f1/3:
      ip: 192.168.40.2
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.60.2
      mask: 255.255.255.252
    f1/5:
      ip: 192.168.80.1
      mask: 255.255.255.252
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.60.2
        - 192.168.70.2
        - 192.168.80.1
      wild_masks:
        - 0.0.0.3
        - 0.0.0.3
        - 0.0.0.3
root@mrcissp-master-1:/etc/salt/pillar# cat core2_data.sls
core_data:
  downlink:
    f1/3:
      ip: 192.168.70.2
      mask: 255.255.255.252
    f1/4:
      ip: 192.168.50.2
      mask: 255.255.255.252
    f1/5:
      ip: 192.168.80.2
      mask: 255.255.255.252
  ospf:
    process: 1
    area: 0
    networks:
      subnets:
        - 192.168.50.2
        - 192.168.70.2
        - 192.168.80.2
      wild_masks:
        - 0.0.0.3
        - 0.0.0.3
        - 0.0.0.3
root@mrcissp-master-1:/etc/salt/pillar# cat top.sls
base:
  '*':
    - snmp
    - syslog
  'core1':
    - core1
    - core1_data
  'core2':
    - core2
    - core2_data
  'agg1':
    - agg1
    - agg1_data
  'agg2':
    - agg2
    - agg2_data
  'access1':
    - access1
    - access1_vlan
    - access1_port
  'access2':
    - access2
    - access2_vlan
    - access2_port
Jinja template for core switch configuration “core_sw_config.jinja”
root@mrcissp-master-1:/etc/salt/pillar# cat /etc/salt/templates/core_sw_config.jinja
{%- for layer3_interface in pillar['core_data']['downlink'].keys() %}
interface {{ layer3_interface }}
ip address {{ pillar['core_data']['downlink'][layer3_interface]['ip'] }} {{ pillar['core_data']['downlink'][layer3_interface]['mask'] }}
no shutdown
exit
{%- endfor %}
router ospf {{ pillar['core_data']['ospf']['process'] }}
{% set i = 0 %}
{% set mask = pillar['core_data']['ospf']['networks']['wild_masks'] %}
{%- for subnet in pillar['core_data']['ospf']['networks']['subnets'] %}
network {{ subnet }} {{ mask[i] }} area {{ pillar['core_data']['ospf']['area'] }}
{% set i = i+1 %}
{%- endfor %}
exit
Rendering template
root@mrcissp-master-1:/etc/salt/pillar# salt 'core1' net.load_template salt://core_sw_config.jinja test='True' debug='true'
core1:
    ----------
    already_configured: False
    comment: Configuration discarded.
    diff:
        +interface f1/4
        +ip address 192.168.60.2 255.255.255.252
        -no shutdown
        +exit
        +interface f1/5
        +ip address 192.168.80.1 255.255.255.252
        -no shutdown
        +exit
        +interface f1/3
        +ip address 192.168.40.2 255.255.255.252
        -no shutdown
        +exit
        +network 192.168.60.2 0.0.0.3 area 0
        +network 192.168.70.2 0.0.0.3 area 0
        +network 192.168.80.1 0.0.0.3 area 0
        +exit
    loaded_config:
        interface f1/4
        ip address 192.168.60.2 255.255.255.252
        no shutdown
        exit
        interface f1/5
        ip address 192.168.80.1 255.255.255.252
        no shutdown
        exit
        interface f1/3
        ip address 192.168.40.2 255.255.255.252
        no shutdown
        exit
        router ospf 1
        network 192.168.60.2 0.0.0.3 area 0
        network 192.168.70.2 0.0.0.3 area 0
        network 192.168.80.1 0.0.0.3 area 0
        exit
    result: True
root@mrcissp-master-1:/etc/salt/pillar#
Summary
We have now created all the templates required for our organization's campus deployment. Bringing up a new branch or building becomes a matter of minutes: the only tasks left are to create the required input (pillar) files for the respective network devices and to provide basic connectivity. This is especially useful when you have to maintain hundreds of switches and routers and do not want to spend precious time on such repetitive tasks.
SaltStack is a good tool for automating networks (though it still has some maturing to do), and it brings clear advantages: it increases efficiency and effectiveness and reduces human error when deploying and maintaining large networks, particularly when you manage many devices across many network projects.
Thank you so much for visiting.
If you need help or have comments/feedback regarding the topic, please don’t hesitate to drop a comment.