SaltStack Data Systems

ZiChen D
2021-11-02

SaltStack has two data systems:

  • Grains
  • Pillar

SaltStack Data System Components

SaltStack Component: Grains

Grains is the SaltStack component that stores information collected when a minion starts.

Grains is one of the most important SaltStack components, because it is used constantly in configuration and deployment work. It records static information about each minion: common attributes such as CPU, memory, disk, and network details. All of a minion's Grains can be viewed with grains.items.

What Grains does:

  • Collects asset information

Grains use cases:

  • Information queries
  • Target matching on the command line
  • Target matching in the top file
  • Target matching in templates

For target matching in templates, see the official documentation.
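As a brief illustration of using grains inside a template: grains are exposed to Jinja through the `grains` dictionary, so a template can both interpolate values and branch on them. The file name and directives below are hypothetical, not taken from this article:

```jinja
{# files/nginx.conf.j2, rendered per-minion by file.managed with "template: jinja" #}
worker_processes {{ grains['num_cpus'] }};
{% if grains['os_family'] == 'RedHat' %}
pid /run/nginx.pid;
{% endif %}
```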

Information query examples:

List all grains keys and values:

[root@master base]# salt 'node1' grains.items
node1:
    ----------
    biosreleasedate:
        07/29/2019
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - ht
        - syscall
        - nx
        - pdpe1gb
        - rdtscp
        - lm
        - constant_tsc
        - arch_perfmon
        - nopl
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - cpuid
        - pni
        - pclmulqdq
        - vmx
        - ssse3
        - fma
        - cx16
        - pcid
        - sse4_1
        - sse4_2
        - x2apic
        - movbe
        - popcnt
        - tsc_deadline_timer
        - aes
        - xsave
        - avx
        - f16c
        - rdrand
        - hypervisor
        - lahf_lm
        - abm
        - 3dnowprefetch
        - invpcid_single
        - ssbd
        - ibrs
        - ibpb
        - stibp
        - ibrs_enhanced
        - tpr_shadow
        - vnmi
        - ept
        - vpid
        - fsgsbase
        - tsc_adjust
        - bmi1
        - avx2
        - smep
        - bmi2
        - invpcid
        - mpx
        - rdseed
        - adx
        - smap
        - clflushopt
        - xsaveopt
        - xsavec
        - xsaves
        - arat
        - pku
        - ospke
        - md_clear
        - flush_l1d
        - arch_capabilities
    cpu_model:
        Intel(R) Core(TM) i7-10870H CPU @ 2.20GHz
    cpuarch:
        x86_64
    cwd:
        /
    disks:
        - sr0
    dns:
        ----------
        domain:
        ip4_nameservers:
            - 192.168.159.2
            - 114.114.114.114
        ip6_nameservers:
        nameservers:
            - 192.168.159.2
            - 114.114.114.114
        options:
        search:
        sortlist:
    domain:
    efi:
        False
    efi-secure-boot:
        False
    fqdn:
        node1
    fqdn_ip4:
        - 192.168.159.14
    fqdn_ip6:
        - fe80::2300:4f66:9d65:8a7
    fqdns:
        - node1
    gid:
        0
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              vmware
    groupname:
        root
    host:
        node1
    hwaddr_interfaces:
        ----------
        ens160:
            00:0c:29:83:08:01
        lo:
            00:00:00:00:00:00
    id:
        node1
    init:
        systemd
    ip4_gw:
        192.168.159.2
    ip4_interfaces:
        ----------
        ens160:
            - 192.168.159.14
        lo:
            - 127.0.0.1
    ip6_gw:
        False
    ip6_interfaces:
        ----------
        ens160:
            - fe80::2300:4f66:9d65:8a7
        lo:
            - ::1
    ip_gw:
        True
    ip_interfaces:
        ----------
        ens160:
            - 192.168.159.14
            - fe80::2300:4f66:9d65:8a7
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.159.14
    ipv6:
        - ::1
        - fe80::2300:4f66:9d65:8a7
    kernel:
        Linux
    kernelparams:
        |_
          - BOOT_IMAGE
          - (hd0,msdos1)/vmlinuz-4.18.0-193.el8.x86_64
        |_
          - root
          - /dev/mapper/rhel-root
        |_
          - ro
          - None
        |_
          - crashkernel
          - auto
        |_
          - resume
          - /dev/mapper/rhel-swap
        |_
          - rd.lvm.lv
          - rhel/root
        |_
          - rd.lvm.lv
          - rhel/swap
        |_
          - rhgb
          - None
        |_
          - quiet
          - None
    kernelrelease:
        4.18.0-193.el8.x86_64
    kernelversion:
        #1 SMP Fri Mar 27 14:35:58 UTC 2020
    locale_info:
        ----------
        defaultencoding:
            UTF-8
        defaultlanguage:
            zh_CN
        detectedencoding:
            UTF-8
        timezone:
            CST
    localhost:
        node1
    lsb_distrib_codename:
        Red Hat Enterprise Linux 8.2 (Ootpa)
    lsb_distrib_id:
        Red Hat Enterprise Linux
    lsb_distrib_release:
        8.2
    lvm:
        ----------
        rhel:
            - home
            - root
            - swap
    machine_id:
        ef82286d98f4498baef20a6381cef497
    manufacturer:
        VMware, Inc.
    master:
        192.168.159.13
    mdadm:
    mem_total:
        3752
    nodename:
        node1
    num_cpus:
        4
    num_gpus:
        1
    os:
        RedHat
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        Ootpa
    osfinger:
        Red Hat Enterprise Linux-8
    osfullname:
        Red Hat Enterprise Linux
    osmajorrelease:
        8
    osrelease:
        8.2
    osrelease_info:
        - 8
        - 2
    path:
        /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
    pid:
        1736
    productname:
        VMware Virtual Platform
    ps:
        ps -efHww
    pythonexecutable:
        /usr/bin/python3.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python36.zip
        - /usr/lib64/python3.6
        - /usr/lib64/python3.6/lib-dynload
        - /usr/lib64/python3.6/site-packages
        - /usr/lib/python3.6/site-packages
    pythonversion:
        - 3
        - 6
        - 8
        - final
        - 0
    saltpath:
        /usr/lib/python3.6/site-packages/salt
    saltversion:
        3004
    saltversioninfo:
        - 3004
    selinux:
        ----------
        enabled:
            False
        enforced:
            Disabled
    serialnumber:
        VMware-56 4d 09 ed ff f2 07 e5-0e 99 a1 db ce 83 08 01
    server_id:
        1797241226
    shell:
        /bin/sh
    ssds:
        - nvme0n1
    swap_total:
        8075
    systemd:
        ----------
        features:
            +PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy
        version:
            239
    systempath:
        - /usr/local/sbin
        - /usr/local/bin
        - /usr/sbin
        - /usr/bin
    transactional:
        False
    uid:
        0
    username:
        root
    uuid:
        ed094d56-f2ff-e507-0e99-a1dbce830801
    virtual:
        VMware
    zfs_feature_flags:
        False
    zfs_support:
        False
    zmqversion:
        4.3.4

List only the grains keys:

[root@master base]# salt 'node1' grains.ls
node1:
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - cwd
    - disks
    - dns
    - domain
    - efi
    - efi-secure-boot
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - fqdns
    - gid
    - gpus
    - groupname
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_gw
    - ip4_interfaces
    - ip6_gw
    - ip6_interfaces
    - ip_gw
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelparams
    - kernelrelease
    - kernelversion
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - lvm
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - pid
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - ssds
    - swap_total
    - systemd
    - systempath
    - transactional
    - uid
    - username
    - uuid
    - virtual
    - zfs_feature_flags
    - zfs_support
    - zmqversion

Get IP addresses:

[root@master base]# salt '*' grains.get fqdn_ip4
master:
    - 192.168.159.13
node1:
    - 192.168.159.14
node2:
    - 192.168.159.15

[root@master base]# salt '*' grains.get ip4_interfaces
master:
    ----------
    ens160:
        - 192.168.159.13
    lo:
        - 127.0.0.1
node1:
    ----------
    ens160:
        - 192.168.159.14
    lo:
        - 127.0.0.1
node2:
    ----------
    ens160:
        - 192.168.159.15
    lo:
        - 127.0.0.1

[root@master ~]# salt '*' grains.get ip4_interfaces:ens160
master:
    - 192.168.159.13
node2:
    - 192.168.159.15
node1:
    - 192.168.159.14

Target matching examples:
Use Grains to match minions:

//run a command on all RedHat systems
[root@master ~]# salt -G 'os:RedHat' cmd.run 'uptime'
master:
     05:33:41 up 43 min,  2 users,  load average: 0.21, 0.18, 0.12
node2:
     21:33:42 up 1 min,  2 users,  load average: 0.10, 0.04, 0.01
node1:
     21:33:42 up 43 min,  2 users,  load average: 0.10, 0.09, 0.08
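Command-line grain matching also supports nested keys (colon-delimited, like grains.get above) and shell-style globs in the value. The examples below are hypothetical, not runs from this article:

```shell
# match a nested grain: an interface's address under ip4_interfaces
salt -G 'ip4_interfaces:ens160:192.168.159.14' test.ping

# glob the value: any 8.x release
salt -G 'osrelease:8*' cmd.run 'uptime'
```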

Using Grains in the top file:

[root@master ~]# vim /srv/salt/base/top.sls 
[root@master ~]# cat /srv/salt/
base/ dev/  prod/ test/ 
[root@master ~]# cat /srv/salt/base/top.sls 
base: 
  'os:RedHat':
    - match: grain
    - web.nginx.nginx

base:
  'os:CentOS':				//a second CentOS block is added here for demonstration
    - match: grain
    - web.apache.apache

//first stop nginx on the hosts
//ping the minions
[root@master ~]# salt '*' test.ping
master:
    True
node1:		//node1 responds
    True
node2:		//node2 responds
    True
node3:		//node3 is powered off, so it does not respond
    Minion did not return. [No response]
    The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:
    
    salt-run jobs.lookup_jid 20211102213844509477
ERROR: Minions returned with non-zero exit code

//run the highstate
[root@master ~]# salt '*' state.highstate
master:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for master
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node2:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for node2
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node1:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for node1
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node3:
    Minion did not return. [No response]
    The minions may not have all finished running and any remaining minions will return upon completion. To look up the return data for this job later, run the following command:
    
    salt-run jobs.lookup_jid 20211102214827008903
ERROR: Minions returned with non-zero exit code

After this run, every minion reports an error. The second `base:` key in top.sls duplicates the first, and the YAML loader keeps only the last value for a duplicate key, so only the 'os:CentOS' match survives; since none of these systems are CentOS, nothing matches the top file.
Delete the second block and run again:
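The duplicate-key behavior can be sketched with plain Python dicts, which resolve a repeated key the same way PyYAML does (the last value wins), so the 'os:RedHat' block is silently discarded:

```python
# Two fragments of the top file, each declaring its own 'base' key (illustrative).
top_first = {"base": {"os:RedHat": ["web.nginx.nginx"]}}
top_second = {"base": {"os:CentOS": ["web.apache.apache"]}}

# Merging mimics a YAML document with a duplicate top-level key:
# the later 'base' replaces the earlier one entirely.
merged = {**top_first, **top_second}
print(merged["base"])  # {'os:CentOS': ['web.apache.apache']}
```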

[root@master ~]# vim /srv/salt/base/top.sls 
[root@master ~]# cat /srv/salt/base/top.sls 
base: 
  'os:RedHat':
    - match: grain
    - web.nginx.nginx

//run the highstate
[root@master ~]# salt '*' state.highstate
master:
----------
          ID: nginx-install
    Function: pkg.installed
        Name: nginx
      Result: True
     Comment: All specified packages are already installed
     Started: 05:56:33.793509
    Duration: 1088.594 ms
     Changes:   
----------
          ID: nginx-service
    Function: service.running
        Name: nginx
      Result: True
     Comment: The service nginx is already running
     Started: 05:56:34.885222
    Duration: 60.583 ms
     Changes:   

Summary for master
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time:   1.149 s
node1:
----------
          ID: nginx-install
    Function: pkg.installed
        Name: nginx
      Result: True
     Comment: All specified packages are already installed
     Started: 21:56:34.490392
    Duration: 1150.039 ms
     Changes:   
----------
          ID: nginx-service
    Function: service.running
        Name: nginx
      Result: True
     Comment: Service nginx is already enabled, and is running
     Started: 21:56:35.641679
    Duration: 193.41 ms
     Changes:   
              ----------
              nginx:
                  True

Summary for node1
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time:   1.343 s
node2:
----------
          ID: nginx-install
    Function: pkg.installed
        Name: nginx
      Result: True
     Comment: All specified packages are already installed
     Started: 21:56:34.596513
    Duration: 1235.485 ms
     Changes:   
----------
          ID: nginx-service
    Function: service.running
        Name: nginx
      Result: True
     Comment: Service nginx is already enabled, and is running
     Started: 21:56:35.833913
    Duration: 174.166 ms
     Changes:   
              ----------
              nginx:
                  True

Summary for node2
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time:   1.410 s

No error messages this time!

Two ways to define custom Grains:

  • In the minion configuration file (search for grains in the file)
  • In a grains file under /etc/salt on the minion (recommended)

Editing the minion configuration file (not recommended):

//edit the minion configuration file on the minion
# Custom static grains for this minion can be specified here and used in SLS
# files just like all other grains. This example sets 4 custom grains, with
# the 'roles' grain having two values that can be matched against.
grains:				//uncomment
  roles:			//uncomment
    - webserver                 //uncomment
    - memcache                  //uncomment
#  deployment: datacenter4
#  cabinet: 13
#  cab_u: 14-15
#
# Where cache data goes.
[root@node1 ~]# systemctl restart salt-minion.service

[root@master ~]# salt 'node1' grains.items
node1:
    ----------
    # ... unchanged grains omitted (identical to the first grains.items listing) ...
    roles:			//the custom grain now appears
        - webserver
        - memcache
    # ... remaining output omitted ...

The method that requires restarting the service:

[root@master ~]# salt 'node1' grains.items
node1:
    ----------
    banji:
        linux07052版
    biosreleasedate:
        07/29/2019
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
# remaining output omitted

The method that does not require restarting the service:

[root@node1 ~]# vim /etc/salt/grains
[root@node1 ~]# cat /etc/salt/grains 
banji: linux07052版
dengzichen: wuhanxinxi

[root@master ~]# salt '*' saltutil.sync_grains
master:
node1:
node2:
[root@master ~]# salt '*' grains.get dengzichen
master:
node2:
node1:
    wuhanxinxi
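A custom grain can be used anywhere a built-in grain can, including command-line target matching; for example, with the grain defined above (hypothetical run, not from this article):

```shell
salt -G 'dengzichen:wuhanxinxi' test.ping
```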

SaltStack Component: Pillar

Pillar is another essential SaltStack component. It serves as the data management center and is frequently used together with states in large-scale configuration management. Pillar's main role in SaltStack is to store and define the data needed for configuration management, such as software version numbers, usernames, and passwords. Like Grains, Pillar data is written in YAML.

The master configuration file has a Pillar settings section that defines the Pillar-related parameters:

#####         Pillar settings        #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
pillar_roots:			//uncomment
  base:				//uncomment
    - /srv/pillar/base          //uncomment and set the base path
#
#ext_pillar:
#  - hiera: /etc/hiera.yaml
#  - cmd_yaml: cat /etc/salt/yaml

[root@master salt]# systemctl restart salt-master

By default, the base environment's Pillar working directory is /srv/pillar. To define separate Pillar directories for multiple environments, just adjust this part of the configuration file.

Pillar's characteristics:

  • Data can be defined for specific minions
  • Only the targeted minions can see their own data
  • Configured in the master configuration file

//view pillar information
[root@master ~]# salt '*' pillar.items
master:
    ----------
node1:
    ----------
node2:
    ----------

By default, Pillar holds no data. To see the master's configuration data in Pillar, uncomment pillar_opts in the master configuration file and set it to True.

[root@master salt]# pwd
/etc/salt
[root@master salt]# vim master
# The pillar_opts option adds the master configuration file data to a dict in
# the pillar called "master". This is used to set simple configurations in the
# master config file that can then be used on minions.
pillar_opts: True		//set to True; note the capital T

# The pillar_safe_render_error option prevents the master from passing pillar
# render errors to the minion. This is set on by default because the error could
# contain templating data which would give that minion information it shouldn't

[root@master salt]# systemctl restart salt-master

[root@master salt]# salt '*' pillar.items
node1:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
            True
        auth_mode:
            1
        auto_accept:
            False
        azurefs_update_interval:
            60
        cache:
            localfs
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        clean_dynamic_modules:
            True
# remaining output omitted

node2:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
            True
        auth_mode:
            1
        auto_accept:
            False
        azurefs_update_interval:
            60
        cache:
            localfs
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        clean_dynamic_modules:
            True
        cli_summary:
# remaining output omitted

master:
    ----------
    master:
        ----------
        __cli:
            salt-master
        __role:
            master
        allow_minion_key_revoke:
            True
        archive_jobs:
            False
        auth_events:
            True
        auth_mode:
            1
        auto_accept:
            False
        azurefs_update_interval:
            60
        cache:
            localfs
        cache_sreqs:
            True
        cachedir:
            /var/cache/salt/master
        clean_dynamic_modules:
            True
        cli_summary:
            False
        client_acl_verify:
            True
        cluster_mode:
            False
        con_cache:
            False
# remaining output omitted

Defining custom Pillar data:
The pillar_roots setting in the master configuration file shows where Pillar data lives:

[root@master salt]# vim master
#####         Pillar settings        #####
##########################################
# Salt Pillars allow for the building of global data that can be made selectively
# available to different minions based on minion grain filtering. The Salt
# Pillar is laid out in the same fashion as the file server, with environments,
# a top file and sls files. However, pillar data does not need to be in the
# highstate format, and is generally just key/value pairs.
pillar_roots:
  base:
    - /srv/pillar/base
  prod:				//added
    - /srv/pillar/prod
#
#ext_pillar:
#  - hiera: /etc/hiera.yaml
#  - cmd_yaml: cat /etc/salt/yaml

[root@master ~]# mkdir -p /srv/pillar/{base,prod}
[root@master srv]# tree
.
└── pillar
    ├── base
    └── prod

3 directories, 0 files

[root@master srv]# systemctl restart salt-master
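With a prod environment defined, pillar data can also be queried against a specific environment. The pillarenv keyword below is a documented option in recent Salt releases, though exact behavior depends on the version (hypothetical run, not from this article):

```shell
salt '*' pillar.items pillarenv=prod
```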

//create a custom sls file under pillar/base
[root@master base]# vim apache.sls
[root@master base]# cat apache.sls 
{% if grains['os'] == 'CentOS' %}
package: httpd
{% elif grains['os'] == 'RedHat' %}
package: test
{% endif %}
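The intended branch logic of this Jinja file reduces to a simple conditional; a plain-Python sketch (the function name is hypothetical):

```python
def pick_package(os_grain):
    # Mirror the intended Jinja branches: CentOS gets httpd, RedHat gets test.
    if os_grain == "CentOS":
        return "httpd"
    elif os_grain == "RedHat":
        return "test"
    return None  # any other OS gets no 'package' pillar key

print(pick_package("CentOS"))  # httpd
print(pick_package("RedHat"))  # test
```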

//write a top.sls file
[root@master base]# vim top.sls
[root@master base]# cat top.sls 
base:
  'node*':
    - apache                 //same directory, so no path prefix is needed
[root@master base]# tree
.
├── apache.sls
└── top.sls

0 directories, 2 files

[root@master base]# salt '*' pillar.items
node1:
    ----------
    package:
        httpd
node2:
    ----------
    package:
        httpd
master:
    ----------
//node1 and node2 rendered the CentOS branch; master is not matched by 'node*'

//edit the apache state file under salt to reference the pillar data
[root@master apache]# pwd
/srv/salt/base/web/apache
[root@master apache]# cat apache.sls 
apache-install:
  pkg.installed:
    - name: {{ pillar['package'] }}

apache-service:
  service.running:
    - name: {{ pillar['package'] }}
    - enable: True
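One hedged note on the lookup style: `{{ pillar['package'] }}` makes the state fail to render if the key is absent, while `pillar.get` accepts a fallback default, which is often safer in shared states. A sketch (the 'httpd' default here is only an example):

```jinja
apache-install:
  pkg.installed:
    - name: {{ salt['pillar.get']('package', 'httpd') }}
```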


//run the highstate
[root@master base]# salt '*' state.highstate
master:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for master
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node1:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for node1
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node2:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 12:24:26.938109
    Duration: 1213.834 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 12:24:28.153904
    Duration: 36.178 ms
     Changes:   

Summary for node2
------------
Succeeded: 2
Failed:    0
------------
Total states run:     2
Total run time:   1.250 s

//try quoting the variable
[root@master base]# cat /srv/salt/base/web/apache/apache.sls 
apache-install:
  pkg.installed:
    - name: "{{ pillar['package'] }}"

apache-service:
  service.running:
    - name: "{{ pillar['package'] }}"
    - enable: True

//run it again (stop httpd on node2 first)
[root@master base]# salt '*' state.highstate
node1:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for node1
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
master:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found. Please see master log for details.
     Changes:   

Summary for master
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
node2:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 12:26:38.146067
    Duration: 1139.908 ms
     Changes:   
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service httpd is already enabled, and is running
     Started: 12:26:39.287775
    Duration: 239.806 ms
     Changes:   
              ----------
              httpd:
                  True

Summary for node2
------------
Succeeded: 2 (changed=1)
Failed:    0
------------
Total states run:     2
Total run time:   1.380 s
//the result is identical; quoting the Jinja expression makes no difference here

Differences Between Grains and Pillar

Grains
  • Stored on: the minion
  • Type: static
  • Collection: gathered when the minion starts; can be refreshed without restarting the minion service
  • Use cases: 1. information queries 2. target matching on the command line 3. target matching in the top file 4. target matching in templates

Pillar
  • Stored on: the master
  • Type: dynamic
  • Collection: defined on demand, takes effect immediately
  • Use cases: 1. target matching 2. sensitive data configuration

Errors and Fixes

pillar_roots conflicting with file_roots

//wrong
# file_roots:
#   base:
#     - /srv/salt/
#   dev:
#     - /srv/salt/dev/services
#     - /srv/salt/dev/states
#   prod:
#     - /srv/salt/prod/services
#     - /srv/salt/prod/states
//There is a space between the # and file_roots. When uncommenting, that space must be deleted too (and the whole block must keep consistent indentation); otherwise it breaks the uncommented pillar_roots section and the service fails to restart.

pillar_roots:
  base:
    - /srv/pillar/base

[root@master ~]# systemctl restart salt-master.service 
Job for salt-master.service failed because the control process exited with error code.
See "systemctl status salt-master.service" and "journalctl -xe" for details.

//correct
file_roots:   
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev/services
    - /srv/salt/dev/states
  prod:
    - /srv/salt/prod/services
    - /srv/salt/prod/states
//When uncommenting, delete the leading space as well so the keys start at column one; keep the entire block's format consistent, or the error will persist.

pillar_roots:
  base:
    - /srv/pillar/base

[root@master ~]# systemctl restart salt-master.service 
