I am using Ansible + Vagrant to build my infrastructure, or at least a simulation of it. It installs Postgres and creates an SSH directory to store each host's different keys.
This is my project structure:
```
.
├── ansible.cfg
├── cluster_hosts
├── group_vars
│   ├── host_master
│   ├── host_pgpool
│   ├── host_slave1
│   └── postgresql
├── roles
│   ├── postgresql
│   │   ├── files
│   │   ├── handlers
│   │   └── tasks
│   │       └── main.yml
│   └── ssh_agent
│       └── tasks
│           └── main.yml
└── site.yml
```
This is the cluster_hosts inventory:
```
host_master ansible_ssh_host=192.168.1.10 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
host_slave1 ansible_ssh_host=192.168.1.11 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
host_slave2 ansible_ssh_host=192.168.1.12 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant
host_pgpool ansible_ssh_host=192.168.1.13 ansible_ssh_user=vagrant ansible_ssh_pass=vagrant

[ssh]
host_master
host_pgpool
host_slave1

[pg_pool]
host_pgpool

[database]
host_master
host_pgpool
host_slave1
host_slave2
```
These are my group_vars files:
```yaml
# group_vars/host_master
known_hosts:
  - 192.168.1.11
  - 192.168.1.12
```

```yaml
# group_vars/host_pgpool
known_hosts:
  - 192.168.1.11
  - 192.168.1.12
```

```yaml
# group_vars/host_slave1
known_hosts:
  - 192.168.1.12
```
Here is my site.yml:
```yaml
---
# The main playbook to deploy the cluster

# setup database
- hosts: database
  sudo: True
  tags:
    - setup_db
  roles:
    - postgresql

# setup ssh
- hosts: all
  sudo: True
  tags:
    - setup_ssh
  roles:
    - ssh_agent
```
Here is the ssh_agent role:
```yaml
---
- name: Install sshpass
  apt: name={{ item }} state=present
  with_items:
    - sshpass
    - rsync

- name: Create ssh directory
  sudo_user: postgres
  command: mkdir -p /var/lib/postgresql/.ssh/ creates=/var/lib/postgresql/.ssh/

- name: Generate known hosts
  sudo_user: postgres
  shell: ssh-keyscan -t rsa {{ item }} >> /var/lib/postgresql/.ssh/known_hosts
  with_items:
    - "{{ known_hosts }}"

- name: Generate id_rsa key
  sudo_user: postgres
  command: ssh-keygen -t rsa -N "" -C "" -f /var/lib/postgresql/.ssh/id_rsa

- name: Add authorized_keys
  command: sshpass -p postgres ssh-copy-id -i /var/lib/postgresql/.ssh/id_rsa.pub postgres@{{ item }}
  sudo_user: postgres
  with_items:
    - "{{ known_hosts }}"

- name: Owner postgresql
  command: chown postgres:postgres /var/lib/postgresql/.ssh/ -R
```
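As a side note, unrelated to the error below: the directory-creation and ownership tasks above can be expressed with Ansible's `file` module instead of raw `command` calls, and the key-generation task can be made idempotent with `creates=`. A hedged sketch, keeping the same paths and the pre-2.0 `sudo_user` syntax used in this role:

```yaml
# Replaces both "Create ssh directory" and "Owner postgresql":
# the file module creates the directory and sets ownership/permissions in one step.
- name: Create ssh directory
  sudo_user: postgres
  file: path=/var/lib/postgresql/.ssh state=directory owner=postgres group=postgres mode=0700

# creates= skips the task when the key already exists,
# so repeated runs don't fail or overwrite the key.
- name: Generate id_rsa key
  sudo_user: postgres
  command: ssh-keygen -t rsa -N "" -C "" -f /var/lib/postgresql/.ssh/id_rsa creates=/var/lib/postgresql/.ssh/id_rsa
```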
OK, now when I run:
```
ansible-playbook -i cluster_hosts site.yml --tags setup_ssh
```
I get an error in the "Generate known hosts" task:
```
PLAY [all] ********************************************************************

GATHERING FACTS ***************************************************************
ok: [host_pgpool]
ok: [host_slave2]
ok: [host_slave1]
ok: [host_master]

TASK: [ssh_agent | Install sshpass] *******************************************
ok: [host_slave1] => (item=sshpass,rsync)
ok: [host_master] => (item=sshpass,rsync)
ok: [host_pgpool] => (item=sshpass,rsync)
ok: [host_slave2] => (item=sshpass,rsync)

TASK: [ssh_agent | Create ssh directory] **************************************
skipping: [host_master]
skipping: [host_slave2]
skipping: [host_slave1]
skipping: [host_pgpool]

TASK: [ssh_agent | Generate known hosts] **************************************
fatal: [host_slave1] => One or more undefined variables: 'known_hosts' is undefined
fatal: [host_master] => One or more undefined variables: 'known_hosts' is undefined
fatal: [host_slave2] => One or more undefined variables: 'known_hosts' is undefined
fatal: [host_pgpool] => One or more undefined variables: 'known_hosts' is undefined

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/home/robe/site.retry

host_master                : ok=2    changed=0    unreachable=1    failed=0
host_pgpool                : ok=2    changed=0    unreachable=1    failed=0
host_slave1                : ok=2    changed=0    unreachable=1    failed=0
host_slave2                : ok=2    changed=0    unreachable=1    failed=0
```
I don't understand why this error occurs, since the variable is declared in each group_vars file (host_master, host_pgpool, host_slave1).
Is my YAML syntax wrong? I thought that might be the problem, but it looks correct to me.
By default, Ansible does not read every file under `group_vars/`; it only reads `group_vars/all` (or `group_vars/all.yml`; incidentally, I find it more convenient to give variable files the `.yml` extension). You need to tell it in your `site.yml` which files to read, using `vars_files`, like this:
```yaml
- hosts: database
  sudo: True
  tags:
    - setup_db
  roles:
    - postgresql
  vars_files:
    - group_vars/host_master
```
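An alternative worth noting: `host_master`, `host_pgpool`, and `host_slave1` are inventory *host* names rather than group names, so Ansible will pick up the same variables automatically, with no `vars_files` at all, if the files are moved into a `host_vars/` directory next to the inventory. A sketch, assuming the file contents stay exactly as shown above:

```yaml
# host_vars/host_master  (moved, unchanged, from group_vars/host_master)
known_hosts:
  - 192.168.1.11
  - 192.168.1.12
```

The same automatic loading applies to `group_vars/<group_name>` when the name matches an inventory group, e.g. a `group_vars/database` file would apply to every host in the `[database]` group.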