Merge pull request #775 from wazuh/merge-43-master

Merge 4.3 into master
Alberto Rodríguez 2022-05-20 18:55:49 +02:00 committed by GitHub
commit ca5a2b53a1
144 changed files with 1960 additions and 4334 deletions

View File

@ -9,4 +9,4 @@ updates:
directory: "/" # Location of package manifests
schedule:
interval: "daily"
target-branch: "4.3"
target-branch: "4.4"

View File

@ -35,39 +35,8 @@ jobs:
PY_COLORS: '1'
ANSIBLE_FORCE_COLOR: '1'
scenario-distributed-wazuh-elk:
name: Distributed ELK + Wazuh
runs-on: ubuntu-latest
steps:
- name: Check out the codebase.
uses: actions/checkout@v2
- name: Hack to get setup-python to work on act. See act issue 251
run: |
if [ ! -f "/etc/lsb-release" ] ; then
echo "DISTRIB_RELEASE=18.04" > /etc/lsb-release
fi
- name: Set up Python 3.
uses: actions/setup-python@v2
with:
python-version: '3.x'
- name: Install poetry
run: pip3 install poetry
- name: Install dependencies
run: poetry install
- name: Run Molecule tests.
run: poetry run molecule test -s distributed-wazuh-elk
env:
PY_COLORS: '1'
ANSIBLE_FORCE_COLOR: '1'
scenario-distributed-wazuh-elk-xpack:
name: Distributed ELK + XPack + Wazuh
scenario-distributed-wazuh:
name: Distributed Wazuh
runs-on: ubuntu-latest
steps:
- name: Check out the codebase.
@ -91,37 +60,7 @@ jobs:
run: poetry install
- name: Run Molecule tests.
run: poetry run molecule test -s distributed-wazuh-elk-xpack
env:
PY_COLORS: '1'
ANSIBLE_FORCE_COLOR: '1'
scenario-distributed-wazuh-odfe:
name: Distributed ODFE + Wazuh
runs-on: ubuntu-latest
steps:
- name: Check out the codebase.
uses: actions/checkout@v2
- name: Hack to get setup-python to work on act. See act issue 251
run: |
if [ ! -f "/etc/lsb-release" ] ; then
echo "DISTRIB_RELEASE=18.04" > /etc/lsb-release
fi
- name: Set up Python 3.
uses: actions/setup-python@v2
with:
python-version: '3.x'
- name: Install poetry
run: pip3 install poetry
- name: Install dependencies
run: poetry install
- name: Run Molecule tests.
run: poetry run molecule test -s distributed-wazuh-odfe
run: poetry run molecule test -s distributed-wazuh
env:
PY_COLORS: '1'
ANSIBLE_FORCE_COLOR: '1'

View File

@ -6,7 +6,13 @@ All notable changes to this project will be documented in this file.
### Added
- Update to [Wazuh v4.4.0](https://github.com/wazuh/wazuh/blob/v4.4.0/CHANGELOG.md#v440)
-
## [v4.3.1]
### Added
- Update to [Wazuh v4.3.1](https://github.com/wazuh/wazuh/blob/v4.3.1/CHANGELOG.md#v431)
## [v4.3.0]
### Added

README.md
View File

@ -5,7 +5,7 @@
[![Documentation](https://img.shields.io/badge/docs-view-green.svg)](https://documentation.wazuh.com)
[![Documentation](https://img.shields.io/badge/web-view-green.svg)](https://wazuh.com)
These playbooks install and configure Wazuh agent, manager and Elastic Stack.
These playbooks install and configure the Wazuh agent, manager, indexer and dashboard.
## Branches
* `master` branch contains the latest code; be aware of possible bugs on this branch.
@ -16,6 +16,7 @@ These playbooks install and configure Wazuh agent, manager and Elastic Stack.
| Wazuh version | Elastic | ODFE |
|---------------|---------|--------|
| v4.4.0 | | |
| v4.3.1 | | |
| v4.3.0 | | |
| v4.2.6 | 7.10.2 | 1.13.2 |
| v4.2.5 | 7.10.2 | 1.13.2 |
@ -39,33 +40,23 @@ These playbooks install and configure Wazuh agent, manager and Elastic Stack.
├── wazuh-ansible
│ ├── roles
│ │ ├── elastic-stack
│ │ │ ├── ansible-elasticsearch
│ │ │ ├── ansible-kibana
│ │
│ │ ├── opendistro
│ │ │ ├── opendistro-elasticsearch
│ │ │ ├── opendistro-kibana
│ │
│ │ ├── wazuh
│ │ │ ├── ansible-filebeat
│ │ │ ├── ansible-filebeat-oss
│ │ │ ├── ansible-wazuh-manager
│ │ │ ├── ansible-wazuh-agent
│ │ │ ├── wazuh-dashboard
│ │ │ ├── wazuh-indexer
│ │
│ │ ├── ansible-galaxy
│ │ │ ├── meta
│ ├── playbooks
│ │ ├── wazuh-agent.yml
│ │ ├── wazuh-elastic.yml
│ │ ├── wazuh-elastic_stack-distributed.yml
│ │ ├── wazuh-elastic_stack-single.yml
│ │ ├── wazuh-kibana.yml
│ │ ├── wazuh-manager.yml
│ │ ├── wazuh-dashboard.yml
│ │ ├── wazuh-indexer.yml
│ │ ├── wazuh-manager-oss.yml
│ │ ├── wazuh-opendistro.yml
│ │ ├── wazuh-opendistro-kibana.yml
│ │ ├── wazuh-production-ready.yml
│ │ ├── wazuh-single.yml
│ ├── README.md
│ ├── VERSION
@ -75,87 +66,102 @@ These playbooks install and configure Wazuh agent, manager and Elastic Stack.
## Example: production-ready distributed environment
### Playbook
The hereunder example playbook uses the `wazuh-ansible` role to provision a production-ready Wazuh environment. The architecture includes 2 Wazuh nodes, 3 ODFE nodes and a mixed ODFE-Kibana node.
The example playbook below uses the `wazuh-ansible` role to provision a production-ready Wazuh environment. The architecture includes 2 Wazuh nodes, 3 Wazuh indexer nodes and a mixed Wazuh dashboard node (Wazuh indexer data node + Wazuh dashboard).
```yaml
---
# Certificates generation
- hosts: es1
- hosts: wi1
roles:
- role: ../roles/opendistro/opendistro-elasticsearch
elasticsearch_network_host: "{{ private_ip }}"
elasticsearch_cluster_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_discovery_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
- role: ../roles/wazuh/wazuh-indexer
indexer_network_host: "{{ private_ip }}"
indexer_cluster_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_discovery_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
perform_installation: false
become: yes
become_user: root
become: no
vars:
elasticsearch_node_master: true
indexer_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: "{{ hostvars.es1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
name: node-1 # Important: must be equal to indexer_node_name.
ip: "{{ hostvars.wi1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: node-2
ip: "{{ hostvars.es2.private_ip }}"
ip: "{{ hostvars.wi2.private_ip }}"
role: indexer
node3:
name: node-3
ip: "{{ hostvars.es3.private_ip }}"
ip: "{{ hostvars.wi3.private_ip }}"
role: indexer
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
role: wazuh
node_type: master
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
role: wazuh
node_type: worker
node6:
name: node-6
ip: "{{ hostvars.kibana.private_ip }}"
ip: "{{ hostvars.dashboard.private_ip }}"
role: dashboard
tags:
- generate-certs
#ODFE Cluster
- hosts: odfe_cluster
# Wazuh indexer cluster
- hosts: wi_cluster
strategy: free
roles:
- role: ../roles/opendistro/opendistro-elasticsearch
elasticsearch_network_host: "{{ private_ip }}"
- role: ../roles/wazuh/wazuh-indexer
indexer_network_host: "{{ private_ip }}"
become: yes
become_user: root
vars:
elasticsearch_cluster_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_discovery_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_node_master: true
indexer_cluster_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_discovery_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: "{{ hostvars.es1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
name: node-1 # Important: must be equal to indexer_node_name.
ip: "{{ hostvars.wi1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: node-2
ip: "{{ hostvars.es2.private_ip }}"
ip: "{{ hostvars.wi2.private_ip }}"
role: indexer
node3:
name: node-3
ip: "{{ hostvars.es3.private_ip }}"
ip: "{{ hostvars.wi3.private_ip }}"
role: indexer
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
role: wazuh
node_type: master
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
role: wazuh
node_type: worker
node6:
name: node-6
ip: "{{ hostvars.kibana.private_ip }}"
ip: "{{ hostvars.dashboard.private_ip }}"
role: dashboard
# Wazuh cluster
- hosts: manager
@ -182,10 +188,13 @@ The hereunder example playbook uses the `wazuh-ansible` role to provision a prod
nodes:
- "{{ hostvars.manager.private_ip }}"
hidden: 'no'
filebeat_output_elasticsearch_hosts:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
wazuh_api_users:
- username: custom-user
password: SecretPassword1!
filebeat_output_indexer_hosts:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
- hosts: worker
roles:
@ -211,58 +220,67 @@ The hereunder example playbook uses the `wazuh-ansible` role to provision a prod
nodes:
- "{{ hostvars.manager.private_ip }}"
hidden: 'no'
filebeat_output_elasticsearch_hosts:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
filebeat_output_indexer_hosts:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
#ODFE+Kibana node
- hosts: kibana
# Indexer + dashboard node
- hosts: dashboard
roles:
- role: "../roles/opendistro/opendistro-elasticsearch"
- role: "../roles/opendistro/opendistro-kibana"
- role: "../roles/wazuh/wazuh-indexer"
- role: "../roles/wazuh/wazuh-dashboard"
become: yes
become_user: root
vars:
elasticsearch_network_host: "{{ hostvars.kibana.private_ip }}"
elasticsearch_node_name: node-6
elasticsearch_node_master: false
elasticsearch_node_ingest: false
elasticsearch_node_data: false
elasticsearch_cluster_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_discovery_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
kibana_node_name: node-6
indexer_network_host: "{{ hostvars.dashboard.private_ip }}"
indexer_node_name: node-6
indexer_node_master: false
indexer_node_ingest: false
indexer_node_data: false
indexer_cluster_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_discovery_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
dashboard_node_name: node-6
wazuh_api_credentials:
- id: default
url: https://{{ hostvars.manager.private_ip }}
port: 55000
user: foo
password: bar
username: custom-user
password: SecretPassword1!
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: "{{ hostvars.es1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
name: node-1
ip: "{{ hostvars.wi1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: node-2
ip: "{{ hostvars.es2.private_ip }}"
ip: "{{ hostvars.wi2.private_ip }}"
role: indexer
node3:
name: node-3
ip: "{{ hostvars.es3.private_ip }}"
ip: "{{ hostvars.wi3.private_ip }}"
role: indexer
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
role: wazuh
node_type: master
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
role: wazuh
node_type: worker
node6:
name: node-6
ip: "{{ hostvars.kibana.private_ip }}"
ip: "{{ hostvars.dashboard.private_ip }}"
role: dashboard
ansible_shell_allow_world_readable_temp: true
```
### Inventory file
@ -273,17 +291,17 @@ The hereunder example playbook uses the `wazuh-ansible` role to provision a prod
- The SSH credentials used by Ansible during provisioning can be specified in this file too. Another option is to include them directly in the playbook.
```ini
es1 ansible_host=<es1_ec2_public_ip> private_ip=<es1_ec2_private_ip> elasticsearch_node_name=node-1
es2 ansible_host=<es2_ec2_public_ip> private_ip=<es2_ec2_private_ip> elasticsearch_node_name=node-2
es3 ansible_host=<es3_ec2_public_ip> private_ip=<es3_ec2_private_ip> elasticsearch_node_name=node-3
kibana ansible_host=<kibana_node_public_ip> private_ip=<kibana_ec2_private_ip>
wi1 ansible_host=<wi1_ec2_public_ip> private_ip=<wi1_ec2_private_ip> indexer_node_name=node-1
wi2 ansible_host=<wi2_ec2_public_ip> private_ip=<wi2_ec2_private_ip> indexer_node_name=node-2
wi3 ansible_host=<wi3_ec2_public_ip> private_ip=<wi3_ec2_private_ip> indexer_node_name=node-3
dashboard ansible_host=<dashboard_node_public_ip> private_ip=<dashboard_ec2_private_ip>
manager ansible_host=<manager_node_public_ip> private_ip=<manager_ec2_private_ip>
worker ansible_host=<worker_node_public_ip> private_ip=<worker_ec2_private_ip>
[odfe_cluster]
es1
es2
es3
[wi_cluster]
wi1
wi2
wi3
[all:vars]
ansible_ssh_user=vagrant
@ -294,47 +312,63 @@ ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
### Launching the playbook
```bash
ansible-playbook wazuh-odfe-production-ready.yml -i inventory
sudo ansible-playbook wazuh-production-ready.yml -i inventory
```
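Since the first play in the example playbook is tagged `generate-certs`, the certificate-generation step can also be run on its own; a minimal sketch, assuming the same inventory file:

```bash
# Run only the certificate-generation play (tagged generate-certs in the playbook above)
sudo ansible-playbook wazuh-production-ready.yml -i inventory --tags generate-certs
```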
After the playbook execution, the Wazuh UI should be reachable through `https://<kibana_host>:5601`
After the playbook execution, the Wazuh UI should be reachable through `https://<dashboard_host>`
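A quick reachability check from the Ansible controller can be sketched as follows; `-k` is used on the assumption that the deployment relies on the self-signed certificates generated by the playbook:

```bash
# Expect an HTTP response (200 or a redirect) from the Wazuh dashboard
curl -k -I https://<dashboard_host>
```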
## Example: single-host environment
### Playbook
The hereunder example playbook uses the `wazuh-ansible` role to provision a single-host Wazuh environment. This architecture includes all the Wazuh and ODFE components in a single node.
The example playbook below uses the `wazuh-ansible` role to provision a single-host Wazuh environment. This architecture includes all the Wazuh and OpenSearch components in a single node.
```yaml
---
# Certificates generation
- hosts: aio
roles:
- role: ../roles/wazuh/wazuh-indexer
perform_installation: false
become: no
#become_user: root
vars:
indexer_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to indexer_node_name.
ip: 127.0.0.1
role: indexer
tags:
- generate-certs
# Single node
- hosts: server
- hosts: aio
become: yes
become_user: root
roles:
- role: ../roles/opendistro/opendistro-elasticsearch
- role: "../roles/wazuh/ansible-wazuh-manager"
- role: "../roles/wazuh/ansible-filebeat-oss"
- role: "../roles/opendistro/opendistro-kibana"
- role: ../roles/wazuh/wazuh-indexer
- role: ../roles/wazuh/ansible-wazuh-manager
- role: ../roles/wazuh/ansible-filebeat-oss
- role: ../roles/wazuh/wazuh-dashboard
vars:
single_node: true
minimum_master_nodes: 1
elasticsearch_node_master: true
elasticsearch_network_host: <your server host>
indexer_node_master: true
indexer_network_host: 127.0.0.1
filebeat_node_name: node-1
filebeat_output_elasticsearch_hosts: <your server host>
ansible_ssh_user: vagrant
ansible_ssh_private_key_file: /path/to/ssh/key.pem
ansible_ssh_extra_args: '-o StrictHostKeyChecking=no'
filebeat_output_indexer_hosts:
- 127.0.0.1
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: <your server host>
name: node-1 # Important: must be equal to indexer_node_name.
ip: 127.0.0.1
role: indexer
ansible_shell_allow_world_readable_temp: true
```
### Inventory file
```ini
[server]
[aio]
<your server host>
[all:vars]
@ -346,10 +380,10 @@ ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
### Launching the playbook
```bash
ansible-playbook wazuh-odfe-single.yml -i inventory
sudo ansible-playbook wazuh-single.yml -i inventory
```
After the playbook execution, the Wazuh UI should be reachable through `https://<your server host>:5601`
After the playbook execution, the Wazuh UI should be reachable through `https://<your server host>`
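On the host itself, the stack can also be checked through systemd; a sketch assuming the default unit names shipped with the Wazuh packages:

```bash
# All four components run on the same node in this scenario
systemctl status wazuh-indexer wazuh-manager filebeat wazuh-dashboard
```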
## Contribute

View File

@ -1,11 +1,23 @@
---
- name: Converge
- name: ConvergeCerts
hosts: localhost
roles:
- role: ../../roles/wazuh/wazuh-indexer
perform_installation: false
vars:
instances:
node1:
name: node-1 # Important: must be equal to indexer_node_name.
ip: 127.0.0.1
role: indexer
tags:
- generate-certs
- name: ConvergeInstall
hosts: all
roles:
- role: ../../roles/wazuh/ansible-wazuh-manager
vars:
- { role: ../../roles/wazuh/ansible-filebeat, filebeat_output_elasticsearch_hosts: "elasticsearch_centos7:9200" }
vars:
- { role: ../../roles/wazuh/ansible-filebeat-oss, filebeat_output_indexer_hosts: "indexer_centos7:9200" }
pre_tasks:
- name: (converge) fix missing packages in cloud images
apt:

View File

@ -1,94 +0,0 @@
---
- name: Generate certificates prior to converging
hosts: all
become: true
become_user: root
vars:
endpoints_hostvars: '{{ managers_hostvars | union(elastic_hostvars) | union(kibana_hostvars) }}'
roles:
- role: ../../roles/elastic-stack/ansible-elasticsearch
vars:
node_certs_generator: true
instances: '{{ elk_endpoint_list }}'
when:
- inventory_hostname in groups['elastic']
- ansible_hostname == 'wazuh-es01'
pre_tasks:
- name: (converge) build instances list dynamically for cert generator consumption
set_fact:
elk_endpoint_list: "{{ elk_endpoint_list | default({}) | combine({ instance_hostname: instance_item }) }}"
vars:
instance_hostname: '{{ item.ansible_facts.hostname }}'
instance_item:
name: '{{ item.private_ip}}'
ip: '{{ item.private_ip }}'
loop: '{{ endpoints_hostvars }}'
no_log: true
- name: overview of cert configuration
debug:
var: elk_endpoint_list
- name: Converge
hosts: all
become: true
become_user: root
vars:
endpoints_hostvars: '{{ managers_hostvars | union(elastic_hostvars) | union(kibana_hostvars) }}'
# arguments common to all managers
wazuh_managers_common:
port: 1514
protocol: tcp
api_port: 55000
api_proto: 'http'
api_user: ansible
max_retries: 5
retry_interval: 5
roles:
# 1. Elasticsearch
- role: ../../roles/elastic-stack/ansible-elasticsearch
vars:
instances: '{{ elk_endpoint_list }}'
when: inventory_hostname in groups['elastic']
# 2. Managers
- role: ../../roles/wazuh/ansible-wazuh-manager
when: inventory_hostname in groups['managers']
- role: ../../roles/wazuh/ansible-filebeat
when: inventory_hostname in groups['managers']
# 3. Kibana
- role: ../../roles/elastic-stack/ansible-kibana
when: inventory_hostname in groups['kibana']
# 4. Agents:
- role: ../../roles/wazuh/ansible-wazuh-agent
vars:
wazuh_managers: '{{ wazuh_managers_list }}'
when: inventory_hostname in groups['agents']
pre_tasks:
- name: (converge) build wazuh_managers list dynamically for agents to consume
set_fact:
wazuh_managers_list: '{{ wazuh_managers_list | default([]) | union([manager_item]) }}'
vars:
manager_item: '{{ wazuh_managers_common | combine({"address": item}) }}'
loop: '{{ manager_addresses }}'
- name: (converge) build instances list dynamically for cert generator consumption
set_fact:
elk_endpoint_list: "{{ elk_endpoint_list | default({}) | combine({ instance_hostname: instance_item }) }}"
vars:
instance_hostname: '{{ item.ansible_facts.hostname }}'
instance_item:
name: '{{ item.private_ip}}'
ip: '{{ item.private_ip }}'
loop: '{{ endpoints_hostvars }}'
no_log: true
- name: (converge) fix ubuntu repository key task in thin images where gpg-agent is missing
apt:
name: gpg-agent
state: present
update_cache: yes
when:
- ansible_distribution == "Ubuntu"
- inventory_hostname in groups['agents']

View File

@ -1,25 +0,0 @@
---
wazuh_agent_config:
enrollment:
enabled: 'yes'
#manager_address: ''
#port: 1515
agent_name: '{{ ansible_hostname }}'
#groups: ''
#agent_address: ''
#ssl_cipher: HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH
#server_ca_path: ''
#agent_certificate_path: ''
#agent_key_path: ''
#authorization_pass_path : /var/ossec/etc/authd.pass
#auto_method: 'no'
#delay_after_enrollment: 20
#use_source_ip: 'no'
wazuh_agent_authd:
registration_address: '{{ manager_addresses | random }}'
enable: true
port: 1515
ssl_agent_ca: null
ssl_auto_negotiate: 'no'

View File

@ -1,17 +0,0 @@
---
single_node: false
elasticsearch_node_master: true
minimum_master_nodes: 1
elasticsearch_network_host: '{{ private_ip }}'
elasticsearch_node_name: '{{ private_ip }}'
elasticsearch_reachable_host: '{{ private_ip }}'
elasticsearch_http_port: 9200
elasticsearch_bootstrap_node: true
elasticsearch_cluster_nodes: '{{ elastic_addresses }}'
elasticsearch_discovery_nodes: '{{ elastic_addresses }}'
elasticsearch_jvm_xms: 1024

View File

@ -1,19 +0,0 @@
---
kibana_server_name: '{{ ansible_hostname }}'
kibana_node_name: '{{ private_ip }}'
elasticsearch_network_host: "{{ elastic_addresses[0] }}"
#elasticsearch_http_port: 9200
elasticsearch_node_master: false
elasticsearch_node_ingest: false
elasticsearch_node_data: false
wazuh_api_credentials:
- id: default
url: 'https://{{ manager_addresses[0] }}'
port: 55000
#port: 1514
username: wazuh
password: wazuh

View File

@ -1,21 +0,0 @@
---
wazuh_manager_fqdn: '{{ ansible_hostname }}'
filebeat_node_name: '{{ private_ip }}'
filebeat_output_elasticsearch_hosts: '{{ elastic_addresses }}'
wazuh_manager_config:
connection:
- type: 'secure'
port: '1514'
protocol: 'tcp'
queue_size: 131072
api:
https: 'yes'
cluster:
disable: 'no'
node_name: '{{ ansible_hostname }}'
node_type: "{{ 'master' if ansible_hostname == 'wazuh-mgr01' else 'worker' }}"
nodes: '{{ manager_addresses }}'
hidden: 'no'

View File

@ -1,162 +0,0 @@
---
# Distributed scenario: clustered manager scenario + connected agents
# 2-core CPU
# 7 GB of RAM memory
# 14 GB of SSD disk space
#
# Source: https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners
dependency:
name: galaxy
driver:
name: docker
lint: |
yamllint .
ansible-lint roles
flake8 molecule
platforms:
################################################
# Wazuh Managers
################################################
- name: molecule_xpack_manager_centos7
hostname: wazuh-mgr01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory_reservation: 512m
memory: 1024m
groups:
- managers
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: molecule_xpack_manager_debian9
hostname: wazuh-mgr02
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory_reservation: 512m
memory: 1024m
groups:
- managers
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Elastic Cluster
################################################
- name: molecule_xpack_elasticsearch_centos7
hostname: wazuh-es01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 4096m
memory_reservation: 2048m
groups:
- elastic
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: molecule_xpack_elasticsearch_debian9
hostname: wazuh-es02
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 4096m
memory_reservation: 2048m
groups:
- elastic
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Wazuh Agents
################################################
- name: molecule_xpack_agent_centos7
hostname: wazuh-agent01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 1024m
memory_reservation: 512m
groups:
- agents
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: molecule_xpack_agent_debian9
hostname: wazuh-agent02
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 1024m
memory_reservation: 512m
groups:
- agents
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Kibana
################################################
- name: molecule_xpack_kibana_centos7
hostname: wazuh-kib01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 2048m
memory_reservation: 512m
groups:
- kibana
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
provisioner:
name: ansible
ansible_args:
- -vv
inventory:
links:
group_vars: group_vars
playbooks:
create: create.yml
converge: converge.yml
#destroy: destroy.yml
config_options:
defaults:
hash_behaviour: merge
env:
ANSIBLE_ROLES_PATH: ./roles
lint:
name: ansible-lint
enabled: false
scenario:
name: distributed-wazuh-elk-xpack
test_sequence:
- dependency
- syntax
- create
- prepare
- converge
#- idempotence
#- verify
- cleanup
- destroy
verifier:
name: testinfra

View File

@ -1,16 +0,0 @@
*******
Install
*******
Requirements
============
* Docker Engine
* docker-py
Install
=======
.. code-block:: bash
$ sudo pip install docker-py

View File

@ -1,60 +0,0 @@
---
- name: Converge
hosts: all
become: true
become_user: root
vars:
# arguments common to all managers
wazuh_managers_common:
port: 1514
protocol: tcp
api_port: 55000
api_proto: 'http'
api_user: ansible
max_retries: 5
retry_interval: 5
roles:
# 1. Elasticsearch
- role: ../../roles/elastic-stack/ansible-elasticsearch
when: inventory_hostname in groups['elastic']
# 2. Managers
- role: ../../roles/wazuh/ansible-wazuh-manager
when: inventory_hostname in groups['managers']
- role: ../../roles/wazuh/ansible-filebeat
when: inventory_hostname in groups['managers']
# 3. Kibana
- role: ../../roles/elastic-stack/ansible-kibana
when: inventory_hostname in groups['kibana']
# 4. Agents:
- role: ../../roles/wazuh/ansible-wazuh-agent
vars:
wazuh_managers: '{{ wazuh_managers_list }}'
when: inventory_hostname in groups['agents']
pre_tasks:
- name: (converge) build wazuh_managers list dynamically for agents to consume
set_fact:
wazuh_managers_list: '{{ wazuh_managers_list | default([]) | union([merged_dict]) }}'
vars:
merged_dict: '{{ wazuh_managers_common | combine({"address": item}) }}'
loop: '{{ manager_addresses }}'
- name: (converge) fix ubuntu repository key task in thin images where gpg-agent is missing
apt:
name: gpg-agent
state: present
update_cache: yes
when:
- ansible_distribution == "Ubuntu"
- inventory_hostname in groups['agents']
- debug:
msg: |
-----------------------------------------
managers: {{ managers_hostvars | length }}
addresses: {{ manager_addresses }}
-----------------------------------------
elastic: {{ elastic_hostvars | length }}
addresses: {{ elastic_addresses }}
-----------------------------------------

View File

@ -1,18 +0,0 @@
---
wazuh_agent_config:
enrollment:
enabled: 'yes'
#manager_address: ''
#port: 1515
agent_name: '{{ ansible_hostname }}'
#groups: ''
#agent_address: ''
#ssl_cipher: HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH
#server_ca_path: ''
#agent_certificate_path: ''
#agent_key_path: ''
#authorization_pass_path : /var/ossec/etc/authd.pass
#auto_method: 'no'
#delay_after_enrollment: 20
#use_source_ip: 'no'

View File

@ -1,21 +0,0 @@
---
single_node: false
elasticsearch_node_master: true
minimum_master_nodes: 1
elasticsearch_network_host: '{{ private_ip }}'
elasticsearch_node_name: '{{ ansible_hostname }}'
elasticsearch_reachable_host: '{{ private_ip }}'
elasticsearch_http_port: 9200
# This scenario runs without xpack-security
elasticsearch_xpack_security: false
node_certs_generator: false
elasticsearch_bootstrap_node: true
elasticsearch_cluster_nodes: '{{ elastic_addresses }}'
elasticsearch_discovery_nodes: '{{ elastic_addresses }}'
elasticsearch_jvm_xms: 1024

View File

@ -1,19 +0,0 @@
---
kibana_node_name: '{{ ansible_hostname }}'
kibana_server_name: '{{ ansible_hostname }}'
elasticsearch_network_host: "{{ elastic_addresses | random }}"
#elasticsearch_http_port: 9200
elasticsearch_node_master: false
elasticsearch_node_ingest: false
elasticsearch_node_data: false
wazuh_api_credentials:
- id: default
url: 'https://{{ manager_addresses[0] }}'
port: 55000
#port: 1514
username: wazuh
password: wazuh

View File

@ -1,7 +0,0 @@
---
wazuh_agent_authd:
registration_address: '{{ manager_addresses | random }}'
enable: true
port: 1515
ssl_agent_ca: null
ssl_auto_negotiate: 'no'

View File

@ -1,163 +0,0 @@
---
# Distributed scenario: clustered manager scenario + connected agents
# 2-core CPU
# 7 GB of RAM memory
# 14 GB of SSD disk space
#
# Source: https://docs.github.com/en/free-pro-team@latest/actions/reference/specifications-for-github-hosted-runners
dependency:
name: galaxy
driver:
name: docker
lint: |
yamllint .
ansible-lint roles
flake8 molecule
platforms:
################################################
# Wazuh Managers
################################################
- name: wazuh_manager_centos7
hostname: wazuh-mgr01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory_reservation: 512m
memory: 1024m
groups:
- managers
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: wazuh_manager_debian9
hostname: wazuh-mgr02
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory_reservation: 512m
memory: 1024m
groups:
- managers
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Elastic Cluster
################################################
- name: wazuh_elasticsearch_centos7
hostname: wazuh-es01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 4096m
memory_reservation: 2048m
groups:
- elastic
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: wazuh_elasticsearch_debian9
hostname: wazuh-es02
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 4096m
memory_reservation: 2048m
groups:
- elastic
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Wazuh Agents
################################################
- name: wazuh_agent_centos7
hostname: wazuh-agent01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 1024m
memory_reservation: 512m
groups:
- agents
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: wazuh_agent_debian9
hostname: wazuh-agent01
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 1024m
memory_reservation: 512m
groups:
- agents
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Kibana
################################################
- name: wazuh_kibana_centos7
hostname: wazuh-kib01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
privileged: true
memory: 2048m
memory_reservation: 512m
groups:
- kibana
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
provisioner:
name: ansible
ansible_args:
- -vv
inventory:
links:
group_vars: group_vars
host_vars: host_vars
playbooks:
create: create.yml
converge: converge.yml
#destroy: destroy.yml
config_options:
defaults:
hash_behaviour: merge
env:
ANSIBLE_ROLES_PATH: ./roles
lint:
name: ansible-lint
enabled: false
scenario:
name: distributed-wazuh-elk
test_sequence:
- dependency
- syntax
- create
- prepare
- converge
#- idempotence
#- verify
- cleanup
- destroy
verifier:
name: testinfra

View File

@ -1,16 +0,0 @@
*******
Install
*******
Requirements
============
* Docker Engine
* docker-py
Install
=======
.. code-block:: bash
$ sudo pip install docker-py

View File

@ -1,75 +0,0 @@
---
- name: Build Facts
hosts: all
become: true
become_user: root
vars:
endpoints_hostvars: '{{ managers_hostvars | union(elastic_hostvars) | union(kibana_hostvars) }}'
wazuh_managers_common:
port: 1514
protocol: tcp
api_port: 55000
api_proto: 'http'
api_user: ansible
max_retries: 5
retry_interval: 5
pre_tasks:
- name: (converge) build instances list dynamically for cert generator consumption
set_fact:
odfe_endpoint_list: "{{ odfe_endpoint_list | default({}) | combine({ instance_hostname: instance_item }) }}"
vars:
instance_hostname: '{{ item.ansible_facts.hostname }}'
instance_item:
name: '{{ instance_hostname }}'
ip: '{{ item.private_ip }}'
loop: '{{ endpoints_hostvars }}'
no_log: true
- name: (converge) build wazuh_managers list dynamically for agents to consume
set_fact:
wazuh_managers_list: '{{ wazuh_managers_list | default([]) | union([manager_item]) }}'
vars:
manager_item: '{{ wazuh_managers_common | combine({"address": item}) }}'
loop: '{{ manager_addresses }}'
- name: overview of cert configuration
debug:
var: odfe_endpoint_list
- name: Generate certificates prior to converging
hosts: molecule_odfe_elasticsearch_centos7
become: true
become_user: root
roles:
- role: ../../roles/opendistro/opendistro-elasticsearch
vars:
generate_certs: true
perform_installation: false
instances: '{{ odfe_endpoint_list }}'
pre_tasks:
- name: overview of cert configuration
debug:
var: odfe_endpoint_list
- name: Converge
hosts: all
become: true
become_user: root
roles:
# 1. Elasticsearch
- role: ../../roles/opendistro/opendistro-elasticsearch
when: inventory_hostname in groups['elastic']
# 2. Managers
- role: ../../roles/wazuh/ansible-wazuh-manager
when: inventory_hostname in groups['managers']
- role: ../../roles/wazuh/ansible-filebeat-oss
when: inventory_hostname in groups['managers']
# 3. Kibana
- role: ../../roles/opendistro/opendistro-kibana
when: inventory_hostname in groups['kibana']
# 4. Agents:
- role: ../../roles/wazuh/ansible-wazuh-agent
vars:
wazuh_managers: '{{ wazuh_managers_list }}'
when: inventory_hostname in groups['agents']

View File

@ -1,16 +0,0 @@
---
single_node: false
elasticsearch_node_master: true
minimum_master_nodes: 1
elasticsearch_network_host: '{{ private_ip }}'
elasticsearch_reachable_host: '{{ private_ip }}'
elasticsearch_http_port: 9200
elasticsearch_bootstrap_node: true
elasticsearch_cluster_nodes: '{{ elastic_addresses }}'
elasticsearch_discovery_nodes: '{{ elastic_addresses }}'
opendistro_jvm_xms: 1024

View File

@ -1,17 +0,0 @@
---
kibana_server_name: '{{ ansible_hostname }}'
elasticsearch_network_host: "{{ elastic_addresses[0] }}"
#elasticsearch_http_port: 9200
elasticsearch_node_master: false
elasticsearch_node_ingest: false
elasticsearch_node_data: false
wazuh_api_credentials:
- id: default
url: 'https://{{ manager_addresses[0] }}'
port: 55000
#port: 1514
username: wazuh
password: wazuh

View File

@ -1,19 +0,0 @@
---
wazuh_manager_fqdn: '{{ ansible_hostname }}'
filebeat_output_elasticsearch_hosts: '{{ elastic_addresses }}'
wazuh_manager_config:
connection:
- type: 'secure'
port: '1514'
protocol: 'tcp'
queue_size: 131072
api:
https: 'yes'
cluster:
disable: 'no'
node_name: '{{ ansible_hostname }}'
node_type: "{{ 'master' if ansible_hostname == 'wazuh-mgr01' else 'worker' }}"
nodes: '{{ manager_addresses }}'
hidden: 'no'

View File

@ -0,0 +1,121 @@
---
- name: Build Facts
hosts: all
become: true
become_user: root
vars:
endpoints_hostvars: '{{ managers_hostvars | union(indexer_hostvars) | union(dashboard_hostvars) }}'
wazuh_managers_common:
port: 1514
protocol: tcp
api_port: 55000
api_proto: 'http'
api_user: ansible
max_retries: 5
retry_interval: 5
pre_tasks:
- name: (converge) build instances list dynamically for cert generator consumption
set_fact:
wazuh_endpoint_list: "{{ wazuh_endpoint_list | default({}) | combine({ instance_hostname: instance_item }) }}"
vars:
instance_hostname: '{{ item.ansible_facts.hostname }}'
instance_item:
name: '{{ instance_hostname }}'
ip: '{{ item.private_ip }}'
loop: '{{ endpoints_hostvars }}'
no_log: true
- name: (converge) build wazuh_managers list dynamically for agents to consume
set_fact:
wazuh_managers_list: '{{ wazuh_managers_list | default([]) | union([manager_item]) }}'
vars:
manager_item: '{{ wazuh_managers_common | combine({"address": item}) }}'
loop: '{{ manager_addresses }}'
- name: overview of cert configuration
debug:
var: wazuh_endpoint_list
- name: Generate certificates prior to converging
hosts: molecule_wazuh_indexer_centos7
become: true
become_user: root
roles:
- role: ../../roles/wazuh/wazuh-indexer
vars:
generate_certs: true
perform_installation: false
instances:
node1:
name: wazuh-es01 # Important: must be equal to indexer_node_name.
ip: "{{ hostvars.molecule_wazuh_indexer_centos7.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: wazuh-es02
ip: "{{ hostvars.molecule_wazuh_indexer_centos7_2.private_ip }}"
role: indexer
node3:
name: wazuh-mgr01
ip: "{{ hostvars.molecule_wazuh_manager_debian9.private_ip }}"
role: wazuh
node_type: master
node4:
name: wazuh-mgr02
ip: "{{ hostvars.molecule_wazuh_manager_centos7.private_ip }}"
role: wazuh
node_type: worker
node5:
name: wazuh-dash01
ip: "{{ hostvars.molecule_wazuh_dashboard_centos7.private_ip }}"
role: dashboard
pre_tasks:
- name: overview of cert configuration
debug:
var: wazuh_endpoint_list
- name: Converge
hosts: all
become: true
become_user: root
roles:
# 1. Wazuh indexer
- role: ../../roles/wazuh/wazuh-indexer
when: inventory_hostname in groups['indexer']
# 2. Managers
- role: ../../roles/wazuh/ansible-wazuh-manager
when: inventory_hostname in groups['managers']
- role: ../../roles/wazuh/ansible-filebeat-oss
when: inventory_hostname in groups['managers']
# 3. Wazuh dashboard
- role: ../../roles/wazuh/wazuh-dashboard
when: inventory_hostname in groups['dashboard']
# 4. Agents:
- role: ../../roles/wazuh/ansible-wazuh-agent
vars:
wazuh_managers: '{{ wazuh_managers_list }}'
when: inventory_hostname in groups['agents']
vars:
instances:
node1:
name: wazuh-es01 # Important: must be equal to indexer_node_name.
ip: "{{ hostvars.molecule_wazuh_indexer_centos7.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: wazuh-es02
ip: "{{ hostvars.molecule_wazuh_indexer_centos7_2.private_ip }}"
role: indexer
node3:
name: wazuh-mgr01
ip: "{{ hostvars.molecule_wazuh_manager_debian9.private_ip }}"
role: wazuh
node_type: master
node4:
name: wazuh-mgr02
ip: "{{ hostvars.molecule_wazuh_manager_centos7.private_ip }}"
role: wazuh
node_type: worker
node5:
name: wazuh-dash01
ip: "{{ hostvars.molecule_wazuh_dashboard_centos7.private_ip }}"
role: dashboard

View File

@ -8,7 +8,6 @@ wazuh_agent_config:
agent_name: '{{ ansible_hostname }}'
#groups: ''
#agent_address: ''
#ssl_cipher: HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH
#server_ca_path: ''
#agent_certificate_path: ''
#agent_key_path: ''

View File

@ -0,0 +1,39 @@
---
########################################################
# Helper variables
private_ip: '{{ ansible_default_ipv4.address }}'
managers_hostvars: "{{ groups['managers'] | map('extract', hostvars) | list }}"
indexer_hostvars: "{{ groups['indexer'] | map('extract', hostvars) | list }}"
dashboard_hostvars: "{{ groups['dashboard'] | map('extract', hostvars) | list }}"
manager_addresses: "{{ managers_hostvars | map(attribute='private_ip') | list }}"
indexer_addresses: "{{ indexer_hostvars | map(attribute='private_ip') | list }}"
dashboard_addresses: "{{ dashboard_hostvars | map(attribute='private_ip') | list }}"
########################################################
# General Wazuh stack variables
# Wazuh indexer/dashboard
dashboard_security: true
dashboard_user: kibanaserver
indexer_security_user: admin
dashboard_password: changeme
indexer_security_password: changeme
indexer_admin_password: changeme
# All node names are set to the host's hostname
indexer_node_name: '{{ ansible_facts.hostname }}'
dashboard_node_name: '{{ ansible_facts.hostname }}'
filebeat_node_name: '{{ ansible_facts.hostname }}'
indexer_version: 4.4.0
filebeat_version: 7.10.2
wazuh_version: 4.4.0
# Debian packages need the ${VERSION}-1
wazuh_manager_version: 4.4.0-1
wazuh_agent_version: 4.4.0-1

View File

@ -0,0 +1,16 @@
---
dashboard_server_name: '{{ ansible_hostname }}'
indexer_network_host: "{{ indexer_addresses[0] }}"
indexer_node_master: false
indexer_node_ingest: false
indexer_node_data: false
role: 'dashboard'
wazuh_api_credentials:
- id: default
url: 'https://{{ manager_addresses[0] }}'
port: 55000
username: wazuh
password: wazuh

View File

@ -0,0 +1,13 @@
---
single_node: false
indexer_node_master: true
minimum_master_nodes: 1
role: 'indexer'
indexer_network_host: '{{ private_ip }}'
indexer_http_port: 9200
indexer_cluster_nodes: '{{ indexer_addresses }}'
indexer_discovery_nodes: '{{ indexer_addresses }}'

View File

@ -1,8 +1,9 @@
---
wazuh_manager_fqdn: '{{ ansible_hostname }}'
filebeat_node_name: '{{ ansible_hostname }}'
filebeat_output_elasticsearch_hosts: '{{ elastic_addresses }}'
filebeat_output_indexer_hosts: '{{ indexer_addresses }}'
node_type: "{{ 'master' if ansible_hostname == 'wazuh-mgr01' else 'worker' }}"
role: 'wazuh'
wazuh_manager_config:
connection:

View File

@ -18,7 +18,7 @@ platforms:
################################################
# Wazuh Managers
################################################
- name: molecule_odfe_manager_centos7
- name: molecule_wazuh_manager_centos7
hostname: wazuh-mgr01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
@ -33,7 +33,7 @@ platforms:
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: molecule_odfe_manager_debian9
- name: molecule_wazuh_manager_debian9
hostname: wazuh-mgr02
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
@ -49,9 +49,9 @@ platforms:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Elastic Cluster
# Wazuh indexer Cluster
################################################
- name: molecule_odfe_elasticsearch_centos7
- name: molecule_wazuh_indexer_centos7
hostname: wazuh-es01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
@ -60,13 +60,13 @@ platforms:
memory: 4096m
memory_reservation: 2048m
groups:
- elastic
- indexer
ulimits:
- nofile:262144:262144
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: molecule_odfe_elasticsearch_centos7_2
- name: molecule_wazuh_indexer_centos7_2
hostname: wazuh-es02
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
@ -75,7 +75,7 @@ platforms:
memory: 4096m
memory_reservation: 2048m
groups:
- elastic
- indexer
ulimits:
- nofile:262144:262144
volumes:
@ -84,7 +84,7 @@ platforms:
################################################
# Wazuh Agents
################################################
- name: molecule_odfe_agent_centos7
- name: molecule_wazuh_agent_centos7
hostname: wazuh-agent01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
@ -97,7 +97,7 @@ platforms:
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
- name: molecule_odfe_agent_debian9
- name: molecule_wazuh_agent_debian9
hostname: wazuh-agent02
image: geerlingguy/docker-debian9-ansible
command: /sbin/init
@ -111,11 +111,11 @@ platforms:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
################################################
# Kibana
# Wazuh dashboard
################################################
- name: molecule_odfe_kibana_centos7
hostname: wazuh-kib01
- name: molecule_wazuh_dashboard_centos7
hostname: wazuh-dash01
image: geerlingguy/docker-centos7-ansible
command: /sbin/init
pre_build_image: true
@ -123,7 +123,7 @@ platforms:
memory: 2048m
memory_reservation: 512m
groups:
- kibana
- dashboard
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup:ro
@ -147,7 +147,7 @@ provisioner:
name: ansible-lint
enabled: false
scenario:
name: distributed-wazuh-odfe
name: distributed-wazuh
test_sequence:
- dependency
- syntax

View File

@ -1,6 +1,6 @@
---
- hosts: es1
- hosts: wi1
roles:
- role: ../roles/opendistro/opendistro-kibana
- role: ../roles/wazuh/wazuh-dashboard
vars:
ansible_shell_allow_world_readable_temp: true

View File

@ -1,5 +0,0 @@
---
- hosts: <YOUR_ELASTICSEARCH_IP>
roles:
- role: ../roles/elastic-stack/ansible-elasticsearch
elasticsearch_network_host: '<YOUR_ELASTICSEARCH_IP>'

View File

@ -1,91 +0,0 @@
---
- hosts: <node-1 IP>
roles:
- role: ../roles/elastic-stack/ansible-elasticsearch
elasticsearch_network_host: <node-1 IP>
elasticsearch_node_name: node-1
elasticsearch_bootstrap_node: true
elasticsearch_cluster_nodes:
- <node-1 IP>
- <node-2 IP>
- <node-3 IP>
elasticsearch_discovery_nodes:
- <node-1 IP>
- <node-2 IP>
- <node-3 IP>
elasticsearch_xpack_security: true
node_certs_generator: true
elasticsearch_xpack_security_password: elastic_pass
single_node: false
vars:
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: <node-1 IP> # When unzipping, the node will search for its node name folder to get the cert.
node2:
name: node-2
ip: <node-2 IP>
node3:
name: node-3
ip: <node-3 IP>
- hosts: <node-2 IP>
roles:
- role: ../roles/elastic-stack/ansible-elasticsearch
elasticsearch_network_host: <node-2 IP>
elasticsearch_node_name: node-2
single_node: false
elasticsearch_xpack_security: true
elasticsearch_master_candidate: true
elasticsearch_discovery_nodes:
- <node-1 IP>
- <node-2 IP>
- <node-3 IP>
- hosts: <node-3 IP>
roles:
- role: ../roles/elastic-stack/ansible-elasticsearch
elasticsearch_network_host: <node-3 IP>
elasticsearch_node_name: node-3
single_node: false
elasticsearch_xpack_security: true
elasticsearch_master_candidate: true
elasticsearch_discovery_nodes:
- <node-1 IP>
- <node-2 IP>
- <node-3 IP>
# - hosts: 172.16.0.162
# roles:
# - role: ../roles/wazuh/ansible-wazuh-manager
# - role: ../roles/wazuh/ansible-filebeat
# filebeat_output_elasticsearch_hosts: 172.16.0.161:9200
# filebeat_xpack_security: true
# filebeat_node_name: node-2
# node_certs_generator: false
# elasticsearch_xpack_security_password: elastic_pass
# - role: ../roles/elastic-stack/ansible-elasticsearch
# elasticsearch_network_host: 172.16.0.162
# node_name: node-2
# elasticsearch_bootstrap_node: false
# elasticsearch_master_candidate: true
# elasticsearch_discovery_nodes:
# - 172.16.0.161
# - 172.16.0.162
# elasticsearch_xpack_security: true
# node_certs_generator: false
# - hosts: 172.16.0.163
# roles:
# - role: ../roles/elastic-stack/ansible-kibana
# kibana_xpack_security: true
# kibana_node_name: node-3
# elasticsearch_network_host: 172.16.0.161
# node_certs_generator: false
# elasticsearch_xpack_security_password: elastic_pass

View File

@ -1,8 +0,0 @@
---
- hosts: <your server host>
roles:
- {role: ../roles/wazuh/ansible-wazuh-manager}
- role: ../roles/wazuh/ansible-filebeat
filebeat_output_elasticsearch_hosts: localhost:9200
- {role: ../roles/elastic-stack/ansible-elasticsearch, elasticsearch_network_host: '0.0.0.0', single_node: true}
- { role: ../roles/elastic-stack/ansible-kibana, elasticsearch_network_host: '0.0.0.0', elasticsearch_reachable_host: 'localhost' }

View File

@ -1,17 +1,20 @@
---
- hosts: es_cluster
- hosts: wi_cluster
roles:
- role: ../roles/opendistro/opendistro-elasticsearch
- role: ../roles/wazuh/wazuh-indexer
vars:
instances: # A certificate will be generated for every node using the name as CN.
node1:
name: node-1
ip: <node-1 IP>
role: indexer
node2:
name: node-2
ip: <node-2 IP>
role: indexer
node3:
name: node-3
ip: <node-3 IP>
role: indexer

View File

@ -1,7 +0,0 @@
---
- hosts: <KIBANA_HOST>
roles:
- role: ../roles/elastic-stack/ansible-kibana
elasticsearch_network_host: <YOUR_ELASTICSEARCH_IP>
vars:
ansible_shell_allow_world_readable_temp: true

View File

@ -3,7 +3,7 @@
roles:
- role: ../roles/wazuh/ansible-wazuh-manager
- role: ../roles/wazuh/ansible-filebeat-oss
filebeat_output_elasticsearch_hosts:
- "<elastic-node-1>:9200"
- "<elastic-node-2>:9200"
- "<elastic-node-2>:9200"
filebeat_output_indexer_hosts:
- "<indexer-node-1>:9200"
- "<indexer-node-2>:9200"
- "<indexer-node-2>:9200"

View File

@ -1,8 +0,0 @@
---
- hosts: <WAZUH_MANAGER_HOST>
roles:
- role: ../roles/wazuh/ansible-wazuh-manager
- role: ../roles/wazuh/ansible-filebeat
filebeat_output_elasticsearch_hosts: <YOUR_ELASTICSEARCH_IP>:9200

View File

@ -1,189 +0,0 @@
---
# Certificates generation
- hosts: es1
roles:
- role: ../roles/opendistro/opendistro-elasticsearch
elasticsearch_network_host: "{{ private_ip }}"
elasticsearch_cluster_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_discovery_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
perform_installation: false
become: yes
become_user: root
vars:
elasticsearch_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: "{{ hostvars.es1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
node2:
name: node-2
ip: "{{ hostvars.es2.private_ip }}"
node3:
name: node-3
ip: "{{ hostvars.es3.private_ip }}"
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
node6:
name: node-6
ip: "{{ hostvars.kibana.private_ip }}"
tags:
- generate-certs
#ODFE Cluster
- hosts: odfe_cluster
strategy: free
roles:
- role: ../roles/opendistro/opendistro-elasticsearch
elasticsearch_network_host: "{{ private_ip }}"
become: yes
become_user: root
vars:
elasticsearch_cluster_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_discovery_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: "{{ hostvars.es1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
node2:
name: node-2
ip: "{{ hostvars.es2.private_ip }}"
node3:
name: node-3
ip: "{{ hostvars.es3.private_ip }}"
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
node6:
name: node-6
ip: "{{ hostvars.kibana.private_ip }}"
#Wazuh cluster
- hosts: manager
roles:
- role: "../roles/wazuh/ansible-wazuh-manager"
- role: "../roles/wazuh/ansible-filebeat-oss"
filebeat_node_name: node-4
become: yes
become_user: root
vars:
wazuh_manager_config:
connection:
- type: 'secure'
port: '1514'
protocol: 'tcp'
queue_size: 131072
api:
https: 'yes'
cluster:
disable: 'no'
node_name: 'master'
node_type: 'master'
key: 'c98b62a9b6169ac5f67dae55ae4a9088'
nodes:
- "{{ hostvars.manager.private_ip }}"
hidden: 'no'
wazuh_api_users:
- username: custom-user
password: .S3cur3Pa55w0rd*-
filebeat_output_elasticsearch_hosts:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
- hosts: worker
roles:
- role: "../roles/wazuh/ansible-wazuh-manager"
- role: "../roles/wazuh/ansible-filebeat-oss"
filebeat_node_name: node-5
become: yes
become_user: root
vars:
wazuh_manager_config:
connection:
- type: 'secure'
port: '1514'
protocol: 'tcp'
queue_size: 131072
api:
https: 'yes'
cluster:
disable: 'no'
node_name: 'worker_01'
node_type: 'worker'
key: 'c98b62a9b6169ac5f67dae55ae4a9088'
nodes:
- "{{ hostvars.manager.private_ip }}"
hidden: 'no'
filebeat_output_elasticsearch_hosts:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
#ODFE+Kibana node
- hosts: kibana
roles:
- role: "../roles/opendistro/opendistro-elasticsearch"
- role: "../roles/opendistro/opendistro-kibana"
become: yes
become_user: root
vars:
elasticsearch_network_host: "{{ hostvars.kibana.private_ip }}"
elasticsearch_node_name: node-6
elasticsearch_node_master: false
elasticsearch_node_ingest: false
elasticsearch_node_data: false
elasticsearch_cluster_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
elasticsearch_discovery_nodes:
- "{{ hostvars.es1.private_ip }}"
- "{{ hostvars.es2.private_ip }}"
- "{{ hostvars.es3.private_ip }}"
kibana_node_name: node-6
wazuh_api_credentials:
- id: default
url: https://{{ hostvars.manager.private_ip }}
port: 55000
username: custom-user
password: .S3cur3Pa55w0rd*-
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: "{{ hostvars.es1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
node2:
name: node-2
ip: "{{ hostvars.es2.private_ip }}"
node3:
name: node-3
ip: "{{ hostvars.es3.private_ip }}"
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
node6:
name: node-6
ip: "{{ hostvars.kibana.private_ip }}"
ansible_shell_allow_world_readable_temp: true

View File

@ -1,22 +0,0 @@
---
# Single node
- hosts: <your server host>
become: yes
become_user: root
roles:
- role: ../roles/opendistro/opendistro-elasticsearch
- role: ../roles/wazuh/ansible-wazuh-manager
- role: ../roles/wazuh/ansible-filebeat-oss
- role: ../roles/opendistro/opendistro-kibana
vars:
single_node: true
minimum_master_nodes: 1
elasticsearch_node_master: true
elasticsearch_network_host: 127.0.0.1
filebeat_node_name: node-1
filebeat_output_elasticsearch_hosts: 127.0.0.1
instances:
node1:
name: node-1 # Important: must be equal to elasticsearch_node_name.
ip: 127.0.0.1
ansible_shell_allow_world_readable_temp: true

View File

@ -0,0 +1,212 @@
---
# Certificates generation
- hosts: wi1
roles:
- role: ../roles/wazuh/wazuh-indexer
indexer_network_host: "{{ private_ip }}"
indexer_cluster_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_discovery_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
perform_installation: false
become: no
vars:
indexer_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to indexer_node_name.
ip: "{{ hostvars.wi1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: node-2
ip: "{{ hostvars.wi2.private_ip }}"
role: indexer
node3:
name: node-3
ip: "{{ hostvars.wi3.private_ip }}"
role: indexer
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
role: wazuh
node_type: master
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
role: wazuh
node_type: worker
node6:
name: node-6
ip: "{{ hostvars.dashboard.private_ip }}"
role: dashboard
tags:
- generate-certs
# Wazuh indexer cluster
- hosts: wi_cluster
strategy: free
roles:
- role: ../roles/wazuh/wazuh-indexer
indexer_network_host: "{{ private_ip }}"
become: yes
become_user: root
vars:
indexer_cluster_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_discovery_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to indexer_node_name.
ip: "{{ hostvars.wi1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: node-2
ip: "{{ hostvars.wi2.private_ip }}"
role: indexer
node3:
name: node-3
ip: "{{ hostvars.wi3.private_ip }}"
role: indexer
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
role: wazuh
node_type: master
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
role: wazuh
node_type: worker
node6:
name: node-6
ip: "{{ hostvars.dashboard.private_ip }}"
role: dashboard
# Wazuh cluster
- hosts: manager
roles:
- role: "../roles/wazuh/ansible-wazuh-manager"
- role: "../roles/wazuh/ansible-filebeat-oss"
filebeat_node_name: node-4
become: yes
become_user: root
vars:
wazuh_manager_config:
connection:
- type: 'secure'
port: '1514'
protocol: 'tcp'
queue_size: 131072
api:
https: 'yes'
cluster:
disable: 'no'
node_name: 'master'
node_type: 'master'
key: 'c98b62a9b6169ac5f67dae55ae4a9088'
nodes:
- "{{ hostvars.manager.private_ip }}"
hidden: 'no'
wazuh_api_users:
- username: custom-user
password: SecretPassword1!
filebeat_output_indexer_hosts:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
- hosts: worker
roles:
- role: "../roles/wazuh/ansible-wazuh-manager"
- role: "../roles/wazuh/ansible-filebeat-oss"
filebeat_node_name: node-5
become: yes
become_user: root
vars:
wazuh_manager_config:
connection:
- type: 'secure'
port: '1514'
protocol: 'tcp'
queue_size: 131072
api:
https: 'yes'
cluster:
disable: 'no'
node_name: 'worker_01'
node_type: 'worker'
key: 'c98b62a9b6169ac5f67dae55ae4a9088'
nodes:
- "{{ hostvars.manager.private_ip }}"
hidden: 'no'
filebeat_output_indexer_hosts:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
# Indexer + dashboard node
- hosts: dashboard
roles:
- role: "../roles/wazuh/wazuh-indexer"
- role: "../roles/wazuh/wazuh-dashboard"
become: yes
become_user: root
vars:
indexer_network_host: "{{ hostvars.dashboard.private_ip }}"
indexer_node_name: node-6
indexer_node_master: false
indexer_node_ingest: false
indexer_node_data: false
indexer_cluster_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
indexer_discovery_nodes:
- "{{ hostvars.wi1.private_ip }}"
- "{{ hostvars.wi2.private_ip }}"
- "{{ hostvars.wi3.private_ip }}"
dashboard_node_name: node-6
wazuh_api_credentials:
- id: default
url: https://{{ hostvars.manager.private_ip }}
port: 55000
username: custom-user
password: SecretPassword1!
instances:
node1:
name: node-1
ip: "{{ hostvars.wi1.private_ip }}" # When unzipping, the node will search for its node name folder to get the cert.
role: indexer
node2:
name: node-2
ip: "{{ hostvars.wi2.private_ip }}"
role: indexer
node3:
name: node-3
ip: "{{ hostvars.wi3.private_ip }}"
role: indexer
node4:
name: node-4
ip: "{{ hostvars.manager.private_ip }}"
role: wazuh
node_type: master
node5:
name: node-5
ip: "{{ hostvars.worker.private_ip }}"
role: wazuh
node_type: worker
node6:
name: node-6
ip: "{{ hostvars.dashboard.private_ip }}"
role: dashboard
ansible_shell_allow_world_readable_temp: true
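The plays above refer to the hosts through the aliases `wi1`, `wi2`, `wi3`, `manager`, `worker` and `dashboard` (plus the `wi_cluster` group) and read a per-host `private_ip` variable through `hostvars`, so the inventory must define every one of those names. A minimal sketch of a matching YAML inventory, assuming example addresses and an SSH user that you would replace with your own values:

```
all:
  vars:
    ansible_user: ubuntu                 # assumed SSH user, adjust to your environment
  hosts:
    wi1:       { ansible_host: 10.0.0.11, private_ip: 10.0.0.11 }
    wi2:       { ansible_host: 10.0.0.12, private_ip: 10.0.0.12 }
    wi3:       { ansible_host: 10.0.0.13, private_ip: 10.0.0.13 }
    manager:   { ansible_host: 10.0.0.21, private_ip: 10.0.0.21 }
    worker:    { ansible_host: 10.0.0.22, private_ip: 10.0.0.22 }
    dashboard: { ansible_host: 10.0.0.31, private_ip: 10.0.0.31 }
  children:
    wi_cluster:                          # group targeted by the indexer cluster play
      hosts:
        wi1:
        wi2:
        wi3:
```

With an inventory shaped like this, every `hostvars.<alias>.private_ip` lookup in the plays resolves, and the certificate-generation play can create one certificate per entry in `instances`.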

View File

@ -0,0 +1,40 @@
---
# Certificate generation
- hosts: aio
roles:
- role: ../roles/wazuh/wazuh-indexer
perform_installation: false
become: no
#become_user: root
vars:
indexer_node_master: true
instances:
node1:
name: node-1 # Important: must be equal to indexer_node_name.
ip: 127.0.0.1
role: indexer
tags:
- generate-certs
# Single node
- hosts: aio
become: yes
become_user: root
roles:
- role: ../roles/wazuh/wazuh-indexer
- role: ../roles/wazuh/ansible-wazuh-manager
- role: ../roles/wazuh/ansible-filebeat-oss
- role: ../roles/wazuh/wazuh-dashboard
vars:
single_node: true
minimum_master_nodes: 1
indexer_node_master: true
indexer_network_host: 127.0.0.1
filebeat_node_name: node-1
filebeat_output_indexer_hosts:
- 127.0.0.1
instances:
node1:
name: node-1 # Important: must be equal to indexer_node_name.
ip: 127.0.0.1
role: indexer
ansible_shell_allow_world_readable_temp: true
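This all-in-one variant runs two plays against the same `aio` target: the first only generates certificates (it sets `perform_installation: false` and is tagged `generate-certs`, so it can also be run on its own with `--tags generate-certs`), and the second installs the indexer, manager, Filebeat and dashboard on a single host bound to 127.0.0.1. A minimal inventory sketch for it, assuming a hypothetical host alias and an example address:

```
all:
  children:
    aio:
      hosts:
        wazuh-aio:                       # hypothetical alias for the target server
          ansible_host: 192.0.2.10       # example address, replace with your own
          ansible_user: ubuntu           # assumed SSH user
```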

poetry.lock generated
View File

@ -1,17 +1,17 @@
[[package]]
name = "ansible"
version = "4.2.0"
version = "4.10.0"
description = "Radically simple IT automation"
category = "main"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*"
[package.dependencies]
ansible-core = ">=2.11.2,<2.12"
ansible-core = ">=2.11.7,<2.12.0"
[[package]]
name = "ansible-core"
version = "2.11.6"
version = "2.11.10"
description = "Radically simple IT automation"
category = "main"
optional = false
@ -26,20 +26,30 @@ resolvelib = ">=0.5.3,<0.6.0"
[[package]]
name = "ansible-lint"
version = "4.3.7"
version = "5.4.0"
description = "Checks playbooks for practices and behaviour that could potentially be improved"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
ansible = ">=2.8"
enrich = ">=1.2.6"
packaging = "*"
pyyaml = "*"
rich = "*"
rich = ">=9.5.1"
"ruamel.yaml" = [
{version = ">=0.15.34,<1", markers = "python_version < \"3.7\""},
{version = ">=0.15.37,<1", markers = "python_version >= \"3.7\""},
]
tenacity = "*"
typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
wcmatch = ">=7.0"
[package.extras]
community = ["ansible (>=2.10)"]
core = ["ansible-core (>=2.11.4)"]
test = ["coverage (>=6.2,<6.3)", "tomli (>=1.2.3,<2.0.0)", "flaky (>=3.7.0)", "pytest (>=6.0.1)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=2.1.0)", "psutil"]
yamllint = ["yamllint (>=1.25.0)"]
typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
[[package]]
@ -102,6 +112,14 @@ python-versions = "*"
[package.dependencies]
chardet = ">=3.0.2"
[[package]]
name = "bracex"
version = "2.2.1"
description = "Bash style brace expander."
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cerberus"
version = "1.3.2"
@ -150,11 +168,16 @@ unicode_backport = ["unicodedata2"]
[[package]]
name = "click"
version = "7.1.2"
version = "8.0.4"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
python-versions = ">=3.6"
optional = false
python-versions = ">=3.5.0"
[package.extras]
unicode_backport = ["unicodedata2"]
[[package]]
name = "click-completion"
@ -165,21 +188,19 @@ optional = false
python-versions = "*"
[package.dependencies]
click = "*"
jinja2 = "*"
shellingham = "*"
six = "*"
colorama = {version = "*", markers = "platform_system == \"Windows\""}
importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
[[package]]
name = "click-help-colors"
version = "0.8"
version = "0.9.1"
description = "Colorization of help messages in Click"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
click = ">=7.0"
click = ">=7.0,<9"
[package.extras]
dev = ["pytest"]
@ -205,7 +226,7 @@ test = ["flake8 (==3.7.8)", "hypothesis (==3.55.3)"]
[[package]]
name = "cookiecutter"
version = "1.7.2"
version = "1.7.3"
description = "A command-line utility that creates projects from project templates, e.g. creating a Python package project from a Python package project template."
category = "dev"
optional = false
@ -276,30 +297,32 @@ ssh = ["paramiko (>=2.4.2)"]
tls = ["pyOpenSSL (>=17.5.0)", "cryptography (>=1.3.4)", "idna (>=2.0.0)"]
[[package]]
name = "fasteners"
version = "0.15"
description = "A python package that provides useful locks."
name = "enrich"
version = "1.2.7"
description = "enrich"
category = "dev"
optional = false
python-versions = "*"
python-versions = ">=3.6"
[package.dependencies]
monotonic = ">=0.1"
six = "*"
rich = ">=9.5.1"
[package.extras]
test = ["mock (>=3.0.5)", "pytest-cov (>=2.7.1)", "pytest-mock (>=3.3.1)", "pytest-plus", "pytest-xdist (>=1.29.0)", "pytest (>=5.4.0)"]
[[package]]
name = "flake8"
version = "3.8.4"
version = "4.0.1"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
python-versions = ">=3.6"
[package.dependencies]
importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
importlib-metadata = {version = "<4.3", markers = "python_version < \"3.8\""}
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.6.0a1,<2.7.0"
pyflakes = ">=2.2.0,<2.3.0"
pycodestyle = ">=2.8.0,<2.9.0"
pyflakes = ">=2.4.0,<2.5.0"
[[package]]
name = "idna"
@ -324,19 +347,27 @@ zipp = ">=0.5"
docs = ["sphinx", "rst.linker"]
testing = ["packaging", "pep517", "importlib-resources (>=1.3)"]
[[package]]
name = "iniconfig"
version = "1.1.1"
description = "iniconfig: brain-dead simple config-ini parsing"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "jinja2"
version = "2.11.3"
version = "3.0.3"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
python-versions = ">=3.6"
[package.dependencies]
MarkupSafe = ">=0.23"
MarkupSafe = ">=2.0"
[package.extras]
i18n = ["Babel (>=0.8)"]
i18n = ["Babel (>=2.7)"]
[[package]]
name = "jinja2-time"
@ -352,11 +383,11 @@ jinja2 = "*"
[[package]]
name = "markupsafe"
version = "1.1.1"
version = "2.0.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
python-versions = ">=3.6"
[[package]]
name = "mccabe"
@ -368,55 +399,57 @@ python-versions = "*"
[[package]]
name = "molecule"
version = "3.0.8"
version = "3.3.4"
description = "Molecule aids in the development and testing of Ansible roles"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
ansible = ">=2.8"
cerberus = ">=1.3.1"
click = ">=7.0"
click-completion = ">=0.5.1"
click-help-colors = ">=0.6"
colorama = ">=0.3.9"
cookiecutter = ">=1.6.0,<1.7.1 || >1.7.1"
docker = {version = ">=2.0.0", optional = true, markers = "extra == \"docker\""}
ansible-lint = ">=5.0.12"
cerberus = ">=1.3.1,<1.3.3 || >1.3.3,<1.3.4 || >1.3.4"
click = ">=8.0,<9"
click-help-colors = ">=0.9"
cookiecutter = ">=1.7.3"
dataclasses = {version = "*", markers = "python_version < \"3.7\""}
enrich = ">=1.2.5"
Jinja2 = ">=2.10.1"
molecule-docker = {version = "*", optional = true, markers = "extra == \"docker\""}
packaging = "*"
paramiko = ">=2.5.0,<3"
pexpect = ">=4.6.0,<5"
pluggy = ">=0.7.1,<1.0"
python-gilt = ">=1.2.1,<2"
PyYAML = ">=5.1,<6"
rich = ">=9.5.1"
selinux = {version = "*", markers = "sys_platform == \"linux\" or sys_platform == \"linux2\""}
sh = ">=1.13.1,<1.14"
tabulate = ">=0.8.4"
tree-format = ">=0.1.2"
yamllint = ">=1.15.0,<2"
subprocess-tee = ">=0.3.2"
[package.extras]
docker = ["docker (>=2.0.0)"]
docs = ["simplejson", "sphinx", "sphinx-ansible-theme (>=0.2.2)"]
lint = ["ansible-lint (>=4.2.0,<5)", "flake8 (>=3.6.0)", "pre-commit (>=1.21.0)", "yamllint (>=1.15.0)"]
test = ["ansi2html", "coverage (<5)", "mock (>=3.0.5,<4)", "packaging", "pytest-cov (>=2.7.1,<3)", "pytest-helpers-namespace (>=2019.1.8,<2020)", "pytest-html (>=1.21.0)", "pytest-mock (>=1.10.4,<2)", "pytest-verbose-parametrize (>=1.7.0,<2)", "pytest-plus", "pytest-xdist (>=1.29.0,<2)", "pytest (>=5.4.0,<5.5)", "testinfra (>=3.4.0)"]
ansible = ["ansible (>=2.10)"]
ansible-base = ["ansible-base (>=2.10)"]
docker = ["molecule-docker"]
docs = ["Sphinx (>=4.0.2)", "simplejson (>=3.17.2)", "sphinx-notfound-page (>=0.7.1)", "sphinx-ansible-theme (>=0.2.2)"]
lint = ["flake8 (>=3.8.4)", "pre-commit (>=2.10.1)", "yamllint"]
podman = ["molecule-podman"]
test = ["ansi2html (>=1.6.0)", "pexpect (>=4.8.0,<5)", "pytest-cov (>=2.10.1)", "pytest-helpers-namespace (>=2019.1.8)", "pytest-html (>=3.0.0)", "pytest-mock (>=3.3.1)", "pytest-plus (>=0.2)", "pytest-testinfra (>=6.1.0)", "pytest-verbose-parametrize (>=1.7.0)", "pytest-xdist (>=2.1.0)", "pytest (>=6.1.2)"]
windows = ["pywinrm"]
[[package]]
name = "monotonic"
version = "1.5"
description = "An implementation of time.monotonic() for Python 2 & < 3.3"
name = "molecule-docker"
version = "0.3.4"
description = "Molecule aids in the development and testing of Ansible roles"
category = "dev"
optional = false
python-versions = "*"
python-versions = ">=3.6"
[[package]]
name = "more-itertools"
version = "8.6.0"
description = "More routines for operating on iterables, beyond itertools"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
docker = ">=4.3.1"
molecule = ">=3.3.0"
selinux = {version = "*", markers = "sys_platform == \"linux\" or sys_platform == \"linux2\""}
[package.extras]
docs = ["simplejson", "sphinx", "sphinx-ansible-theme (>=0.2.2)"]
lint = ["pre-commit (>=1.21.0)"]
test = ["molecule", "pytest-helpers-namespace"]
[[package]]
name = "packaging"
@ -490,6 +523,8 @@ description = "A lightweight YAML Parser for Python. 🐓"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "ptyprocess"
@ -509,11 +544,11 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pycodestyle"
version = "2.6.0"
version = "2.8.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycparser"
@ -525,7 +560,7 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pyflakes"
version = "2.2.0"
version = "2.4.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
@ -565,26 +600,42 @@ python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "pytest"
version = "5.4.3"
version = "7.0.1"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.5"
python-versions = ">=3.6"
[package.dependencies]
atomicwrites = {version = ">=1.0", markers = "sys_platform == \"win32\""}
attrs = ">=17.4.0"
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
importlib-metadata = {version = ">=0.12", markers = "python_version < \"3.8\""}
more-itertools = ">=4.0.0"
iniconfig = "*"
packaging = "*"
pluggy = ">=0.12,<1.0"
py = ">=1.5.0"
wcwidth = "*"
pluggy = ">=0.12,<2.0"
py = ">=1.8.2"
tomli = ">=1.0.0"
[package.extras]
checkqa-mypy = ["mypy (==v0.761)"]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "requests", "xmlschema"]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
[[package]]
name = "pytest-testinfra"
version = "6.6.0"
description = "Test infrastructures"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
pytest = "!=3.0.2"
[package.extras]
ansible = ["ansible"]
paramiko = ["paramiko"]
salt = ["salt"]
winrm = ["pywinrm"]
[[package]]
name = "python-dateutil"
@ -679,7 +730,7 @@ test = ["commentjson", "packaging", "pytest"]
[[package]]
name = "rich"
version = "9.1.0"
version = "10.11.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
category = "dev"
optional = false
@ -688,9 +739,9 @@ python-versions = ">=3.6,<4.0"
[package.dependencies]
colorama = ">=0.4.0,<0.5.0"
commonmark = ">=0.9.0,<0.10.0"
dataclasses = {version = ">=0.7,<0.8", markers = "python_version >= \"3.6\" and python_version < \"3.7\""}
dataclasses = {version = ">=0.7,<0.9", markers = "python_version >= \"3.6\" and python_version < \"3.7\""}
pygments = ">=2.6.0,<3.0.0"
typing-extensions = ">=3.7.4,<4.0.0"
typing-extensions = {version = ">=3.7.4,<4.0.0", markers = "python_version < \"3.8\""}
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<8.0.0)"]
@ -729,6 +780,33 @@ python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
[package.dependencies]
distro = ">=1.3.0"
[[package]]
name = "six"
version = "1.15.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "subprocess-tee"
version = "0.3.5"
description = "subprocess-tee"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
test = ["enrich (>=1.2.6)", "mock (>=4.0.3)", "molecule (>=3.4.0)", "pytest-cov (>=2.12.1)", "pytest-plus (>=0.2)", "pytest-xdist (>=2.3.0)", "pytest (>=6.2.5)"]
[[package]]
name = "tenacity"
version = "8.0.1"
description = "Retry code until it succeeds"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "sh"
version = "1.13.1"
@ -745,14 +823,6 @@ category = "dev"
optional = false
python-versions = "!=3.0,!=3.1,!=3.2,!=3.3,>=2.6"
[[package]]
name = "six"
version = "1.15.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "tabulate"
version = "0.8.9"
@ -762,24 +832,24 @@ optional = false
python-versions = "*"
[package.extras]
widechars = ["wcwidth"]
doc = ["reno", "sphinx", "tornado (>=4.5)"]
[[package]]
name = "testinfra"
version = "5.3.1"
version = "6.0.0"
description = "Test infrastructures"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
pytest = "!=3.0.2"
pytest-testinfra = "*"
[package.extras]
ansible = ["ansible"]
paramiko = ["paramiko"]
salt = ["salt"]
winrm = ["pywinrm"]
ansible = ["pytest-testinfra"]
paramiko = ["pytest-testinfra"]
salt = ["pytest-testinfra"]
winrm = ["pytest-testinfra"]
[[package]]
name = "text-unidecode"
@ -789,6 +859,14 @@ category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "tomli"
version = "1.2.3"
description = "A lil' TOML parser"
category = "dev"
optional = false
python-versions = ">=3.6"
[[package]]
name = "tree-format"
version = "0.1.2"
@ -821,6 +899,17 @@ brotli = ["brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
[[package]]
name = "wcmatch"
version = "8.3"
description = "Wildcard/glob file name matcher."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
bracex = ">=2.1.1"
[[package]]
name = "wcwidth"
version = "0.2.5"
@ -842,11 +931,11 @@ six = "*"
[[package]]
name = "yamllint"
version = "1.25.0"
version = "1.26.3"
description = "A linter for YAML files."
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,>=2.7"
python-versions = ">=3.5"
[package.dependencies]
pathspec = ">=0.5.3"
@ -867,18 +956,18 @@ testing = ["pytest (>=3.5,!=3.7.3)", "pytest-checkdocs (>=1.2.3)", "pytest-flake
[metadata]
lock-version = "1.1"
python-versions = "^3.6"
content-hash = "73af60d240671a44b5eda548c9ff8676084a2380af4c40bce9337ecc8e29bdc4"
content-hash = "1ef64c8c2ca8b979f8200c1131842702b436d1c56db5bc58e33ac9cc570538b1"
[metadata.files]
ansible = [
{file = "ansible-4.2.0.tar.gz", hash = "sha256:737d819ffbd7a80c28795b4edd93e59ad21e6e6d53af0d19f57412814f9260d0"},
{file = "ansible-4.10.0.tar.gz", hash = "sha256:88af9479e81a3931bb3a1b8c4eeb252cd4f38c03daafd6a5aa120d6b0d70d45c"},
]
ansible-core = [
{file = "ansible-core-2.11.6.tar.gz", hash = "sha256:93d50283c7c5b476debf83dc089b3f679b939a8b9a7b5d628d28daafbb3d303a"},
{file = "ansible-core-2.11.10.tar.gz", hash = "sha256:a0fba319963ff83c249bd0531b0b87d67e3ac0723f9cbf24b96790ff3774a897"},
]
ansible-lint = [
{file = "ansible-lint-4.3.7.tar.gz", hash = "sha256:1012fc3f5c4c0c58eece515860f19c34c5088faa5be412eec6fae5b45bda9c4f"},
{file = "ansible_lint-4.3.7-py2.py3-none-any.whl", hash = "sha256:300e841f690b556a08d44902d6414283dc101079b27909e3a892f1cf1d10d7ff"},
{file = "ansible-lint-5.4.0.tar.gz", hash = "sha256:2160a60b4ab034c04006d701a1779340ffb0f6e28f030ff8de958e1062a88962"},
{file = "ansible_lint-5.4.0-py3-none-any.whl", hash = "sha256:fb57755825b50da88c226052772bd843d37714155b504175912daac0e186e8c0"},
]
arrow = [
{file = "arrow-0.17.0-py2.py3-none-any.whl", hash = "sha256:e098abbd9af3665aea81bdd6c869e93af4feb078e98468dd351c383af187aac5"},
@ -908,6 +997,10 @@ binaryornot = [
{file = "binaryornot-0.4.4-py2.py3-none-any.whl", hash = "sha256:b8b71173c917bddcd2c16070412e369c3ed7f0528926f70cac18a6c97fd563e4"},
{file = "binaryornot-0.4.4.tar.gz", hash = "sha256:359501dfc9d40632edc9fac890e19542db1a287bbcfa58175b66658392018061"},
]
bracex = [
{file = "bracex-2.2.1-py3-none-any.whl", hash = "sha256:096c4b788bf492f7af4e90ef8b5bcbfb99759ae3415ea1b83c9d29a5ed8f9a94"},
{file = "bracex-2.2.1.tar.gz", hash = "sha256:1c8d1296e00ad9a91030ccb4c291f9e4dc7c054f12c707ba3c5ff3e9a81bcd21"},
]
cerberus = [
{file = "Cerberus-1.3.2.tar.gz", hash = "sha256:302e6694f206dd85cb63f13fd5025b31ab6d38c99c50c6d769f8fa0b0f299589"},
]
@ -962,15 +1055,12 @@ charset-normalizer = [
{file = "charset_normalizer-2.0.12-py3-none-any.whl", hash = "sha256:6881edbebdb17b39b4eaaa821b438bf6eddffb4468cf344f09f89def34a8b1df"},
]
click = [
{file = "click-7.1.2-py2.py3-none-any.whl", hash = "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"},
{file = "click-7.1.2.tar.gz", hash = "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a"},
]
click-completion = [
{file = "click-completion-0.5.2.tar.gz", hash = "sha256:5bf816b81367e638a190b6e91b50779007d14301b3f9f3145d68e3cade7bce86"},
{file = "click-8.0.4-py3-none-any.whl", hash = "sha256:6a7a62563bbfabfda3a38f3023a1db4a35978c0abd76f6c9605ecd6554d6d9b1"},
{file = "click-8.0.4.tar.gz", hash = "sha256:8458d7b1287c5fb128c90e23381cf99dcde74beaf6c7ff6384ce84d6fe090adb"},
]
click-help-colors = [
{file = "click-help-colors-0.8.tar.gz", hash = "sha256:119e5faf69cfc919c995c5962326ac8fd87f11e56a371af594e3dfd8458f4c6e"},
{file = "click_help_colors-0.8-py3-none-any.whl", hash = "sha256:0d841a4058ec88c47f93ff6f32547a055f8e0a0273f6bd6cb3e08430f195131d"},
{file = "click-help-colors-0.9.1.tar.gz", hash = "sha256:78cbcf30cfa81c5fc2a52f49220121e1a8190cd19197d9245997605d3405824d"},
{file = "click_help_colors-0.9.1-py3-none-any.whl", hash = "sha256:25a6bd22d8abbc72c18a416a1cf21ab65b6120bee48e9637829666cbad22d51d"},
]
colorama = [
{file = "colorama-0.4.4-py2.py3-none-any.whl", hash = "sha256:9f47eda37229f68eee03b24b9748937c7dc3868f906e8ba69fbcbdd3bc5dc3e2"},
@ -981,8 +1071,8 @@ commonmark = [
{file = "commonmark-0.9.1.tar.gz", hash = "sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60"},
]
cookiecutter = [
{file = "cookiecutter-1.7.2-py2.py3-none-any.whl", hash = "sha256:430eb882d028afb6102c084bab6cf41f6559a77ce9b18dc6802e3bc0cc5f4a30"},
{file = "cookiecutter-1.7.2.tar.gz", hash = "sha256:efb6b2d4780feda8908a873e38f0e61778c23f6a2ea58215723bcceb5b515dac"},
{file = "cookiecutter-1.7.3-py2.py3-none-any.whl", hash = "sha256:f8671531fa96ab14339d0c59b4f662a4f12a2ecacd94a0f70a3500843da588e2"},
{file = "cookiecutter-1.7.3.tar.gz", hash = "sha256:6b9a4d72882e243be077a7397d0f1f76fe66cf3df91f3115dbb5330e214fa457"},
]
cryptography = [
{file = "cryptography-3.3.2-cp27-cp27m-macosx_10_10_x86_64.whl", hash = "sha256:541dd758ad49b45920dda3b5b48c968f8b2533d8981bcdb43002798d8f7a89ed"},
@ -1012,13 +1102,13 @@ docker = [
{file = "docker-4.3.1-py2.py3-none-any.whl", hash = "sha256:13966471e8bc23b36bfb3a6fb4ab75043a5ef1dac86516274777576bed3b9828"},
{file = "docker-4.3.1.tar.gz", hash = "sha256:bad94b8dd001a8a4af19ce4becc17f41b09f228173ffe6a4e0355389eef142f2"},
]
fasteners = [
{file = "fasteners-0.15-py2.py3-none-any.whl", hash = "sha256:007e4d2b2d4a10093f67e932e5166722d2eab83b77724156e92ad013c6226574"},
{file = "fasteners-0.15.tar.gz", hash = "sha256:3a176da6b70df9bb88498e1a18a9e4a8579ed5b9141207762368a1017bf8f5ef"},
enrich = [
{file = "enrich-1.2.7-py3-none-any.whl", hash = "sha256:f29b2c8c124b4dbd7c975ab5c3568f6c7a47938ea3b7d2106c8a3bd346545e4f"},
{file = "enrich-1.2.7.tar.gz", hash = "sha256:0a2ab0d2931dff8947012602d1234d2a3ee002d9a355b5d70be6bf5466008893"},
]
flake8 = [
{file = "flake8-3.8.4-py2.py3-none-any.whl", hash = "sha256:749dbbd6bfd0cf1318af27bf97a14e28e5ff548ef8e5b1566ccfb25a11e7c839"},
{file = "flake8-3.8.4.tar.gz", hash = "sha256:aadae8761ec651813c24be05c6f7b4680857ef6afaae4651a4eccaef97ce6c3b"},
{file = "flake8-4.0.1-py2.py3-none-any.whl", hash = "sha256:479b1304f72536a55948cb40a32dce8bb0ffe3501e26eaf292c7e60eb5e0428d"},
{file = "flake8-4.0.1.tar.gz", hash = "sha256:806e034dda44114815e23c16ef92f95c91e4c71100ff52813adf7132a6ad870d"},
]
idna = [
{file = "idna-2.10-py2.py3-none-any.whl", hash = "sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0"},
@ -1028,83 +1118,100 @@ importlib-metadata = [
{file = "importlib_metadata-2.0.0-py2.py3-none-any.whl", hash = "sha256:cefa1a2f919b866c5beb7c9f7b0ebb4061f30a8a9bf16d609b000e2dfaceb9c3"},
{file = "importlib_metadata-2.0.0.tar.gz", hash = "sha256:77a540690e24b0305878c37ffd421785a6f7e53c8b5720d211b211de8d0e95da"},
]
iniconfig = [
{file = "iniconfig-1.1.1-py2.py3-none-any.whl", hash = "sha256:011e24c64b7f47f6ebd835bb12a743f2fbe9a26d4cecaa7f53bc4f35ee9da8b3"},
{file = "iniconfig-1.1.1.tar.gz", hash = "sha256:bc3af051d7d14b2ee5ef9969666def0cd1a000e121eaea580d4a313df4b37f32"},
]
jinja2 = [
{file = "Jinja2-2.11.3-py2.py3-none-any.whl", hash = "sha256:03e47ad063331dd6a3f04a43eddca8a966a26ba0c5b7207a9a9e4e08f1b29419"},
{file = "Jinja2-2.11.3.tar.gz", hash = "sha256:a6d58433de0ae800347cab1fa3043cebbabe8baa9d29e668f1c768cb87a333c6"},
{file = "Jinja2-3.0.3-py3-none-any.whl", hash = "sha256:077ce6014f7b40d03b47d1f1ca4b0fc8328a692bd284016f806ed0eaca390ad8"},
{file = "Jinja2-3.0.3.tar.gz", hash = "sha256:611bb273cd68f3b993fabdc4064fc858c5b47a973cb5aa7999ec1ba405c87cd7"},
]
jinja2-time = [
{file = "jinja2-time-0.2.0.tar.gz", hash = "sha256:d14eaa4d315e7688daa4969f616f226614350c48730bfa1692d2caebd8c90d40"},
{file = "jinja2_time-0.2.0-py2.py3-none-any.whl", hash = "sha256:d3eab6605e3ec8b7a0863df09cc1d23714908fa61aa6986a845c20ba488b4efa"},
]
markupsafe = [
{file = "MarkupSafe-1.1.1-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-win32.whl", hash = "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-win_amd64.whl", hash = "sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e"},
{file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f"},
{file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-macosx_10_6_intel.whl", hash = "sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_i686.whl", hash = "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-win32.whl", hash = "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-win_amd64.whl", hash = "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_6_intel.whl", hash = "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d53bc011414228441014aa71dbec320c66468c1030aae3a6e29778a3382d96e5"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:3b8a6499709d29c2e2399569d96719a1b21dcd94410a586a18526b143ec8470f"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:84dee80c15f1b560d55bcfe6d47b27d070b4681c699c572af2e3c7cc90a3b8e0"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:b1dba4527182c95a0db8b6060cc98ac49b9e2f5e64320e2b56e47cb2831978c7"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_6_intel.whl", hash = "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:bf5aa3cbcfdf57fa2ee9cd1822c862ef23037f5c832ad09cfea57fa846dec193"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:6fffc775d90dcc9aed1b89219549b329a9250d918fd0b8fa8d93d154918422e1"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:a6a744282b7718a2a62d2ed9d993cad6f5f585605ad352c11de459f4108df0a1"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:195d7d2c4fbb0ee8139a6cf67194f3973a6b3042d742ebe0a9ed36d8b6f0c07f"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-win32.whl", hash = "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c"},
{file = "MarkupSafe-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:acf08ac40292838b3cbbb06cfe9b2cb9ec78fce8baca31ddb87aaac2e2dc3bc2"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:d9be0ba6c527163cbed5e0857c451fcd092ce83947944d6c14bc95441203f032"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:caabedc8323f1e93231b52fc32bdcde6db817623d33e100708d9a68e1f53b26b"},
{file = "MarkupSafe-1.1.1-cp38-cp38-win32.whl", hash = "sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b"},
{file = "MarkupSafe-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be"},
{file = "MarkupSafe-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d73a845f227b0bfe8a7455ee623525ee656a9e2e749e4742706d80a6065d5e2c"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:98bae9582248d6cf62321dcb52aaf5d9adf0bad3b40582925ef7c7f0ed85fceb"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:2beec1e0de6924ea551859edb9e7679da6e4870d32cb766240ce17e0a0ba2014"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:7fed13866cf14bba33e7176717346713881f56d9d2bcebab207f7a036f41b850"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:6f1e273a344928347c1290119b493a1f0303c52f5a5eae5f16d74f48c15d4a85"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:feb7b34d6325451ef96bc0e36e1a6c0c1c64bc1fbec4b854f4529e51887b1621"},
{file = "MarkupSafe-1.1.1-cp39-cp39-win32.whl", hash = "sha256:22c178a091fc6630d0d045bdb5992d2dfe14e3259760e713c490da5323866c39"},
{file = "MarkupSafe-1.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:b7d644ddb4dbd407d31ffb699f1d140bc35478da613b441c582aeb7c43838dd8"},
{file = "MarkupSafe-1.1.1.tar.gz", hash = "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b"},
{file = "MarkupSafe-2.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:d8446c54dc28c01e5a2dbac5a25f071f6653e6e40f3a8818e8b45d790fe6ef53"},
{file = "MarkupSafe-2.0.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:36bc903cbb393720fad60fc28c10de6acf10dc6cc883f3e24ee4012371399a38"},
{file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d7d807855b419fc2ed3e631034685db6079889a1f01d5d9dac950f764da3dad"},
{file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:add36cb2dbb8b736611303cd3bfcee00afd96471b09cda130da3581cbdc56a6d"},
{file = "MarkupSafe-2.0.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:168cd0a3642de83558a5153c8bd34f175a9a6e7f6dc6384b9655d2697312a646"},
{file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:4dc8f9fb58f7364b63fd9f85013b780ef83c11857ae79f2feda41e270468dd9b"},
{file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:20dca64a3ef2d6e4d5d615a3fd418ad3bde77a47ec8a23d984a12b5b4c74491a"},
{file = "MarkupSafe-2.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:cdfba22ea2f0029c9261a4bd07e830a8da012291fbe44dc794e488b6c9bb353a"},
{file = "MarkupSafe-2.0.1-cp310-cp310-win32.whl", hash = "sha256:99df47edb6bda1249d3e80fdabb1dab8c08ef3975f69aed437cb69d0a5de1e28"},
{file = "MarkupSafe-2.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:e0f138900af21926a02425cf736db95be9f4af72ba1bb21453432a07f6082134"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:f9081981fe268bd86831e5c75f7de206ef275defcb82bc70740ae6dc507aee51"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:0955295dd5eec6cb6cc2fe1698f4c6d84af2e92de33fbcac4111913cd100a6ff"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:0446679737af14f45767963a1a9ef7620189912317d095f2d9ffa183a4d25d2b"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:f826e31d18b516f653fe296d967d700fddad5901ae07c622bb3705955e1faa94"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:fa130dd50c57d53368c9d59395cb5526eda596d3ffe36666cd81a44d56e48872"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:905fec760bd2fa1388bb5b489ee8ee5f7291d692638ea5f67982d968366bef9f"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf5d821ffabf0ef3533c39c518f3357b171a1651c1ff6827325e4489b0e46c3c"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:0d4b31cc67ab36e3392bbf3862cfbadac3db12bdd8b02a2731f509ed5b829724"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:baa1a4e8f868845af802979fcdbf0bb11f94f1cb7ced4c4b8a351bb60d108145"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:deb993cacb280823246a026e3b2d81c493c53de6acfd5e6bfe31ab3402bb37dd"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:63f3268ba69ace99cab4e3e3b5840b03340efed0948ab8f78d2fd87ee5442a4f"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:8d206346619592c6200148b01a2142798c989edcb9c896f9ac9722a99d4e77e6"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-win32.whl", hash = "sha256:6c4ca60fa24e85fe25b912b01e62cb969d69a23a5d5867682dd3e80b5b02581d"},
{file = "MarkupSafe-2.0.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b2f4bf27480f5e5e8ce285a8c8fd176c0b03e93dcc6646477d4630e83440c6a9"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:0717a7390a68be14b8c793ba258e075c6f4ca819f15edfc2a3a027c823718567"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:6557b31b5e2c9ddf0de32a691f2312a32f77cd7681d8af66c2692efdbef84c18"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:49e3ceeabbfb9d66c3aef5af3a60cc43b85c33df25ce03d0031a608b0a8b2e3f"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:d7f9850398e85aba693bb640262d3611788b1f29a79f0c93c565694658f4071f"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:6a7fae0dd14cf60ad5ff42baa2e95727c3d81ded453457771d02b7d2b3f9c0c2"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:b7f2d075102dc8c794cbde1947378051c4e5180d52d276987b8d28a3bd58c17d"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e9936f0b261d4df76ad22f8fee3ae83b60d7c3e871292cd42f40b81b70afae85"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:2a7d351cbd8cfeb19ca00de495e224dea7e7d919659c2841bbb7f420ad03e2d6"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:60bf42e36abfaf9aff1f50f52644b336d4f0a3fd6d8a60ca0d054ac9f713a864"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d6c7ebd4e944c85e2c3421e612a7057a2f48d478d79e61800d81468a8d842207"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f0567c4dc99f264f49fe27da5f735f414c4e7e7dd850cfd8e69f0862d7c74ea9"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:89c687013cb1cd489a0f0ac24febe8c7a666e6e221b783e53ac50ebf68e45d86"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-win32.whl", hash = "sha256:a30e67a65b53ea0a5e62fe23682cfe22712e01f453b95233b25502f7c61cb415"},
{file = "MarkupSafe-2.0.1-cp37-cp37m-win_amd64.whl", hash = "sha256:611d1ad9a4288cf3e3c16014564df047fe08410e628f89805e475368bd304914"},
{file = "MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5bb28c636d87e840583ee3adeb78172efc47c8b26127267f54a9c0ec251d41a9"},
{file = "MarkupSafe-2.0.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:be98f628055368795d818ebf93da628541e10b75b41c559fdf36d104c5787066"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:1d609f577dc6e1aa17d746f8bd3c31aa4d258f4070d61b2aa5c4166c1539de35"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:7d91275b0245b1da4d4cfa07e0faedd5b0812efc15b702576d103293e252af1b"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:01a9b8ea66f1658938f65b93a85ebe8bc016e6769611be228d797c9d998dd298"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:47ab1e7b91c098ab893b828deafa1203de86d0bc6ab587b160f78fe6c4011f75"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:97383d78eb34da7e1fa37dd273c20ad4320929af65d156e35a5e2d89566d9dfb"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6fcf051089389abe060c9cd7caa212c707e58153afa2c649f00346ce6d260f1b"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:5855f8438a7d1d458206a2466bf82b0f104a3724bf96a1c781ab731e4201731a"},
{file = "MarkupSafe-2.0.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:3dd007d54ee88b46be476e293f48c85048603f5f516008bee124ddd891398ed6"},
{file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:aca6377c0cb8a8253e493c6b451565ac77e98c2951c45f913e0b52facdcff83f"},
{file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:04635854b943835a6ea959e948d19dcd311762c5c0c6e1f0e16ee57022669194"},
{file = "MarkupSafe-2.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:6300b8454aa6930a24b9618fbb54b5a68135092bc666f7b06901f897fa5c2fee"},
{file = "MarkupSafe-2.0.1-cp38-cp38-win32.whl", hash = "sha256:023cb26ec21ece8dc3907c0e8320058b2e0cb3c55cf9564da612bc325bed5e64"},
{file = "MarkupSafe-2.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:984d76483eb32f1bcb536dc27e4ad56bba4baa70be32fa87152832cdd9db0833"},
{file = "MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:2ef54abee730b502252bcdf31b10dacb0a416229b72c18b19e24a4509f273d26"},
{file = "MarkupSafe-2.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3c112550557578c26af18a1ccc9e090bfe03832ae994343cfdacd287db6a6ae7"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:53edb4da6925ad13c07b6d26c2a852bd81e364f95301c66e930ab2aef5b5ddd8"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:f5653a225f31e113b152e56f154ccbe59eeb1c7487b39b9d9f9cdb58e6c79dc5"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:4efca8f86c54b22348a5467704e3fec767b2db12fc39c6d963168ab1d3fc9135"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:ab3ef638ace319fa26553db0624c4699e31a28bb2a835c5faca8f8acf6a5a902"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:f8ba0e8349a38d3001fae7eadded3f6606f0da5d748ee53cc1dab1d6527b9509"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c47adbc92fc1bb2b3274c4b3a43ae0e4573d9fbff4f54cd484555edbf030baf1"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:37205cac2a79194e3750b0af2a5720d95f786a55ce7df90c3af697bfa100eaac"},
{file = "MarkupSafe-2.0.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:1f2ade76b9903f39aa442b4aadd2177decb66525062db244b35d71d0ee8599b6"},
{file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:4296f2b1ce8c86a6aea78613c34bb1a672ea0e3de9c6ba08a960efe0b0a09047"},
{file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:9f02365d4e99430a12647f09b6cc8bab61a6564363f313126f775eb4f6ef798e"},
{file = "MarkupSafe-2.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5b6d930f030f8ed98e3e6c98ffa0652bdb82601e7a016ec2ab5d7ff23baa78d1"},
{file = "MarkupSafe-2.0.1-cp39-cp39-win32.whl", hash = "sha256:10f82115e21dc0dfec9ab5c0223652f7197feb168c940f3ef61563fc2d6beb74"},
{file = "MarkupSafe-2.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:693ce3f9e70a6cf7d2fb9e6c9d8b204b6b39897a2c4a1aa65728d5ac97dcc1d8"},
{file = "MarkupSafe-2.0.1.tar.gz", hash = "sha256:594c67807fb16238b30c44bdf74f36c02cdf22d1c8cda91ef8a0ed8dabf5620a"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
molecule = [
{file = "molecule-3.0.8-py3-none-any.whl", hash = "sha256:6fb202099ff52bc427c6bd8f46f13b49b3d5812daa6880960a0fd0562e8a9be6"},
{file = "molecule-3.0.8.tar.gz", hash = "sha256:42d0c661b52074b00a620466df367ddab9c3682875e6d685bfc93487ef0479cc"},
{file = "molecule-3.3.4-py3-none-any.whl", hash = "sha256:a44c36bfc3734d561d941a90cf6ab18ede36e00decc92bd0f81623fa32dbdb2f"},
{file = "molecule-3.3.4.tar.gz", hash = "sha256:5794d0b39e695c4544c714ac90ab90901f22d8c9f623c2fee665b8b2dc2ce6cc"},
]
monotonic = [
{file = "monotonic-1.5-py2.py3-none-any.whl", hash = "sha256:552a91f381532e33cbd07c6a2655a21908088962bb8fa7239ecbcc6ad1140cc7"},
{file = "monotonic-1.5.tar.gz", hash = "sha256:23953d55076df038541e648a53676fb24980f7a1be290cdda21300b3bc21dfb0"},
]
more-itertools = [
{file = "more-itertools-8.6.0.tar.gz", hash = "sha256:b3a9005928e5bed54076e6e549c792b306fddfe72b2d1d22dd63d42d5d3899cf"},
{file = "more_itertools-8.6.0-py3-none-any.whl", hash = "sha256:8e1a2a43b2f2727425f2b5839587ae37093f19153dc26c0927d1048ff6557330"},
molecule-docker = [
{file = "molecule-docker-0.3.4.tar.gz", hash = "sha256:dc9a8ad60b70ede303805cd6865deb5fb9c162e67ff5e7d1a45eb7e58cd36b88"},
{file = "molecule_docker-0.3.4-py3-none-any.whl", hash = "sha256:9d65761052a7a5dad6deee25e2ce9597148913532a10542c3ad8c342f56fe3b2"},
]
packaging = [
{file = "packaging-20.4-py2.py3-none-any.whl", hash = "sha256:998416ba6962ae7fbd6596850b80e17859a5753ba17c32284f67bfff33784181"},
@ -1118,10 +1225,6 @@ pathspec = [
{file = "pathspec-0.8.0-py2.py3-none-any.whl", hash = "sha256:7d91249d21749788d07a2d0f94147accd8f845507400749ea19c1ec9054a12b0"},
{file = "pathspec-0.8.0.tar.gz", hash = "sha256:da45173eb3a6f2a5a487efba21f050af2b41948be6ab52b6a1e3ff22bb8b7061"},
]
pexpect = [
{file = "pexpect-4.8.0-py2.py3-none-any.whl", hash = "sha256:0b48a55dcb3c05f3329815901ea4fc1537514d6ba867a152b581d69ae3710937"},
{file = "pexpect-4.8.0.tar.gz", hash = "sha256:fc65a43959d153d0114afe13997d439c22823a27cefceb5ff35c2178c6784c0c"},
]
pluggy = [
{file = "pluggy-0.13.1-py2.py3-none-any.whl", hash = "sha256:966c145cd83c96502c3c3868f50408687b38434af77734af1e9ca461a4081d2d"},
{file = "pluggy-0.13.1.tar.gz", hash = "sha256:15b2acde666561e1298d71b523007ed7364de07029219b604cf808bfa1c765b0"},
@ -1130,25 +1233,21 @@ poyo = [
{file = "poyo-0.5.0-py2.py3-none-any.whl", hash = "sha256:3e2ca8e33fdc3c411cd101ca395668395dd5dc7ac775b8e809e3def9f9fe041a"},
{file = "poyo-0.5.0.tar.gz", hash = "sha256:e26956aa780c45f011ca9886f044590e2d8fd8b61db7b1c1cf4e0869f48ed4dd"},
]
ptyprocess = [
{file = "ptyprocess-0.6.0-py2.py3-none-any.whl", hash = "sha256:d7cc528d76e76342423ca640335bd3633420dc1366f258cb31d05e865ef5ca1f"},
{file = "ptyprocess-0.6.0.tar.gz", hash = "sha256:923f299cc5ad920c68f2bc0bc98b75b9f838b93b599941a6b63ddbc2476394c0"},
]
py = [
{file = "py-1.10.0-py2.py3-none-any.whl", hash = "sha256:3b80836aa6d1feeaa108e046da6423ab8f6ceda6468545ae8d02d9d58d18818a"},
{file = "py-1.10.0.tar.gz", hash = "sha256:21b81bda15b66ef5e1a777a21c4dcd9c20ad3efd0b3f817e7a809035269e1bd3"},
]
pycodestyle = [
{file = "pycodestyle-2.6.0-py2.py3-none-any.whl", hash = "sha256:2295e7b2f6b5bd100585ebcb1f616591b652db8a741695b3d8f5d28bdc934367"},
{file = "pycodestyle-2.6.0.tar.gz", hash = "sha256:c58a7d2815e0e8d7972bf1803331fb0152f867bd89adf8a01dfd55085434192e"},
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
{file = "pycodestyle-2.8.0.tar.gz", hash = "sha256:eddd5847ef438ea1c7870ca7eb78a9d47ce0cdb4851a5523949f2601d0cbbe7f"},
]
pycparser = [
{file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"},
{file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"},
]
pyflakes = [
{file = "pyflakes-2.2.0-py2.py3-none-any.whl", hash = "sha256:0d94e0e05a19e57a99444b6ddcf9a6eb2e5c68d3ca1e98e90707af8152c90a92"},
{file = "pyflakes-2.2.0.tar.gz", hash = "sha256:35b2d75ee967ea93b55750aa9edbbf72813e06a66ba54438df2cfac9e3c27fc8"},
{file = "pyflakes-2.4.0-py2.py3-none-any.whl", hash = "sha256:3bb3a3f256f4b7968c9c788781e4ff07dce46bdf12339dcda61053375426ee2e"},
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygments = [
{file = "Pygments-2.7.4-py3-none-any.whl", hash = "sha256:bc9591213a8f0e0ca1a5e68a479b4887fdc3e75d0774e5c71c31920c427de435"},
@ -1179,16 +1278,17 @@ pyparsing = [
{file = "pyparsing-2.4.7.tar.gz", hash = "sha256:c203ec8783bf771a155b207279b9bccb8dea02d8f0c9e5f8ead507bc3246ecc1"},
]
pytest = [
{file = "pytest-5.4.3-py3-none-any.whl", hash = "sha256:5c0db86b698e8f170ba4582a492248919255fcd4c79b1ee64ace34301fb589a1"},
{file = "pytest-5.4.3.tar.gz", hash = "sha256:7979331bfcba207414f5e1263b5a0f8f521d0f457318836a7355531ed1a4c7d8"},
{file = "pytest-7.0.1-py3-none-any.whl", hash = "sha256:9ce3ff477af913ecf6321fe337b93a2c0dcf2a0a1439c43f5452112c1e4280db"},
{file = "pytest-7.0.1.tar.gz", hash = "sha256:e30905a0c131d3d94b89624a1cc5afec3e0ba2fbdb151867d8e0ebd49850f171"},
]
pytest-testinfra = [
{file = "pytest-testinfra-6.6.0.tar.gz", hash = "sha256:c2c0af72e51d84f72306045b551a91ac8a2cef2ca1fa87636ee64ceaa0f219c5"},
{file = "pytest_testinfra-6.6.0-py3-none-any.whl", hash = "sha256:3aac5453a8b0e61b00539d8560442e862ce04ce1e31d08fd44500a4e43d3a989"},
]
python-dateutil = [
{file = "python-dateutil-2.8.1.tar.gz", hash = "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c"},
{file = "python_dateutil-2.8.1-py2.py3-none-any.whl", hash = "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a"},
]
python-gilt = [
{file = "python_gilt-1.2.3-py2.py3-none-any.whl", hash = "sha256:e220ea2e7e190ee06dbfa5fafe87967858b4ac0cf53f3072fa6ece4664a42082"},
]
python-slugify = [
{file = "python-slugify-4.0.1.tar.gz", hash = "sha256:69a517766e00c1268e5bbfc0d010a0a8508de0b18d30ad5a1ff357f8ae724270"},
]
@ -1236,8 +1336,8 @@ resolvelib = [
{file = "resolvelib-0.5.5.tar.gz", hash = "sha256:123de56548c90df85137425a3f51eb93df89e2ba719aeb6a8023c032758be950"},
]
rich = [
{file = "rich-9.1.0-py3-none-any.whl", hash = "sha256:5dd934a0f8953b59d9a5d8d58864012174f0b5ad2de687fd04f4df195f7f7066"},
{file = "rich-9.1.0.tar.gz", hash = "sha256:05f1cf4dc191c483867b098d8572546de266440d61911d8270069023e325d14a"},
{file = "rich-10.11.0-py3-none-any.whl", hash = "sha256:44bb3f9553d00b3c8938abf89828df870322b9ba43caf3b12bb7758debdc6dec"},
{file = "rich-10.11.0.tar.gz", hash = "sha256:016fa105f34b69c434e7f908bb5bd7fefa9616efdb218a2917117683a6394ce5"},
]
"ruamel.yaml" = [
{file = "ruamel.yaml-0.16.12-py2.py3-none-any.whl", hash = "sha256:012b9470a0ea06e4e44e99e7920277edf6b46eee0232a04487ea73a7386340a5"},
@ -1280,33 +1380,32 @@ selinux = [
{file = "selinux-0.2.1-py2.py3-none-any.whl", hash = "sha256:820adcf1b4451c9cc7759848797703263ba0eb6a4cad76d73548a9e0d57b7926"},
{file = "selinux-0.2.1.tar.gz", hash = "sha256:d435f514e834e3fdc0941f6a29d086b80b2ea51b28112aee6254bd104ee42a74"},
]
sh = [
{file = "sh-1.13.1-py2.py3-none-any.whl", hash = "sha256:6f792e45b45d039b423081558904981e8ab49572b0c38009fcc65feaab06bcda"},
{file = "sh-1.13.1.tar.gz", hash = "sha256:97a3d2205e3c6a842d87ebbc9ae93acae5a352b1bc4609b428d0fd5bb9e286a3"},
]
shellingham = [
{file = "shellingham-1.3.2-py2.py3-none-any.whl", hash = "sha256:7f6206ae169dc1a03af8a138681b3f962ae61cc93ade84d0585cca3aaf770044"},
{file = "shellingham-1.3.2.tar.gz", hash = "sha256:576c1982bea0ba82fb46c36feb951319d7f42214a82634233f58b40d858a751e"},
]
six = [
{file = "six-1.15.0-py2.py3-none-any.whl", hash = "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced"},
{file = "six-1.15.0.tar.gz", hash = "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259"},
]
subprocess-tee = [
{file = "subprocess-tee-0.3.5.tar.gz", hash = "sha256:ff5cced589a4b8ac973276ca1ba21bb6e3de600cde11a69947ff51f696efd577"},
{file = "subprocess_tee-0.3.5-py3-none-any.whl", hash = "sha256:d34186c639aa7f8013b5dfba80e17f52589539137c9d9205f2ae1c1bd03549e1"},
]
tenacity = [
{file = "tenacity-8.0.1-py3-none-any.whl", hash = "sha256:f78f4ea81b0fabc06728c11dc2a8c01277bfc5181b321a4770471902e3eb844a"},
{file = "tenacity-8.0.1.tar.gz", hash = "sha256:43242a20e3e73291a28bcbcacfd6e000b02d3857a9a9fff56b297a27afdc932f"},
tabulate = [
{file = "tabulate-0.8.9-py3-none-any.whl", hash = "sha256:d7c013fe7abbc5e491394e10fa845f8f32fe54f8dc60c6622c6cf482d25d47e4"},
{file = "tabulate-0.8.9.tar.gz", hash = "sha256:eb1d13f25760052e8931f2ef80aaf6045a6cceb47514db8beab24cded16f13a7"},
]
testinfra = [
{file = "testinfra-5.3.1-py3-none-any.whl", hash = "sha256:9d3a01fb787253df76ac4ab46d18a84d4b01be877ed1b5812e590dcf480a627e"},
{file = "testinfra-5.3.1.tar.gz", hash = "sha256:baf1d809ea2dc22c0cb5b9441bf4e17c1eb653e1ccc02cc63137d0ab467fa1de"},
{file = "testinfra-6.0.0-py3-none-any.whl", hash = "sha256:1a75b5025dbe82ffedec50afeaf9a7f96a8cd1e294f0d40de3a089a369ceae0e"},
{file = "testinfra-6.0.0.tar.gz", hash = "sha256:4225d36e4bb02eb1618429325280c4b62a18cea8a90c91564ee0cc1d31ca331a"},
]
text-unidecode = [
{file = "text-unidecode-1.3.tar.gz", hash = "sha256:bad6603bb14d279193107714b288be206cac565dfa49aa5b105294dd5c4aab93"},
{file = "text_unidecode-1.3-py2.py3-none-any.whl", hash = "sha256:1311f10e8b895935241623731c2ba64f4c455287888b18189350b67134a822e8"},
]
tree-format = [
{file = "tree-format-0.1.2.tar.gz", hash = "sha256:a538523aa78ae7a4b10003b04f3e1b37708e0e089d99c9d3b9e1c71384c9a7f9"},
{file = "tree_format-0.1.2-py2-none-any.whl", hash = "sha256:b5056228dbedde1fb81b79f71fb0c23c98e9d365230df9b29af76e8d8003de11"},
tomli = [
{file = "tomli-1.2.3-py3-none-any.whl", hash = "sha256:e3069e4be3ead9668e21cb9b074cd948f7b3113fd9c8bba083f48247aab8b11c"},
{file = "tomli-1.2.3.tar.gz", hash = "sha256:05b6166bff487dc068d322585c7ea4ef78deed501cc124060e0f238e89a9231f"},
]
typing-extensions = [
{file = "typing_extensions-3.7.4.3-py2-none-any.whl", hash = "sha256:dafc7639cde7f1b6e1acc0f457842a83e722ccca8eef5270af2d74792619a89f"},
@ -1317,17 +1416,16 @@ urllib3 = [
{file = "urllib3-1.26.5-py2.py3-none-any.whl", hash = "sha256:753a0374df26658f99d826cfe40394a686d05985786d946fbe4165b5148f5a7c"},
{file = "urllib3-1.26.5.tar.gz", hash = "sha256:a7acd0977125325f516bda9735fa7142b909a8d01e8b2e4c8108d0984e6e0098"},
]
wcwidth = [
{file = "wcwidth-0.2.5-py2.py3-none-any.whl", hash = "sha256:beb4802a9cebb9144e99086eff703a642a13d6a0052920003a230f3294bbe784"},
{file = "wcwidth-0.2.5.tar.gz", hash = "sha256:c4d647b99872929fdb7bdcaa4fbe7f01413ed3d98077df798530e5b04f116c83"},
wcmatch = [
{file = "wcmatch-8.3-py3-none-any.whl", hash = "sha256:7141d2c85314253f16b38cb3d6cc0fb612918d407e1df3ccc2be7c86cc259c22"},
{file = "wcmatch-8.3.tar.gz", hash = "sha256:371072912398af61d1e4e78609e18801c6faecd3cb36c54c82556a60abc965db"},
]
websocket-client = [
{file = "websocket_client-0.57.0-py2.py3-none-any.whl", hash = "sha256:0fc45c961324d79c781bab301359d5a1b00b13ad1b10415a4780229ef71a5549"},
{file = "websocket_client-0.57.0.tar.gz", hash = "sha256:d735b91d6d1692a6a181f2a8c9e0238e5f6373356f561bb9dc4c7af36f452010"},
]
yamllint = [
{file = "yamllint-1.25.0-py2.py3-none-any.whl", hash = "sha256:c7be4d0d2584a1b561498fa9acb77ad22eb434a109725c7781373ae496d823b3"},
{file = "yamllint-1.25.0.tar.gz", hash = "sha256:b1549cbe5b47b6ba67bdeea31720f5c51431a4d0c076c1557952d841f7223519"},
{file = "yamllint-1.26.3.tar.gz", hash = "sha256:3934dcde484374596d6b52d8db412929a169f6d9e52e20f9ade5bf3523d9b96e"},
]
zipp = [
{file = "zipp-3.4.0-py3-none-any.whl", hash = "sha256:102c24ef8f171fd729d46599845e95c7ab894a4cf45f5de11a44cc7444fb1108"},

View File

@ -8,20 +8,20 @@ authors = ["neonmei <neonmei@pm.me>"]
python = "^3.6"
# Pin ansible version to that currently present on awx
ansible = "==4.2.0"
jinja2 = "^2.11.3"
ansible = "==4.10.0"
jinja2 = "^3.0.3"
[tool.poetry.dev-dependencies]
pytest = "^5.2"
ansible-lint = "^4.3.5"
flake8 = "^3.8.4"
pytest = "^7.0"
ansible-lint = "^5.4.0"
flake8 = "^4.0.1"
selinux = "^0.2.1"
yamllint = "^1.25.0"
yamllint = "^1.26.3"
# minimum version is 3.0.3, because we need the Docker memory limitation feature
# https://github.com/ansible-community/molecule/pull/2615
molecule = {extras = ["docker"], version = "==3.0.8"}
testinfra = "^5.3.1"
molecule = {extras = ["docker"], version = "==3.3.4"}
testinfra = "^6.0.0"
[build-system]
requires = ["poetry>=0.12"]

View File

@ -1,145 +0,0 @@
Ansible Role: Elasticsearch
===========================
An Ansible Role that installs [Elasticsearch](https://www.elastic.co/products/elasticsearch).
Requirements
------------
This role will work on:
* Red Hat
* CentOS
* Fedora
* Debian
* Ubuntu
For the Elasticsearch role with X-Pack security, the `unzip` command must be available on the Ansible master.
Role Variables
--------------
The default variables are listed below, along with their values (see `defaults/main.yml`):
```
elasticsearch_cluster_name: wazuh
elasticsearch_node_name: node-1
elasticsearch_http_port: 9200
elasticsearch_network_host: 127.0.0.1
elasticsearch_jvm_xms: 1g
elastic_stack_version: 5.5.0
```
Example Playbook
----------------
- Single-node
```
- hosts: elasticsearch
roles:
- { role: ansible-role-elasticsearch, elasticsearch_network_host: '192.168.33.182', single_node: true }
```
- Three-node Elasticsearch cluster
```
---
- hosts: 172.16.0.161
roles:
- {role: ../roles/elastic-stack/ansible-elasticsearch, elasticsearch_network_host: '172.16.0.161', elasticsearch_bootstrap_node: true, elasticsearch_cluster_nodes: ['172.16.0.162','172.16.0.163','172.16.0.161']}
- hosts: 172.16.0.162
roles:
- {role: ../roles/elastic-stack/ansible-elasticsearch, elasticsearch_network_host: '172.16.0.162', elasticsearch_node_master: true, elasticsearch_cluster_nodes: ['172.16.0.162','172.16.0.163','172.16.0.161']}
- hosts: 172.16.0.163
roles:
- {role: ../roles/elastic-stack/ansible-elasticsearch, elasticsearch_network_host: '172.16.0.163', elasticsearch_node_master: true, elasticsearch_cluster_nodes: ['172.16.0.162','172.16.0.163','172.16.0.161']}
```
- Three-node Elasticsearch cluster with X-Pack security
```
---
- hosts: elastic-1
roles:
- role: ../roles/elastic-stack/ansible-elasticsearch
elasticsearch_network_host: 172.16.0.111
elasticsearch_node_name: node-1
single_node: false
elasticsearch_node_master: true
elasticsearch_bootstrap_node: true
elasticsearch_cluster_nodes:
- 172.16.0.111
- 172.16.0.112
- 172.16.0.113
elasticsearch_discovery_nodes:
- 172.16.0.111
- 172.16.0.112
- 172.16.0.113
elasticsearch_xpack_security: true
node_certs_generator: true
node_certs_generator_ip: 172.16.0.111
vars:
instances:
node-1:
name: node-1
ip: 172.16.0.111
node-2:
name: node-2
ip: 172.16.0.112
node-3:
name: node-3
ip: 172.16.0.113
- hosts: elastic-2
roles:
- role: ../roles/elastic-stack/ansible-elasticsearch
elasticsearch_network_host: 172.16.0.112
elasticsearch_node_name: node-2
single_node: false
elasticsearch_xpack_security: true
elasticsearch_node_master: true
node_certs_generator_ip: 172.16.0.111
elasticsearch_discovery_nodes:
- 172.16.0.111
- 172.16.0.112
- 172.16.0.113
- hosts: elastic-3
roles:
- role: ../roles/elastic-stack/ansible-elasticsearch
elasticsearch_network_host: 172.16.0.113
elasticsearch_node_name: node-3
single_node: false
elasticsearch_xpack_security: true
elasticsearch_node_master: true
node_certs_generator_ip: 172.16.0.111
elasticsearch_discovery_nodes:
- 172.16.0.111
- 172.16.0.112
- 172.16.0.113
vars:
elasticsearch_xpack_users:
anne:
password: 'PasswordHere'
roles: '["kibana_user", "monitoring_user"]'
jack:
password: 'PasswordHere'
roles: '["superuser"]'
```
It is possible to define users directly in the playbook; they must be declared in the `elasticsearch_xpack_users` variable on the last node of the cluster, as in the example.
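For reference, a minimal sketch of that variable on its own (user name, password and roles below are placeholders):
```
elasticsearch_xpack_users:
  analyst:
    password: 'PasswordHere'
    roles: '["kibana_user"]'
```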
License and copyright
---------------------
WAZUH Copyright (C) 2021 Wazuh Inc. (License GPLv3)
### Based on previous work from geerlingguy
- https://github.com/geerlingguy/ansible-role-elasticsearch
### Modified by Wazuh
The playbooks have been modified by Wazuh, including some specific requirements, templates and configuration to improve integration with the Wazuh ecosystem.

View File

@ -1,44 +0,0 @@
---
elasticsearch_http_port: 9200
elasticsearch_network_host: 127.0.0.1
elasticsearch_reachable_host: 127.0.0.1
elasticsearch_jvm_xms: null
elastic_stack_version: 7.10.2
elasticsearch_lower_disk_requirements: false
elasticsearch_path_repo: []
elasticsearch_start_timeout: 90
elasticrepo:
apt: 'https://artifacts.elastic.co/packages/7.x/apt'
yum: 'https://artifacts.elastic.co/packages/7.x/yum'
gpg: 'https://artifacts.elastic.co/GPG-KEY-elasticsearch'
key_id: '46095ACC8548582C1A2699A9D27D666CD88E42B4'
# Cluster Settings
single_node: true
elasticsearch_cluster_name: wazuh
elasticsearch_node_name: node-1
elasticsearch_bootstrap_node: false
elasticsearch_node_master: false
elasticsearch_cluster_nodes:
- 127.0.0.1
elasticsearch_discovery_nodes:
- 127.0.0.1
elasticsearch_node_data: true
elasticsearch_node_ingest: true
# X-Pack Security
elasticsearch_xpack_security: false
elasticsearch_xpack_security_password: elastic_pass
node_certs_generator: false
node_certs_source: /usr/share/elasticsearch
node_certs_destination: /etc/elasticsearch/certs
# CA generation
master_certs_path: "{{ playbook_dir }}/es_certs"
generate_CA: true
ca_key_name: ""
ca_cert_name: ""
ca_password: ""

View File

@ -1,3 +0,0 @@
---
- name: restart elasticsearch
service: name=elasticsearch state=restarted

View File

@ -1,24 +0,0 @@
---
galaxy_info:
author: Wazuh
description: Installing and maintaining Elasticsearch server.
company: wazuh.com
license: license (GPLv3)
min_ansible_version: 2.0
platforms:
- name: EL
versions:
- all
- name: Ubuntu
versions:
- all
- name: Debian
versions:
- all
- name: Fedora
versions:
- all
galaxy_tags:
- web
- system
- monitoring

View File

@ -1,42 +0,0 @@
---
- name: Debian/Ubuntu | Install apt-transport-https and ca-certificates
apt:
name:
- apt-transport-https
- ca-certificates
state: present
register: elasticsearch_ca_packages_installed
until: elasticsearch_ca_packages_installed is succeeded
- name: Update and upgrade apt packages
become: true
apt:
upgrade: yes
update_cache: yes
cache_valid_time: 86400 #One day
when:
- ansible_distribution == "Ubuntu"
- ansible_distribution_major_version | int == 14
- name: Debian/Ubuntu | Add Elasticsearch GPG key.
apt_key:
url: "{{ elasticrepo.gpg }}"
id: "{{ elasticrepo.key_id }}"
state: present
- name: Debian/Ubuntu | Install Elastic repo
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
state: present
filename: 'elastic_repo_7'
update_cache: true
changed_when: false
- name: Debian/Ubuntu | Install Elasticsearch
apt:
name: "elasticsearch={{ elastic_stack_version }}"
state: present
cache_valid_time: 3600
register: elasticsearch_main_packages_installed
until: elasticsearch_main_packages_installed is succeeded
tags: install

View File

@ -1,6 +0,0 @@
---
- name: Debian/Ubuntu | Removing Elasticsearch repository
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
state: absent
changed_when: false

View File

@ -1,14 +0,0 @@
---
- name: RedHat/CentOS/Fedora | Install Elastic repo
yum_repository:
name: elastic_repo_7
description: Elastic repository for 7.x packages
baseurl: "{{ elasticrepo.yum }}"
gpgkey: "{{ elasticrepo.gpg }}"
gpgcheck: true
changed_when: false
- name: RedHat/CentOS/Fedora | Install Elasticsearch
package: name=elasticsearch-{{ elastic_stack_version }} state=present
tags: install

View File

@ -1,176 +0,0 @@
---
- import_tasks: RedHat.yml
when: ansible_os_family == 'RedHat'
- import_tasks: Debian.yml
when: ansible_os_family == "Debian"
- name: Create elasticsearch.service.d folder.
file:
path: /etc/systemd/system/elasticsearch.service.d
state: directory
owner: root
group: root
mode: 0755
when:
- ansible_service_mgr == "systemd"
- name: Configure Elasticsearch System Resources.
template:
src: elasticsearch_systemd.conf.j2
dest: /etc/systemd/system/elasticsearch.service.d/elasticsearch.conf
owner: root
group: elasticsearch
mode: 0660
notify: restart elasticsearch
tags: configure
when:
- ansible_service_mgr == "systemd"
- name: Debian/Ubuntu | Configure Elasticsearch System Resources.
template:
src: elasticsearch_nonsystemd.j2
dest: /etc/default/elasticsearch
owner: root
group: elasticsearch
mode: 0660
notify: restart elasticsearch
tags: configure
when:
- ansible_service_mgr != "systemd"
- ansible_os_family == "Debian"
- name: RedHat/CentOS/Fedora | Configure Elasticsearch System Resources.
template:
src: elasticsearch_nonsystemd.j2
dest: /etc/sysconfig/elasticsearch
owner: root
group: elasticsearch
mode: 0660
notify: restart elasticsearch
tags: configure
when:
- ansible_service_mgr != "systemd"
- ansible_os_family == "RedHat"
- name: Configure Elasticsearch JVM memory.
template:
src: jvm.options.j2
dest: /etc/elasticsearch/jvm.options
owner: root
group: elasticsearch
mode: 0660
notify: restart elasticsearch
tags: configure
# fix in new PR (ignore_errors)
- import_tasks: "RMRedHat.yml"
when: ansible_os_family == "RedHat"
- import_tasks: "xpack_security.yml"
when:
- elasticsearch_xpack_security
- name: Configure Elasticsearch.
template:
src: elasticsearch.yml.j2
dest: /etc/elasticsearch/elasticsearch.yml
owner: root
group: elasticsearch
mode: 0660
notify: restart elasticsearch
tags: configure
- name: Trusty | set MAX_LOCKED_MEMORY=unlimited in Elasticsearch in /etc/security/limits.conf
lineinfile: # noqa 208
path: /etc/security/limits.conf
line: elasticsearch - memlock unlimited
create: yes
become: true
when:
- ansible_distribution == "Ubuntu"
- ansible_distribution_major_version | int == 14
changed_when: false
- name: Trusty | set MAX_LOCKED_MEMORY=unlimited in Elasticsearch in /etc/security/limits.d/elasticsearch.conf
lineinfile:
path: /etc/security/limits.d/elasticsearch.conf
line: elasticsearch - memlock unlimited
owner: root
group: root
mode: 0644
create: yes
become: true
changed_when: false
when:
- ansible_distribution == "Ubuntu"
- ansible_distribution_major_version | int == 14
- name: Ensure extra time for Elasticsearch to start on reboots
lineinfile:
path: /usr/lib/systemd/system/elasticsearch.service
regexp: '^TimeoutStartSec='
line: "TimeoutStartSec={{ elasticsearch_start_timeout }}"
become: yes
tags: configure
- name: Ensure Elasticsearch started and enabled
service:
name: elasticsearch
enabled: true
state: started
tags:
- configure
- init
- name: Make sure Elasticsearch is running before proceeding
wait_for: host={{ elasticsearch_reachable_host }} port={{ elasticsearch_http_port }} delay=3 timeout=400
tags:
- configure
- init
- import_tasks: "RMRedHat.yml"
when: ansible_os_family == "RedHat"
- import_tasks: "RMDebian.yml"
when: ansible_os_family == "Debian"
- name: Wait for Elasticsearch API
uri:
url: "https://{{ node_certs_generator_ip }}:{{ elasticsearch_http_port }}/_cluster/health/"
user: "elastic" # Default Elasticsearch user is always "elastic"
password: "{{ elasticsearch_xpack_security_password }}"
validate_certs: no
status_code: 200,401
return_content: yes
force_basic_auth: yes
timeout: 4
register: _result
until: ( _result.json is defined) and (_result.json.status == "green")
retries: 24
delay: 5
when:
- elasticsearch_xpack_users is defined
- name: Create elasticsearch users
uri:
url: "https://{{ node_certs_generator_ip }}:{{ elasticsearch_http_port }}/_security/user/{{ item.key }}"
method: POST
body_format: json
user: "elastic"
password: "{{ elasticsearch_xpack_security_password }}"
body: '{ "password" : "{{ item.value["password"] }}", "roles" : {{ item.value["roles"] }} }'
validate_certs: no
force_basic_auth: yes
loop: "{{ elasticsearch_xpack_users|default({})|dict2items }}"
register: http_response
failed_when: http_response.status != 200
when:
- elasticsearch_xpack_users is defined
- name: Reload systemd configuration
systemd:
daemon_reload: true
become: yes
notify: restart elasticsearch

View File

@ -1,209 +0,0 @@
- name: Check if certificate exists locally
stat:
path: "{{ node_certs_destination }}/{{ elasticsearch_node_name }}.crt"
register: certificate_file_exists
- name: Write the instances.yml file in the selected node (force = no)
template:
src: instances.yml.j2
dest: "{{ node_certs_source }}/instances.yml"
owner: root
group: root
mode: 0644
force: no
register: instances_file_exists
tags:
- config
- xpack-security
when:
- node_certs_generator
- not certificate_file_exists.stat.exists
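# Note: force: no on the template task above means an existing instances.yml on the generator
# node is never overwritten, so the instance list is only written the first time certificates
# are generated.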
- name: Update instances.yml status after generation
stat:
path: "{{ node_certs_source }}/instances.yml"
register: instances_file_exists
when:
- node_certs_generator
- name: Check if the certificates ZIP file exists
stat:
path: "{{ node_certs_source }}/certs.zip"
register: xpack_certs_zip
when:
- node_certs_generator
- name: Importing custom CA key
copy:
src: "{{ master_certs_path }}/ca/{{ ca_key_name }}"
dest: "{{ node_certs_source }}/{{ ca_key_name }}"
mode: 0440
when:
- not generate_CA
- node_certs_generator
tags: xpack-security
- name: Importing custom CA cert
copy:
src: "{{ master_certs_path }}/ca/{{ ca_cert_name }}"
dest: "{{ node_certs_source }}/{{ ca_cert_name }}"
mode: 0440
when:
- not generate_CA
- node_certs_generator
tags: xpack-security
- name: Generating certificates for Elasticsearch security (generating CA)
command: >-
/usr/share/elasticsearch/bin/elasticsearch-certutil cert ca --pem
--in {{ node_certs_source }}/instances.yml
--out {{ node_certs_source }}/certs.zip
when:
- node_certs_generator
- not xpack_certs_zip.stat.exists
- generate_CA
tags:
- xpack-security
- molecule-idempotence-notest
- name: Generating certificates for Elasticsearch security (using provided CA | Without CA Password)
command: >-
/usr/share/elasticsearch/bin/elasticsearch-certutil cert
--ca-key {{ node_certs_source }}/{{ ca_key_name }}
--ca-cert {{ node_certs_source }}/{{ ca_cert_name }}
--pem --in {{ node_certs_source }}/instances.yml
--out {{ node_certs_source }}/certs.zip
when:
- node_certs_generator
- not xpack_certs_zip.stat.exists
- not generate_CA
- ca_password | length == 0
tags:
- xpack-security
- molecule-idempotence-notest
- name: Generating certificates for Elasticsearch security (using provided CA | Using CA Password)
command: >-
/usr/share/elasticsearch/bin/elasticsearch-certutil cert
--ca-key {{ node_certs_source }}/{{ ca_key_name }}
--ca-cert {{ node_certs_source }}/{{ ca_cert_name }}
--pem --in {{ node_certs_source }}/instances.yml --out {{ node_certs_source }}/certs.zip
--ca-pass {{ ca_password }}
when:
- node_certs_generator
- not xpack_certs_zip.stat.exists
- not generate_CA
- ca_password | length > 0
tags:
- xpack-security
- molecule-idempotence-notest
- name: Verify the Elastic certificates directory
file:
path: "{{ master_certs_path }}"
state: directory
mode: 0700
delegate_to: "127.0.0.1"
become: no
when:
- node_certs_generator
- name: Verify the Certificates Authority directory
file:
path: "{{ master_certs_path }}/ca/"
state: directory
mode: 0700
delegate_to: "127.0.0.1"
become: no
when:
- node_certs_generator
- name: Copying certificates to Ansible master
fetch:
src: "{{ node_certs_source }}/certs.zip"
dest: "{{ master_certs_path }}/"
flat: yes
mode: 0700
when:
- node_certs_generator
tags:
- xpack-security
- molecule-idempotence-notest
- name: Delete certs.zip in Generator node
file:
state: absent
path: "{{ node_certs_source }}/certs.zip"
when:
- node_certs_generator
tags: molecule-idempotence-notest
- name: Unzip generated certs.zip
unarchive:
src: "{{ master_certs_path }}/certs.zip"
dest: "{{ master_certs_path }}/"
delegate_to: "127.0.0.1"
become: no
when:
- node_certs_generator
tags:
- xpack-security
- molecule-idempotence-notest
- name: Copying node's certificate from master
copy:
src: "{{ item }}"
dest: "{{ node_certs_destination }}/"
owner: root
group: elasticsearch
mode: 0440
with_items:
- "{{ master_certs_path }}/{{ elasticsearch_node_name }}/{{ elasticsearch_node_name }}.key"
- "{{ master_certs_path }}/{{ elasticsearch_node_name }}/{{ elasticsearch_node_name }}.crt"
- "{{ master_certs_path }}/ca/ca.crt"
when:
- generate_CA
tags:
- xpack-security
- molecule-idempotence-notest
- name: Copying node's certificate from master (Custom CA)
copy:
src: "{{ item }}"
dest: "{{ node_certs_destination }}/"
owner: root
group: elasticsearch
mode: 0440
with_items:
- "{{ master_certs_path }}/{{ elasticsearch_node_name }}/{{ elasticsearch_node_name }}.key"
- "{{ master_certs_path }}/{{ elasticsearch_node_name }}/{{ elasticsearch_node_name }}.crt"
- "{{ master_certs_path }}/ca/{{ ca_cert_name }}"
when:
- not generate_CA
tags:
- xpack-security
- molecule-idempotence-notest
- name: Ensuring folder permissions
file:
path: "{{ node_certs_destination }}/"
owner: root
group: elasticsearch
mode: 0770
state: directory
recurse: no
when:
- elasticsearch_xpack_security
- generate_CA
tags: xpack-security
- name: Set elasticsearch bootstrap password
shell: |
set -o pipefail
echo {{ elasticsearch_xpack_security_password }} | {{ node_certs_source }}/bin/elasticsearch-keystore add -xf bootstrap.password
args:
executable: /bin/bash
when:
- node_certs_generator
tags: molecule-idempotence-notest

View File

@ -1,70 +0,0 @@
# {{ ansible_managed }}
cluster.name: {{ elasticsearch_cluster_name }}
node.name: {{ elasticsearch_node_name }}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: {{ elasticsearch_network_host }}
{% if elasticsearch_path_repo | length>0 %}
path.repo:
{% for item in elasticsearch_path_repo %}
- {{ item }}
{% endfor %}
{% endif %}
{% if single_node %}
discovery.type: single-node
{% elif elasticsearch_bootstrap_node %}
node.master: true
cluster.initial_master_nodes:
{% for item in elasticsearch_cluster_nodes %}
- {{ item }}
{% endfor %}
discovery.seed_hosts:
{% for item in elasticsearch_discovery_nodes %}
- {{ item }}
{% endfor %}
{% else %}
node.master: {{ elasticsearch_node_master|lower }}
{% if elasticsearch_node_data|lower == 'false' %}
node.data: false
{% endif %}
{% if elasticsearch_node_ingest|lower == 'false' %}
node.ingest: false
{% endif %}
discovery.seed_hosts:
{% for item in elasticsearch_discovery_nodes %}
- {{ item }}
{% endfor %}
{% endif %}
{% if elasticsearch_lower_disk_requirements %}
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 200mb
cluster.routing.allocation.disk.watermark.low: 500mb
cluster.routing.allocation.disk.watermark.high: 300mb
{% endif %}
{% if elasticsearch_xpack_security %}
# XPACK Security
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: {{node_certs_destination}}/{{ elasticsearch_node_name }}.key
xpack.security.transport.ssl.certificate: {{node_certs_destination}}/{{ elasticsearch_node_name }}.crt
{% if generate_CA == true %}
xpack.security.transport.ssl.certificate_authorities: [ "{{ node_certs_destination }}/ca.crt" ]
{% elif generate_CA == false %}
xpack.security.transport.ssl.certificate_authorities: [ "{{ node_certs_destination }}/{{ca_cert_name}}" ]
{% endif %}
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.verification_mode: certificate
xpack.security.http.ssl.key: {{node_certs_destination}}/{{ elasticsearch_node_name }}.key
xpack.security.http.ssl.certificate: {{node_certs_destination}}/{{ elasticsearch_node_name }}.crt
{% if generate_CA == true %}
xpack.security.http.ssl.certificate_authorities: [ "{{ node_certs_destination }}/ca.crt" ]
{% elif generate_CA == false %}
xpack.security.http.ssl.certificate_authorities: [ "{{ node_certs_destination }}/{{ca_cert_name}}" ]
{% endif %}
{% endif %}

View File

@ -1,52 +0,0 @@
# {{ ansible_managed }}
################################
# Elasticsearch
################################
# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch
# Elasticsearch Java path
#JAVA_HOME=
# Elasticsearch configuration directory
ES_PATH_CONF=/etc/elasticsearch
# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch
# Additional Java OPTS
#ES_JAVA_OPTS=
# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true
################################
# Elasticsearch service
################################
# SysV init.d
#
# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5
################################
# System properties
################################
# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65536
# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
MAX_LOCKED_MEMORY=unlimited
# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144

View File

@ -1,3 +0,0 @@
# {{ ansible_managed }}
[Service]
LimitMEMLOCK=infinity

View File

@ -1,17 +0,0 @@
# {{ ansible_managed }}
# TO-DO
{% if node_certs_generator %}
instances:
{% for (key,value) in instances.items() %}
- name: "{{ value.name }}"
{% if value.ip is defined and value.ip | length > 0 %}
ip:
- "{{ value.ip }}"
{% elif value.dns is defined and value.dns | length > 0 %}
dns:
- "{{ value.dns }}"
{% endif %}
{% endfor %}
{% endif %}

View File

@ -1,140 +0,0 @@
#jinja2: trim_blocks:False
# {{ ansible_managed }}
## JVM configuration
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
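# Heap sizing logic below: when elasticsearch_jvm_xms is set it is used as the heap size in MB,
# capped at 32000m; otherwise the heap defaults to half of the host's total memory, with the
# same 32000m cap.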
{% if elasticsearch_jvm_xms is not none %}
{% if elasticsearch_jvm_xms < 32000 %}
-Xms{{ elasticsearch_jvm_xms }}m
-Xmx{{ elasticsearch_jvm_xms }}m
{% else %}
-Xms32000m
-Xmx32000m
{% endif %}
{% else %}
-Xms{% if ansible_memtotal_mb < 64000 %}{{ ((ansible_memtotal_mb|int)/2)|int }}m{% else %}32000m{% endif %}
-Xmx{% if ansible_memtotal_mb < 64000 %}{{ ((ansible_memtotal_mb|int)/2)|int }}m{% else %}32000m{% endif %}
{% endif %}
################################################################
## Expert settings
################################################################
##
## All settings below this section are considered
## expert settings. Don't tamper with them unless
## you understand what you are doing
##
################################################################
## GC configuration
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly
## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly
14-:-XX:+UseG1GC
14-:-XX:G1ReservePercent=25
14-:-XX:InitiatingHeapOccupancyPercent=30
## JVM temporary directory
-Djava.io.tmpdir=${ES_TMPDIR}
## optimizations
# pre-touch memory pages used by the JVM during initialization
-XX:+AlwaysPreTouch
## basic
# force the server VM
-server
# explicitly set the stack size
-Xss1m
# set to headless, just in case
-Djava.awt.headless=true
# ensure UTF-8 encoding by default (e.g. filenames)
-Dfile.encoding=UTF-8
# use our provided JNA always versus the system one
-Djna.nosys=true
# turn off a JDK optimization that throws away stack traces for common
# exceptions because stack traces are important for debugging
-XX:-OmitStackTraceInFastThrow
# flags to configure Netty
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
# log4j 2
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
## heap dumps
# generate a heap dump when an allocation from the Java heap fails
# heap dumps are created in the working directory of the JVM
-XX:+HeapDumpOnOutOfMemoryError
# specify an alternative path for heap dumps
# ensure the directory exists and has sufficient space
-XX:HeapDumpPath=/var/lib/elasticsearch
# specify an alternative path for JVM fatal error logs
-XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log
## GC logging
## JDK 8 GC logging
# 8:-XX:+PrintGCDetails
# 8:-XX:+PrintGCDateStamps
# 8:-XX:+PrintTenuringDistribution
# 8:-XX:+PrintGCApplicationStoppedTime
# 8:-Xloggc:/var/log/elasticsearch/gc.log
# 8:-XX:+UseGCLogFileRotation
# 8:-XX:NumberOfGCLogFiles=32
# 8:-XX:GCLogFileSize=64m
# JDK 9+ GC logging
# 9-:-Xlog:gc*,gc+age=trace,safepoint:file=/var/log/elasticsearch/gc.log:utctime,pid,tags:filecount=32,filesize=64m
# log GC status to a file with time stamps
# ensure the directory exists
#-Xloggc:${loggc}
# By default, the GC log file will not rotate.
# By uncommenting the lines below, the GC log file
# will be rotated every 128MB at most 32 times.
#-XX:+UseGCLogFileRotation
#-XX:NumberOfGCLogFiles=32
#-XX:GCLogFileSize=128M

View File

@ -1,48 +0,0 @@
Ansible Role: Kibana for Elastic Stack
------------------------------------
An Ansible Role that installs [Kibana](https://www.elastic.co/products/kibana) and [Wazuh APP](https://github.com/wazuh/wazuh-kibana-app).
Requirements
------------
This role will work on:
* Red Hat
* CentOS
* Fedora
* Debian
* Ubuntu
Role Variables
--------------
```
---
elasticsearch_http_port: "9200"
elasticsearch_network_host: "127.0.0.1"
kibana_server_host: "0.0.0.0"
kibana_server_port: "5601"
elastic_stack_version: 5.5.0
```
Example Playbook
----------------
```
- hosts: kibana
roles:
- { role: ansible-role-kibana, elasticsearch_network_host: '192.168.33.182' }
```
License and copyright
---------------------
WAZUH Copyright (C) 2021 Wazuh Inc. (License GPLv3)
### Based on previous work from geerlingguy
- https://github.com/geerlingguy/ansible-role-elasticsearch
### Modified by Wazuh
The playbooks have been modified by Wazuh, including some specific requirements, templates and configuration to improve integration with the Wazuh ecosystem.

View File

@ -1,5 +0,0 @@
---
- name: restart kibana
service:
name: kibana
state: restarted

View File

@ -1,24 +0,0 @@
---
galaxy_info:
author: Wazuh
description: Installing and maintaining Kibana server.
company: wazuh.com
license: license (GPLv3)
min_ansible_version: 2.0
platforms:
- name: EL
versions:
- all
- name: Fedora
versions:
- all
- name: Debian
versions:
- all
- name: Ubuntu
versions:
- all
galaxy_tags:
- web
- system
- monitoring

View File

@ -1,32 +0,0 @@
---
- name: Debian/Ubuntu | Install apt-transport-https and ca-certificates
apt:
name:
- apt-transport-https
- ca-certificates
state: present
register: kibana_installing_ca_package
until: kibana_installing_ca_package is succeeded
- name: Debian/Ubuntu | Add Elasticsearch GPG key
apt_key:
url: "{{ elasticrepo.gpg }}"
id: "{{ elasticrepo.key_id }}"
state: present
- name: Debian/Ubuntu | Install Elastic repo
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
state: present
filename: 'elastic_repo_7'
update_cache: true
changed_when: false
- name: Debian/Ubuntu | Install Kibana
apt:
name: "kibana={{ elastic_stack_version }}"
state: present
cache_valid_time: 3600
register: installing_kibana_package
until: installing_kibana_package is succeeded
tags: install

View File

@ -1,6 +0,0 @@
---
- name: Debian/Ubuntu | Removing Elasticsearch repository
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
state: absent
changed_when: false

View File

@ -1,6 +0,0 @@
---
- name: Remove Elasticsearch repository (and clean up left-over metadata)
yum_repository:
name: elastic_repo_7
state: absent
changed_when: false

View File

@ -1,15 +0,0 @@
---
- name: RedHat/CentOS/Fedora | Install Elastic repo
yum_repository:
name: elastic_repo_7
description: Elastic repository for 7.x packages
baseurl: "{{ elasticrepo.yum }}"
gpgkey: "{{ elasticrepo.gpg }}"
gpgcheck: true
changed_when: false
- name: RedHat/CentOS/Fedora | Install Kibana
package: name=kibana-{{ elastic_stack_version }} state=present
register: installing_kibana_package
until: installing_kibana_package is succeeded
tags: install

View File

@ -1,76 +0,0 @@
---
- name: Ensure the Git package is present
package:
name: git
state: present
- name: Modify repo url if host is in Debian family
set_fact:
node_js_repo_type: deb
when:
- ansible_os_family | lower == "debian"
- name: Download script to install Nodejs repository
get_url:
url: "https://{{ nodejs['repo_dict'][ansible_os_family|lower] }}.{{ nodejs['repo_url_ext'] }}"
dest: "/tmp/setup_nodejs_repo.sh"
mode: 0700
- name: Execute downloaded script to install Nodejs repo
command: /tmp/setup_nodejs_repo.sh
register: node_repo_installation_result
changed_when: false
- name: Install Nodejs
package:
name: nodejs
state: present
- name: Install yarn dependency to build the Wazuh Kibana Plugin
# Using shell due to errors when evaluating text containing '@' with the command module
shell: "npm install -g {{ 'yarn' }}{{ '@' }}{{ '1.10.1'}}" # noqa 305
register: install_yarn_result
changed_when: install_yarn_result.rc == 0
- name: Remove old wazuh-kibana-app git directory
file:
path: /tmp/app
state: absent
changed_when: false
- name: Clone wazuh-kibana-app repository # Using command as git module doesn't cover single-branch nor depth
command: git clone https://github.com/wazuh/wazuh-kibana-app -b {{ wazuh_plugin_branch }} --single-branch --depth=1 app # noqa 303
register: clone_app_repo_result
changed_when: false
args:
chdir: "/tmp"
- name: Executing yarn to build the package
command: "{{ item }}"
with_items:
- "yarn"
- "yarn build"
register: yarn_execution_result
changed_when: false
args:
chdir: "/tmp/app/"
- name: Obtain name of generated package
shell: "find ./ -name 'wazuh-*.zip' -printf '%f\\n'"
register: wazuhapp_package_name
changed_when: false
args:
chdir: "/tmp/app/build"
- name: Install Wazuh Plugin (can take a while)
shell: NODE_OPTIONS="{{ node_options }}" /usr/share/kibana/bin/kibana-plugin install file:///tmp/app/build/{{ wazuhapp_package_name.stdout }}
args:
executable: /bin/bash
creates: /usr/share/kibana/plugins/wazuh/package.json
chdir: /usr/share/kibana
become: yes
become_user: kibana
notify: restart kibana
tags:
- install
- skip_ansible_lint

View File

@ -1,189 +0,0 @@
---
- name: Stopping early, trying to compile Wazuh Kibana Plugin on Debian 10 is not possible
fail:
msg: "It's not possible to compile the Wazuh Kibana plugin on Debian 10 due to: https://github.com/wazuh/wazuh-kibana-app/issues/1924"
when:
- build_from_sources
- ansible_distribution == "Debian"
- ansible_distribution_major_version == "10"
- import_tasks: RedHat.yml
when: ansible_os_family == 'RedHat'
- import_tasks: Debian.yml
when: ansible_os_family == 'Debian'
- name: Copying node's certificate from master
copy:
src: "{{ item }}"
dest: "{{ node_certs_destination }}/"
owner: root
group: kibana
mode: 0440
with_items:
- "{{ master_certs_path }}/{{ kibana_node_name }}/{{ kibana_node_name }}.key"
- "{{ master_certs_path }}/{{ kibana_node_name }}/{{ kibana_node_name }}.crt"
- "{{ master_certs_path }}/ca/ca.crt"
tags: xpack-security
when:
- kibana_xpack_security
- generate_CA
- name: Copying node's certificate from master (Custom CA)
copy:
src: "{{ item }}"
dest: "{{ node_certs_destination }}/"
owner: root
group: kibana
mode: 0440
with_items:
- "{{ master_certs_path }}/{{ kibana_node_name }}/{{ kibana_node_name }}.key"
- "{{ master_certs_path }}/{{ kibana_node_name }}/{{ kibana_node_name }}.crt"
- "{{ master_certs_path }}/ca/{{ ca_cert_name }}"
when:
- kibana_xpack_security
- not generate_CA
tags: xpack-security
- name: Ensuring certificates folder owner and permissions
file:
path: "{{ node_certs_destination }}/"
state: directory
recurse: no
owner: kibana
group: kibana
mode: 0770
when:
- kibana_xpack_security
notify: restart kibana
tags: xpack-security
- name: Kibana configuration
template:
src: kibana.yml.j2
dest: /etc/kibana/kibana.yml
owner: root
group: root
mode: 0644
notify: restart kibana
tags: configure
- name: Checking Wazuh-APP version
shell: >-
grep -c -E 'version.*{{ elastic_stack_version }}' /usr/share/kibana/plugins/wazuh/package.json
args:
executable: /bin/bash
removes: /usr/share/kibana/plugins/wazuh/package.json
register: wazuh_app_verify
changed_when: false
failed_when:
- wazuh_app_verify.rc != 0
- wazuh_app_verify.rc != 1
- name: Removing old Wazuh-APP
command: /usr/share/kibana/bin/kibana-plugin --allow-root remove wazuh
when: wazuh_app_verify.rc == 1
tags: install
- name: Removing bundles
file:
path: /usr/share/kibana/data/bundles
state: absent
when: wazuh_app_verify.rc == 1
tags: install
- name: Explicitly starting Kibana to generate "wazuh-"
service:
name: kibana
state: started
- name: Ensuring Kibana directory owner
file:
# noqa 208
path: "/usr/share/kibana"
state: directory
owner: kibana
group: kibana
recurse: yes
- name: Build and Install Wazuh Kibana Plugin from sources
import_tasks: build_wazuh_plugin.yml
when:
- build_from_sources is defined
- build_from_sources
- name: Install Wazuh Plugin (can take a while)
shell: >-
NODE_OPTIONS="{{ node_options }}" /usr/share/kibana/bin/kibana-plugin install
{{ wazuh_app_url }}-{{ wazuh_version }}_{{ elastic_stack_version }}-1.zip
args:
executable: /bin/bash
creates: /usr/share/kibana/plugins/wazuh/package.json
chdir: /usr/share/kibana
become: yes
become_user: kibana
notify: restart kibana
tags:
- install
- skip_ansible_lint
when:
- not build_from_sources
- name: Kibana optimization (can take a while)
shell: /usr/share/kibana/node/bin/node {{ node_options }} /usr/share/kibana/src/cli/cli.js --optimize -c {{ kibana_conf_path }}/kibana.yml
args:
executable: /bin/bash
creates: /usr/share/kibana/data/wazuh/
become: yes
become_user: kibana
tags:
- skip_ansible_lint
- name: Wait for Elasticsearch port
wait_for: host={{ elasticsearch_network_host }} port={{ elasticsearch_http_port }}
- name: Select correct API protocol
set_fact:
elastic_api_protocol: "{% if kibana_xpack_security %}https{% else %}http{% endif %}"
- name: Attempting to delete legacy Wazuh index if exists
uri:
url: "{{ elastic_api_protocol }}://{{ elasticsearch_network_host }}:{{ elasticsearch_http_port }}/.wazuh"
method: DELETE
user: "{{ elasticsearch_xpack_security_user }}"
password: "{{ elasticsearch_xpack_security_password }}"
validate_certs: no
status_code: 200, 404
force_basic_auth: yes
- name: Create wazuh plugin config directory
file:
path: /usr/share/kibana/data/wazuh/config/
state: directory
recurse: yes
owner: kibana
group: kibana
mode: 0751
changed_when: False
- name: Configure Wazuh Kibana Plugin
template:
src: wazuh.yml.j2
dest: /usr/share/kibana/data/wazuh/config/wazuh.yml
owner: kibana
group: kibana
mode: 0751
changed_when: False
- name: Ensure Kibana is started and enabled
service:
name: kibana
enabled: true
state: started
- import_tasks: RMRedHat.yml
when: ansible_os_family == 'RedHat'
- import_tasks: RMDebian.yml
when: ansible_os_family == 'Debian'

View File

@ -1,121 +0,0 @@
# {{ ansible_managed }}
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: {{ kibana_server_port }}
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: {{ kibana_server_host }}
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
{% if kibana_xpack_security %}
elasticsearch.hosts: "https://{{ elasticsearch_network_host }}:{{ elasticsearch_http_port }}"
{% else %}
elasticsearch.hosts: "http://{{ elasticsearch_network_host }}:{{ elasticsearch_http_port }}"
{% endif %}
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "discover"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Paths to the PEM-format SSL certificate and SSL key files, respectively. These
# files enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.cert: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.cert: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.ca: /path/to/your/CA.pem
# To disregard the validity of SSL certificates, change this setting's value to false.
#elasticsearch.ssl.verify: true
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you to specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Xpack Security
{% if kibana_xpack_security %}
elasticsearch.username: "{{ elasticsearch_xpack_security_user }}"
elasticsearch.password: "{{ elasticsearch_xpack_security_password }}"
server.ssl.enabled: true
server.ssl.key: "{{node_certs_destination}}/{{ kibana_node_name }}.key"
server.ssl.certificate: "{{node_certs_destination}}/{{ kibana_node_name }}.crt"
elasticsearch.ssl.verificationMode: "{{ kibana_ssl_verification_mode }}"
{% if generate_CA == true %}
elasticsearch.ssl.certificateAuthorities: ["{{ node_certs_destination }}/ca.crt"]
{% elif generate_CA == false %}
elasticsearch.ssl.certificateAuthorities: ["{{ node_certs_destination }}/{{ca_cert_name}}"]
{% endif %}
{% endif %}
server.defaultRoute: /app/wazuh

View File

@ -1,134 +0,0 @@
---
#
# Wazuh app - App configuration file
# Copyright (C) 2015-2019 Wazuh, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Find more information about this on the LICENSE file.
#
# ======================== Wazuh app configuration file ========================
#
# Please check the documentation for more information on configuration options:
# https://documentation.wazuh.com/current/installation-guide/index.html
#
# Also, you can check our repository:
# https://github.com/wazuh/wazuh-kibana-app
#
# ------------------------------- Index patterns -------------------------------
#
# Default index pattern to use.
#pattern: wazuh-alerts-4.x-*
#
# ----------------------------------- Checks -----------------------------------
#
# Defines which checks must be considered by the healthcheck
# step once the Wazuh app starts. Values must be true or false.
#checks.pattern : true
#checks.template: true
#checks.api : true
#checks.setup : true
#
# --------------------------------- Extensions ---------------------------------
#
# Defines which extensions should be activated when you add a new API entry.
# You can change them after the Wazuh app starts.
# Values must be true or false.
#extensions.pci : true
#extensions.gdpr : true
#extensions.hipaa : true
#extensions.nist : true
#extensions.audit : true
#extensions.oscap : false
#extensions.ciscat : false
#extensions.aws : false
#extensions.virustotal: false
#extensions.osquery : false
#extensions.docker : false
#
# ---------------------------------- Time out ----------------------------------
#
# Defines the maximum timeout to be used on the Wazuh app requests.
# It will be ignored if it is below 1500.
# It means milliseconds before we consider a request as failed.
# Default: 20000
#timeout: 20000
#
# ------------------------------ Advanced indices ------------------------------
#
# Configure .wazuh indices shards and replicas.
#wazuh.shards : 1
#wazuh.replicas : 0
#
# --------------------------- Index pattern selector ---------------------------
#
# Defines if the user is allowed to change the selected
# index pattern directly from the Wazuh app top menu.
# Default: true
#ip.selector: true
#
# List of index patterns to be ignored
#ip.ignore: []
#
# -------------------------------- X-Pack RBAC ---------------------------------
#
# Custom setting to enable/disable built-in X-Pack RBAC security capabilities.
# Default: enabled
#xpack.rbac.enabled: true
#
# ------------------------------ wazuh-monitoring ------------------------------
#
# Custom setting to enable/disable wazuh-monitoring indices.
# Values: true, false, worker
# If worker is given as value, the app will show the Agents status
# visualization but won't insert data on wazuh-monitoring indices.
# Default: true
#wazuh.monitoring.enabled: true
#
# Custom setting to set the frequency for wazuh-monitoring indices cron task.
# Default: 900 (s)
#wazuh.monitoring.frequency: 900
#
# Configure wazuh-monitoring-4.x-* indices shards and replicas.
#wazuh.monitoring.shards: 2
#wazuh.monitoring.replicas: 0
#
# Configure wazuh-monitoring-4.x-* indices custom creation interval.
# Values: h (hourly), d (daily), w (weekly), m (monthly)
# Default: d
#wazuh.monitoring.creation: d
#
# Default index pattern to use for Wazuh monitoring
#wazuh.monitoring.pattern: wazuh-monitoring-4.x-*
#
#
# ------------------------------- App privileges --------------------------------
#admin: true
#
# ------------------------------- App logging level -----------------------------
# Set the logging level for the Wazuh App log files.
# Default value: info
# Allowed values: info, debug
#logs.level: info
#
#-------------------------------- API entries -----------------------------------
#The following configuration is the default structure to define an API entry.
#
#hosts:
# - <id>:
# url: http(s)://<url>
# port: <port>
# user: <user>
# password: <password>
hosts:
{% for api in wazuh_api_credentials %}
- {{ api['id'] }}:
url: {{ api['url'] }}
port: {{ api['port'] }}
username: {{ api['username'] }}
password: {{ api['password'] }}
{% endfor %}

View File

@ -1,69 +0,0 @@
---
# Cluster Settings
opendistro_version: 1.13.2
single_node: false
elasticsearch_node_name: node-1
opendistro_cluster_name: wazuh
elasticsearch_network_host: '0.0.0.0'
elasticsearch_node_master: true
elasticsearch_node_data: true
elasticsearch_node_ingest: true
elasticsearch_start_timeout: 90
elasticsearch_lower_disk_requirements: false
elasticsearch_cluster_nodes:
- 127.0.0.1
elasticsearch_discovery_nodes:
- 127.0.0.1
local_certs_path: "{{ playbook_dir }}/opendistro/certificates"
# Minimum master nodes in the cluster, 2 for a 3-node Elasticsearch cluster
minimum_master_nodes: 2
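# Rule of thumb: (number of master-eligible nodes / 2) + 1, rounded down, e.g. 2 for three nodes, 3 for five.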
# Configure hostnames for Elasticsearch nodes
# Example es1.example.com, es2.example.com
domain_name: wazuh.com
# The OpenDistro package repository
package_repos:
yum:
opendistro:
baseurl: 'https://packages.wazuh.com/4.x/yum/'
gpg: 'https://packages.wazuh.com/key/GPG-KEY-WAZUH'
apt:
opendistro:
baseurl: 'deb https://packages.wazuh.com/4.x/apt/ stable main'
gpg: 'https://packages.wazuh.com/key/GPG-KEY-WAZUH'
openjdk:
baseurl: 'deb http://deb.debian.org/debian stretch-backports main'
opendistro_sec_plugin_conf_path: /usr/share/elasticsearch/plugins/opendistro_security/securityconfig
opendistro_sec_plugin_tools_path: /usr/share/elasticsearch/plugins/opendistro_security/tools
opendistro_conf_path: /etc/elasticsearch/
# Security password
opendistro_custom_user: ""
opendistro_custom_user_role: "admin"
# Set JVM memory limits
opendistro_jvm_xms: null
opendistro_http_port: 9200
certs_gen_tool_version: 1.8
# URL of the Search Guard certificates generator tool
certs_gen_tool_url: "https://search.maven.org/remotecontent?filepath=com/floragunn/search-guard-tlstool/{{ certs_gen_tool_version }}/search-guard-tlstool-{{ certs_gen_tool_version }}.zip"
opendistro_admin_password: changeme
opendistro_kibana_password: changeme
# Deployment settings
generate_certs: true
perform_installation: true
opendistro_nolog_sensible: true

View File

@ -1,5 +0,0 @@
---
- name: restart elasticsearch
service:
name: elasticsearch
state: restarted

View File

@ -1,6 +0,0 @@
---
- name: RedHat/CentOS/Fedora | Remove Elasticsearch repository (and clean up left-over metadata)
yum_repository:
name: opendistro_repo
state: absent
changed_when: false

View File

@ -1,50 +0,0 @@
---
- block:
- name: RedHat/CentOS/Fedora | Add OpenDistro repo
yum_repository:
file: opendistro
name: opendistro_repo
description: Opendistro yum repository
baseurl: "{{ package_repos.yum.opendistro.baseurl }}"
gpgkey: "{{ package_repos.yum.opendistro.gpg }}"
gpgcheck: true
changed_when: false
- name: RedHat/CentOS/Fedora | Install OpenJDK 11
yum:
name: java-11-openjdk-devel
state: present
when:
- ansible_distribution != 'Amazon'
- name: Amazon Linux | Install OpenJDK 11
block:
- name: Install Amazon extras
yum:
name: amazon-linux-extras
state: present
- name: Install OpenJDK 11
shell: amazon-linux-extras install java-openjdk11 -y
when:
- ansible_distribution == 'Amazon'
- name: RedHat/CentOS/Fedora | Install OpenDistro dependencies
yum:
name: "{{ packages }}"
vars:
packages:
- wget
- unzip
- name: Install OpenDistro
package:
name: opendistroforelasticsearch-{{ opendistro_version }}
state: present
register: install
tags: install
tags:
- install

View File

@ -1,87 +0,0 @@
---
- name: Check if certificates already exists
stat:
path: "{{ local_certs_path }}"
register: certificates_folder
delegate_to: localhost
become: no
tags:
- generate-certs
- block:
- name: Local action | Create local temporary directory for certificates generation
file:
path: "{{ local_certs_path }}"
mode: 0755
state: directory
- name: Local action | Check that the generation tool exists
stat:
path: "{{ local_certs_path }}/search-guard-tlstool-{{ certs_gen_tool_version }}.zip"
register: tool_package
- name: Local action | Download certificates generation tool
get_url:
url: "{{ certs_gen_tool_url }}"
dest: "{{ local_certs_path }}/search-guard-tlstool-{{ certs_gen_tool_version }}.zip"
when: not tool_package.stat.exists
- name: Local action | Extract the certificates generation tool
unarchive:
src: "{{ local_certs_path }}/search-guard-tlstool-{{ certs_gen_tool_version }}.zip"
dest: "{{ local_certs_path }}/"
- name: Local action | Add the execution bit to the binary
file:
dest: "{{ local_certs_path }}/tools/sgtlstool.sh"
mode: a+x
- name: Local action | Prepare the certificates generation template file
template:
src: "templates/tlsconfig.yml.j2"
dest: "{{ local_certs_path }}/config/tlsconfig.yml"
mode: 0644
register: tlsconfig_template
- name: Create a directory if it does not exist
file:
path: "{{ local_certs_path }}/certs/"
state: directory
mode: '0755'
- name: Local action | Check if root CA file exists
stat:
path: "{{ local_certs_path }}/certs/root-ca.key"
register: root_ca_file
- name: Local action | Generate the node & admin certificates in local
command: >-
{{ local_certs_path }}/tools/sgtlstool.sh
-c {{ local_certs_path }}/config/tlsconfig.yml
-ca -crt
-t {{ local_certs_path }}/certs/
-f -o
when:
- not root_ca_file.stat.exists
- tlsconfig_template.changed
- name: Local action | Generate the node & admin certificates using an existing root CA
command: >-
{{ local_certs_path }}/tools/sgtlstool.sh
-c {{ local_certs_path }}/config/tlsconfig.yml
-crt
-t {{ local_certs_path }}/certs/
-f
when:
- root_ca_file.stat.exists
- tlsconfig_template.changed
run_once: true
delegate_to: localhost
become: no
tags:
- generate-certs
when:
- not certificates_folder.stat.exists

View File

@ -1,118 +0,0 @@
---
- import_tasks: local_actions.yml
when:
- generate_certs
- block:
- import_tasks: RedHat.yml
when: ansible_os_family == 'RedHat'
- import_tasks: Debian.yml
when: ansible_os_family == 'Debian'
- name: Remove performance analyzer plugin from elasticsearch
become: true
command: ./elasticsearch-plugin remove opendistro-performance-analyzer
ignore_errors: true
args:
chdir: /usr/share/elasticsearch/bin/
register: remove_elasticsearch_performance_analyzer
failed_when:
- remove_elasticsearch_performance_analyzer.rc != 0
- '"not found" not in remove_elasticsearch_performance_analyzer.stderr'
changed_when: "remove_elasticsearch_performance_analyzer.rc == 0"
- name: Remove elasticsearch configuration file
file:
path: "{{ opendistro_conf_path }}/elasticsearch.yml"
state: absent
tags: install
- name: Copy Configuration File
blockinfile:
block: "{{ lookup('template', 'elasticsearch.yml.j2') }}"
dest: "{{ opendistro_conf_path }}/elasticsearch.yml"
create: true
group: elasticsearch
mode: 0640
marker: "## {mark} Opendistro general settings ##"
tags: install
- include_tasks: security_actions.yml
tags:
- security
- name: Configure OpenDistro Elasticsearch JVM memory.
template:
src: "templates/jvm.options.j2"
dest: /etc/elasticsearch/jvm.options
owner: root
group: elasticsearch
mode: 0644
force: yes
notify: restart elasticsearch
tags: install
- name: Ensure extra time for Elasticsearch to start on reboots
lineinfile:
path: /usr/lib/systemd/system/elasticsearch.service
regexp: '^TimeoutStartSec='
line: "TimeoutStartSec={{ elasticsearch_start_timeout }}"
become: yes
tags: configure
- name: Ensure Elasticsearch started and enabled
service:
name: elasticsearch
enabled: true
state: started
- name: Wait for Elasticsearch API
uri:
url: "https://{{ inventory_hostname if not single_node else elasticsearch_network_host }}:{{ opendistro_http_port }}/_cluster/health/"
user: "admin" # Default OpenDistro user is always "admin"
password: "{{ opendistro_admin_password }}"
validate_certs: no
status_code: 200,401
return_content: yes
timeout: 4
register: _result
until:
- _result.json is defined
- _result.json.status == "green" or ( _result.json.status == "yellow" and single_node )
retries: 24
delay: 5
tags: debug
when:
- hostvars[inventory_hostname]['private_ip'] is not defined or not hostvars[inventory_hostname]['private_ip']
- name: Wait for Elasticsearch API (Private IP)
uri:
url: "https://{{ hostvars[inventory_hostname]['private_ip'] if not single_node else elasticsearch_network_host }}:{{ opendistro_http_port }}/_cluster/health/"
user: "admin" # Default OpenDistro user is always "admin"
password: "{{ opendistro_admin_password }}"
validate_certs: no
status_code: 200,401
return_content: yes
timeout: 4
register: _result
until:
- _result.json is defined
- _result.json.status == "green" or ( _result.json.status == "yellow" and single_node )
retries: 24
delay: 5
tags: debug
when:
- hostvars[inventory_hostname]['private_ip'] is defined and hostvars[inventory_hostname]['private_ip']
- import_tasks: "RMRedHat.yml"
when: ansible_os_family == "RedHat"
- name: Reload systemd configuration
systemd:
daemon_reload: true
become: yes
notify: restart elasticsearch
when: perform_installation

View File

@ -1,129 +0,0 @@
- name: Remove demo certs
file:
path: "{{ item }}"
state: absent
with_items:
- "{{ opendistro_conf_path }}/kirk.pem"
- "{{ opendistro_conf_path }}/kirk-key.pem"
- "{{ opendistro_conf_path }}/esnode.pem"
- "{{ opendistro_conf_path }}/esnode-key.pem"
- name: Configure IP (Private address)
set_fact:
target_address: "{{ hostvars[inventory_hostname]['private_ip'] if not single_node else elasticsearch_network_host }}"
when:
- hostvars[inventory_hostname]['private_ip'] is defined
- name: Configure IP (Public address)
set_fact:
target_address: "{{ inventory_hostname if not single_node else elasticsearch_network_host }}"
when:
- hostvars[inventory_hostname]['private_ip'] is not defined
- name: Copy the node & admin certificates to Elasticsearch cluster
copy:
src: "{{ local_certs_path }}/certs/{{ item }}"
dest: /etc/elasticsearch/
mode: 0644
with_items:
- root-ca.pem
- root-ca.key
- "{{ elasticsearch_node_name }}.key"
- "{{ elasticsearch_node_name }}.pem"
- "{{ elasticsearch_node_name }}_http.key"
- "{{ elasticsearch_node_name }}_http.pem"
- "{{ elasticsearch_node_name }}_elasticsearch_config_snippet.yml"
- admin.key
- admin.pem
- name: Copy the OpenDistro security configuration file to cluster
blockinfile:
block: "{{ lookup('file', snippet_path ) }}"
dest: "{{ opendistro_conf_path }}/elasticsearch.yml"
insertafter: EOF
marker: "## {mark} Opendistro Security Node & Admin certificates configuration ##"
vars:
snippet_path: '{{ local_certs_path }}/certs/{{ elasticsearch_node_name }}_elasticsearch_config_snippet.yml'
- name: Prepare the OpenDistro security configuration file
replace:
path: "{{ opendistro_conf_path }}/elasticsearch.yml"
regexp: 'searchguard'
replace: 'opendistro_security'
tags: local
- name: Restart elasticsearch with security configuration
systemd:
name: elasticsearch
state: restarted
- name: Copy the OpenDistro security internal users template
template:
src: "templates/internal_users.yml.j2"
dest: "{{ opendistro_sec_plugin_conf_path }}/internal_users.yml"
mode: 0644
run_once: true
- name: Hashing the custom admin password
command: "{{ opendistro_sec_plugin_tools_path }}/hash.sh -p {{ opendistro_admin_password }}" # noqa 301
register: opendistro_admin_password_hashed
no_log: '{{ opendistro_nolog_sensible | bool }}'
run_once: true
- name: Set the Admin user password
replace:
path: "{{ opendistro_sec_plugin_conf_path }}/internal_users.yml"
regexp: '(?<=admin:\n hash: )(.*)(?=)'
replace: "{{ odfe_password_hash | quote }}"
vars:
odfe_password_hash: "{{ opendistro_admin_password_hashed.stdout_lines | last }}"
run_once: true
# this can also be achieved with password_hash, but it requires dependencies on the controller
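# A minimal sketch of that alternative (illustrative only, assuming passlib is installed on the
# controller; the resulting hash variant would still need to be verified against what the
# OpenDistro security plugin accepts):
#   - set_fact:
#       odfe_password_hash: "{{ opendistro_kibana_password | password_hash('bcrypt') }}"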
- name: Hash the kibanaserver role/user password
command: "{{ opendistro_sec_plugin_tools_path }}/hash.sh -p {{ opendistro_kibana_password }}" # noqa 301
register: opendistro_kibanaserver_password_hashed
no_log: '{{ opendistro_nolog_sensible | bool }}'
run_once: true
- name: Set the kibanaserver user password
replace:
path: "{{ opendistro_sec_plugin_conf_path }}/internal_users.yml"
regexp: '(?<=kibanaserver:\n hash: )(.*)(?=)'
replace: "{{ odfe_password_hash | quote }}"
vars:
odfe_password_hash: "{{ opendistro_kibanaserver_password_hashed.stdout_lines | last }}"
run_once: true
- name: Initialize the OpenDistro security index in elasticsearch
command: >
{{ opendistro_sec_plugin_tools_path }}/securityadmin.sh
-cacert {{ opendistro_conf_path }}/root-ca.pem
-cert {{ opendistro_conf_path }}/admin.pem
-key {{ opendistro_conf_path }}/admin.key
-cd {{ opendistro_sec_plugin_conf_path }}/
-nhnv -icl
-h {{ target_address }}
run_once: true # noqa 301
- name: Create custom user
uri:
url: "https://{{ target_address }}:{{ opendistro_http_port }}/_opendistro/_security/api/internalusers/{{ opendistro_custom_user }}"
method: PUT
user: "admin" # Default OpenDistro user is always "admin"
password: "{{ opendistro_admin_password }}"
body: |
{
"password": "{{ opendistro_admin_password }}",
"backend_roles": ["{{ opendistro_custom_user_role }}"]
}
body_format: json
validate_certs: no
status_code: 200,201,401
return_content: yes
timeout: 4
when:
- opendistro_custom_user is defined and opendistro_custom_user

View File

@ -1,44 +0,0 @@
cluster.name: {{ opendistro_cluster_name }}
node.name: {{ elasticsearch_node_name }}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: {{ elasticsearch_network_host }}
node.master: {{ elasticsearch_node_master|lower }}
{% if single_node == true %}
discovery.type: single-node
{% else %}
cluster.initial_master_nodes:
{% for item in elasticsearch_cluster_nodes %}
- {{ item }}
{% endfor %}
discovery.seed_hosts:
{% for item in elasticsearch_discovery_nodes %}
- {{ item }}
{% endfor %}
{% endif %}
{% if elasticsearch_node_data|lower == 'false' %}
node.data: false
{% endif %}
{% if elasticsearch_node_ingest|lower == 'false' %}
node.ingest: false
{% endif %}
{% if elasticsearch_lower_disk_requirements %}
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 200mb
cluster.routing.allocation.disk.watermark.low: 500mb
cluster.routing.allocation.disk.watermark.high: 300mb
{% endif %}
discovery.zen.minimum_master_nodes: "{{ minimum_master_nodes }}"
opendistro_security.allow_default_init_securityindex: true
opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]

View File

@ -1,3 +0,0 @@
---
- name: restart kibana
service: name=kibana state=restarted

View File

@ -1,23 +0,0 @@
---
- block:
- include_vars: debian.yml
- name: Add apt repository signing key
apt_key:
url: "{{ package_repos.apt.opendistro.gpg }}"
state: present
- name: Debian systems | Add OpenDistro repo
apt_repository:
repo: "{{ package_repos.apt.opendistro.baseurl }}"
state: present
update_cache: yes
- name: Install Kibana
apt:
name: "opendistroforelasticsearch-kibana={{ kibana_opendistro_version }}"
state: present
register: install
tags:
- install

View File

@ -1,6 +0,0 @@
---
- name: Remove Elasticsearch repository (and clean up left-over metadata)
yum_repository:
name: opendistro_repo
state: absent
changed_when: false

View File

@ -1,20 +0,0 @@
---
- block:
- name: RedHat/CentOS/Fedora | Add OpenDistro repo
yum_repository:
file: opendistro
name: opendistro_repo
description: Opendistro yum repository
baseurl: "{{ package_repos.yum.opendistro.baseurl }}"
gpgkey: "{{ package_repos.yum.opendistro.gpg }}"
gpgcheck: true
- name: Install Kibana
package:
name: "opendistroforelasticsearch-kibana-{{ kibana_opendistro_version }}"
state: present
register: install
tags:
- install

View File

@ -1,76 +0,0 @@
---
- name: Ensure the Git package is present
package:
name: git
state: present
- name: Modify repo url if host is in Debian family
set_fact:
node_js_repo_type: deb
when:
- ansible_os_family | lower == "debian"
- name: Download script to install Nodejs repository
get_url:
url: "https://{{ nodejs['repo_dict'][ansible_os_family|lower] }}.{{ nodejs['repo_url_ext'] }}"
dest: "/tmp/setup_nodejs_repo.sh"
mode: 0700
- name: Execute downloaded script to install Nodejs repo
command: /tmp/setup_nodejs_repo.sh
register: node_repo_installation_result
changed_when: false
- name: Install Nodejs
package:
name: nodejs
state: present
- name: Install yarn dependency to build the Wazuh Kibana Plugin
# Using shell due to errors when evaluating text between @ with command
shell: "npm install -g {{ 'yarn' }}{{ '@' }}{{ '1.10.1'}}" # noqa 305
register: install_yarn_result
changed_when: install_yarn_result == 0
- name: Remove old wazuh-kibana-app git directory
file:
path: /tmp/app
state: absent
changed_when: false
- name: Clone wazuh-kibana-app repository # Using command; the git module supports depth, and single_branch since ansible-core 2.11 (see the sketch after this file)
command: git clone https://github.com/wazuh/wazuh-kibana-app -b {{ wazuh_plugin_branch }} --single-branch --depth=1 app # noqa 303
register: clone_app_repo_result
changed_when: false
args:
chdir: "/tmp"
- name: Executing yarn to build the package
command: "{{ item }}"
with_items:
- "yarn"
- "yarn build"
register: yarn_execution_result
changed_when: false
args:
chdir: "/tmp/app/"
- name: Obtain name of generated package
shell: "find ./ -name 'wazuh-*.zip' -printf '%f\\n'"
register: wazuhapp_package_name
changed_when: false
args:
chdir: "/tmp/app/build"
- name: Install Wazuh Plugin (can take a while)
shell: NODE_OPTIONS="{{ node_options }}" /usr/share/kibana/bin/kibana-plugin install file:///tmp/app/build/{{ wazuhapp_package_name.stdout }}
args:
executable: /bin/bash
creates: /usr/share/kibana/plugins/wazuh/package.json
chdir: /usr/share/kibana
become: yes
become_user: kibana
notify: restart kibana
tags:
- install
- skip_ansible_lint
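As a hedged alternative to the command-based clone above: ansible.builtin.git supports depth, and single_branch since ansible-core 2.11, so on a recent controller the same shallow, single-branch clone could be sketched as:

```
# Sketch only: equivalent of `git clone -b ... --single-branch --depth=1`;
# single_branch needs ansible-core 2.11 or newer.
- name: Clone wazuh-kibana-app repository (git module variant)
  ansible.builtin.git:
    repo: https://github.com/wazuh/wazuh-kibana-app
    dest: /tmp/app
    version: "{{ wazuh_plugin_branch }}"
    depth: 1
    single_branch: true
```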

View File

@ -1,36 +0,0 @@
# {{ ansible_managed }}
# Description:
# Default Kibana configuration for Open Distro.
server.port: {{ kibana_server_port }}
#server.basePath: ""
server.maxPayloadBytes: {{ kibana_max_payload_bytes }}
server.name: {{ kibana_server_name }}
server.host: {{ kibana_server_host }}
{% if kibana_opendistro_security %}
elasticsearch.hosts: "https://{{ elasticsearch_network_host }}:{{ elasticsearch_http_port }}"
elasticsearch.username: {{ opendistro_kibana_user }}
elasticsearch.password: {{ opendistro_kibana_password }}
server.ssl.enabled: true
server.ssl.certificate: "/usr/share/kibana/{{ kibana_node_name }}_http.pem"
server.ssl.key: "/usr/share/kibana/{{ kibana_node_name }}_http.key"
elasticsearch.ssl.certificateAuthorities: ["/usr/share/kibana/root-ca.pem"]
elasticsearch.ssl.verificationMode: full
{% else %}
elasticsearch.hosts: "http://{{ elasticsearch_network_host }}:{{ elasticsearch_http_port }}"
{% endif %}
elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]
opendistro_security.multitenancy.enabled: true
opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
newsfeed.enabled: {{ kibana_newsfeed_enabled }}
telemetry.optIn: {{ kibana_telemetry_optin }}
telemetry.enabled: {{ kibana_telemetry_enabled }}
server.defaultRoute: /app/wazuh?security_tenant=global

View File

@ -1,3 +0,0 @@
---
kibana_opendistro_version: 1.13.2

View File

@ -19,7 +19,7 @@ Role Variables
Available variables are listed below, along with default values (see `defaults/main.yml`):
```
filebeat_output_elasticsearch_hosts:
filebeat_output_indexer_hosts:
- "localhost:9200"
```

View File

@ -1,9 +1,11 @@
---
filebeat_version: 7.10.2
wazuh_template_branch: v4.4.0
wazuh_template_branch: 4.4
filebeat_output_elasticsearch_hosts:
filebeat_node_name: node-1
filebeat_output_indexer_hosts:
- "localhost:9200"
filebeat_module_package_url: https://packages.wazuh.com/4.x/filebeat
@ -11,17 +13,17 @@ filebeat_module_package_name: wazuh-filebeat-0.1.tar.gz
filebeat_module_package_path: /tmp/
filebeat_module_destination: /usr/share/filebeat/module
filebeat_module_folder: /usr/share/filebeat/module/wazuh
elasticsearch_security_user: admin
elasticsearch_security_password: changeme
indexer_security_user: admin
indexer_security_password: changeme
# Security plugin
filebeat_security: true
filebeat_ssl_dir: /etc/pki/filebeat
# Local path to store the generated certificates (OpenDistro security plugin)
local_certs_path: ./opendistro/certificates
# Local path to store the generated certificates (Opensearch security plugin)
local_certs_path: "{{ playbook_dir }}/indexer/certificates"
elasticrepo:
apt: 'https://artifacts.elastic.co/packages/oss-7.x/apt'
yum: 'https://artifacts.elastic.co/packages/oss-7.x/yum'
gpg: 'https://artifacts.elastic.co/GPG-KEY-elasticsearch'
key_id: '46095ACC8548582C1A2699A9D27D666CD88E42B4'
filebeatrepo:
apt: 'deb https://packages.wazuh.com/4.x/apt/ stable main'
yum: 'https://packages.wazuh.com/4.x/yum/'
gpg: 'https://packages.wazuh.com/key/GPG-KEY-WAZUH'
key_id: '0DCFCA5547B19D2A6099506096B3EE5F29111145'
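The defaults above are normally overridden per deployment. A minimal, hypothetical play (role path, host group and credentials are assumptions, not taken from this diff) overriding the indexer connection settings:

```
# Hypothetical usage sketch; adjust the role path to this repository's layout.
- hosts: wazuh_managers
  become: true
  roles:
    - role: ../roles/wazuh/ansible-filebeat-oss   # assumed path
      filebeat_node_name: wazuh-manager-1
      filebeat_output_indexer_hosts:
        - "10.0.0.10"   # the filebeat.yml template appends the port itself
        - "10.0.0.11"
      indexer_security_user: admin
      indexer_security_password: "{{ vault_indexer_password }}"   # e.g. from Ansible Vault
```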

View File

@ -11,13 +11,13 @@
- name: Debian/Ubuntu | Add Elasticsearch apt key.
apt_key:
url: "{{ elasticrepo.gpg }}"
id: "{{ elasticrepo.key_id }}"
url: "{{ filebeatrepo.gpg }}"
id: "{{ filebeatrepo.key_id }}"
state: present
- name: Debian/Ubuntu | Add Filebeat-oss repository.
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
repo: "{{ filebeatrepo.apt }}"
state: present
update_cache: true
changed_when: false

View File

@ -1,6 +1,6 @@
---
- name: Debian/Ubuntu | Remove Filebeat repository (and clean up left-over metadata)
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
repo: "{{ filebeatrepo.apt }}"
state: absent
changed_when: false

View File

@ -1,6 +1,6 @@
---
- name: RedHat/CentOS/Fedora | Remove Filebeat repository (and clean up left-over metadata)
yum_repository:
name: elastic_oss-repo_7
name: wazuh_repo
state: absent
changed_when: false

View File

@ -1,9 +1,9 @@
---
- name: RedHat/CentOS/Fedora/Amazon Linux | Install Filebeats repo
yum_repository:
name: elastic_oss-repo_7
description: Elastic repository for 7.x packages
baseurl: "{{ elasticrepo.yum }}"
gpgkey: "{{ elasticrepo.gpg }}"
name: wazuh_repo
description: Wazuh Repo
baseurl: "{{ filebeatrepo.yum }}"
gpgkey: "{{ filebeatrepo.gpg }}"
gpgcheck: true
changed_when: false

View File

@ -6,17 +6,17 @@
state: directory
owner: root
group: root
mode: 0774
mode: 500
- name: Copy the certificates from local to the Manager instance
copy:
src: "{{ local_certs_path }}/certs/{{ item }}"
src: "{{ local_certs_path }}/wazuh-certificates/{{ item }}"
dest: "{{ filebeat_ssl_dir }}"
owner: root
group: root
mode: 0644
mode: 400
with_items:
- "{{ filebeat_node_name }}.key"
- "{{ filebeat_node_name }}-key.pem"
- "{{ filebeat_node_name }}.pem"
- "root-ca.pem"

View File

@ -1,5 +1,3 @@
# Wazuh - Filebeat configuration file
# Wazuh - Filebeat configuration file
filebeat.modules:
- module: wazuh
@ -14,19 +12,22 @@ setup.template.json.name: 'wazuh'
setup.template.overwrite: true
setup.ilm.enabled: false
# Send events directly to Elasticsearch
# Send events directly to Wazuh indexer
output.elasticsearch:
hosts: {{ filebeat_output_elasticsearch_hosts | to_json }}
hosts:
{% for item in filebeat_output_indexer_hosts %}
- {{ item }}:9200
{% endfor %}
{% if filebeat_security %}
username: {{ elasticsearch_security_user }}
password: {{ elasticsearch_security_password }}
username: {{ indexer_security_user }}
password: {{ indexer_security_password }}
protocol: https
ssl.certificate_authorities:
- {{ filebeat_ssl_dir }}/root-ca.pem
ssl.certificate: "{{ filebeat_ssl_dir }}/{{ filebeat_node_name }}.pem"
ssl.key: "{{ filebeat_ssl_dir }}/{{ filebeat_node_name }}.key"
ssl.key: "{{ filebeat_ssl_dir }}/{{ filebeat_node_name }}-key.pem"
{% endif %}
# Optional. Send events to Logstash instead of Elasticsearch
# Optional. Send events to Logstash instead of Wazuh indexer
#output.logstash.hosts: ["YOUR_LOGSTASH_SERVER_IP:5000"]
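A hedged render of the resulting output block makes the host handling explicit: as shown in this diff, the template appends :9200 to every entry of filebeat_output_indexer_hosts, so with the shipped default of "localhost:9200" the host would render as localhost:9200:9200; entries are presumably expected without an explicit port. Assuming the role defaults otherwise:

```
# Illustrative render, assuming filebeat_output_indexer_hosts: ["localhost"]
# and the default node name and credentials from defaults/main.yml.
output.elasticsearch:
  hosts:
    - localhost:9200
  username: admin
  password: changeme
  protocol: https
  ssl.certificate_authorities:
    - /etc/pki/filebeat/root-ca.pem
  ssl.certificate: "/etc/pki/filebeat/node-1.pem"
  ssl.key: "/etc/pki/filebeat/node-1-key.pem"
```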

View File

@ -1,38 +0,0 @@
Ansible Role: Filebeat for Elastic Stack
------------------------------------
An Ansible Role that installs [Filebeat](https://www.elastic.co/products/beats/filebeat), this can be used in conjunction with [ansible-wazuh-manager](https://github.com/wazuh/wazuh-ansible/ansible-wazuh-server).
Requirements
------------
This role will work on:
* Red Hat
* CentOS
* Fedora
* Debian
* Ubuntu
Role Variables
--------------
Available variables are listed below, along with default values (see `defaults/main.yml`):
```
filebeat_output_elasticsearch_hosts:
- "localhost:9200"
```
License and copyright
---------------------
WAZUH Copyright (C) 2021 Wazuh Inc. (License GPLv3)
### Based on previous work from geerlingguy
- https://github.com/geerlingguy/ansible-role-filebeat
### Modified by Wazuh
The playbooks have been modified by Wazuh, including some specific requirements, templates and configuration to improve integration with Wazuh ecosystem.

View File

@ -1,5 +0,0 @@
---
- name: restart filebeat
service:
name: filebeat
state: restarted

View File

@ -1,29 +0,0 @@
---
dependencies: []
galaxy_info:
author: Wazuh
description: Installing and maintaining filebeat server.
company: wazuh.com
license: license (GPLv3)
min_ansible_version: 2.0
platforms:
- name: EL
versions:
- 6
- 7
- name: Fedora
versions:
- all
- name: Debian
versions:
- jessie
- name: Ubuntu
versions:
- precise
- trusty
- xenial
galaxy_tags:
- web
- system
- monitoring

View File

@ -1,23 +0,0 @@
---
- name: Debian/Ubuntu | Install apt-transport-https, ca-certificates and acl
apt:
name:
- apt-transport-https
- ca-certificates
- acl
state: present
register: filebeat_ca_packages_install
until: filebeat_ca_packages_install is succeeded
- name: Debian/Ubuntu | Add Elasticsearch apt key.
apt_key:
url: "{{ elasticrepo.gpg }}"
id: "{{ elasticrepo.key_id }}"
state: present
- name: Debian/Ubuntu | Add Filebeat repository.
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
state: present
update_cache: true
changed_when: false

View File

@ -1,6 +0,0 @@
---
- name: Debian/Ubuntu | Remove Filebeat repository (and clean up left-over metadata)
apt_repository:
repo: "deb {{ elasticrepo.apt }} stable main"
state: absent
changed_when: false

View File

@ -1,6 +0,0 @@
---
- name: RedHat/CentOS/Fedora | Remove Filebeat repository (and clean up left-over metadata)
yum_repository:
name: elastic_repo_7
state: absent
changed_when: false

Some files were not shown because too many files have changed in this diff.