Remove modules which have ended their deprecation cycle
* Remove code but leave the metadata so that the modules can be listed as removed in the documentation.
* Remove the removed modules from the validate-modules ignore list.
* Remove unit tests for the removed modules.
* Remove links to removed modules and add a list of removed modules to the 2.9 porting guide.
This commit is contained in:

parent e5a31e81b6
commit a1c8fc37e8

29 changed files with 135 additions and 9130 deletions
@@ -196,12 +196,12 @@ Deprecation notices

 The following modules will be removed in Ansible 2.9. Please update your playbooks accordingly.

 * Apstra's ``aos_*`` modules are deprecated as they do not work with AOS 2.1 or higher. See new modules at `https://github.com/apstra <https://github.com/apstra>`_.
-* :ref:`nxos_ip_interface <nxos_ip_interface_module>` use :ref:`nxos_l3_interface <nxos_l3_interface_module>` instead.
-* :ref:`nxos_portchannel <nxos_portchannel_module>` use :ref:`nxos_linkagg <nxos_linkagg_module>` instead.
-* :ref:`nxos_switchport <nxos_switchport_module>` use :ref:`nxos_l2_interface <nxos_l2_interface_module>` instead.
-* :ref:`panos_security_policy <panos_security_policy_module>` use :ref:`panos_security_rule <panos_security_rule_module>` instead.
-* :ref:`panos_nat_policy <panos_nat_policy_module>` use :ref:`panos_nat_rule <panos_nat_rule_module>` instead.
-* :ref:`vsphere_guest <vsphere_guest_module>` use :ref:`vmware_guest <vmware_guest_module>` instead.
+* nxos_ip_interface use :ref:`nxos_l3_interface <nxos_l3_interface_module>` instead.
+* nxos_portchannel use :ref:`nxos_linkagg <nxos_linkagg_module>` instead.
+* nxos_switchport use :ref:`nxos_l2_interface <nxos_l2_interface_module>` instead.
+* panos_security_policy use :ref:`panos_security_rule <panos_security_rule_module>` instead.
+* panos_nat_policy use :ref:`panos_nat_rule <panos_nat_rule_module>` instead.
+* vsphere_guest use :ref:`vmware_guest <vmware_guest_module>` instead.
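
To make the playbook updates called for above concrete, a task written against one of the deprecated modules maps onto its replacement roughly as follows. This is an illustrative sketch only: the option names used here (interface on nxos_switchport; name, mode and access_vlan on nxos_l2_interface) are assumed from the modules' documentation and should be verified against the release you actually run.

# Before: deprecated module (removed in Ansible 2.9)
- nxos_switchport:
    interface: Ethernet1/5
    mode: access
    access_vlan: 20

# After: documented replacement (hedged sketch; verify option names)
- nxos_l2_interface:
    name: Ethernet1/5
    mode: access
    access_vlan: 20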

 Noteworthy module changes
 -------------------------

@@ -45,7 +45,16 @@ Modules removed

 The following modules no longer exist:

-* No notable changes
+* Apstra's ``aos_*`` modules. See the new modules at `https://github.com/apstra <https://github.com/apstra>`_.
+* ec2_ami_find
+* kubernetes
+* nxos_ip_interface use :ref:`nxos_l3_interface <nxos_l3_interface_module>` instead.
+* nxos_portchannel use :ref:`nxos_linkagg <nxos_linkagg_module>` instead.
+* nxos_switchport use :ref:`nxos_l2_interface <nxos_l2_interface_module>` instead.
+* oc
+* panos_nat_policy use :ref:`panos_nat_rule <panos_nat_rule_module>` instead.
+* panos_security_policy use :ref:`panos_security_rule <panos_security_rule_module>` instead.
+* vsphere_guest use :ref:`vmware_guest <vmware_guest_module>` instead.
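
As an illustration of what "no longer exist" means for playbooks, a minimal vsphere_guest task might translate to vmware_guest along these lines. This is a hedged sketch: the vmware_guest options shown (hostname, datacenter, name, state) are assumed from its documentation, the two modules do not map one-to-one, and values such as vcenter.example.com are placeholders.

# Before: vsphere_guest (removed in Ansible 2.9)
- vsphere_guest:
    vcenter_hostname: vcenter.example.com
    username: admin
    password: secret
    guest: web01
    state: powered_on

# After: vmware_guest (sketch; check option names and state values for your release)
- vmware_guest:
    hostname: vcenter.example.com
    username: admin
    password: secret
    datacenter: DC1
    name: web01
    state: poweredon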

 Deprecation notices

@@ -18,3 +18,4 @@ Please note that this is not a complete list. If you believe any extra informati
 porting_guide_2.6
 porting_guide_2.7
 porting_guide_2.8
+porting_guide_2.9
|
@@ -7,417 +7,12 @@ __metaclass__ = type
|
|||
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = r'''
|
||||
---
|
||||
module: ec2_ami_find
|
||||
version_added: '2.0'
|
||||
short_description: Searches for AMIs to obtain the AMI ID and other information
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: Various AWS modules have been combined and replaced with M(ec2_ami_facts).
|
||||
alternative: Use M(ec2_ami_facts) instead.
|
||||
description:
|
||||
- Returns list of matching AMIs with AMI ID, along with other useful information
|
||||
- Can search AMIs with different owners
|
||||
- Can search by matching tag(s), by AMI name and/or other criteria
|
||||
- Results can be sorted and sliced
|
||||
author: "Tom Bamford (@tombamford)"
|
||||
notes:
|
||||
- This module is not backwards compatible with the previous version of the ec2_search_ami module which worked only for Ubuntu AMIs listed on
|
||||
cloud-images.ubuntu.com.
|
||||
- See the example below for a suggestion of how to search by distro/release.
|
||||
options:
|
||||
region:
|
||||
description:
|
||||
- The AWS region to use.
|
||||
required: true
|
||||
aliases: [ 'aws_region', 'ec2_region' ]
|
||||
owner:
|
||||
description:
|
||||
- Search AMIs owned by the specified owner
|
||||
- Can specify an AWS account ID, or one of the special IDs 'self', 'amazon' or 'aws-marketplace'
|
||||
- If not specified, all EC2 AMIs in the specified region will be searched.
|
||||
- You can include wildcards in many of the search options. An asterisk (*) matches zero or more characters, and a question mark (?) matches exactly one
|
||||
character. You can escape special characters using a backslash (\) before the character. For example, a value of \*amazon\?\\ searches for the
|
||||
literal string *amazon?\.
|
||||
ami_id:
|
||||
description:
|
||||
- An AMI ID to match.
|
||||
ami_tags:
|
||||
description:
|
||||
- A hash/dictionary of tags to match for the AMI.
|
||||
architecture:
|
||||
description:
|
||||
- An architecture type to match (e.g. x86_64).
|
||||
hypervisor:
|
||||
description:
|
||||
- A hypervisor type to match (e.g. xen).
|
||||
is_public:
|
||||
description:
|
||||
- Whether or not the image(s) are public.
|
||||
type: bool
|
||||
name:
|
||||
description:
|
||||
- An AMI name to match.
|
||||
platform:
|
||||
description:
|
||||
- Platform type to match.
|
||||
product_code:
|
||||
description:
|
||||
- Marketplace product code to match.
|
||||
version_added: "2.3"
|
||||
sort:
|
||||
description:
|
||||
- Optional attribute with which to sort the results.
|
||||
- If specifying 'tag', the 'tag_name' parameter is required.
|
||||
- Starting at version 2.1, additional sort choices of architecture, block_device_mapping, creationDate, hypervisor, is_public, location, owner_id,
|
||||
platform, root_device_name, root_device_type, state, and virtualization_type are supported.
|
||||
choices:
|
||||
- 'name'
|
||||
- 'description'
|
||||
- 'tag'
|
||||
- 'architecture'
|
||||
- 'block_device_mapping'
|
||||
- 'creationDate'
|
||||
- 'hypervisor'
|
||||
- 'is_public'
|
||||
- 'location'
|
||||
- 'owner_id'
|
||||
- 'platform'
|
||||
- 'root_device_name'
|
||||
- 'root_device_type'
|
||||
- 'state'
|
||||
- 'virtualization_type'
|
||||
sort_tag:
|
||||
description:
|
||||
- Tag name with which to sort results.
|
||||
- Required when specifying 'sort=tag'.
|
||||
sort_order:
|
||||
description:
|
||||
- Order in which to sort results.
|
||||
- Only used when the 'sort' parameter is specified.
|
||||
choices: ['ascending', 'descending']
|
||||
default: 'ascending'
|
||||
sort_start:
|
||||
description:
|
||||
- Which result to start with (when sorting).
|
||||
- Corresponds to Python slice notation.
|
||||
sort_end:
|
||||
description:
|
||||
- Which result to end with (when sorting).
|
||||
- Corresponds to Python slice notation.
|
||||
state:
|
||||
description:
|
||||
- AMI state to match.
|
||||
default: 'available'
|
||||
virtualization_type:
|
||||
description:
|
||||
- Virtualization type to match (e.g. hvm).
|
||||
root_device_type:
|
||||
description:
|
||||
- Root device type to match (e.g. ebs, instance-store).
|
||||
version_added: "2.5"
|
||||
no_result_action:
|
||||
description:
|
||||
- What to do when no results are found.
|
||||
- "'success' reports success and returns an empty array"
|
||||
- "'fail' causes the module to report failure"
|
||||
choices: ['success', 'fail']
|
||||
default: 'success'
|
||||
extends_documentation_fragment:
|
||||
- aws
|
||||
requirements:
|
||||
- "python >= 2.6"
|
||||
- boto
|
||||
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
# Note: These examples do not set authentication details, see the AWS Guide for details.
|
||||
|
||||
# Search for the AMI tagged "project:website"
|
||||
- ec2_ami_find:
|
||||
owner: self
|
||||
ami_tags:
|
||||
project: website
|
||||
no_result_action: fail
|
||||
register: ami_find
|
||||
|
||||
# Search for the latest Ubuntu 14.04 AMI
|
||||
- ec2_ami_find:
|
||||
name: "ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"
|
||||
owner: 099720109477
|
||||
sort: name
|
||||
sort_order: descending
|
||||
sort_end: 1
|
||||
register: ami_find
|
||||
|
||||
# Launch an EC2 instance
|
||||
- ec2:
|
||||
image: "{{ ami_find.results[0].ami_id }}"
|
||||
instance_type: m3.medium
|
||||
key_name: mykey
|
||||
wait: yes
|
||||
'''
|
||||
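
Since the documented alternative is M(ec2_ami_facts), the "latest Ubuntu 14.04" example above could be approximated with it roughly as follows. This is a sketch under assumptions: owners and filters are taken as ec2_ami_facts options, the result list is expected under ami_find.images with creation_date and image_id fields, and the sorting/slicing that ec2_ami_find did internally moves into Jinja filters.

# Search for the latest Ubuntu 14.04 AMI with ec2_ami_facts (hedged sketch)
- ec2_ami_facts:
    owners: "099720109477"
    filters:
      name: "ubuntu/images/ebs/ubuntu-trusty-14.04-amd64-server-*"
  register: ami_find

# Pick the newest image by creation date (field name assumed)
- set_fact:
    latest_ami: "{{ ami_find.images | sort(attribute='creation_date') | last }}"

# Launch an EC2 instance from it
- ec2:
    image: "{{ latest_ami.image_id }}"
    instance_type: m3.medium
    key_name: mykey
    wait: yes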
|
||||
RETURN = '''
|
||||
ami_id:
|
||||
description: id of found amazon image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "ami-e9095e8c"
|
||||
architecture:
|
||||
description: architecture of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "x86_64"
|
||||
block_device_mapping:
|
||||
description: block device mapping associated with image
|
||||
returned: when AMI found
|
||||
type: dict
|
||||
sample: "{
|
||||
'/dev/xvda': {
|
||||
'delete_on_termination': true,
|
||||
'encrypted': false,
|
||||
'size': 8,
|
||||
'snapshot_id': 'snap-ca0330b8',
|
||||
'volume_type': 'gp2'
|
||||
}"
|
||||
creationDate:
|
||||
description: creation date of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "2015-10-15T22:43:44.000Z"
|
||||
description:
|
||||
description: description of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "test-server01"
|
||||
hypervisor:
|
||||
description: type of hypervisor
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "xen"
|
||||
is_public:
|
||||
description: whether image is public
|
||||
returned: when AMI found
|
||||
type: bool
|
||||
sample: false
|
||||
location:
|
||||
description: location of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "435210894375/test-server01-20151015-234343"
|
||||
name:
|
||||
description: ami name of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "test-server01-20151015-234343"
|
||||
owner_id:
|
||||
description: owner of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "435210894375"
|
||||
platform:
|
||||
description: platform of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: null
|
||||
root_device_name:
|
||||
description: root device name of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "/dev/xvda"
|
||||
root_device_type:
|
||||
description: root device type of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "ebs"
|
||||
state:
|
||||
description: state of image
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "available"
|
||||
tags:
|
||||
description: tags assigned to image
|
||||
returned: when AMI found
|
||||
type: dict
|
||||
sample: "{
|
||||
'Environment': 'devel',
|
||||
'Name': 'test-server01',
|
||||
'Role': 'web'
|
||||
}"
|
||||
virtualization_type:
|
||||
description: image virtualization type
|
||||
returned: when AMI found
|
||||
type: str
|
||||
sample: "hvm"
|
||||
'''
|
||||
|
||||
import json
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.ec2 import HAS_BOTO, ec2_argument_spec, ec2_connect
|
||||
|
||||
|
||||
def get_block_device_mapping(image):
|
||||
"""
|
||||
Retrieves block device mapping from AMI
|
||||
"""
|
||||
|
||||
bdm_dict = dict()
|
||||
bdm = getattr(image, 'block_device_mapping')
|
||||
for device_name in bdm.keys():
|
||||
bdm_dict[device_name] = {
|
||||
'size': bdm[device_name].size,
|
||||
'snapshot_id': bdm[device_name].snapshot_id,
|
||||
'volume_type': bdm[device_name].volume_type,
|
||||
'encrypted': bdm[device_name].encrypted,
|
||||
'delete_on_termination': bdm[device_name].delete_on_termination
|
||||
}
|
||||
|
||||
return bdm_dict
|
||||
|
||||
|
||||
def main():
|
||||
argument_spec = ec2_argument_spec()
|
||||
argument_spec.update(dict(
|
||||
owner=dict(required=False, default=None),
|
||||
ami_id=dict(required=False),
|
||||
ami_tags=dict(required=False, type='dict',
|
||||
aliases=['search_tags', 'image_tags']),
|
||||
architecture=dict(required=False),
|
||||
hypervisor=dict(required=False),
|
||||
is_public=dict(required=False, type='bool'),
|
||||
name=dict(required=False),
|
||||
platform=dict(required=False),
|
||||
product_code=dict(required=False),
|
||||
sort=dict(required=False, default=None,
|
||||
choices=['name', 'description', 'tag', 'architecture', 'block_device_mapping', 'creationDate', 'hypervisor', 'is_public', 'location',
|
||||
'owner_id', 'platform', 'root_device_name', 'root_device_type', 'state', 'virtualization_type']),
|
||||
sort_tag=dict(required=False),
|
||||
sort_order=dict(required=False, default='ascending',
|
||||
choices=['ascending', 'descending']),
|
||||
sort_start=dict(required=False),
|
||||
sort_end=dict(required=False),
|
||||
state=dict(required=False, default='available'),
|
||||
virtualization_type=dict(required=False),
|
||||
no_result_action=dict(required=False, default='success',
|
||||
choices=['success', 'fail']),
|
||||
)
|
||||
)
|
||||
|
||||
module = AnsibleModule(
|
||||
argument_spec=argument_spec,
|
||||
supports_check_mode=True,
|
||||
)
|
||||
|
||||
module.deprecate("The 'ec2_ami_find' module has been deprecated. Use 'ec2_ami_facts' instead.", version=2.9)
|
||||
|
||||
if not HAS_BOTO:
|
||||
module.fail_json(msg='boto required for this module, install via pip or your package manager')
|
||||
|
||||
ami_id = module.params.get('ami_id')
|
||||
ami_tags = module.params.get('ami_tags')
|
||||
architecture = module.params.get('architecture')
|
||||
hypervisor = module.params.get('hypervisor')
|
||||
is_public = module.params.get('is_public')
|
||||
name = module.params.get('name')
|
||||
owner = module.params.get('owner')
|
||||
platform = module.params.get('platform')
|
||||
product_code = module.params.get('product_code')
|
||||
root_device_type = module.params.get('root_device_type')
|
||||
sort = module.params.get('sort')
|
||||
sort_tag = module.params.get('sort_tag')
|
||||
sort_order = module.params.get('sort_order')
|
||||
sort_start = module.params.get('sort_start')
|
||||
sort_end = module.params.get('sort_end')
|
||||
state = module.params.get('state')
|
||||
virtualization_type = module.params.get('virtualization_type')
|
||||
no_result_action = module.params.get('no_result_action')
|
||||
|
||||
filter = {'state': state}
|
||||
|
||||
if ami_id:
|
||||
filter['image_id'] = ami_id
|
||||
if ami_tags:
|
||||
for tag in ami_tags:
|
||||
filter['tag:' + tag] = ami_tags[tag]
|
||||
if architecture:
|
||||
filter['architecture'] = architecture
|
||||
if hypervisor:
|
||||
filter['hypervisor'] = hypervisor
|
||||
if is_public:
|
||||
filter['is_public'] = 'true'
|
||||
if name:
|
||||
filter['name'] = name
|
||||
if platform:
|
||||
filter['platform'] = platform
|
||||
if product_code:
|
||||
filter['product-code'] = product_code
|
||||
if root_device_type:
|
||||
filter['root_device_type'] = root_device_type
|
||||
if virtualization_type:
|
||||
filter['virtualization_type'] = virtualization_type
|
||||
|
||||
ec2 = ec2_connect(module)
|
||||
|
||||
images_result = ec2.get_all_images(owners=owner, filters=filter)
|
||||
|
||||
if no_result_action == 'fail' and len(images_result) == 0:
|
||||
module.fail_json(msg="No AMIs matched the attributes: %s" % json.dumps(filter))
|
||||
|
||||
results = []
|
||||
for image in images_result:
|
||||
data = {
|
||||
'ami_id': image.id,
|
||||
'architecture': image.architecture,
|
||||
'block_device_mapping': get_block_device_mapping(image),
|
||||
'creationDate': image.creationDate,
|
||||
'description': image.description,
|
||||
'hypervisor': image.hypervisor,
|
||||
'is_public': image.is_public,
|
||||
'location': image.location,
|
||||
'name': image.name,
|
||||
'owner_id': image.owner_id,
|
||||
'platform': image.platform,
|
||||
'root_device_name': image.root_device_name,
|
||||
'root_device_type': image.root_device_type,
|
||||
'state': image.state,
|
||||
'tags': image.tags,
|
||||
'virtualization_type': image.virtualization_type,
|
||||
}
|
||||
|
||||
if image.kernel_id:
|
||||
data['kernel_id'] = image.kernel_id
|
||||
if image.ramdisk_id:
|
||||
data['ramdisk_id'] = image.ramdisk_id
|
||||
|
||||
results.append(data)
|
||||
|
||||
if sort == 'tag':
|
||||
if not sort_tag:
|
||||
module.fail_json(msg="'sort_tag' option must be given with 'sort=tag'")
|
||||
results.sort(key=lambda e: e['tags'][sort_tag], reverse=(sort_order == 'descending'))
|
||||
elif sort:
|
||||
results.sort(key=lambda e: e[sort], reverse=(sort_order == 'descending'))
|
||||
|
||||
try:
|
||||
if sort and sort_start and sort_end:
|
||||
results = results[int(sort_start):int(sort_end)]
|
||||
elif sort and sort_start:
|
||||
results = results[int(sort_start):]
|
||||
elif sort and sort_end:
|
||||
results = results[:int(sort_end)]
|
||||
except TypeError:
|
||||
module.fail_json(msg="Please supply numeric values for sort_start and/or sort_end")
|
||||
|
||||
module.exit_json(results=results)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
File diff suppressed because it is too large
@@ -7,422 +7,12 @@ from __future__ import absolute_import, division, print_function
|
|||
__metaclass__ = type
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: kubernetes
|
||||
version_added: "2.1"
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module used the oc command line tool, whereas M(k8s_raw) goes over the REST API.
|
||||
alternative: Use M(k8s_raw) instead.
|
||||
short_description: Manage Kubernetes resources
|
||||
description:
|
||||
- This module can manage Kubernetes resources on an existing cluster using
|
||||
the Kubernetes server API. Users can specify in-line API data, or
|
||||
specify an existing Kubernetes YAML file.
|
||||
- Currently, this module
|
||||
(1) Only supports HTTP Basic Auth
|
||||
(2) Only supports 'strategic merge' for update, http://goo.gl/fCPYxT
|
||||
SSL certs are not working, use C(validate_certs=off) to disable.
|
||||
options:
|
||||
api_endpoint:
|
||||
description:
|
||||
- The IPv4 API endpoint of the Kubernetes cluster.
|
||||
required: true
|
||||
aliases: [ endpoint ]
|
||||
inline_data:
|
||||
description:
|
||||
- The Kubernetes YAML data to send to the API I(endpoint). This option is
|
||||
mutually exclusive with C('file_reference').
|
||||
required: true
|
||||
file_reference:
|
||||
description:
|
||||
- Specify full path to a Kubernetes YAML file to send to API I(endpoint).
|
||||
This option is mutually exclusive with C('inline_data').
|
||||
patch_operation:
|
||||
description:
|
||||
- Specify patch operation for Kubernetes resource update.
|
||||
- For details, see the description of PATCH operations at
|
||||
U(https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/devel/api-conventions.md#patch-operations).
|
||||
default: Strategic Merge Patch
|
||||
choices: [ JSON Patch, Merge Patch, Strategic Merge Patch ]
|
||||
aliases: [ patch_strategy ]
|
||||
version_added: 2.4
|
||||
certificate_authority_data:
|
||||
description:
|
||||
- Certificate Authority data for Kubernetes server. Should be in either
|
||||
standard PEM format or base64 encoded PEM data. Note that certificate
|
||||
verification is broken until ansible supports a version of
|
||||
'match_hostname' that can match the IP address against the CA data.
|
||||
state:
|
||||
description:
|
||||
- The desired action to take on the Kubernetes data.
|
||||
required: true
|
||||
choices: [ absent, present, replace, update ]
|
||||
default: present
|
||||
url_password:
|
||||
description:
|
||||
- The HTTP Basic Auth password for the API I(endpoint). This should be set
|
||||
unless using the C('insecure') option.
|
||||
aliases: [ password ]
|
||||
url_username:
|
||||
description:
|
||||
- The HTTP Basic Auth username for the API I(endpoint). This should be set
|
||||
unless using the C('insecure') option.
|
||||
default: admin
|
||||
aliases: [ username ]
|
||||
insecure:
|
||||
description:
|
||||
- Reverts the connection to using HTTP instead of HTTPS. This option should
|
||||
only be used when executing the M('kubernetes') module local to the Kubernetes
|
||||
cluster using the insecure local port (localhost:8080 by default).
|
||||
validate_certs:
|
||||
description:
|
||||
- Enable/disable certificate validation. Note that this is set to
|
||||
C(false) until Ansible can support IP address based certificate
|
||||
hostname matching (exists in >= python3.5.0).
|
||||
type: bool
|
||||
default: 'no'
|
||||
author:
|
||||
- Eric Johnson (@erjohnso) <erjohnso@google.com>
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
# Create a new namespace with in-line YAML.
|
||||
- name: Create a kubernetes namespace
|
||||
kubernetes:
|
||||
api_endpoint: 123.45.67.89
|
||||
url_username: admin
|
||||
url_password: redacted
|
||||
inline_data:
|
||||
kind: Namespace
|
||||
apiVersion: v1
|
||||
metadata:
|
||||
name: ansible-test
|
||||
labels:
|
||||
label_env: production
|
||||
label_ver: latest
|
||||
annotations:
|
||||
a1: value1
|
||||
a2: value2
|
||||
state: present
|
||||
|
||||
# Create a new namespace from a YAML file.
|
||||
- name: Create a kubernetes namespace
|
||||
kubernetes:
|
||||
api_endpoint: 123.45.67.89
|
||||
url_username: admin
|
||||
url_password: redacted
|
||||
file_reference: /path/to/create_namespace.yaml
|
||||
state: present
|
||||
|
||||
# Do the same thing, but using the insecure localhost port
|
||||
- name: Create a kubernetes namespace
|
||||
kubernetes:
|
||||
api_endpoint: 123.45.67.89
|
||||
insecure: true
|
||||
file_reference: /path/to/create_namespace.yaml
|
||||
state: present
|
||||
|
||||
'''
|
||||
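
For comparison, the documented alternative path (k8s_raw, which later Ansible releases fold into the k8s module) would express the first inline example above roughly like this. It is a sketch only: the state and definition options are assumed from the k8s module's documentation, and connection details (kubeconfig, host, api_key) are omitted.

# Create the same namespace via the k8s module (hedged sketch)
- name: Create a kubernetes namespace
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Namespace
      metadata:
        name: ansible-test
        labels:
          label_env: production
          label_ver: latest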
|
||||
RETURN = '''
|
||||
# Example response from creating a Kubernetes Namespace.
|
||||
api_response:
|
||||
description: Raw response from Kubernetes API, content varies with API.
|
||||
returned: success
|
||||
type: complex
|
||||
contains:
|
||||
apiVersion: "v1"
|
||||
kind: "Namespace"
|
||||
metadata:
|
||||
creationTimestamp: "2016-01-04T21:16:32Z"
|
||||
name: "test-namespace"
|
||||
resourceVersion: "509635"
|
||||
selfLink: "/api/v1/namespaces/test-namespace"
|
||||
uid: "6dbd394e-b328-11e5-9a02-42010af0013a"
|
||||
spec:
|
||||
finalizers:
|
||||
- kubernetes
|
||||
status:
|
||||
phase: "Active"
|
||||
'''
|
||||
|
||||
import base64
|
||||
import json
|
||||
import traceback
|
||||
|
||||
YAML_IMP_ERR = None
|
||||
try:
|
||||
import yaml
|
||||
HAS_LIB_YAML = True
|
||||
except ImportError:
|
||||
YAML_IMP_ERR = traceback.format_exc()
|
||||
HAS_LIB_YAML = False
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule, missing_required_lib
|
||||
from ansible.module_utils.urls import fetch_url
|
||||
|
||||
|
||||
############################################################################
|
||||
############################################################################
|
||||
# For API coverage, this Ansible module provides the capability to operate on
|
||||
# all Kubernetes objects that support a "create" call (except for 'Events').
|
||||
# In order to obtain a valid list of Kubernetes objects, the v1 spec file
|
||||
# was referenced and the below python script was used to parse the JSON
|
||||
# spec file, extract only the objects with a description starting with
|
||||
# 'create a'. The script then iterates over all of these base objects
|
||||
# to get the endpoint URL and was used to generate the KIND_URL map.
|
||||
#
|
||||
# import json
|
||||
# from urllib2 import urlopen
|
||||
#
|
||||
# r = urlopen("https://raw.githubusercontent.com/kubernetes"
|
||||
# "/kubernetes/master/api/swagger-spec/v1.json")
|
||||
# v1 = json.load(r)
|
||||
#
|
||||
# apis = {}
|
||||
# for a in v1['apis']:
|
||||
# p = a['path']
|
||||
# for o in a['operations']:
|
||||
# if o["summary"].startswith("create a") and o["type"] != "v1.Event":
|
||||
# apis[o["type"]] = p
|
||||
#
|
||||
# def print_kind_url_map():
|
||||
# results = []
|
||||
# for a in apis.keys():
|
||||
# results.append('"%s": "%s"' % (a[3:].lower(), apis[a]))
|
||||
# results.sort()
|
||||
# print("KIND_URL = {")
|
||||
# print(",\n".join(results))
|
||||
# print("}")
|
||||
#
|
||||
# if __name__ == '__main__':
|
||||
# print_kind_url_map()
|
||||
############################################################################
|
||||
############################################################################
|
||||
|
||||
KIND_URL = {
|
||||
"binding": "/api/v1/namespaces/{namespace}/bindings",
|
||||
"configmap": "/api/v1/namespaces/{namespace}/configmaps",
|
||||
"endpoints": "/api/v1/namespaces/{namespace}/endpoints",
|
||||
"limitrange": "/api/v1/namespaces/{namespace}/limitranges",
|
||||
"namespace": "/api/v1/namespaces",
|
||||
"node": "/api/v1/nodes",
|
||||
"persistentvolume": "/api/v1/persistentvolumes",
|
||||
"persistentvolumeclaim": "/api/v1/namespaces/{namespace}/persistentvolumeclaims", # NOQA
|
||||
"pod": "/api/v1/namespaces/{namespace}/pods",
|
||||
"podtemplate": "/api/v1/namespaces/{namespace}/podtemplates",
|
||||
"replicationcontroller": "/api/v1/namespaces/{namespace}/replicationcontrollers", # NOQA
|
||||
"resourcequota": "/api/v1/namespaces/{namespace}/resourcequotas",
|
||||
"secret": "/api/v1/namespaces/{namespace}/secrets",
|
||||
"service": "/api/v1/namespaces/{namespace}/services",
|
||||
"serviceaccount": "/api/v1/namespaces/{namespace}/serviceaccounts",
|
||||
"daemonset": "/apis/extensions/v1beta1/namespaces/{namespace}/daemonsets",
|
||||
"deployment": "/apis/extensions/v1beta1/namespaces/{namespace}/deployments",
|
||||
"horizontalpodautoscaler": "/apis/extensions/v1beta1/namespaces/{namespace}/horizontalpodautoscalers", # NOQA
|
||||
"ingress": "/apis/extensions/v1beta1/namespaces/{namespace}/ingresses",
|
||||
"job": "/apis/extensions/v1beta1/namespaces/{namespace}/jobs",
|
||||
}
|
||||
USER_AGENT = "ansible-k8s-module/0.0.1"
|
||||
|
||||
|
||||
# TODO(erjohnso): SSL Certificate validation is currently unsupported.
|
||||
# It can be made to work when the following are true:
|
||||
# - Ansible consistently uses a "match_hostname" that supports IP Address
|
||||
# matching. This is now true in >= python3.5.0. Currently, this feature
|
||||
# is not yet available in backports.ssl_match_hostname (still 3.4).
|
||||
# - Ansible allows passing in the self-signed CA cert that is created with
|
||||
# a kubernetes master. The lib/ansible/module_utils/urls.py method,
|
||||
# SSLValidationHandler.get_ca_certs() needs a way for the Kubernetes
|
||||
# CA cert to be passed in and included in the generated bundle file.
|
||||
# When this is fixed, the following changes can be made to this module,
|
||||
# - Remove the 'return' statement in line 254 below
|
||||
# - Set 'required=true' for certificate_authority_data and ensure that
|
||||
# ansible's SSLValidationHandler.get_ca_certs() can pick up this CA cert
|
||||
# - Set 'required=true' for the validate_certs param.
|
||||
|
||||
def decode_cert_data(module):
|
||||
return
|
||||
# pylint: disable=unreachable
|
||||
d = module.params.get("certificate_authority_data")
|
||||
if d and not d.startswith("-----BEGIN"):
|
||||
module.params["certificate_authority_data"] = base64.b64decode(d)
|
||||
|
||||
|
||||
def api_request(module, url, method="GET", headers=None, data=None):
|
||||
body = None
|
||||
if data:
|
||||
data = json.dumps(data)
|
||||
response, info = fetch_url(module, url, method=method, headers=headers, data=data)
|
||||
if int(info['status']) == -1:
|
||||
module.fail_json(msg="Failed to execute the API request: %s" % info['msg'], url=url, method=method, headers=headers)
|
||||
if response is not None:
|
||||
body = json.loads(response.read())
|
||||
return info, body
|
||||
|
||||
|
||||
def k8s_create_resource(module, url, data):
|
||||
info, body = api_request(module, url, method="POST", data=data, headers={"Content-Type": "application/json"})
|
||||
if info['status'] == 409:
|
||||
name = data["metadata"].get("name", None)
|
||||
info, body = api_request(module, url + "/" + name)
|
||||
return False, body
|
||||
elif info['status'] >= 400:
|
||||
module.fail_json(msg="failed to create the resource: %s" % info['msg'], url=url)
|
||||
return True, body
|
||||
|
||||
|
||||
def k8s_delete_resource(module, url, data):
|
||||
name = data.get('metadata', {}).get('name')
|
||||
if name is None:
|
||||
module.fail_json(msg="Missing a named resource in object metadata when trying to remove a resource")
|
||||
|
||||
url = url + '/' + name
|
||||
info, body = api_request(module, url, method="DELETE")
|
||||
if info['status'] == 404:
|
||||
return False, "Resource name '%s' already absent" % name
|
||||
elif info['status'] >= 400:
|
||||
module.fail_json(msg="failed to delete the resource '%s': %s" % (name, info['msg']), url=url)
|
||||
return True, "Successfully deleted resource name '%s'" % name
|
||||
|
||||
|
||||
def k8s_replace_resource(module, url, data):
|
||||
name = data.get('metadata', {}).get('name')
|
||||
if name is None:
|
||||
module.fail_json(msg="Missing a named resource in object metadata when trying to replace a resource")
|
||||
|
||||
headers = {"Content-Type": "application/json"}
|
||||
url = url + '/' + name
|
||||
info, body = api_request(module, url, method="PUT", data=data, headers=headers)
|
||||
if info['status'] == 409:
|
||||
name = data["metadata"].get("name", None)
|
||||
info, body = api_request(module, url + "/" + name)
|
||||
return False, body
|
||||
elif info['status'] >= 400:
|
||||
module.fail_json(msg="failed to replace the resource '%s': %s" % (name, info['msg']), url=url)
|
||||
return True, body
|
||||
|
||||
|
||||
def k8s_update_resource(module, url, data, patch_operation):
|
||||
# PATCH operations are explained in details at:
|
||||
# https://github.com/kubernetes/kubernetes/blob/release-1.5/docs/devel/api-conventions.md#patch-operations
|
||||
PATCH_OPERATIONS_MAP = {
|
||||
'JSON Patch': 'application/json-patch+json',
|
||||
'Merge Patch': 'application/merge-patch+json',
|
||||
'Strategic Merge Patch': 'application/strategic-merge-patch+json',
|
||||
}
|
||||
|
||||
name = data.get('metadata', {}).get('name')
|
||||
if name is None:
|
||||
module.fail_json(msg="Missing a named resource in object metadata when trying to update a resource")
|
||||
|
||||
headers = {"Content-Type": PATCH_OPERATIONS_MAP[patch_operation]}
|
||||
url = url + '/' + name
|
||||
info, body = api_request(module, url, method="PATCH", data=data, headers=headers)
|
||||
if info['status'] == 409:
|
||||
name = data["metadata"].get("name", None)
|
||||
info, body = api_request(module, url + "/" + name)
|
||||
return False, body
|
||||
elif info['status'] >= 400:
|
||||
module.fail_json(msg="failed to update the resource '%s': %s" % (name, info['msg']), url=url)
|
||||
return True, body
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
http_agent=dict(type='str', default=USER_AGENT),
|
||||
url_username=dict(type='str', default='admin', aliases=['username']),
|
||||
url_password=dict(type='str', default='', no_log=True, aliases=['password']),
|
||||
force_basic_auth=dict(type='bool', default=True),
|
||||
validate_certs=dict(type='bool', default=False),
|
||||
certificate_authority_data=dict(type='str'),
|
||||
insecure=dict(type='bool', default=False),
|
||||
api_endpoint=dict(type='str', required=True),
|
||||
patch_operation=dict(type='str', default='Strategic Merge Patch', aliases=['patch_strategy'],
|
||||
choices=['JSON Patch', 'Merge Patch', 'Strategic Merge Patch']),
|
||||
file_reference=dict(type='str'),
|
||||
inline_data=dict(type='str'),
|
||||
state=dict(type='str', default='present', choices=['absent', 'present', 'replace', 'update'])
|
||||
),
|
||||
mutually_exclusive=(('file_reference', 'inline_data'),
|
||||
('url_username', 'insecure'),
|
||||
('url_password', 'insecure')),
|
||||
required_one_of=(('file_reference', 'inline_data'),),
|
||||
)
|
||||
|
||||
if not HAS_LIB_YAML:
|
||||
module.fail_json(msg=missing_required_lib('PyYAML'), exception=YAML_IMP_ERR)
|
||||
|
||||
decode_cert_data(module)
|
||||
|
||||
api_endpoint = module.params.get('api_endpoint')
|
||||
state = module.params.get('state')
|
||||
insecure = module.params.get('insecure')
|
||||
inline_data = module.params.get('inline_data')
|
||||
file_reference = module.params.get('file_reference')
|
||||
patch_operation = module.params.get('patch_operation')
|
||||
|
||||
if inline_data:
|
||||
if not isinstance(inline_data, dict) and not isinstance(inline_data, list):
|
||||
data = yaml.safe_load(inline_data)
|
||||
else:
|
||||
data = inline_data
|
||||
else:
|
||||
try:
|
||||
f = open(file_reference, "r")
|
||||
data = [x for x in yaml.safe_load_all(f)]
|
||||
f.close()
|
||||
if not data:
|
||||
module.fail_json(msg="No valid data could be found.")
|
||||
except Exception:
|
||||
module.fail_json(msg="The file '%s' was not found or contained invalid YAML/JSON data" % file_reference)
|
||||
|
||||
# set the transport type and build the target endpoint url
|
||||
transport = 'https'
|
||||
if insecure:
|
||||
transport = 'http'
|
||||
|
||||
target_endpoint = "%s://%s" % (transport, api_endpoint)
|
||||
|
||||
body = []
|
||||
changed = False
|
||||
|
||||
# make sure the data is a list
|
||||
if not isinstance(data, list):
|
||||
data = [data]
|
||||
|
||||
for item in data:
|
||||
namespace = "default"
|
||||
if item and 'metadata' in item:
|
||||
namespace = item.get('metadata', {}).get('namespace', "default")
|
||||
kind = item.get('kind', '').lower()
|
||||
try:
|
||||
url = target_endpoint + KIND_URL[kind]
|
||||
except KeyError:
|
||||
module.fail_json(msg="invalid resource kind specified in the data: '%s'" % kind)
|
||||
url = url.replace("{namespace}", namespace)
|
||||
else:
|
||||
url = target_endpoint
|
||||
|
||||
if state == 'present':
|
||||
item_changed, item_body = k8s_create_resource(module, url, item)
|
||||
elif state == 'absent':
|
||||
item_changed, item_body = k8s_delete_resource(module, url, item)
|
||||
elif state == 'replace':
|
||||
item_changed, item_body = k8s_replace_resource(module, url, item)
|
||||
elif state == 'update':
|
||||
item_changed, item_body = k8s_update_resource(module, url, item, patch_operation)
|
||||
|
||||
changed |= item_changed
|
||||
body.append(item_body)
|
||||
|
||||
module.exit_json(changed=changed, api_response=body)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -9,457 +9,13 @@ __metaclass__ = type
|
|||
|
||||
ANSIBLE_METADATA = {
|
||||
'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'
|
||||
}
|
||||
|
||||
|
||||
DOCUMENTATION = """
|
||||
author:
|
||||
- "Kenneth D. Evensen (@kevensen)"
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module used the oc command line tool, whereas M(openshift_raw) goes over the REST API.
|
||||
alternative: Use M(openshift_raw) instead.
|
||||
description:
|
||||
- This module allows management of resources in an OpenShift cluster. The
|
||||
inventory host can be any host with network connectivity to the OpenShift
|
||||
cluster; the default port being 8443/TCP.
|
||||
- This module relies on a token to authenticate to OpenShift. This can either
|
||||
be a user or a service account.
|
||||
module: oc
|
||||
options:
|
||||
host:
|
||||
description:
|
||||
- "Hostname or address of the OpenShift API endpoint. By default, this is expected to be the current inventory host."
|
||||
required: false
|
||||
default: 127.0.0.1
|
||||
port:
|
||||
description:
|
||||
- "The port number of the API endpoint."
|
||||
required: false
|
||||
default: 8443
|
||||
inline:
|
||||
description:
|
||||
- "The inline definition of the resource. This is mutually exclusive with name, namespace and kind."
|
||||
required: false
|
||||
aliases: ['def', 'definition']
|
||||
kind:
|
||||
description: The kind of the resource upon which to take action.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- "The name of the resource on which to take action."
|
||||
required: false
|
||||
namespace:
|
||||
description:
|
||||
- "The namespace of the resource upon which to take action."
|
||||
required: false
|
||||
token:
|
||||
description:
|
||||
- "The token with which to authenticate against the OpenShift cluster."
|
||||
required: true
|
||||
validate_certs:
|
||||
description:
|
||||
- If C(no), SSL certificates for the target url will not be validated.
|
||||
This should only be used on personally controlled sites using
|
||||
self-signed certificates.
|
||||
type: bool
|
||||
default: yes
|
||||
state:
|
||||
choices:
|
||||
- present
|
||||
- absent
|
||||
description:
|
||||
- "If the state is present, and the resource doesn't exist, it shall be created using the inline definition. If the state is present and the
|
||||
resource exists, the definition will be updated, again using an inline definition. If the state is absent, the resource will be deleted if it exists."
|
||||
required: true
|
||||
short_description: Manage OpenShift Resources
|
||||
version_added: 2.4
|
||||
|
||||
"""
|
||||
|
||||
EXAMPLES = """
|
||||
- name: Create project
|
||||
oc:
|
||||
state: present
|
||||
inline:
|
||||
kind: ProjectRequest
|
||||
metadata:
|
||||
name: ansibletestproject
|
||||
displayName: Ansible Test Project
|
||||
description: This project was created using Ansible
|
||||
token: << redacted >>
|
||||
|
||||
- name: Delete a service
|
||||
oc:
|
||||
state: absent
|
||||
name: myservice
|
||||
namespace: mynamespace
|
||||
kind: Service
|
||||
token: << redacted >>
|
||||
|
||||
- name: Add project role Admin to a user
|
||||
oc:
|
||||
state: present
|
||||
inline:
|
||||
kind: RoleBinding
|
||||
metadata:
|
||||
name: admin
|
||||
namespace: mynamespace
|
||||
roleRef:
|
||||
name: admin
|
||||
userNames:
|
||||
- "myuser"
|
||||
token: << redacted >>
|
||||
|
||||
- name: Obtain an object definition
|
||||
oc:
|
||||
state: present
|
||||
name: myroute
|
||||
namespace: mynamespace
|
||||
kind: Route
|
||||
token: << redacted >>
|
||||
"""
|
||||
|
||||
RETURN = '''
|
||||
result:
|
||||
description:
|
||||
The resource that was created, changed, or otherwise determined to be present.
|
||||
In the case of a deletion, this is the response from the delete request.
|
||||
returned: success
|
||||
type: str
|
||||
url:
|
||||
description: The URL to the requested resource.
|
||||
returned: success
|
||||
type: str
|
||||
method:
|
||||
description: The HTTP method that was used to take action upon the resource
|
||||
returned: success
|
||||
type: str
|
||||
...
|
||||
'''
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils import urls
|
||||
|
||||
|
||||
class ApiEndpoint(object):
|
||||
def __init__(self, host, port, api, version):
|
||||
self.host = host
|
||||
self.port = port
|
||||
self.api = api
|
||||
self.version = version
|
||||
|
||||
def __str__(self):
|
||||
url = "https://"
|
||||
url += self.host
|
||||
url += ":"
|
||||
url += str(self.port)
|
||||
url += "/"
|
||||
url += self.api
|
||||
url += "/"
|
||||
url += self.version
|
||||
return url
|
||||
|
||||
|
||||
class ResourceEndpoint(ApiEndpoint):
|
||||
def __init__(self, name, namespaced, api_endpoint):
|
||||
super(ResourceEndpoint, self).__init__(api_endpoint.host,
|
||||
api_endpoint.port,
|
||||
api_endpoint.api,
|
||||
api_endpoint.version)
|
||||
self.name = name
|
||||
self.namespaced = namespaced
|
||||
|
||||
|
||||
class NamedResource(object):
|
||||
def __init__(self, module, definition, resource_endpoint):
|
||||
self.module = module
|
||||
self.set_definition(definition)
|
||||
self.resource_endpoint = resource_endpoint
|
||||
|
||||
def name(self):
|
||||
if 'name' in self.definition['metadata'].keys():
|
||||
return self.definition['metadata']['name']
|
||||
return None
|
||||
|
||||
def namespace(self):
|
||||
if 'namespace' in self.definition['metadata'].keys():
|
||||
return self.definition['metadata']['namespace']
|
||||
return None
|
||||
|
||||
def set_definition(self, definition):
|
||||
if isinstance(definition, str):
|
||||
self.definition = self.module.from_json(definition)
|
||||
else:
|
||||
self.definition = definition
|
||||
|
||||
def url(self, create=False):
|
||||
url = str(self.resource_endpoint)
|
||||
url += '/'
|
||||
if self.resource_endpoint.namespaced:
|
||||
url += 'namespaces/'
|
||||
url += self.namespace()
|
||||
url += '/'
|
||||
url += self.resource_endpoint.name
|
||||
if not create:
|
||||
url += '/'
|
||||
url += self.name()
|
||||
return url
|
||||
|
||||
def __dict__(self):
|
||||
return self.definition
|
||||
|
||||
def __str__(self):
|
||||
return self.module.jsonify(self.definition)
|
||||
|
||||
|
||||
class OC(object):
|
||||
def __init__(self, module, token, host, port,
|
||||
apis=None):
|
||||
apis = ['api', 'oapi'] if apis is None else apis
|
||||
|
||||
self.apis = apis
|
||||
self.version = 'v1'
|
||||
self.token = token
|
||||
self.module = module
|
||||
self.host = host
|
||||
self.port = port
|
||||
self.kinds = {}
|
||||
|
||||
self.bearer = "Bearer " + self.token
|
||||
self.headers = {"Authorization": self.bearer,
|
||||
"Content-type": "application/json"}
|
||||
# Build Endpoints
|
||||
for api in self.apis:
|
||||
endpoint = ApiEndpoint(self.host,
|
||||
self.port,
|
||||
api,
|
||||
self.version)
|
||||
# Create resource facts
|
||||
response, code = self.connect(str(endpoint), "get")
|
||||
|
||||
if code < 300:
|
||||
self.build_kinds(response['resources'], endpoint)
|
||||
|
||||
def build_kinds(self, resources, endpoint):
|
||||
for resource in resources:
|
||||
if 'generated' not in resource['name']:
|
||||
self.kinds[resource['kind']] = \
|
||||
ResourceEndpoint(resource['name'].split('/')[0],
|
||||
resource['namespaced'],
|
||||
endpoint)
|
||||
|
||||
def get(self, named_resource):
|
||||
changed = False
|
||||
response, code = self.connect(named_resource.url(), 'get')
|
||||
return response, changed
|
||||
|
||||
def exists(self, named_resource):
|
||||
x, code = self.connect(named_resource.url(), 'get')
|
||||
if code == 200:
|
||||
return True
|
||||
return False
|
||||
|
||||
def delete(self, named_resource):
|
||||
changed = False
|
||||
response, code = self.connect(named_resource.url(), 'delete')
|
||||
if code == 404:
|
||||
return None, changed
|
||||
elif code >= 300:
|
||||
self.module.fail_json(msg='Failed to delete resource %s in \
|
||||
namespace %s with msg %s'
|
||||
% (named_resource.name(),
|
||||
named_resource.namespace(),
|
||||
response))
|
||||
changed = True
|
||||
return response, changed
|
||||
|
||||
def create(self, named_resource):
|
||||
changed = False
|
||||
response, code = self.connect(named_resource.url(create=True),
|
||||
'post',
|
||||
data=str(named_resource))
|
||||
if code == 404:
|
||||
return None, changed
|
||||
elif code == 409:
|
||||
return self.get(named_resource)
|
||||
elif code >= 300:
|
||||
self.module.fail_json(
|
||||
msg='Failed to create resource %s in \
|
||||
namespace %s with msg %s' % (named_resource.name(),
|
||||
named_resource.namespace(),
|
||||
response))
|
||||
changed = True
|
||||
return response, changed
|
||||
|
||||
def replace(self, named_resource, check_mode):
|
||||
changed = False
|
||||
|
||||
existing_definition, x = self.get(named_resource)
|
||||
|
||||
new_definition, changed = self.merge(named_resource.definition,
|
||||
existing_definition,
|
||||
changed)
|
||||
if changed and not check_mode:
|
||||
named_resource.set_definition(new_definition)
|
||||
response, code = self.connect(named_resource.url(),
|
||||
'put',
|
||||
data=str(named_resource))
|
||||
|
||||
return response, changed
|
||||
return existing_definition, changed
|
||||
|
||||
def connect(self, url, method, data=None):
|
||||
body = None
|
||||
json_body = ""
|
||||
if data is not None:
|
||||
self.module.log(msg="Payload is %s" % data)
|
||||
response, info = urls.fetch_url(module=self.module,
|
||||
url=url,
|
||||
headers=self.headers,
|
||||
method=method,
|
||||
data=data)
|
||||
if response is not None:
|
||||
body = response.read()
|
||||
if info['status'] >= 300:
|
||||
body = info['body']
|
||||
|
||||
message = "The URL, method, and code for connect is %s, %s, %d." % (url, method, info['status'])
|
||||
if info['status'] == 401:
|
||||
self.module.fail_json(msg=message + " Unauthorized. Check that you have a valid service account and token.")
|
||||
|
||||
self.module.log(msg=message)
|
||||
|
||||
try:
|
||||
json_body = self.module.from_json(body)
|
||||
except TypeError:
|
||||
self.module.fail_json(msg="Response from %s expected to be a " +
|
||||
"expected string or buffer." % url)
|
||||
except ValueError:
|
||||
return body, info['status']
|
||||
|
||||
return json_body, info['status']
|
||||
|
||||
def get_resource_endpoint(self, kind):
|
||||
return self.kinds[kind]
|
||||
|
||||
# Attempts to 'kindly' merge the dictionaries into a new object definition
|
||||
def merge(self, source, destination, changed):
|
||||
|
||||
for key, value in source.items():
|
||||
if isinstance(value, dict):
|
||||
# get node or create one
|
||||
try:
|
||||
node = destination.setdefault(key, {})
|
||||
except AttributeError:
|
||||
node = {}
|
||||
finally:
|
||||
x, changed = self.merge(value, node, changed)
|
||||
|
||||
elif isinstance(value, list) and key in destination.keys():
|
||||
if destination[key] != source[key]:
|
||||
destination[key] = source[key]
|
||||
changed = True
|
||||
|
||||
elif (key not in destination.keys() or
|
||||
destination[key] != source[key]):
|
||||
destination[key] = value
|
||||
changed = True
|
||||
return destination, changed
|
||||
|
||||
|
||||
def main():
|
||||
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
host=dict(type='str', default='127.0.0.1'),
|
||||
port=dict(type='int', default=8443),
|
||||
definition=dict(aliases=['def', 'inline'],
|
||||
type='dict'),
|
||||
kind=dict(type='str'),
|
||||
name=dict(type='str'),
|
||||
namespace=dict(type='str'),
|
||||
token=dict(required=True, type='str', no_log=True),
|
||||
state=dict(required=True,
|
||||
choices=['present', 'absent']),
|
||||
validate_certs=dict(type='bool', default='yes')
|
||||
),
|
||||
mutually_exclusive=(['kind', 'definition'],
|
||||
['name', 'definition'],
|
||||
['namespace', 'definition']),
|
||||
required_if=([['state', 'absent', ['kind']]]),
|
||||
required_one_of=([['kind', 'definition']]),
|
||||
no_log=False,
|
||||
supports_check_mode=True
|
||||
)
|
||||
kind = None
|
||||
definition = None
|
||||
name = None
|
||||
namespace = None
|
||||
|
||||
host = module.params['host']
|
||||
port = module.params['port']
|
||||
definition = module.params['definition']
|
||||
state = module.params['state']
|
||||
kind = module.params['kind']
|
||||
name = module.params['name']
|
||||
namespace = module.params['namespace']
|
||||
token = module.params['token']
|
||||
|
||||
if definition is None:
|
||||
definition = {}
|
||||
definition['metadata'] = {}
|
||||
definition['metadata']['name'] = name
|
||||
definition['metadata']['namespace'] = namespace
|
||||
|
||||
if "apiVersion" not in definition.keys():
|
||||
definition['apiVersion'] = 'v1'
|
||||
if "kind" not in definition.keys():
|
||||
definition['kind'] = kind
|
||||
|
||||
result = None
|
||||
oc = OC(module, token, host, port)
|
||||
resource = NamedResource(module,
|
||||
definition,
|
||||
oc.get_resource_endpoint(definition['kind']))
|
||||
|
||||
changed = False
|
||||
method = ''
|
||||
exists = oc.exists(resource)
|
||||
module.log(msg="URL %s" % resource.url())
|
||||
|
||||
if state == 'present' and exists:
|
||||
method = 'put'
|
||||
result, changed = oc.replace(resource, module.check_mode)
|
||||
elif state == 'present' and not exists and definition is not None:
|
||||
method = 'create'
|
||||
if not module.check_mode:
|
||||
result, changed = oc.create(resource)
|
||||
else:
|
||||
changed = True
|
||||
result = definition
|
||||
elif state == 'absent' and exists:
|
||||
method = 'delete'
|
||||
if not module.check_mode:
|
||||
result, changed = oc.delete(resource)
|
||||
else:
|
||||
changed = True
|
||||
result = definition
|
||||
|
||||
facts = {}
|
||||
|
||||
if result is not None and "items" in result:
|
||||
result['item_list'] = result.pop('items')
|
||||
elif result is None and state == 'present':
|
||||
result = 'Resource not present and no inline provided.'
|
||||
facts['oc'] = {'definition': result,
|
||||
'url': resource.url(),
|
||||
'method': method}
|
||||
|
||||
module.exit_json(changed=changed, ansible_facts=facts)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -1,346 +1,15 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_asn_pool
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS ASN Pool
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- The Apstra AOS ASN Pool module lets you manage your ASN Pool easily. You can create
|
||||
and delete ASN Pool by Name, ID or by using a JSON File. This module
|
||||
is idempotent and supports the I(check) mode. It uses the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the ASN Pool to manage.
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the ASN Pool to manage.
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the ASN Pool to manage. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned
|
||||
on success in I(value).
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the ASN Pool (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
ranges:
|
||||
description:
|
||||
- List of ASNs ranges to add to the ASN Pool. Each range must have 2 values.
|
||||
'''
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Create ASN Pool"
|
||||
aos_asn_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-asn-pool"
|
||||
ranges:
|
||||
- [ 100, 200 ]
|
||||
state: present
|
||||
register: asnpool
|
||||
|
||||
- name: "Save ASN Pool into a file in JSON"
|
||||
copy:
|
||||
content: "{{ asnpool.value | to_nice_json }}"
|
||||
dest: resources/asn_pool_saved.json
|
||||
|
||||
- name: "Save ASN Pool into a file in YAML"
|
||||
copy:
|
||||
content: "{{ asnpool.value | to_nice_yaml }}"
|
||||
dest: resources/asn_pool_saved.yaml
|
||||
|
||||
|
||||
- name: "Delete ASN Pool"
|
||||
aos_asn_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-asn-pool"
|
||||
state: absent
|
||||
|
||||
- name: "Load ASN Pool from File(JSON)"
|
||||
aos_asn_pool:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/asn_pool_saved.json') }}"
|
||||
state: present
|
||||
|
||||
- name: "Delete ASN Pool from File(JSON)"
|
||||
aos_asn_pool:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/asn_pool_saved.json') }}"
|
||||
state: absent
|
||||
|
||||
- name: "Load ASN Pool from File(Yaml)"
|
||||
aos_asn_pool:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/asn_pool_saved.yaml') }}"
|
||||
state: present
|
||||
register: test
|
||||
|
||||
- name: "Delete ASN Pool from File(Yaml)"
|
||||
aos_asn_pool:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/asn_pool_saved.yaml') }}"
|
||||
state: absent
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the ASN Pool
|
||||
returned: always
|
||||
type: str
|
||||
sample: Private-ASN-pool
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the ASN Pool
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import json
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
|
||||
|
||||
def check_ranges_are_valid(module, ranges):
|
||||
|
||||
i = 1
|
||||
for range in ranges:
|
||||
if not isinstance(range, list):
|
||||
module.fail_json(msg="Range (%i) must be a list not %s" % (i, type(range)))
|
||||
elif len(range) != 2:
|
||||
module.fail_json(msg="Range (%i) must be a list of 2 members, not %i" % (i, len(range)))
|
||||
elif not isinstance(range[0], int):
|
||||
module.fail_json(msg="1st element of range (%i) must be integer instead of %s " % (i, type(range[0])))
|
||||
elif not isinstance(range[1], int):
|
||||
module.fail_json(msg="2nd element of range (%i) must be integer instead of %s " % (i, type(range[1])))
|
||||
elif range[1] <= range[0]:
|
||||
module.fail_json(msg="2nd element of range (%i) must be bigger than 1st " % (i))
|
||||
|
||||
i += 1
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def get_list_of_range(asn_pool):
|
||||
ranges = []
|
||||
|
||||
for range in asn_pool.value['ranges']:
|
||||
ranges.append([range['first'], range['last']])
|
||||
|
||||
return ranges
|
||||
|
||||
|
||||
def create_new_asn_pool(asn_pool, name, ranges):
|
||||
|
||||
# Create value
|
||||
datum = dict(display_name=name, ranges=[])
|
||||
for range in ranges:
|
||||
datum['ranges'].append(dict(first=range[0], last=range[1]))
|
||||
|
||||
asn_pool.datum = datum
|
||||
|
||||
# Write to AOS
|
||||
return asn_pool.write()
|
||||
|
||||
|
||||
def asn_pool_absent(module, aos, my_pool):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If the pool does not exist, return directly
|
||||
if my_pool.exists is False:
|
||||
module.exit_json(changed=False, name=margs['name'], id='', value={})
|
||||
|
||||
# Check if object is currently in Use or Not
|
||||
# If in Use, return an error
|
||||
if my_pool.value:
|
||||
if my_pool.value['status'] != 'not_in_use':
|
||||
module.fail_json(msg="Unable to delete ASN Pool '%s' is currently in use" % my_pool.name)
|
||||
else:
|
||||
module.fail_json(msg="ASN Pool object has an invalid format, value['status'] must be defined")
|
||||
|
||||
# If not in check mode, delete Ip Pool
|
||||
if not module.check_mode:
|
||||
try:
|
||||
my_pool.delete()
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred, while trying to delete the ASN Pool")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_pool.name,
|
||||
id=my_pool.id,
|
||||
value={})
|
||||
|
||||
|
||||
def asn_pool_present(module, aos, my_pool):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# if content is defined, create object from Content
|
||||
if margs['content'] is not None:
|
||||
|
||||
if 'display_name' in module.params['content'].keys():
|
||||
do_load_resource(module, aos.AsnPools, module.params['content']['display_name'])
|
||||
else:
|
||||
module.fail_json(msg="Unable to find display_name in 'content', Mandatory")
|
||||
|
||||
# if asn_pool doesn't exist already, create a new one
|
||||
if my_pool.exists is False and 'name' not in margs.keys():
|
||||
module.fail_json(msg="name is mandatory for module that don't exist currently")
|
||||
|
||||
elif my_pool.exists is False:
|
||||
|
||||
if not module.check_mode:
|
||||
try:
|
||||
my_new_pool = create_new_asn_pool(my_pool, margs['name'], margs['ranges'])
|
||||
my_pool = my_new_pool
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred while trying to create a new ASN Pool ")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_pool.name,
|
||||
id=my_pool.id,
|
||||
value=my_pool.value)
|
||||
|
||||
# Currently we only check whether the pool exists or not;
|
||||
# if it exists, we return changed=False.
|
||||
#
|
||||
# Later it would be good to check whether the list of ASN ranges is the same:
|
||||
# if the pool already exists, check whether the list of ASN ranges matches;
|
||||
# if it does, just return the object and report changed=False.
|
||||
# if set(get_list_of_range(my_pool)) == set(margs['ranges']):
|
||||
module.exit_json(changed=False,
|
||||
name=my_pool.name,
|
||||
id=my_pool.id,
|
||||
value=my_pool.value)
|
||||
|
||||
# ########################################################
|
||||
# Main Function
|
||||
# ########################################################
|
||||
|
||||
|
||||
def asn_pool(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
# Check ID / Name and Content
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# If ranges are provided, check if they are valid
|
||||
if 'ranges' in margs.keys():
|
||||
check_ranges_are_valid(module, margs['ranges'])
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
try:
|
||||
my_pool = find_collection_item(aos.AsnPools,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the IP Pool based on name or ID, something went wrong")
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
asn_pool_absent(module, aos, my_pool)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
asn_pool_present(module, aos, my_pool)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(required=False,
|
||||
choices=['present', 'absent'],
|
||||
default="present"),
|
||||
ranges=dict(required=False, type="list", default=[])
|
||||
),
|
||||
mutually_exclusive=[('name', 'id', 'content')],
|
||||
required_one_of=[('name', 'id', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
asn_pool(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -1,300 +1,15 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_blueprint
|
||||
author: jeremy@apstra.com (@jeremyschulman)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS blueprint instance
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Blueprint module lets you manage your Blueprints easily. You can
|
||||
create and delete Blueprints by name or ID. You can also use it to retrieve
|
||||
all data from a blueprint. This module is idempotent
|
||||
and supports the I(check) mode. It's using the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the Blueprint to manage.
|
||||
Only one of I(name) or I(id) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the Blueprint to manage (can't be used to create a new Blueprint).
|
||||
Only one of I(name) or I(id) can be set.
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the Blueprint.
|
||||
choices: ['present', 'absent', 'build-ready']
|
||||
default: present
|
||||
timeout:
|
||||
description:
|
||||
- When I(state=build-ready), this value identifies the timeout, in seconds, to wait before
|
||||
declaring a failure.
|
||||
default: 5
|
||||
template:
|
||||
description:
|
||||
- When creating a blueprint, this value identifies, by name, an existing engineering
|
||||
design template within the AOS-server.
|
||||
reference_arch:
|
||||
description:
|
||||
- When creating a blueprint, this value identifies a known AOS reference
|
||||
architecture value. I(Refer to AOS-server documentation for available values).
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
- name: Creating blueprint
|
||||
aos_blueprint:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-blueprint"
|
||||
template: "my-template"
|
||||
reference_arch: two_stage_l3clos
|
||||
state: present
|
||||
|
||||
- name: Access a blueprint and get content
|
||||
aos_blueprint:
|
||||
session: "{{ aos_session }}"
|
||||
name: "{{ blueprint_name }}"
|
||||
template: "{{ blueprint_template }}"
|
||||
state: present
|
||||
register: bp
|
||||
|
||||
- name: Delete a blueprint
|
||||
aos_blueprint:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-blueprint"
|
||||
state: absent
|
||||
|
||||
- name: Await blueprint build-ready, and obtain contents
|
||||
aos_blueprint:
|
||||
session: "{{ aos_session }}"
|
||||
name: "{{ blueprint_name }}"
|
||||
state: build-ready
|
||||
register: bp
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the Blueprint
|
||||
returned: always
|
||||
type: str
|
||||
sample: My-Blueprint
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the Blueprint
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Information about the Blueprint
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
|
||||
contents:
|
||||
description: Blueprint contents data-dictionary
|
||||
returned: always
|
||||
type: dict
|
||||
sample: { ... }
|
||||
|
||||
build_errors:
|
||||
description: When state='build-ready', and build errors exist, this contains list of errors
|
||||
returned: only when build-ready returns fail
|
||||
type: list
|
||||
sample: [{...}, {...}]
|
||||
'''
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, check_aos_version, find_collection_item
|
||||
|
||||
|
||||
def create_blueprint(module, aos, name):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
|
||||
template_id = aos.DesignTemplates[margs['template']].id
|
||||
|
||||
# Create a new Object based on the name
|
||||
blueprint = aos.Blueprints[name]
|
||||
blueprint.create(template_id, reference_arch=margs['reference_arch'])
|
||||
|
||||
except Exception as exc:
|
||||
msg = "Unable to create blueprint: %s" % exc.message
|
||||
if 'UNPROCESSABLE ENTITY' in exc.message:
|
||||
msg += ' (likely missing dependencies)'
|
||||
|
||||
module.fail_json(msg=msg)
|
||||
|
||||
return blueprint
|
||||
|
||||
|
||||
def ensure_absent(module, aos, blueprint):
|
||||
|
||||
if blueprint.exists is False:
|
||||
module.exit_json(changed=False)
|
||||
|
||||
else:
|
||||
|
||||
if not module.check_mode:
|
||||
try:
|
||||
blueprint.delete()
|
||||
except Exception as exc:
|
||||
module.fail_json(msg='Unable to delete blueprint, %s' % exc.message)
|
||||
|
||||
module.exit_json(changed=True,
|
||||
id=blueprint.id,
|
||||
name=blueprint.name)
|
||||
|
||||
|
||||
def ensure_present(module, aos, blueprint):
|
||||
margs = module.params
|
||||
|
||||
if blueprint.exists:
|
||||
module.exit_json(changed=False,
|
||||
id=blueprint.id,
|
||||
name=blueprint.name,
|
||||
value=blueprint.value,
|
||||
contents=blueprint.contents)
|
||||
|
||||
else:
|
||||
|
||||
# Check if template is defined and is valid
|
||||
if margs['template'] is None:
|
||||
module.fail_json(msg="You must define a 'template' name to create a new blueprint, currently missing")
|
||||
|
||||
elif aos.DesignTemplates.find(label=margs['template']) is None:
|
||||
module.fail_json(msg="You must define a Valid 'template' name to create a new blueprint, %s is not valid" % margs['template'])
|
||||
|
||||
# Check if reference_arch
|
||||
if margs['reference_arch'] is None:
|
||||
module.fail_json(msg="You must define a 'reference_arch' to create a new blueprint, currently missing")
|
||||
|
||||
if not module.check_mode:
|
||||
blueprint = create_blueprint(module, aos, margs['name'])
|
||||
module.exit_json(changed=True,
|
||||
id=blueprint.id,
|
||||
name=blueprint.name,
|
||||
value=blueprint.value,
|
||||
contents=blueprint.contents)
|
||||
else:
|
||||
module.exit_json(changed=True,
|
||||
name=margs['name'])
|
||||
|
||||
|
||||
def ensure_build_ready(module, aos, blueprint):
|
||||
margs = module.params
|
||||
|
||||
if not blueprint.exists:
|
||||
module.fail_json(msg='blueprint %s does not exist' % blueprint.name)
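# Note: the 'timeout' option is documented in seconds, while await_build_ready()
# below appears to expect milliseconds, hence the * 1000 conversion.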
|
||||
|
||||
if blueprint.await_build_ready(timeout=margs['timeout'] * 1000):
|
||||
module.exit_json(contents=blueprint.contents)
|
||||
else:
|
||||
module.fail_json(msg='blueprint %s has build errors' % blueprint.name,
|
||||
build_errors=blueprint.build_errors)
|
||||
|
||||
|
||||
def aos_blueprint(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
try:
|
||||
my_blueprint = find_collection_item(aos.Blueprints,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the Blueprint based on name or ID, something went wrong")
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
ensure_absent(module, aos, my_blueprint)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
ensure_present(module, aos, my_blueprint)
|
||||
|
||||
elif margs['state'] == 'build-ready':
|
||||
|
||||
ensure_build_ready(module, aos, my_blueprint)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
state=dict(choices=[
|
||||
'present', 'absent', 'build-ready'],
|
||||
default='present'),
|
||||
timeout=dict(type="int", default=5),
|
||||
template=dict(required=False),
|
||||
reference_arch=dict(required=False)
|
||||
),
|
||||
mutually_exclusive=[('name', 'id')],
|
||||
required_one_of=[('name', 'id')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
aos_blueprint(module)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -1,385 +1,15 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_blueprint_param
|
||||
author: jeremy@apstra.com (@jeremyschulman)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS blueprint parameter values
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Blueprint Parameter module lets you manage your Blueprint Parameters easily.
|
||||
You can create, access, define and delete Blueprint Parameters. The list of
|
||||
Parameters supported is different per Blueprint. The option I(get_param_list)
|
||||
can help you to access the list of supported Parameters for your blueprint.
|
||||
This module is idempotent and supports the I(check) mode. It's using the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
blueprint:
|
||||
description:
|
||||
- Blueprint Name or Id as defined in AOS.
|
||||
required: True
|
||||
name:
|
||||
description:
|
||||
- Name of blueprint parameter, as defined by AOS design template. You can
|
||||
use the option I(get_param_list) to get the complete list of supported
|
||||
parameters for your blueprint.
|
||||
value:
|
||||
description:
|
||||
- Blueprint parameter value. This value may be transformed by using the
|
||||
I(param_map) field; used when the blueprint parameter requires
|
||||
an AOS unique ID value.
|
||||
get_param_list:
|
||||
description:
|
||||
- Get the complete list of supported parameters for this blueprint and the
|
||||
description of those parameters.
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the Blueprint Parameter (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
param_map:
|
||||
description:
|
||||
- Defines the aos-pyez collection that is used to map the user-defined
|
||||
item name into the AOS unique ID value. For example, if the caller
|
||||
provides an IP address pool I(param_value) called "Server-IpAddrs", then
|
||||
the aos-pyez collection is 'IpPools'. Some I(param_map) are already defined
|
||||
by default like I(logical_device_maps).
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: Add Logical Device Maps information in a Blueprint
|
||||
aos_blueprint_param:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
name: "logical_device_maps"
|
||||
value:
|
||||
spine_1: CumulusVX-Spine-Switch
|
||||
spine_2: CumulusVX-Spine-Switch
|
||||
leaf_1: CumulusVX-Leaf-Switch
|
||||
leaf_2: CumulusVX-Leaf-Switch
|
||||
leaf_3: CumulusVX-Leaf-Switch
|
||||
state: present
|
||||
|
||||
- name: Access Logical Device Maps information from a Blueprint
|
||||
aos_blueprint_param:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
name: "logical_device_maps"
|
||||
state: present
|
||||
|
||||
- name: Reset Logical Device Maps information in a Blueprint
|
||||
aos_blueprint_param:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
name: "logical_device_maps"
|
||||
state: absent
|
||||
|
||||
- name: Get list of all supported Params for a blueprint
|
||||
aos_blueprint_param:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
get_param_list: yes
|
||||
register: params_list
|
||||
- debug: var=params_list
|
||||
|
||||
- name: Add Resource Pools information in a Blueprint, by providing a param_map
|
||||
aos_blueprint_param:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
name: "resource_pools"
|
||||
value:
|
||||
leaf_loopback_ips: ['Switches-IpAddrs']
|
||||
spine_loopback_ips: ['Switches-IpAddrs']
|
||||
spine_leaf_link_ips: ['Switches-IpAddrs']
|
||||
spine_asns: ['Private-ASN-pool']
|
||||
leaf_asns: ['Private-ASN-pool']
|
||||
virtual_network_svi_subnets: ['Servers-IpAddrs']
|
||||
param_map:
|
||||
leaf_loopback_ips: IpPools
|
||||
spine_loopback_ips: IpPools
|
||||
spine_leaf_link_ips: IpPools
|
||||
spine_asns: AsnPools
|
||||
leaf_asns: AsnPools
|
||||
virtual_network_svi_subnets: IpPools
|
||||
state: present
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
blueprint:
|
||||
description: Name of the Blueprint
|
||||
returned: always
|
||||
type: str
|
||||
sample: Server-IpAddrs
|
||||
|
||||
name:
|
||||
description: Name of the Blueprint Parameter
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the Blueprint Parameter as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
|
||||
params_list:
|
||||
description: Dictionary of all parameters supported by this blueprint, as returned by the AOS Server
|
||||
returned: when I(get_param_list) is defined.
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import json
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, check_aos_version
|
||||
from ansible.module_utils._text import to_native
|
||||
|
||||
try:
|
||||
import yaml
|
||||
HAS_YAML = True
|
||||
except ImportError:
|
||||
HAS_YAML = False
|
||||
|
||||
try:
|
||||
from apstra.aosom.collection_mapper import CollectionMapper, MultiCollectionMapper
|
||||
HAS_AOS_PYEZ_MAPPER = True
|
||||
except ImportError:
|
||||
HAS_AOS_PYEZ_MAPPER = False
|
||||
|
||||
param_map_list = dict(
|
||||
logical_device_maps='LogicalDeviceMaps',
|
||||
resource_pools=dict(
|
||||
spine_asns="AsnPools",
|
||||
leaf_asns="AsnPools",
|
||||
virtual_network_svi_subnets="IpPools",
|
||||
spine_loopback_ips="IpPools",
|
||||
leaf_loopback_ips="IpPools",
|
||||
spine_leaf_link_ips="IpPools"
|
||||
)
|
||||
)
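# Illustrative note (not in the original module): with name=resource_pools and
# value={'spine_asns': ['Private-ASN-pool']}, the label 'Private-ASN-pool' is
# resolved through the AsnPools collection into its AOS unique ID before the
# blueprint parameter is written.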
|
||||
|
||||
|
||||
def get_collection_from_param_map(module, aos):
|
||||
|
||||
param_map = None
|
||||
|
||||
# Check if param_map is provided
|
||||
if module.params['param_map'] is not None:
|
||||
param_map_json = module.params['param_map']
|
||||
|
||||
if not HAS_YAML:
|
||||
module.fail_json(msg="Python library Yaml is mandatory to use 'param_map'")
|
||||
|
||||
try:
|
||||
param_map = yaml.safe_load(param_map_json)
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to parse param_map information")
|
||||
|
||||
else:
|
||||
# search in the param_map_list to find the right one
|
||||
for key, value in param_map_list.items():
|
||||
if module.params['name'] == key:
|
||||
param_map = value
|
||||
|
||||
# If param_map is defined, search for a Collection that matches
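# (a dict param_map, like the resource_pools default above, maps each key to
# its own collection via MultiCollectionMapper; a plain string maps the whole
# value through a single collection)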
|
||||
if param_map:
|
||||
if isinstance(param_map, dict):
|
||||
return MultiCollectionMapper(aos, param_map)
|
||||
else:
|
||||
return CollectionMapper(getattr(aos, param_map))
|
||||
|
||||
return None
|
||||
|
||||
|
||||
def blueprint_param_present(module, aos, blueprint, param, param_value):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If param_value is not defined, just return the object
|
||||
if not param_value:
|
||||
module.exit_json(changed=False,
|
||||
blueprint=blueprint.name,
|
||||
name=param.name,
|
||||
value=param.value)
|
||||
|
||||
# Check if current value is the same or not
|
||||
elif param.value != param_value:
|
||||
if not module.check_mode:
|
||||
try:
|
||||
param.value = param_value
|
||||
except Exception as exc:
|
||||
module.fail_json(msg='unable to write to param %s: %s' %
|
||||
(margs['name'], to_native(exc)))
|
||||
|
||||
module.exit_json(changed=True,
|
||||
blueprint=blueprint.name,
|
||||
name=param.name,
|
||||
value=param.value)
|
||||
|
||||
# If value are already the same, nothing needs to be changed
|
||||
else:
|
||||
module.exit_json(changed=False,
|
||||
blueprint=blueprint.name,
|
||||
name=param.name,
|
||||
value=param.value)
|
||||
|
||||
|
||||
def blueprint_param_absent(module, aos, blueprint, param, param_value):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# Check if current value is the same or not
|
||||
if param.value != dict():
|
||||
if not module.check_mode:
|
||||
try:
|
||||
param.value = {}
|
||||
except Exception as exc:
|
||||
module.fail_json(msg='Unable to write to param %s: %s' % (margs['name'], to_native(exc)))
|
||||
|
||||
module.exit_json(changed=True,
|
||||
blueprint=blueprint.name,
|
||||
name=param.name,
|
||||
value=param.value)
|
||||
|
||||
else:
|
||||
module.exit_json(changed=False,
|
||||
blueprint=blueprint.name,
|
||||
name=param.name,
|
||||
value=param.value)
|
||||
|
||||
|
||||
def blueprint_param(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Get AOS session object based on Session Info
|
||||
# --------------------------------------------------------------------
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Get the blueprint Object based on either name or ID
|
||||
# --------------------------------------------------------------------
|
||||
try:
|
||||
blueprint = find_collection_item(aos.Blueprints,
|
||||
item_name=margs['blueprint'],
|
||||
item_id=margs['blueprint'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the Blueprint based on name or ID, something went wrong")
|
||||
|
||||
if blueprint.exists is False:
|
||||
module.fail_json(msg='Blueprint %s does not exist.\n'
|
||||
'known blueprints are [%s]' %
|
||||
(margs['blueprint'], ','.join(aos.Blueprints.names)))
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# If get_param_list is defined, build the list of supported parameters
|
||||
# and extract info for each
|
||||
# --------------------------------------------------------------------
|
||||
if margs['get_param_list']:
|
||||
|
||||
params_list = {}
|
||||
for param in blueprint.params.names:
|
||||
params_list[param] = blueprint.params[param].info
|
||||
|
||||
module.exit_json(changed=False,
|
||||
blueprint=blueprint.name,
|
||||
params_list=params_list)
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Check Param name, return an error if not supported by this blueprint
|
||||
# --------------------------------------------------------------------
|
||||
if margs['name'] in blueprint.params.names:
|
||||
param = blueprint.params[margs['name']]
|
||||
else:
|
||||
module.fail_json(msg='unable to access param %s' % margs['name'])
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Check if param_value needs to be converted to an object
|
||||
# based on param_map
|
||||
# --------------------------------------------------------------------
|
||||
param_value = margs['value']
|
||||
param_collection = get_collection_from_param_map(module, aos)
|
||||
|
||||
# If a collection is found and param_value is defined,
|
||||
# convert param_value into an object
|
||||
if param_collection and param_value:
|
||||
param_value = param_collection.from_label(param_value)
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# --------------------------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
blueprint_param_absent(module, aos, blueprint, param, param_value)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
blueprint_param_present(module, aos, blueprint, param, param_value)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
blueprint=dict(required=True),
|
||||
get_param_list=dict(required=False, type="bool"),
|
||||
name=dict(required=False),
|
||||
value=dict(required=False, type="dict"),
|
||||
param_map=dict(required=False),
|
||||
state=dict(choices=['present', 'absent'], default='present')
|
||||
),
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
# aos-pyez availability has already been verified by "check_aos_version"
|
||||
# but this module requires a few more objects
|
||||
if not HAS_AOS_PYEZ_MAPPER:
|
||||
module.fail_json(msg='unable to load the Mapper library from aos-pyez')
|
||||
|
||||
blueprint_param(module)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -1,221 +1,15 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_blueprint_virtnet
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS blueprint parameter values
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Blueprint Virtual Network module lets you manage your Virtual Networks easily.
|
||||
You can create, access, define and delete Virtual Networks by name or by using a JSON / Yaml file.
|
||||
This module is idempotent and supports the I(check) mode. It's using the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
blueprint:
|
||||
description:
|
||||
- Blueprint Name or Id as defined in AOS.
|
||||
required: True
|
||||
name:
|
||||
description:
|
||||
- Name of Virtual Network as part of the Blueprint.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the Virtual Network to manage. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned on success in I(value).
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the Virtual Network (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Access Existing Virtual Network"
|
||||
aos_blueprint_virtnet:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
name: "my-virtual-network"
|
||||
state: present
|
||||
|
||||
- name: "Delete Virtual Network with JSON File"
|
||||
aos_blueprint_virtnet:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
content: "{{ lookup('file', 'resources/virtual-network-02.json') }}"
|
||||
state: absent
|
||||
|
||||
- name: "Create Virtual Network"
|
||||
aos_blueprint_virtnet:
|
||||
session: "{{ aos_session }}"
|
||||
blueprint: "my-blueprint-l2"
|
||||
content: "{{ lookup('file', 'resources/virtual-network-02.json') }}"
|
||||
state: present
|
||||
'''
|
||||
|
||||
import json
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils._text import to_native
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
|
||||
|
||||
def ensure_present(module, aos, blueprint, virtnet):
|
||||
|
||||
# if it already exists, return it unchanged
|
||||
if virtnet.exists:
|
||||
module.exit_json(changed=False,
|
||||
blueprint=blueprint.name,
|
||||
name=virtnet.name,
|
||||
id=virtnet.id,
|
||||
value=virtnet.value)
|
||||
|
||||
else:
|
||||
if not module.check_mode:
|
||||
try:
|
||||
virtnet.create(module.params['content'])
|
||||
except Exception as e:
|
||||
module.fail_json(msg="unable to create virtual-network : %s" % to_native(e))
|
||||
|
||||
module.exit_json(changed=True,
|
||||
blueprint=blueprint.name,
|
||||
name=virtnet.name,
|
||||
id=virtnet.id,
|
||||
value=virtnet.value)
|
||||
|
||||
|
||||
def ensure_absent(module, aos, blueprint, virtnet):
|
||||
|
||||
if virtnet.exists:
|
||||
if not module.check_mode:
|
||||
try:
|
||||
virtnet.delete()
|
||||
except Exception as e:
|
||||
module.fail_json(msg="unable to delete virtual-network %s : %s" % (virtnet.name, to_native(e)))
|
||||
|
||||
module.exit_json(changed=True,
|
||||
blueprint=blueprint.name)
|
||||
|
||||
else:
|
||||
module.exit_json(changed=False,
|
||||
blueprint=blueprint.name)
|
||||
|
||||
|
||||
def blueprint_virtnet(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Get AOS session object based on Session Info
|
||||
# --------------------------------------------------------------------
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Get the blueprint Object based on either name or ID
|
||||
# --------------------------------------------------------------------
|
||||
try:
|
||||
blueprint = find_collection_item(aos.Blueprints,
|
||||
item_name=margs['blueprint'],
|
||||
item_id=margs['blueprint'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the Blueprint based on name or ID, something went wrong")
|
||||
|
||||
if blueprint.exists is False:
|
||||
module.fail_json(msg='Blueprint %s does not exist.\n'
|
||||
'known blueprints are [%s]' %
|
||||
(margs['blueprint'], ','.join(aos.Blueprints.names)))
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Convert "content" to dict and extract name
|
||||
# --------------------------------------------------------------------
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Try to find VirtualNetwork object
|
||||
# --------------------------------------------------------------------
|
||||
try:
|
||||
virtnet = blueprint.VirtualNetworks[item_name]
|
||||
except Exception:
|
||||
module.fail_json(msg="Something went wrong while trying to find Virtual Network %s in blueprint %s"
|
||||
% (item_name, blueprint.name))
|
||||
|
||||
# --------------------------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# --------------------------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
ensure_absent(module, aos, blueprint, virtnet)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
ensure_present(module, aos, blueprint, virtnet)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
blueprint=dict(required=True),
|
||||
name=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(choices=['present', 'absent'], default='present')
|
||||
),
|
||||
mutually_exclusive=[('name', 'content')],
|
||||
required_one_of=[('name', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
blueprint_virtnet(module)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -1,222 +1,15 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_device
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage Devices on AOS Server
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Device module lets you manage your devices in AOS easily. You can
|
||||
approve devices and define in which state the device should be. Currently
|
||||
only the state I(normal) is supported but the goal is to extend this module
|
||||
with additional states. This module is idempotent and supports the I(check) mode.
|
||||
It's using the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- The device serial-number; i.e. uniquely identifies the device in the
|
||||
AOS system. Only one of I(name) or I(id) can be set.
|
||||
id:
|
||||
description:
|
||||
- The AOS internal id for a device; i.e. uniquely identifies the device in the
|
||||
AOS system. Only one of I(name) or I(id) can be set.
|
||||
state:
|
||||
description:
|
||||
- Define in which state the device should be. Currently only I(normal)
|
||||
is supported but the goal is to add I(maint) and I(decomm).
|
||||
default: normal
|
||||
choices: ['normal']
|
||||
approve:
|
||||
description:
|
||||
- The approve argument instructs the module to convert a device in quarantine
|
||||
mode into approved mode.
|
||||
default: "no"
|
||||
type: bool
|
||||
location:
|
||||
description:
|
||||
- When approving a device using the I(approve) argument, it's possible to
|
||||
define the location of the device.
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: Approve a new device
|
||||
aos_device:
|
||||
session: "{{ aos_session }}"
|
||||
name: D2060B2F105429GDABCD123
|
||||
state: 'normal'
|
||||
approve: true
|
||||
location: "rack-45, ru-18"
|
||||
'''
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the Device, usually the serial-number.
|
||||
returned: always
|
||||
type: str
|
||||
sample: Server-IpAddrs
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the Device
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import HAS_AOS_PYEZ, get_aos_session, check_aos_version, find_collection_item
|
||||
|
||||
if HAS_AOS_PYEZ:
|
||||
from apstra.aosom.exc import SessionError, SessionRqstError
|
||||
|
||||
|
||||
def aos_device_normal(module, aos, dev):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If approve is defined, check whether the device needs to be approved or not
|
||||
if margs['approve'] is not None:
|
||||
|
||||
if dev.is_approved:
|
||||
module.exit_json(changed=False,
|
||||
name=dev.name,
|
||||
id=dev.id,
|
||||
value=dev.value)
|
||||
|
||||
if not module.check_mode:
|
||||
try:
|
||||
dev.approve(location=margs['location'])
|
||||
except (SessionError, SessionRqstError):
|
||||
module.fail_json(msg="Unable to approve device")\
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=dev.name,
|
||||
id=dev.id,
|
||||
value=dev.value)
|
||||
else:
|
||||
# Check if the device is online
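# ('OOS-READY' and 'IS-READY' map to the DEVICE_STATE_OOS_READY and
# DEVICE_STATE_IS_READY entries in the state table listed in aos_device()
# below; any other state makes the module fail)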
|
||||
if dev.state in ('OOS-READY', 'IS-READY'):
|
||||
module.exit_json(changed=False,
|
||||
name=dev.name,
|
||||
id=dev.id,
|
||||
value=dev.value)
|
||||
else:
|
||||
module.fail_json(msg="Device is in '%s' state" % dev.state)
|
||||
|
||||
|
||||
def aos_device(module):
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
dev = find_collection_item(aos.Devices,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
|
||||
if dev.exists is False:
|
||||
module.fail_json(msg="unknown device '%s'" % margs['name'])
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Valid device state for reference
|
||||
# ----------------------------------------------------
|
||||
# DEVICE_STATE_IS_ACTIVE = 1;
|
||||
# DEVICE_STATE_IS_READY = 2;
|
||||
# DEVICE_STATE_IS_NOCOMMS = 3;
|
||||
# DEVICE_STATE_IS_MAINT = 4;
|
||||
# DEVICE_STATE_IS_REBOOTING = 5;
|
||||
# DEVICE_STATE_OOS_STOCKED = 6;
|
||||
# DEVICE_STATE_OOS_QUARANTINED = 7;
|
||||
# DEVICE_STATE_OOS_READY = 8;
|
||||
# DEVICE_STATE_OOS_NOCOMMS = 9;
|
||||
# DEVICE_STATE_OOS_DECOMM = 10;
|
||||
# DEVICE_STATE_OOS_MAINT = 11;
|
||||
# DEVICE_STATE_OOS_REBOOTING = 12;
|
||||
# DEVICE_STATE_ERROR = 13;
|
||||
# ----------------------------------------------------
|
||||
# State == Normal
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'normal':
|
||||
aos_device_normal(module, aos, dev)
|
||||
|
||||
|
||||
def main():
|
||||
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
state=dict(choices=['normal'],
|
||||
default='normal'),
|
||||
approve=dict(required=False, type='bool'),
|
||||
location=dict(required=False, default='')
|
||||
),
|
||||
mutually_exclusive=[('name', 'id')],
|
||||
required_one_of=[('name', 'id')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
aos_device(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -1,342 +1,15 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_external_router
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS External Router
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS External Router module lets you manage your External Routers easily. You can
|
||||
create and delete External Routers by name, ID or by using a JSON file. This module
|
||||
is idempotent and supports the I(check) mode. It's using the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the External Router to manage.
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the External Router to manage (can't be used to create a new External Router),
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the External Router to create. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned
|
||||
on success in I(value).
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the External Router (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
loopback:
|
||||
description:
|
||||
- IP address of the Loopback interface of the external_router.
|
||||
asn:
|
||||
description:
|
||||
- ASN id of the external_router.
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Create an External Router"
|
||||
aos_external_router:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-external-router"
|
||||
loopback: 10.0.0.1
|
||||
asn: 65000
|
||||
state: present
|
||||
|
||||
- name: "Check if an External Router exist by ID"
|
||||
aos_external_router:
|
||||
session: "{{ aos_session }}"
|
||||
name: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: present
|
||||
|
||||
- name: "Delete an External Router by name"
|
||||
aos_external_router:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-external-router"
|
||||
state: absent
|
||||
|
||||
- name: "Delete an External Router by id"
|
||||
aos_external_router:
|
||||
session: "{{ aos_session }}"
|
||||
id: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: absent
|
||||
|
||||
# Save an External Router to a file
|
||||
- name: "Access External Router 1/3"
|
||||
aos_external_router:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-external-router"
|
||||
state: present
|
||||
register: external_router
|
||||
|
||||
- name: "Save External Router into a file in JSON 2/3"
|
||||
copy:
|
||||
content: "{{ external_router.value | to_nice_json }}"
|
||||
dest: external_router_saved.json
|
||||
|
||||
- name: "Save External Router into a file in YAML 3/3"
|
||||
copy:
|
||||
content: "{{ external_router.value | to_nice_yaml }}"
|
||||
dest: external_router_saved.yaml
|
||||
|
||||
- name: "Load External Router from a JSON file"
|
||||
aos_external_router:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/external_router_saved.json') }}"
|
||||
state: present
|
||||
|
||||
- name: "Load External Router from a YAML file"
|
||||
aos_external_router:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/external_router_saved.yaml') }}"
|
||||
state: present
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the External Router
|
||||
returned: always
|
||||
type: str
|
||||
sample: Server-IpAddrs
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the External Router
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import json
|
||||
import time
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
def create_new_ext_router(module, my_ext_router, name, loopback, asn):
|
||||
|
||||
# Create value
|
||||
datum = dict(display_name=name, address=loopback, asn=asn)
|
||||
|
||||
my_ext_router.datum = datum
|
||||
|
||||
# Write to AOS
|
||||
return my_ext_router.write()
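# For example (illustrative only): create_new_ext_router(module, router,
# 'my-external-router', '10.0.0.1', 65000) builds the datum
# {'display_name': 'my-external-router', 'address': '10.0.0.1', 'asn': 65000}
# and writes it to the AOS server.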
|
||||
|
||||
#########################################################
|
||||
# State Processing
|
||||
#########################################################
|
||||
|
||||
|
||||
def ext_router_absent(module, aos, my_ext_router):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If the External Router does not exist, return immediately
|
||||
if my_ext_router.exists is False:
|
||||
module.exit_json(changed=False,
|
||||
name=margs['name'],
|
||||
id=margs['id'],
|
||||
value={})
|
||||
|
||||
# If not in check mode, delete External Router
|
||||
if not module.check_mode:
|
||||
try:
|
||||
# Add Sleep before delete to workaround a bug in AOS
|
||||
time.sleep(2)
|
||||
my_ext_router.delete()
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred, while trying to delete the External Router")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_ext_router.name,
|
||||
id=my_ext_router.id,
|
||||
value={})
|
||||
|
||||
|
||||
def ext_router_present(module, aos, my_ext_router):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# if content is defined, create object from Content
|
||||
if my_ext_router.exists is False and margs['content'] is not None:
|
||||
do_load_resource(module, aos.ExternalRouters, module.params['content']['display_name'])
|
||||
|
||||
# if my_ext_router doesn't exist already, create a new one
|
||||
if my_ext_router.exists is False and margs['name'] is None:
|
||||
module.fail_json(msg="Name is mandatory for module that don't exist currently")
|
||||
|
||||
elif my_ext_router.exists is False:
|
||||
|
||||
if not module.check_mode:
|
||||
try:
|
||||
my_new_ext_router = create_new_ext_router(module,
|
||||
my_ext_router,
|
||||
margs['name'],
|
||||
margs['loopback'],
|
||||
margs['asn'])
|
||||
my_ext_router = my_new_ext_router
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred while trying to create a new External Router")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_ext_router.name,
|
||||
id=my_ext_router.id,
|
||||
value=my_ext_router.value)
|
||||
|
||||
# if external Router already exist, check if loopback and ASN are the same
|
||||
# if same just return the object and report change false
|
||||
loopback = None
|
||||
asn = None
|
||||
|
||||
# Identify the Loopback, parameter 'loopback' has priority over 'content'
|
||||
if margs['loopback'] is not None:
|
||||
loopback = margs['loopback']
|
||||
elif margs['content'] is not None:
|
||||
if 'address' in margs['content'].keys():
|
||||
loopback = margs['content']['address']
|
||||
|
||||
# Identify the ASN, parameter 'asn' has priority over 'content'
|
||||
if margs['asn'] is not None:
|
||||
asn = margs['asn']
|
||||
elif margs['content'] is not None:
|
||||
if 'asn' in margs['content'].keys():
|
||||
asn = margs['content']['asn']
|
||||
|
||||
# Compare Loopback and ASN if defined
|
||||
if loopback is not None:
|
||||
if loopback != my_ext_router.value['address']:
|
||||
module.fail_json(msg="my_ext_router already exist but Loopback is different, currently not supported to update a module")
|
||||
|
||||
if asn is not None:
|
||||
if int(asn) != int(my_ext_router.value['asn']):
|
||||
module.fail_json(msg="my_ext_router already exist but ASN is different, currently not supported to update a module")
|
||||
|
||||
module.exit_json(changed=False,
|
||||
name=my_ext_router.name,
|
||||
id=my_ext_router.id,
|
||||
value=my_ext_router.value)
|
||||
|
||||
#########################################################
|
||||
# Main Function
|
||||
#########################################################
|
||||
|
||||
|
||||
def ext_router(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
try:
|
||||
my_ext_router = find_collection_item(aos.ExternalRouters,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the IP Pool based on name or ID, something went wrong")
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
ext_router_absent(module, aos, my_ext_router)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
ext_router_present(module, aos, my_ext_router)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(required=False,
|
||||
choices=['present', 'absent'],
|
||||
default="present"),
|
||||
loopback=dict(required=False),
|
||||
asn=dict(required=False)
|
||||
),
|
||||
mutually_exclusive=[('name', 'id', 'content')],
|
||||
required_one_of=[('name', 'id', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
ext_router(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
@@ -1,353 +1,15 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_ip_pool
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS IP Pool
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS IP Pool module lets you manage your IP Pools easily. You can
  create and delete IP Pools by Name, ID or by using a JSON File. This module
  is idempotent and supports the I(check) mode. It uses the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the IP Pool to manage.
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the IP Pool to manage (can't be used to create a new IP Pool),
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the IP Pool to manage. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned
|
||||
on success in I(value).
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the IP Pool (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
subnets:
|
||||
description:
|
||||
- List of subnet that needs to be part of the IP Pool.
|
||||
'''
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Create an IP Pool with one subnet"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-ip-pool"
|
||||
subnets: [ 172.10.0.0/16 ]
|
||||
state: present
|
||||
|
||||
- name: "Create an IP Pool with multiple subnets"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-other-ip-pool"
|
||||
subnets: [ 172.10.0.0/16, 192.168.0.0./24 ]
|
||||
state: present
|
||||
|
||||
- name: "Check if an IP Pool exist with same subnets by ID"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
subnets: [ 172.10.0.0/16, 192.168.0.0./24 ]
|
||||
state: present
|
||||
|
||||
- name: "Delete an IP Pool by name"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-ip-pool"
|
||||
state: absent
|
||||
|
||||
- name: "Delete an IP pool by id"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
id: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: absent
|
||||
|
||||
# Save an IP Pool to a file
|
||||
|
||||
- name: "Access IP Pool 1/3"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-ip-pool"
|
||||
subnets: [ 172.10.0.0/16, 172.12.0.0/16 ]
|
||||
state: present
|
||||
register: ip_pool
|
||||
|
||||
- name: "Save Ip Pool into a file in JSON 2/3"
|
||||
copy:
|
||||
content: "{{ ip_pool.value | to_nice_json }}"
|
||||
dest: ip_pool_saved.json
|
||||
|
||||
- name: "Save Ip Pool into a file in YAML 3/3"
|
||||
copy:
|
||||
content: "{{ ip_pool.value | to_nice_yaml }}"
|
||||
dest: ip_pool_saved.yaml
|
||||
|
||||
- name: "Load IP Pool from a JSON file"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/ip_pool_saved.json') }}"
|
||||
state: present
|
||||
|
||||
- name: "Load IP Pool from a YAML file"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/ip_pool_saved.yaml') }}"
|
||||
state: present
|
||||
|
||||
- name: "Load IP Pool from a Variable"
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
content:
|
||||
display_name: my-ip-pool
|
||||
id: 4276738d-6f86-4034-9656-4bff94a34ea7
|
||||
subnets:
|
||||
- network: 172.10.0.0/16
|
||||
- network: 172.12.0.0/16
|
||||
state: present
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the IP Pool
|
||||
returned: always
|
||||
type: str
|
||||
sample: Server-IpAddrs
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the IP Pool
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import json
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
|
||||
|
||||
def get_list_of_subnets(ip_pool):
|
||||
subnets = []
|
||||
|
||||
for subnet in ip_pool.value['subnets']:
|
||||
subnets.append(subnet['network'])
|
||||
|
||||
return subnets
|
||||
|
||||
|
||||
def create_new_ip_pool(ip_pool, name, subnets):
|
||||
|
||||
# Create value
|
||||
datum = dict(display_name=name, subnets=[])
|
||||
for subnet in subnets:
|
||||
datum['subnets'].append(dict(network=subnet))
|
||||
|
||||
ip_pool.datum = datum
|
||||
|
||||
# Write to AOS
|
||||
return ip_pool.write()
|
||||
|
||||
#########################################################
|
||||
# State Processing
|
||||
#########################################################
|
||||
|
||||
|
||||
def ip_pool_absent(module, aos, my_pool):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If the IP Pool does not exist, return directly
|
||||
if my_pool.exists is False:
|
||||
module.exit_json(changed=False, name=margs['name'], id='', value={})
|
||||
|
||||
# Check if object is currently in Use or Not
|
||||
# If in Use, return an error
|
||||
if my_pool.value:
|
||||
if my_pool.value['status'] != 'not_in_use':
|
||||
module.fail_json(msg="unable to delete this ip Pool, currently in use")
|
||||
else:
|
||||
module.fail_json(msg="Ip Pool object has an invalid format, value['status'] must be defined")
|
||||
|
||||
# If not in check mode, delete Ip Pool
|
||||
if not module.check_mode:
|
||||
try:
|
||||
my_pool.delete()
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred, while trying to delete the IP Pool")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_pool.name,
|
||||
id=my_pool.id,
|
||||
value={})
|
||||
|
||||
|
||||
def ip_pool_present(module, aos, my_pool):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# if content is defined, create object from Content
|
||||
try:
|
||||
if margs['content'] is not None:
|
||||
|
||||
if 'display_name' in module.params['content'].keys():
|
||||
do_load_resource(module, aos.IpPools, module.params['content']['display_name'])
|
||||
else:
|
||||
module.fail_json(msg="Unable to find display_name in 'content', Mandatory")
|
||||
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to load resource from content, something went wrong")
|
||||
|
||||
# if ip_pool doesn't exist already, create a new one
|
||||
|
||||
if my_pool.exists is False and 'name' not in margs.keys():
|
||||
module.fail_json(msg="Name is mandatory for module that don't exist currently")
|
||||
|
||||
elif my_pool.exists is False:
|
||||
|
||||
if not module.check_mode:
|
||||
try:
|
||||
my_new_pool = create_new_ip_pool(my_pool, margs['name'], margs['subnets'])
|
||||
my_pool = my_new_pool
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred while trying to create a new IP Pool ")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_pool.name,
|
||||
id=my_pool.id,
|
||||
value=my_pool.value)
|
||||
|
||||
# if pool already exist, check if list of network is the same
|
||||
# if same just return the object and report change false
|
||||
if set(get_list_of_subnets(my_pool)) == set(margs['subnets']):
|
||||
module.exit_json(changed=False,
|
||||
name=my_pool.name,
|
||||
id=my_pool.id,
|
||||
value=my_pool.value)
|
||||
else:
|
||||
module.fail_json(msg="ip_pool already exist but value is different, currently not supported to update a module")
|
||||
|
||||
#########################################################
|
||||
# Main Function
|
||||
#########################################################
|
||||
|
||||
|
||||
def ip_pool(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
try:
|
||||
my_pool = find_collection_item(aos.IpPools,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the IP Pool based on name or ID, something went wrong")
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
ip_pool_absent(module, aos, my_pool)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
ip_pool_present(module, aos, my_pool)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(required=False,
|
||||
choices=['present', 'absent'],
|
||||
default="present"),
|
||||
subnets=dict(required=False, type="list")
|
||||
),
|
||||
mutually_exclusive=[('name', 'id', 'content')],
|
||||
required_one_of=[('name', 'id', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
ip_pool(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
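All of the aos_* modules above share the same input handling: exactly one of name, id or content must be supplied, and the three are mutually exclusive. A minimal, self-contained sketch of that pattern (illustrative only; the argument names are taken from the modules in this diff, the module itself is not a real one):

# Minimal sketch of the shared aos_* argument handling: AnsibleModule itself
# rejects calls that supply none, or more than one, of name/id/content.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            session=dict(required=True, type='dict'),
            name=dict(required=False),
            id=dict(required=False),
            content=dict(required=False, type='json'),
            state=dict(required=False, choices=['present', 'absent'], default='present'),
        ),
        mutually_exclusive=[('name', 'id', 'content')],
        required_one_of=[('name', 'id', 'content')],
        supports_check_mode=True,
    )
    # Reaching this point means exactly one of name/id/content was provided.
    module.exit_json(changed=False, params=module.params)


if __name__ == '__main__':
    main()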
@@ -1,264 +1,15 @@
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_logical_device
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS Logical Device
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Logical Device module lets you manage your Logical Devices easily.
  You can create and delete Logical Devices by Name, ID or by using a JSON File.
  This module is idempotent and supports the I(check) mode.
  It uses the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the Logical Device to manage.
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the Logical Device to manage (can't be used to create a new Logical Device),
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the Logical Device to create. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned
|
||||
on success in I(value).
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the Logical Device (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Delete a Logical Device by name"
|
||||
aos_logical_device:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-logical-device"
|
||||
state: absent
|
||||
|
||||
- name: "Delete a Logical Device by id"
|
||||
aos_logical_device:
|
||||
session: "{{ aos_session }}"
|
||||
id: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: absent
|
||||
|
||||
# Save a Logical Device to a file
|
||||
|
||||
- name: "Access Logical Device 1/3"
|
||||
aos_logical_device:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-logical-device"
|
||||
state: present
|
||||
register: logical_device
|
||||
|
||||
- name: "Save Logical Device into a JSON file 2/3"
|
||||
copy:
|
||||
content: "{{ logical_device.value | to_nice_json }}"
|
||||
dest: logical_device_saved.json
|
||||
- name: "Save Logical Device into a YAML file 3/3"
|
||||
copy:
|
||||
content: "{{ logical_device.value | to_nice_yaml }}"
|
||||
dest: logical_device_saved.yaml
|
||||
|
||||
- name: "Load Logical Device from a JSON file"
|
||||
aos_logical_device:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/logical_device_saved.json') }}"
|
||||
state: present
|
||||
|
||||
- name: "Load Logical Device from a YAML file"
|
||||
aos_logical_device:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/logical_device_saved.yaml') }}"
|
||||
state: present
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the Logical Device
|
||||
returned: always
|
||||
type: str
|
||||
sample: AOS-1x25-1
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the Logical Device
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import json
|
||||
import time
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
|
||||
#########################################################
|
||||
# State Processing
|
||||
#########################################################
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
def logical_device_absent(module, aos, my_logical_dev):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If the Logical Device does not exist, return directly
|
||||
if my_logical_dev.exists is False:
|
||||
module.exit_json(changed=False,
|
||||
name=margs['name'],
|
||||
id=margs['id'],
|
||||
value={})
|
||||
|
||||
# If not in check mode, delete Logical Device
|
||||
if not module.check_mode:
|
||||
try:
|
||||
# Need to wait 1 sec before the delete to work around a current limitation in AOS
|
||||
time.sleep(1)
|
||||
my_logical_dev.delete()
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred, while trying to delete the Logical Device")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_logical_dev.name,
|
||||
id=my_logical_dev.id,
|
||||
value={})
|
||||
|
||||
|
||||
def logical_device_present(module, aos, my_logical_dev):
|
||||
|
||||
margs = module.params
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
if 'display_name' in module.params['content'].keys():
|
||||
do_load_resource(module, aos.LogicalDevices, module.params['content']['display_name'])
|
||||
else:
|
||||
module.fail_json(msg="Unable to find display_name in 'content', Mandatory")
|
||||
|
||||
# if logical_device doesn't exist already, create a new one
|
||||
if my_logical_dev.exists is False and 'content' not in margs.keys():
|
||||
module.fail_json(msg="'content' is mandatory for module that don't exist currently")
|
||||
|
||||
module.exit_json(changed=False,
|
||||
name=my_logical_dev.name,
|
||||
id=my_logical_dev.id,
|
||||
value=my_logical_dev.value)
|
||||
|
||||
#########################################################
|
||||
# Main Function
|
||||
#########################################################
|
||||
|
||||
|
||||
def logical_device(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
my_logical_dev = find_collection_item(aos.LogicalDevices,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
logical_device_absent(module, aos, my_logical_dev)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
logical_device_present(module, aos, my_logical_dev)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(required=False,
|
||||
choices=['present', 'absent'],
|
||||
default="present")
|
||||
),
|
||||
mutually_exclusive=[('name', 'id', 'content')],
|
||||
required_one_of=[('name', 'id', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
logical_device(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
@@ -1,286 +1,15 @@
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_logical_device_map
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS Logical Device Map
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Logical Device Map module lets you manage your Logical Device Maps easily. You can
  create and delete Logical Device Maps by Name, ID or by using a JSON File. This module
  is idempotent and supports the I(check) mode. It uses the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the Logical Device Map to manage.
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the Logical Device Map to manage (can't be used to create a new Logical Device Map),
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the Logical Device Map to manage. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned
|
||||
on success in I(value). Only one of I(name), I(id) or I(content) can be set.
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the Logical Device Map (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Create an Logical Device Map with one subnet"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-logical-device-map"
|
||||
state: present
|
||||
|
||||
- name: "Create an Logical Device Map with multiple subnets"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-other-logical-device-map"
|
||||
state: present
|
||||
|
||||
- name: "Check if an Logical Device Map exist with same subnets by ID"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
name: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: present
|
||||
|
||||
- name: "Delete an Logical Device Map by name"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-logical-device-map"
|
||||
state: absent
|
||||
|
||||
- name: "Delete an Logical Device Map by id"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
id: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: absent
|
||||
|
||||
# Save an Logical Device Map to a file
|
||||
|
||||
- name: "Access Logical Device Map 1/3"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-logical-device-map"
|
||||
state: present
|
||||
register: logical_device_map
|
||||
|
||||
- name: "Save Logical Device Map into a file in JSON 2/3"
|
||||
copy:
|
||||
content: "{{ logical_device_map.value | to_nice_json }}"
|
||||
dest: logical_device_map_saved.json
|
||||
|
||||
- name: "Save Logical Device Map into a file in YAML 3/3"
|
||||
copy:
|
||||
content: "{{ logical_device_map.value | to_nice_yaml }}"
|
||||
dest: logical_device_map_saved.yaml
|
||||
|
||||
- name: "Load Logical Device Map from a JSON file"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/logical_device_map_saved.json') }}"
|
||||
state: present
|
||||
|
||||
- name: "Load Logical Device Map from a YAML file"
|
||||
aos_logical_device_map:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/logical_device_map_saved.yaml') }}"
|
||||
state: present
|
||||
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the Logical Device Map
|
||||
returned: always
|
||||
type: str
|
||||
sample: Server-IpAddrs
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the Logical Device Map
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import json
|
||||
import time
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
|
||||
#########################################################
|
||||
# State Processing
|
||||
#########################################################
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
def logical_device_map_absent(module, aos, my_log_dev_map):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If the Logical Device Map does not exist, return directly
|
||||
if my_log_dev_map.exists is False:
|
||||
module.exit_json(changed=False, name=margs['name'], id='', value={})
|
||||
|
||||
# If not in check mode, delete Logical Device Map
|
||||
if not module.check_mode:
|
||||
try:
|
||||
# Need to wait for 1 sec before a delete to work around a current
|
||||
# limitation in AOS
|
||||
time.sleep(1)
|
||||
my_log_dev_map.delete()
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred, while trying to delete the Logical Device Map")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_log_dev_map.name,
|
||||
id=my_log_dev_map.id,
|
||||
value={})
|
||||
|
||||
|
||||
def logical_device_map_present(module, aos, my_log_dev_map):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# if content is defined, create object from Content
|
||||
if margs['content'] is not None:
|
||||
|
||||
if 'display_name' in module.params['content'].keys():
|
||||
do_load_resource(module, aos.LogicalDeviceMaps, module.params['content']['display_name'])
|
||||
else:
|
||||
module.fail_json(msg="Unable to find display_name in 'content', Mandatory")
|
||||
|
||||
# if my_log_dev_map doesn't exist already, create a new one
|
||||
|
||||
if my_log_dev_map.exists is False and 'content' not in margs.keys():
|
||||
module.fail_json(msg="'Content' is mandatory for module that don't exist currently")
|
||||
|
||||
module.exit_json(changed=False,
|
||||
name=my_log_dev_map.name,
|
||||
id=my_log_dev_map.id,
|
||||
value=my_log_dev_map.value)
|
||||
|
||||
#########################################################
|
||||
# Main Function
|
||||
#########################################################
|
||||
|
||||
|
||||
def logical_device_map(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
try:
|
||||
my_log_dev_map = find_collection_item(aos.LogicalDeviceMaps,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the Logical Device Map based on name or ID, something went wrong")
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
logical_device_map_absent(module, aos, my_log_dev_map)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
logical_device_map_present(module, aos, my_log_dev_map)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(required=False,
|
||||
choices=['present', 'absent'],
|
||||
default="present")
|
||||
),
|
||||
mutually_exclusive=[('name', 'id', 'content')],
|
||||
required_one_of=[('name', 'id', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
logical_device_map(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
@@ -1,139 +1,15 @@
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_login
|
||||
author: jeremy@apstra.com (@jeremyschulman)
|
||||
version_added: "2.3"
|
||||
short_description: Login to AOS server for session token
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Obtain the AOS server session token by providing the required
|
||||
username and password credentials. Upon successful authentication,
|
||||
this module will return the session-token that is required by all
|
||||
subsequent AOS module usage. On success the module will automatically populate
|
||||
ansible facts with the variable I(aos_session).
  This module is not idempotent and does not support check mode.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.1"
|
||||
options:
|
||||
server:
|
||||
description:
|
||||
- Address of the AOS Server on which you want to open a connection.
|
||||
required: true
|
||||
port:
|
||||
description:
|
||||
- Port number to use when connecting to the AOS server.
|
||||
default: 443
|
||||
user:
|
||||
description:
|
||||
- Login username to use when connecting to the AOS server.
|
||||
default: admin
|
||||
passwd:
|
||||
description:
|
||||
- Password to use when connecting to the AOS server.
|
||||
default: admin
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: Create a session with the AOS-server
|
||||
aos_login:
|
||||
server: "{{ inventory_hostname }}"
|
||||
user: admin
|
||||
passwd: admin
|
||||
|
||||
- name: Use the newly created session (register is not mandatory)
|
||||
aos_ip_pool:
|
||||
session: "{{ aos_session }}"
|
||||
name: my_ip_pool
|
||||
state: present
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
aos_session:
|
||||
description: Authenticated session information
|
||||
returned: always
|
||||
type: dict
|
||||
sample: { 'url': <str>, 'headers': {...} }
|
||||
'''
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import check_aos_version
|
||||
|
||||
try:
|
||||
from apstra.aosom.session import Session
|
||||
import apstra.aosom.exc as aosExc
|
||||
|
||||
HAS_AOS_PYEZ = True
|
||||
except ImportError:
|
||||
HAS_AOS_PYEZ = False
|
||||
|
||||
|
||||
def aos_login(module):
|
||||
|
||||
mod_args = module.params
|
||||
|
||||
aos = Session(server=mod_args['server'], port=mod_args['port'],
|
||||
user=mod_args['user'], passwd=mod_args['passwd'])
|
||||
|
||||
try:
|
||||
aos.login()
|
||||
except aosExc.LoginServerUnreachableError:
|
||||
module.fail_json(
|
||||
msg="AOS-server [%s] API not available/reachable, check server" % aos.server)
|
||||
|
||||
except aosExc.LoginAuthError:
|
||||
module.fail_json(msg="AOS-server login credentials failed")
|
||||
|
||||
module.exit_json(changed=False,
|
||||
ansible_facts=dict(aos_session=aos.session),
|
||||
aos_session=dict(aos_session=aos.session))
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
server=dict(required=True),
|
||||
port=dict(default='443', type="int"),
|
||||
user=dict(default='admin'),
|
||||
passwd=dict(default='admin', no_log=True)))
|
||||
|
||||
if not HAS_AOS_PYEZ:
|
||||
module.fail_json(msg='aos-pyez is not installed. Please see details '
|
||||
'here: https://github.com/Apstra/aos-pyez')
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.1')
|
||||
|
||||
aos_login(module)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
@@ -1,261 +1,15 @@
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_rack_type
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS Rack Type
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Rack Type module lets you manage your Rack Types easily.
  You can create and delete Rack Types by Name, ID or by using a JSON File.
  This module is idempotent and supports the I(check) mode.
  It uses the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the Rack Type to manage.
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the Rack Type to manage (can't be used to create a new Rack Type),
|
||||
Only one of I(name), I(id) or I(content) can be set.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the Rack Type to create. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned
|
||||
on success in I(value).
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the Rack Type (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Delete a Rack Type by name"
|
||||
aos_rack_type:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-rack-type"
|
||||
state: absent
|
||||
|
||||
- name: "Delete a Rack Type by id"
|
||||
aos_rack_type:
|
||||
session: "{{ aos_session }}"
|
||||
id: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: absent
|
||||
|
||||
# Save a Rack Type to a file
|
||||
|
||||
- name: "Access Rack Type 1/3"
|
||||
aos_rack_type:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-rack-type"
|
||||
state: present
|
||||
register: rack_type
|
||||
|
||||
- name: "Save Rack Type into a JSON file 2/3"
|
||||
copy:
|
||||
content: "{{ rack_type.value | to_nice_json }}"
|
||||
dest: rack_type_saved.json
|
||||
- name: "Save Rack Type into a YAML file 3/3"
|
||||
copy:
|
||||
content: "{{ rack_type.value | to_nice_yaml }}"
|
||||
dest: rack_type_saved.yaml
|
||||
|
||||
- name: "Load Rack Type from a JSON file"
|
||||
aos_rack_type:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/rack_type_saved.json') }}"
|
||||
state: present
|
||||
|
||||
- name: "Load Rack Type from a YAML file"
|
||||
aos_rack_type:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/rack_type_saved.yaml') }}"
|
||||
state: present
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the Rack Type
|
||||
returned: always
|
||||
type: str
|
||||
sample: AOS-1x25-1
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the Rack Type
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import json
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
|
||||
#########################################################
|
||||
# State Processing
|
||||
#########################################################
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
def rack_type_absent(module, aos, my_rack_type):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If the Rack Type does not exist, return directly
|
||||
if my_rack_type.exists is False:
|
||||
module.exit_json(changed=False,
|
||||
name=margs['name'],
|
||||
id=margs['id'],
|
||||
value={})
|
||||
|
||||
# If not in check mode, delete Rack Type
|
||||
if not module.check_mode:
|
||||
try:
|
||||
my_rack_type.delete()
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred, while trying to delete the Rack Type")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_rack_type.name,
|
||||
id=my_rack_type.id,
|
||||
value={})
|
||||
|
||||
|
||||
def rack_type_present(module, aos, my_rack_type):
|
||||
|
||||
margs = module.params
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
if 'display_name' in module.params['content'].keys():
|
||||
do_load_resource(module, aos.RackTypes, module.params['content']['display_name'])
|
||||
else:
|
||||
module.fail_json(msg="Unable to find display_name in 'content', Mandatory")
|
||||
|
||||
# if rack_type doesn't exist already, create a new one
|
||||
if my_rack_type.exists is False and 'content' not in margs.keys():
|
||||
module.fail_json(msg="'content' is mandatory for module that don't exist currently")
|
||||
|
||||
module.exit_json(changed=False,
|
||||
name=my_rack_type.name,
|
||||
id=my_rack_type.id,
|
||||
value=my_rack_type.value)
|
||||
|
||||
#########################################################
|
||||
# Main Function
|
||||
#########################################################
|
||||
|
||||
|
||||
def rack_type(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
my_rack_type = find_collection_item(aos.RackTypes,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
rack_type_absent(module, aos, my_rack_type)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
rack_type_present(module, aos, my_rack_type)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(required=False,
|
||||
choices=['present', 'absent'],
|
||||
default="present")
|
||||
),
|
||||
mutually_exclusive=[('name', 'id', 'content')],
|
||||
required_one_of=[('name', 'id', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
rack_type(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
@@ -1,278 +1,15 @@
#!/usr/bin/python
|
||||
#
|
||||
# (c) 2017 Apstra Inc, <community@apstra.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: aos_template
|
||||
author: Damien Garros (@dgarros)
|
||||
version_added: "2.3"
|
||||
short_description: Manage AOS Template
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: This module does not support AOS 2.1 or later
|
||||
alternative: See new modules at U(https://www.ansible.com/ansible-apstra).
|
||||
description:
|
||||
- Apstra AOS Template module lets you manage your Templates easily. You can
  create and delete Templates by Name, ID or by using a JSON File. This module
  is idempotent and supports the I(check) mode. It uses the AOS REST API.
|
||||
requirements:
|
||||
- "aos-pyez >= 0.6.0"
|
||||
options:
|
||||
session:
|
||||
description:
|
||||
- An existing AOS session as obtained by M(aos_login) module.
|
||||
required: true
|
||||
name:
|
||||
description:
|
||||
- Name of the Template to manage.
|
||||
Only one of I(name), I(id) or I(src) can be set.
|
||||
id:
|
||||
description:
|
||||
- AOS Id of the Template to manage (can't be used to create a new Template),
|
||||
Only one of I(name), I(id) or I(src) can be set.
|
||||
content:
|
||||
description:
|
||||
- Datastructure of the Template to create. The data can be in YAML / JSON or
|
||||
directly a variable. It's the same datastructure that is returned
|
||||
on success in I(value).
|
||||
state:
|
||||
description:
|
||||
- Indicate what is the expected state of the Template (present or not).
|
||||
default: present
|
||||
choices: ['present', 'absent']
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
|
||||
- name: "Check if an Template exist by name"
|
||||
aos_template:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-template"
|
||||
state: present
|
||||
|
||||
- name: "Check if an Template exist by ID"
|
||||
aos_template:
|
||||
session: "{{ aos_session }}"
|
||||
id: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: present
|
||||
|
||||
- name: "Delete an Template by name"
|
||||
aos_template:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-template"
|
||||
state: absent
|
||||
|
||||
- name: "Delete an Template by id"
|
||||
aos_template:
|
||||
session: "{{ aos_session }}"
|
||||
id: "45ab26fc-c2ed-4307-b330-0870488fa13e"
|
||||
state: absent
|
||||
|
||||
- name: "Access Template 1/3"
|
||||
aos_template:
|
||||
session: "{{ aos_session }}"
|
||||
name: "my-template"
|
||||
state: present
|
||||
register: template
|
||||
|
||||
- name: "Save Template into a JSON file 2/3"
|
||||
copy:
|
||||
content: "{{ template.value | to_nice_json }}"
|
||||
dest: template_saved.json
|
||||
- name: "Save Template into a YAML file 2/3"
|
||||
copy:
|
||||
content: "{{ template.value | to_nice_yaml }}"
|
||||
dest: template_saved.yaml
|
||||
|
||||
- name: "Load Template from File (Json)"
|
||||
aos_template:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/template_saved.json') }}"
|
||||
state: present
|
||||
|
||||
- name: "Load Template from File (yaml)"
|
||||
aos_template:
|
||||
session: "{{ aos_session }}"
|
||||
content: "{{ lookup('file', 'resources/template_saved.yaml') }}"
|
||||
state: present
|
||||
'''
|
||||
|
||||
RETURNS = '''
|
||||
name:
|
||||
description: Name of the Template
|
||||
returned: always
|
||||
type: str
|
||||
sample: My-Template
|
||||
|
||||
id:
|
||||
description: AOS unique ID assigned to the Template
|
||||
returned: always
|
||||
type: str
|
||||
sample: fcc4ac1c-e249-4fe7-b458-2138bfb44c06
|
||||
|
||||
value:
|
||||
description: Value of the object as returned by the AOS Server
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {'...'}
|
||||
'''
|
||||
|
||||
import time
|
||||
import json
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.aos.aos import get_aos_session, find_collection_item, do_load_resource, check_aos_version, content_to_dict
|
||||
|
||||
#########################################################
|
||||
# State Processing
|
||||
#########################################################
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
def template_absent(module, aos, my_template):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# If the Template does not exist, return directly
|
||||
if my_template.exists is False:
|
||||
module.exit_json(changed=False,
|
||||
name=margs['name'],
|
||||
id=margs['id'],
|
||||
value={})
|
||||
|
||||
# If not in check mode, delete Template
|
||||
if not module.check_mode:
|
||||
try:
|
||||
# Need to wait 1 sec before the delete to work around a current limitation in AOS
|
||||
time.sleep(1)
|
||||
my_template.delete()
|
||||
except Exception:
|
||||
module.fail_json(msg="An error occurred, while trying to delete the Template")
|
||||
|
||||
module.exit_json(changed=True,
|
||||
name=my_template.name,
|
||||
id=my_template.id,
|
||||
value={})
|
||||
|
||||
|
||||
def template_present(module, aos, my_template):
|
||||
|
||||
margs = module.params
|
||||
|
||||
# if content is defined, create object from Content
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
if 'display_name' in module.params['content'].keys():
|
||||
do_load_resource(module, aos.DesignTemplates, module.params['content']['display_name'])
|
||||
else:
|
||||
module.fail_json(msg="Unable to find display_name in 'content', Mandatory")
|
||||
|
||||
# if template doesn't exist already, create a new one
|
||||
if my_template.exists is False and 'content' not in margs.keys():
|
||||
module.fail_json(msg="'content' is mandatory for module that don't exist currently")
|
||||
|
||||
# if module already exist, just return it
|
||||
module.exit_json(changed=False,
|
||||
name=my_template.name,
|
||||
id=my_template.id,
|
||||
value=my_template.value)
|
||||
|
||||
|
||||
#########################################################
|
||||
# Main Function
|
||||
#########################################################
|
||||
def aos_template(module):
|
||||
|
||||
margs = module.params
|
||||
|
||||
try:
|
||||
aos = get_aos_session(module, margs['session'])
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to login to the AOS server")
|
||||
|
||||
item_name = False
|
||||
item_id = False
|
||||
|
||||
if margs['content'] is not None:
|
||||
|
||||
content = content_to_dict(module, margs['content'])
|
||||
|
||||
if 'display_name' in content.keys():
|
||||
item_name = content['display_name']
|
||||
else:
|
||||
module.fail_json(msg="Unable to extract 'display_name' from 'content'")
|
||||
|
||||
elif margs['name'] is not None:
|
||||
item_name = margs['name']
|
||||
|
||||
elif margs['id'] is not None:
|
||||
item_id = margs['id']
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Find Object if available based on ID or Name
|
||||
# ----------------------------------------------------
|
||||
try:
|
||||
my_template = find_collection_item(aos.DesignTemplates,
|
||||
item_name=item_name,
|
||||
item_id=item_id)
|
||||
except Exception:
|
||||
module.fail_json(msg="Unable to find the IP Pool based on name or ID, something went wrong")
|
||||
|
||||
# ----------------------------------------------------
|
||||
# Proceed based on State value
|
||||
# ----------------------------------------------------
|
||||
if margs['state'] == 'absent':
|
||||
|
||||
template_absent(module, aos, my_template)
|
||||
|
||||
elif margs['state'] == 'present':
|
||||
|
||||
template_present(module, aos, my_template)
|
||||
|
||||
|
||||
def main():
|
||||
module = AnsibleModule(
|
||||
argument_spec=dict(
|
||||
session=dict(required=True, type="dict"),
|
||||
name=dict(required=False),
|
||||
id=dict(required=False),
|
||||
content=dict(required=False, type="json"),
|
||||
state=dict(required=False,
|
||||
choices=['present', 'absent'],
|
||||
default="present")
|
||||
),
|
||||
mutually_exclusive=[('name', 'id', 'content')],
|
||||
required_one_of=[('name', 'id', 'content')],
|
||||
supports_check_mode=True
|
||||
)
|
||||
|
||||
# Check if aos-pyez is present and match the minimum version
|
||||
check_aos_version(module, '0.6.0')
|
||||
|
||||
aos_template(module)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
if __name__ == '__main__':
|
||||
removed_module(removed_in='2.9')
@@ -1,599 +1,14 @@
#!/usr/bin/python
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# Copyright: Ansible Project
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'network'}
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: nxos_ip_interface
|
||||
version_added: "2.1"
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: Replaced with common C(*_l3_interface) network modules.
|
||||
alternative: Use M(nxos_l3_interface) instead.
|
||||
short_description: Manages L3 attributes for IPv4 and IPv6 interfaces.
|
||||
description:
|
||||
- Manages Layer 3 attributes for IPv4 and IPv6 interfaces.
|
||||
extends_documentation_fragment: nxos
|
||||
author:
|
||||
- Jason Edelman (@jedelman8)
|
||||
- Gabriele Gerbino (@GGabriele)
|
||||
notes:
|
||||
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
|
||||
- Interface must already be a L3 port when using this module.
|
||||
- Logical interfaces (po, loop, svi) must be created first.
|
||||
- C(mask) must be inserted in decimal format (i.e. 24) for
|
||||
both IPv6 and IPv4.
|
||||
- A single interface can have multiple IPv6 addresses configured.
|
||||
- C(tag) is not idempotent for IPv6 addresses and I2 system image.
|
||||
options:
|
||||
interface:
|
||||
description:
|
||||
- Full name of interface, i.e. Ethernet1/1, vlan10.
|
||||
required: true
|
||||
addr:
|
||||
description:
|
||||
- IPv4 or IPv6 Address.
|
||||
version:
|
||||
description:
|
||||
- Version of IP address. If the IP address is IPV4 version should be v4.
|
||||
If the IP address is IPV6 version should be v6.
|
||||
default: v4
|
||||
choices: ['v4', 'v6']
|
||||
mask:
|
||||
description:
|
||||
- Subnet mask for IPv4 or IPv6 Address in decimal format.
|
||||
dot1q:
|
||||
description:
|
||||
- Configures IEEE 802.1Q VLAN encapsulation on the subinterface. The range is from 2 to 4093.
|
||||
version_added: "2.5"
|
||||
tag:
|
||||
description:
|
||||
- Route tag for IPv4 or IPv6 Address in integer format.
|
||||
default: 0
|
||||
version_added: "2.4"
|
||||
allow_secondary:
|
||||
description:
|
||||
- Allow to configure IPv4 secondary addresses on interface.
|
||||
type: bool
|
||||
default: 'no'
|
||||
version_added: "2.4"
|
||||
state:
|
||||
description:
|
||||
- Specify desired state of the resource.
|
||||
default: present
|
||||
choices: ['present','absent']
|
||||
requirements:
|
||||
- "ipaddress"
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
- name: Ensure ipv4 address is configured on Ethernet1/32
|
||||
nxos_ip_interface:
|
||||
interface: Ethernet1/32
|
||||
transport: nxapi
|
||||
version: v4
|
||||
state: present
|
||||
addr: 20.20.20.20
|
||||
mask: 24
|
||||
|
||||
- name: Ensure ipv6 address is configured on Ethernet1/31
|
||||
nxos_ip_interface:
|
||||
interface: Ethernet1/31
|
||||
transport: cli
|
||||
version: v6
|
||||
state: present
|
||||
addr: '2001::db8:800:200c:cccb'
|
||||
mask: 64
|
||||
|
||||
- name: Ensure ipv4 address is configured with tag
|
||||
nxos_ip_interface:
|
||||
interface: Ethernet1/32
|
||||
transport: nxapi
|
||||
version: v4
|
||||
state: present
|
||||
tag: 100
|
||||
addr: 20.20.20.20
|
||||
mask: 24
|
||||
|
||||
- name: Ensure ipv4 address is configured on sub-intf with dot1q encapsulation
|
||||
nxos_ip_interface:
|
||||
interface: Ethernet1/32.10
|
||||
transport: nxapi
|
||||
version: v4
|
||||
state: present
|
||||
dot1q: 10
|
||||
addr: 20.20.20.20
|
||||
mask: 24
|
||||
|
||||
- name: Configure ipv4 address as secondary if needed
|
||||
nxos_ip_interface:
|
||||
interface: Ethernet1/32
|
||||
transport: nxapi
|
||||
version: v4
|
||||
state: present
|
||||
allow_secondary: true
|
||||
addr: 21.21.21.21
|
||||
mask: 24
|
||||
'''
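# A rough equivalent of the first example above with the replacement
# nxos_l3_interface module (a sketch only; option names are assumed from its
# documented interface, so verify them against your Ansible version):
#
# - name: Ensure ipv4 address is configured on Ethernet1/32
#   nxos_l3_interface:
#     name: Ethernet1/32
#     ipv4: 20.20.20.20/24
#     state: present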
|
||||
|
||||
RETURN = '''
|
||||
proposed:
|
||||
description: k/v pairs of parameters passed into module
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {"addr": "20.20.20.20", "allow_secondary": true,
|
||||
"interface": "Ethernet1/32", "mask": "24", "tag": 100}
|
||||
existing:
|
||||
description: k/v pairs of existing IP attributes on the interface
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {"addresses": [{"addr": "11.11.11.11", "mask": 17, "tag": 101, "secondary": false}],
|
||||
"interface": "ethernet1/32", "prefixes": ["11.11.0.0/17"],
|
||||
"type": "ethernet", "vrf": "default"}
|
||||
end_state:
|
||||
description: k/v pairs of IP attributes after module execution
|
||||
returned: always
|
||||
type: dict
|
||||
sample: {"addresses": [{"addr": "11.11.11.11", "mask": 17, "tag": 101, "secondary": false},
|
||||
{"addr": "20.20.20.20", "mask": 24, "tag": 100, "secondary": true}],
|
||||
"interface": "ethernet1/32", "prefixes": ["11.11.0.0/17", "20.20.20.0/24"],
|
||||
"type": "ethernet", "vrf": "default"}
|
||||
commands:
|
||||
description: commands sent to the device
|
||||
returned: always
|
||||
type: list
|
||||
sample: ["interface ethernet1/32", "ip address 20.20.20.20/24 secondary tag 100"]
|
||||
changed:
|
||||
description: check to see if a change was made on the device
|
||||
returned: always
|
||||
type: bool
|
||||
sample: true
|
||||
'''
|
||||
|
||||
import re
|
||||
|
||||
try:
|
||||
import ipaddress
|
||||
|
||||
HAS_IPADDRESS = True
|
||||
except ImportError:
|
||||
HAS_IPADDRESS = False
|
||||
|
||||
from ansible.module_utils.network.nxos.nxos import load_config, run_commands
|
||||
from ansible.module_utils.network.nxos.nxos import get_capabilities, nxos_argument_spec
|
||||
from ansible.module_utils.network.nxos.nxos import get_interface_type
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
|
||||
|
||||
def find_same_addr(existing, addr, mask, full=False, **kwargs):
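# Return the matching address dict when (addr, mask) is already configured on
# the interface; with full=True the route tag must also match (the device does
# not report tags for IPv6 addresses, so only tag=0 is treated as a match there).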
|
||||
for address in existing['addresses']:
|
||||
if address['addr'] == addr and address['mask'] == mask:
|
||||
if full:
|
||||
if kwargs['version'] == 'v4' and int(address['tag']) == kwargs['tag']:
|
||||
return address
|
||||
elif kwargs['version'] == 'v6' and kwargs['tag'] == 0:
|
||||
# Currently we don't get info about IPv6 address tag
|
||||
# But let's not break idempotence for the default case
|
||||
return address
|
||||
else:
|
||||
return address
|
||||
return False
|
||||
|
||||
|
||||
def execute_show_command(command, module):
|
||||
cmd = {}
|
||||
cmd['answer'] = None
|
||||
cmd['command'] = command
|
||||
cmd['output'] = 'text'
|
||||
cmd['prompt'] = None
|
||||
|
||||
body = run_commands(module, [cmd])
|
||||
|
||||
return body
|
||||
|
||||
|
||||
def is_default(interface, module):
|
||||
command = 'show run interface {0}'.format(interface)
|
||||
|
||||
try:
|
||||
body = execute_show_command(command, module)[0]
|
||||
if 'invalid' in body.lower():
|
||||
return 'DNE'
|
||||
else:
|
||||
raw_list = body.split('\n')
|
||||
if raw_list[-1].startswith('interface'):
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
except KeyError:
|
||||
return 'DNE'
|
||||
|
||||
|
||||
def get_interface_mode(interface, intf_type, module):
|
||||
command = 'show interface {0} switchport'.format(interface)
|
||||
mode = 'unknown'
|
||||
|
||||
if intf_type in ['ethernet', 'portchannel']:
|
||||
body = execute_show_command(command, module)[0]
|
||||
if len(body) > 0:
|
||||
if 'Switchport: Disabled' in body:
|
||||
mode = 'layer3'
|
||||
elif 'Switchport: Enabled' in body:
|
||||
mode = "layer2"
|
||||
elif intf_type == 'svi':
|
||||
mode = 'layer3'
|
||||
return mode
|
||||
|
||||
|
||||
def send_show_command(interface_name, version, module):
|
||||
if version == 'v4':
|
||||
command = 'show ip interface {0}'.format(interface_name)
|
||||
elif version == 'v6':
|
||||
command = 'show ipv6 interface {0}'.format(interface_name)
|
||||
body = execute_show_command(command, module)
|
||||
return body
|
||||
|
||||
|
||||
def parse_unstructured_data(body, interface_name, version, module):
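# Parse the raw text output of 'show ip interface' / 'show ipv6 interface' into
# a dict with 'addresses' (addr/mask/tag/secondary), 'prefixes', the VRF and the
# interface type; the command is run with text output, so the fields are
# recovered with regular expressions.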
|
||||
interface = {}
|
||||
interface['addresses'] = []
|
||||
interface['prefixes'] = []
|
||||
vrf = None
|
||||
|
||||
body = body[0]
|
||||
splitted_body = body.split('\n')
|
||||
|
||||
if version == "v6":
|
||||
if "ipv6 is disabled" not in body.lower():
|
||||
address_list = []
|
||||
# We can have multiple IPv6 on the same interface.
|
||||
# We need to parse them manually from raw output.
|
||||
for index in range(0, len(splitted_body) - 1):
|
||||
if "IPv6 address:" in splitted_body[index]:
|
||||
first_reference_point = index + 1
|
||||
elif "IPv6 subnet:" in splitted_body[index]:
|
||||
last_reference_point = index
|
||||
break
|
||||
|
||||
interface_list_table = splitted_body[first_reference_point:last_reference_point]
|
||||
|
||||
for each_line in interface_list_table:
|
||||
address = each_line.strip().split(' ')[0]
|
||||
if address not in address_list:
|
||||
address_list.append(address)
|
||||
interface['prefixes'].append(str(ipaddress.ip_interface(u"%s" % address).network))
|
||||
|
||||
if address_list:
|
||||
for ipv6 in address_list:
|
||||
address = {}
|
||||
splitted_address = ipv6.split('/')
|
||||
address['addr'] = splitted_address[0]
|
||||
address['mask'] = splitted_address[1]
|
||||
interface['addresses'].append(address)
|
||||
|
||||
else:
|
||||
for index in range(0, len(splitted_body) - 1):
|
||||
if "IP address" in splitted_body[index]:
|
||||
regex = r'.*IP\saddress:\s(?P<addr>\d{1,3}(?:\.\d{1,3}){3}),\sIP\ssubnet:' + \
|
||||
r'\s\d{1,3}(?:\.\d{1,3}){3}\/(?P<mask>\d+)(?:\s(?P<secondary>secondary)\s)?' + \
|
||||
r'(.+?tag:\s(?P<tag>\d+).*)?'
|
||||
match = re.match(regex, splitted_body[index])
|
||||
if match:
|
||||
match_dict = match.groupdict()
|
||||
if match_dict['secondary'] is None:
|
||||
match_dict['secondary'] = False
|
||||
else:
|
||||
match_dict['secondary'] = True
|
||||
if match_dict['tag'] is None:
|
||||
match_dict['tag'] = 0
|
||||
else:
|
||||
match_dict['tag'] = int(match_dict['tag'])
|
||||
interface['addresses'].append(match_dict)
|
||||
prefix = str(ipaddress.ip_interface(u"%(addr)s/%(mask)s" % match_dict).network)
|
||||
interface['prefixes'].append(prefix)
|
||||
|
||||
try:
|
||||
vrf_regex = r'.+?VRF\s+(?P<vrf>\S+?)\s'
|
||||
match_vrf = re.match(vrf_regex, body, re.DOTALL)
|
||||
vrf = match_vrf.groupdict()['vrf']
|
||||
except AttributeError:
|
||||
vrf = None
|
||||
|
||||
interface['interface'] = interface_name
|
||||
interface['type'] = get_interface_type(interface_name)
|
||||
interface['vrf'] = vrf
|
||||
|
||||
return interface
|
||||
|
||||
|
||||
def parse_interface_data(body):
|
||||
body = body[0]
|
||||
splitted_body = body.split('\n')
|
||||
|
||||
for index in range(0, len(splitted_body) - 1):
|
||||
if "Encapsulation 802.1Q" in splitted_body[index]:
|
||||
regex = r'(.+?ID\s(?P<dot1q>\d+).*)?'
|
||||
match = re.match(regex, splitted_body[index])
|
||||
if match:
|
||||
match_dict = match.groupdict()
|
||||
if match_dict['dot1q'] is not None:
|
||||
return int(match_dict['dot1q'])
|
||||
return 0
|
||||
|
||||
|
||||
def get_dot1q_id(interface_name, module):
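# Only subinterfaces (names containing a '.') can carry dot1q encapsulation;
# for any other interface this simply returns 0.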
|
||||
|
||||
if "." not in interface_name:
|
||||
return 0
|
||||
|
||||
command = 'show interface {0}'.format(interface_name)
|
||||
try:
|
||||
body = execute_show_command(command, module)
|
||||
dot1q = parse_interface_data(body)
|
||||
return dot1q
|
||||
except KeyError:
|
||||
return 0
|
||||
|
||||
|
||||
def get_ip_interface(interface_name, version, module):
|
||||
body = send_show_command(interface_name, version, module)
|
||||
interface = parse_unstructured_data(body, interface_name, version, module)
|
||||
return interface
|
||||
|
||||
|
||||
def get_remove_ip_config_commands(interface, addr, mask, existing, version):
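# Build the 'no ip address ...' commands. An IPv4 primary address cannot be
# removed while secondaries exist, so in that case every secondary is removed
# and the first remaining address is re-added as the new primary.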
|
||||
commands = []
|
||||
if version == 'v4':
|
||||
# We can't just remove primary address if secondary address exists
|
||||
for address in existing['addresses']:
|
||||
if address['addr'] == addr:
|
||||
if address['secondary']:
|
||||
commands.append('no ip address {0}/{1} secondary'.format(addr, mask))
|
||||
elif len(existing['addresses']) > 1:
|
||||
new_primary = False
|
||||
for address in existing['addresses']:
|
||||
if address['addr'] != addr:
|
||||
commands.append('no ip address {0}/{1} secondary'.format(address['addr'], address['mask']))
|
||||
|
||||
if not new_primary:
|
||||
command = 'ip address {0}/{1}'.format(address['addr'], address['mask'])
|
||||
new_primary = True
|
||||
else:
|
||||
command = 'ip address {0}/{1} secondary'.format(address['addr'], address['mask'])
|
||||
|
||||
if 'tag' in address and address['tag'] != 0:
|
||||
command += " tag " + str(address['tag'])
|
||||
commands.append(command)
|
||||
else:
|
||||
commands.append('no ip address {0}/{1}'.format(addr, mask))
|
||||
break
|
||||
else:
|
||||
for address in existing['addresses']:
|
||||
if address['addr'] == addr:
|
||||
commands.append('no ipv6 address {0}/{1}'.format(addr, mask))
|
||||
|
||||
return commands
|
||||
|
||||
|
||||
def get_config_ip_commands(delta, interface, existing, version):
|
||||
commands = []
|
||||
delta = dict(delta)
|
||||
|
||||
if version == 'v4':
|
||||
command = 'ip address {addr}/{mask}'.format(**delta)
|
||||
if len(existing['addresses']) > 0:
|
||||
if delta['allow_secondary']:
|
||||
for address in existing['addresses']:
|
||||
if delta['addr'] == address['addr'] and address['secondary'] is False and delta['tag'] != 0:
|
||||
break
|
||||
else:
|
||||
command += ' secondary'
|
||||
else:
|
||||
# Remove all existing addresses if 'allow_secondary' isn't specified
|
||||
for address in existing['addresses']:
|
||||
if address['secondary']:
|
||||
commands.insert(0, 'no ip address {addr}/{mask} secondary'.format(**address))
|
||||
else:
|
||||
commands.append('no ip address {addr}/{mask}'.format(**address))
|
||||
else:
|
||||
if not delta['allow_secondary']:
|
||||
# Remove all existing addresses if 'allow_secondary' isn't specified
|
||||
for address in existing['addresses']:
|
||||
commands.insert(0, 'no ipv6 address {addr}/{mask}'.format(**address))
|
||||
|
||||
command = 'ipv6 address {addr}/{mask}'.format(**delta)
|
||||
|
||||
if int(delta['tag']) > 0:
|
||||
command += ' tag {tag}'.format(**delta)
|
||||
elif int(delta['tag']) == 0:
|
||||
# Case when we need to remove tag from an address. Just enter command like
|
||||
# 'ip address ...' (without 'tag') not enough
|
||||
commands += get_remove_ip_config_commands(interface, delta['addr'], delta['mask'], existing, version)
|
||||
|
||||
commands.append(command)
|
||||
return commands
|
||||
|
||||
|
||||
def flatten_list(command_lists):
|
||||
flat_command_list = []
|
||||
for command in command_lists:
|
||||
if isinstance(command, list):
|
||||
flat_command_list.extend(command)
|
||||
else:
|
||||
flat_command_list.append(command)
|
||||
return flat_command_list
|
||||
|
||||
|
||||
def validate_params(addr, interface, mask, dot1q, tag, allow_secondary, version, state, intf_type, module):
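# Sanity-check the user input before touching the device: addr and mask are
# both required for state=present, the mask must be 1-32 (v4) or 1-128 (v6),
# the address must parse with ipaddress, and dot1q/tag must fall inside their
# valid ranges.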
|
||||
device_info = get_capabilities(module)
|
||||
network_api = device_info.get('network_api', 'nxapi')
|
||||
|
||||
if state == "present":
|
||||
if addr is None or mask is None:
|
||||
module.fail_json(msg="An IP address AND a mask must be provided "
|
||||
"when state=present.")
|
||||
elif state == "absent" and version == "v6":
|
||||
if addr is None or mask is None:
|
||||
module.fail_json(msg="IPv6 address and mask must be provided when "
|
||||
"state=absent.")
|
||||
|
||||
if intf_type != "ethernet" and network_api == 'cliconf':
|
||||
if is_default(interface, module) == "DNE":
|
||||
module.fail_json(msg="That interface does not exist yet. Create "
|
||||
"it first.", interface=interface)
|
||||
if mask is not None:
|
||||
try:
|
||||
if (int(mask) < 1 or int(mask) > 32) and version == "v4":
|
||||
raise ValueError
|
||||
elif int(mask) < 1 or int(mask) > 128:
|
||||
raise ValueError
|
||||
except ValueError:
|
||||
module.fail_json(msg="Warning! 'mask' must be an integer between"
|
||||
" 1 and 32 when version v4 and up to 128 "
|
||||
"when version v6.", version=version,
|
||||
mask=mask)
|
||||
if addr is not None and mask is not None:
|
||||
try:
|
||||
ipaddress.ip_interface(u'%s/%s' % (addr, mask))
|
||||
except ValueError:
|
||||
module.fail_json(msg="Warning! Invalid ip address or mask set.", addr=addr, mask=mask)
|
||||
|
||||
if dot1q is not None:
|
||||
try:
|
||||
if dot1q < 2 or dot1q > 4093:
|
||||
raise ValueError
|
||||
except ValueError:
|
||||
module.fail_json(msg="Warning! 'dot1q' must be an integer between"
|
||||
" 2 and 4093", dot1q=dot1q)
|
||||
if tag is not None:
|
||||
try:
|
||||
if tag < 0 or tag > 4294967295:
|
||||
raise ValueError
|
||||
except ValueError:
|
||||
module.fail_json(msg="Warning! 'tag' must be an integer between"
|
||||
" 0 (default) and 4294967295."
|
||||
"To use tag you must set 'addr' and 'mask' params.", tag=tag)
|
||||
if allow_secondary is not None:
|
||||
try:
|
||||
if addr is None or mask is None:
|
||||
raise ValueError
|
||||
except ValueError:
|
||||
module.fail_json(msg="Warning! 'secondary' can be used only when 'addr' and 'mask' set.",
|
||||
allow_secondary=allow_secondary)
|
||||
|
||||
|
||||
def main():
|
||||
argument_spec = dict(
|
||||
interface=dict(required=True),
|
||||
addr=dict(required=False),
|
||||
version=dict(required=False, choices=['v4', 'v6'],
|
||||
default='v4'),
|
||||
mask=dict(type='str', required=False),
|
||||
dot1q=dict(required=False, default=0, type='int'),
|
||||
tag=dict(required=False, default=0, type='int'),
|
||||
state=dict(required=False, default='present',
|
||||
choices=['present', 'absent']),
|
||||
allow_secondary=dict(required=False, default=False,
|
||||
type='bool')
|
||||
)
|
||||
|
||||
argument_spec.update(nxos_argument_spec)
|
||||
|
||||
module = AnsibleModule(argument_spec=argument_spec,
|
||||
supports_check_mode=True)
|
||||
|
||||
if not HAS_IPADDRESS:
|
||||
module.fail_json(msg="ipaddress is required for this module. Run 'pip install ipaddress' for install.")
|
||||
|
||||
warnings = list()
|
||||
|
||||
addr = module.params['addr']
|
||||
version = module.params['version']
|
||||
mask = module.params['mask']
|
||||
dot1q = module.params['dot1q']
|
||||
tag = module.params['tag']
|
||||
allow_secondary = module.params['allow_secondary']
|
||||
interface = module.params['interface'].lower()
|
||||
state = module.params['state']
|
||||
|
||||
intf_type = get_interface_type(interface)
|
||||
validate_params(addr, interface, mask, dot1q, tag, allow_secondary, version, state, intf_type, module)
|
||||
|
||||
mode = get_interface_mode(interface, intf_type, module)
|
||||
if mode == 'layer2':
|
||||
module.fail_json(msg='That interface is a layer2 port.\nMake it '
|
||||
'a layer 3 port first.', interface=interface)
|
||||
|
||||
existing = get_ip_interface(interface, version, module)
|
||||
|
||||
dot1q_tag = get_dot1q_id(interface, module)
|
||||
if dot1q_tag > 1:
|
||||
existing['dot1q'] = dot1q_tag
|
||||
|
||||
args = dict(addr=addr, mask=mask, dot1q=dot1q, tag=tag, interface=interface, allow_secondary=allow_secondary)
|
||||
proposed = dict((k, v) for k, v in args.items() if v is not None)
|
||||
commands = []
|
||||
changed = False
|
||||
end_state = existing
|
||||
|
||||
commands = ['interface {0}'.format(interface)]
|
||||
if state == 'absent':
|
||||
if existing['addresses']:
|
||||
if find_same_addr(existing, addr, mask):
|
||||
command = get_remove_ip_config_commands(interface, addr,
|
||||
mask, existing, version)
|
||||
commands.append(command)
|
||||
if 'dot1q' in existing and existing['dot1q'] > 1:
|
||||
command = 'no encapsulation dot1Q {0}'.format(existing['dot1q'])
|
||||
commands.append(command)
|
||||
elif state == 'present':
|
||||
if not find_same_addr(existing, addr, mask, full=True, tag=tag, version=version):
|
||||
command = get_config_ip_commands(proposed, interface, existing, version)
|
||||
commands.append(command)
|
||||
if 'dot1q' not in existing and (intf_type in ['ethernet', 'portchannel'] and "." in interface):
|
||||
command = 'encapsulation dot1Q {0}'.format(proposed['dot1q'])
|
||||
commands.append(command)
|
||||
if len(commands) < 2:
|
||||
del commands[0]
|
||||
cmds = flatten_list(commands)
|
||||
if cmds:
|
||||
if module.check_mode:
|
||||
module.exit_json(changed=True, commands=cmds)
|
||||
else:
|
||||
load_config(module, cmds)
|
||||
changed = True
|
||||
end_state = get_ip_interface(interface, version, module)
|
||||
if 'configure' in cmds:
|
||||
cmds.pop(0)
|
||||
|
||||
results = {}
|
||||
results['proposed'] = proposed
|
||||
results['existing'] = existing
|
||||
results['end_state'] = end_state
|
||||
results['commands'] = cmds
|
||||
results['changed'] = changed
|
||||
results['warnings'] = warnings
|
||||
|
||||
module.exit_json(**results)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
@@ -1,480 +1,14 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# Copyright: Ansible Project
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'network'}
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: nxos_portchannel
|
||||
extends_documentation_fragment: nxos
|
||||
version_added: "2.2"
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: Replaced with common C(*_linkagg) network modules.
|
||||
alternative: Use M(nxos_linkagg) instead.
|
||||
short_description: Manages port-channel interfaces.
|
||||
description:
|
||||
- Manages port-channel specific configuration parameters.
|
||||
author:
|
||||
- Jason Edelman (@jedelman8)
|
||||
- Gabriele Gerbino (@GGabriele)
|
||||
notes:
|
||||
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
|
||||
- C(state=absent) removes the portchannel config and interface if it
|
||||
already exists. If members to be removed are not explicitly
|
||||
passed, all existing members (if any) are removed.
|
||||
- Members must be a list.
|
||||
- LACP needs to be enabled first if active/passive modes are used.
|
||||
options:
|
||||
group:
|
||||
description:
|
||||
- Channel-group number for the port-channel.
|
||||
required: true
|
||||
mode:
|
||||
description:
|
||||
- Mode for the port-channel, i.e. on, active, passive.
|
||||
default: on
|
||||
choices: ['active','passive','on']
|
||||
min_links:
|
||||
description:
|
||||
- Min links required to keep portchannel up.
|
||||
members:
|
||||
description:
|
||||
- List of interfaces that will be managed in a given portchannel.
|
||||
force:
|
||||
description:
|
||||
- When true it forces port-channel members to match what is
|
||||
declared in the members param. This can be used to remove
|
||||
members.
|
||||
choices: [ 'false', 'true' ]
|
||||
default: 'false'
|
||||
state:
|
||||
description:
|
||||
- Manage the state of the resource.
|
||||
default: present
|
||||
choices: ['present','absent']
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
# Ensure port-channel99 is created, add two members, and set to mode on
|
||||
- nxos_portchannel:
|
||||
group: 99
|
||||
members: ['Ethernet1/1','Ethernet1/2']
|
||||
mode: 'active'
|
||||
state: present
|
||||
'''
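# A rough equivalent of the example above with the replacement nxos_linkagg
# module (a sketch only; option names are assumed from its documented interface):
#
# - nxos_linkagg:
#     group: 99
#     mode: active
#     members:
#       - Ethernet1/1
#       - Ethernet1/2
#     state: present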
|
||||
|
||||
RETURN = '''
|
||||
commands:
|
||||
description: command sent to the device
|
||||
returned: always
|
||||
type: list
|
||||
sample: ["interface Ethernet2/6", "no channel-group 12",
|
||||
"interface Ethernet2/5", "no channel-group 12",
|
||||
"interface Ethernet2/6", "channel-group 12 mode on",
|
||||
"interface Ethernet2/5", "channel-group 12 mode on"]
|
||||
'''
|
||||
|
||||
import collections
|
||||
import re
|
||||
|
||||
from ansible.module_utils.network.nxos.nxos import get_config, load_config, run_commands
|
||||
from ansible.module_utils.network.nxos.nxos import get_capabilities, nxos_argument_spec
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils.network.common.config import CustomNetworkConfig
|
||||
|
||||
|
||||
def get_value(arg, config, module):
|
||||
param_to_command_keymap = {
|
||||
'min_links': 'lacp min-links'
|
||||
}
|
||||
|
||||
REGEX = re.compile(r'(?:{0}\s)(?P<value>.*)$'.format(param_to_command_keymap[arg]), re.M)
|
||||
value = ''
|
||||
if param_to_command_keymap[arg] in config:
|
||||
value = REGEX.search(config).group('value')
|
||||
return value
|
||||
|
||||
|
||||
def check_interface(module, netcfg):
|
||||
config = str(netcfg)
|
||||
REGEX = re.compile(r'\s+interface port-channel{0}$'.format(module.params['group']), re.M)
|
||||
value = False
|
||||
try:
|
||||
if REGEX.search(config):
|
||||
value = True
|
||||
except TypeError:
|
||||
value = False
|
||||
|
||||
return value
|
||||
|
||||
|
||||
def get_custom_value(arg, config, module):
|
||||
REGEX = re.compile(r'\s+member vni {0} associate-vrf\s*$'.format(
|
||||
module.params['vni']), re.M)
|
||||
value = False
|
||||
try:
|
||||
if REGEX.search(config):
|
||||
value = True
|
||||
except TypeError:
|
||||
value = False
|
||||
return value
|
||||
|
||||
|
||||
def get_portchannel_members(pchannel):
|
||||
try:
|
||||
members = pchannel['TABLE_member']['ROW_member']
|
||||
except KeyError:
|
||||
members = []
|
||||
|
||||
return members
|
||||
|
||||
|
||||
def get_portchannel_mode(interface, protocol, module, netcfg):
|
||||
if protocol != 'LACP':
|
||||
mode = 'on'
|
||||
else:
|
||||
netcfg = CustomNetworkConfig(indent=2, contents=get_config(module))
|
||||
parents = ['interface {0}'.format(interface.capitalize())]
|
||||
body = netcfg.get_section(parents)
|
||||
|
||||
mode_list = body.split('\n')
|
||||
|
||||
for line in mode_list:
|
||||
this_line = line.strip()
|
||||
if this_line.startswith('channel-group'):
|
||||
find = this_line
|
||||
if 'mode' in find:
|
||||
if 'passive' in find:
|
||||
mode = 'passive'
|
||||
elif 'active' in find:
|
||||
mode = 'active'
|
||||
|
||||
return mode
|
||||
|
||||
|
||||
def get_portchannel(module, netcfg=None):
|
||||
command = 'show port-channel summary | json'
|
||||
portchannel = {}
|
||||
portchannel_table = {}
|
||||
members = []
|
||||
|
||||
try:
|
||||
body = run_commands(module, [command])[0]
|
||||
pc_table = body['TABLE_channel']['ROW_channel']
|
||||
|
||||
if isinstance(pc_table, dict):
|
||||
pc_table = [pc_table]
|
||||
|
||||
for pc in pc_table:
|
||||
if pc['group'] == module.params['group']:
|
||||
portchannel_table = pc
|
||||
elif module.params['group'].isdigit() and pc['group'] == int(module.params['group']):
|
||||
portchannel_table = pc
|
||||
except (KeyError, AttributeError, TypeError, IndexError):
|
||||
return {}
|
||||
|
||||
if portchannel_table:
|
||||
portchannel['group'] = portchannel_table['group']
|
||||
protocol = portchannel_table['prtcl']
|
||||
members_list = get_portchannel_members(portchannel_table)
|
||||
|
||||
if isinstance(members_list, dict):
|
||||
members_list = [members_list]
|
||||
|
||||
member_dictionary = {}
|
||||
for each_member in members_list:
|
||||
interface = each_member['port']
|
||||
members.append(interface)
|
||||
|
||||
pc_member = {}
|
||||
pc_member['status'] = str(each_member['port-status'])
|
||||
pc_member['mode'] = get_portchannel_mode(interface,
|
||||
protocol, module, netcfg)
|
||||
|
||||
member_dictionary[interface] = pc_member
|
||||
portchannel['members'] = members
|
||||
portchannel['members_detail'] = member_dictionary
|
||||
|
||||
# Ensure each member has the same mode.
|
||||
modes = set()
|
||||
for each, value in member_dictionary.items():
|
||||
modes.update([value['mode']])
|
||||
if len(modes) == 1:
|
||||
portchannel['mode'] = value['mode']
|
||||
else:
|
||||
portchannel['mode'] = 'unknown'
|
||||
return portchannel
|
||||
|
||||
|
||||
def get_existing(module, args):
|
||||
existing = {}
|
||||
netcfg = CustomNetworkConfig(indent=2, contents=get_config(module))
|
||||
|
||||
interface_exist = check_interface(module, netcfg)
|
||||
if interface_exist:
|
||||
parents = ['interface port-channel{0}'.format(module.params['group'])]
|
||||
config = netcfg.get_section(parents)
|
||||
|
||||
if config:
|
||||
existing['min_links'] = get_value('min_links', config, module)
|
||||
existing.update(get_portchannel(module, netcfg=netcfg))
|
||||
|
||||
return existing, interface_exist
|
||||
|
||||
|
||||
def config_portchannel(proposed, mode, group, force):
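# Generate the initial configuration for a brand-new port-channel: one
# 'channel-group <group> [force] mode <mode>' per proposed member, plus an
# optional 'lacp min-links' line on the port-channel interface itself.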
|
||||
commands = []
|
||||
# NOTE: Leading whitespace for force option is important
|
||||
force = ' force' if force else ''
|
||||
config_args = {
|
||||
'mode': 'channel-group {group}{force} mode {mode}',
|
||||
'min_links': 'lacp min-links {min_links}',
|
||||
}
|
||||
|
||||
for member in proposed.get('members', []):
|
||||
commands.append('interface {0}'.format(member))
|
||||
commands.append(config_args.get('mode').format(group=group, force=force, mode=mode))
|
||||
|
||||
min_links = proposed.get('min_links', None)
|
||||
if min_links:
|
||||
command = 'interface port-channel {0}'.format(group)
|
||||
commands.append(command)
|
||||
commands.append(config_args.get('min_links').format(
|
||||
min_links=min_links))
|
||||
|
||||
return commands
|
||||
|
||||
|
||||
def get_commands_to_add_members(proposed, existing, force, module):
|
||||
try:
|
||||
proposed_members = proposed['members']
|
||||
except KeyError:
|
||||
proposed_members = []
|
||||
|
||||
try:
|
||||
existing_members = existing['members']
|
||||
except KeyError:
|
||||
existing_members = []
|
||||
|
||||
members_to_add = list(set(proposed_members).difference(existing_members))
|
||||
|
||||
commands = []
|
||||
# NOTE: Leading whitespace for force option is important
|
||||
force = ' force' if force else ''
|
||||
if members_to_add:
|
||||
for member in members_to_add:
|
||||
commands.append('interface {0}'.format(member))
|
||||
commands.append('channel-group {0}{1} mode {2}'.format(
|
||||
existing['group'], force, proposed['mode']))
|
||||
|
||||
return commands
|
||||
|
||||
|
||||
def get_commands_to_remove_members(proposed, existing, module):
|
||||
try:
|
||||
proposed_members = proposed['members']
|
||||
except KeyError:
|
||||
proposed_members = []
|
||||
|
||||
try:
|
||||
existing_members = existing['members']
|
||||
except KeyError:
|
||||
existing_members = []
|
||||
|
||||
members_to_remove = list(set(existing_members).difference(proposed_members))
|
||||
commands = []
|
||||
if members_to_remove:
|
||||
for member in members_to_remove:
|
||||
commands.append('interface {0}'.format(member))
|
||||
commands.append('no channel-group {0}'.format(existing['group']))
|
||||
|
||||
return commands
|
||||
|
||||
|
||||
def get_commands_if_mode_change(proposed, existing, group, mode, force, module):
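# A member's channel-group mode cannot be changed in place: any member whose
# current mode differs from the requested one is first removed from the group
# and then re-added with the new mode.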
|
||||
try:
|
||||
proposed_members = proposed['members']
|
||||
except KeyError:
|
||||
proposed_members = []
|
||||
|
||||
try:
|
||||
existing_members = existing['members']
|
||||
except KeyError:
|
||||
existing_members = []
|
||||
|
||||
try:
|
||||
members_dict = existing['members_detail']
|
||||
except KeyError:
|
||||
members_dict = {}
|
||||
|
||||
members_to_remove = set(existing_members).difference(proposed_members)
|
||||
members_with_mode_change = []
|
||||
if members_dict:
|
||||
for interface, values in members_dict.items():
|
||||
if (interface in proposed_members and
|
||||
(interface not in members_to_remove)):
|
||||
if values['mode'] != mode:
|
||||
members_with_mode_change.append(interface)
|
||||
|
||||
commands = []
|
||||
# NOTE: Leading whitespace for force option is important
|
||||
force = ' force' if force else ''
|
||||
if members_with_mode_change:
|
||||
for member in members_with_mode_change:
|
||||
commands.append('interface {0}'.format(member))
|
||||
commands.append('no channel-group {0}'.format(group))
|
||||
|
||||
for member in members_with_mode_change:
|
||||
commands.append('interface {0}'.format(member))
|
||||
commands.append('channel-group {0}{1} mode {2}'.format(group, force, mode))
|
||||
|
||||
return commands
|
||||
|
||||
|
||||
def get_commands_min_links(existing, proposed, group, min_links, module):
|
||||
commands = []
|
||||
try:
|
||||
if (existing['min_links'] is None or
|
||||
(existing['min_links'] != proposed['min_links'])):
|
||||
commands.append('interface port-channel{0}'.format(group))
|
||||
commands.append('lacp min-links {0}'.format(min_links))
|
||||
except KeyError:
|
||||
commands.append('interface port-channel{0}'.format(group))
|
||||
commands.append('lacp min-links {0}'.format(min_links))
|
||||
return commands
|
||||
|
||||
|
||||
def flatten_list(command_lists):
|
||||
flat_command_list = []
|
||||
for command in command_lists:
|
||||
if isinstance(command, list):
|
||||
flat_command_list.extend(command)
|
||||
else:
|
||||
flat_command_list.append(command)
|
||||
return flat_command_list
|
||||
|
||||
|
||||
def state_present(module, existing, proposed, interface_exist, force, warnings):
|
||||
commands = []
|
||||
group = str(module.params['group'])
|
||||
mode = module.params['mode']
|
||||
min_links = module.params['min_links']
|
||||
|
||||
if not interface_exist:
|
||||
command = config_portchannel(proposed, mode, group, force)
|
||||
commands.append(command)
|
||||
commands.insert(0, 'interface port-channel{0}'.format(group))
|
||||
warnings.append("The proposed port-channel interface did not "
|
||||
"exist. It's recommended to use nxos_interface to "
|
||||
"create all logical interfaces.")
|
||||
|
||||
elif existing and interface_exist:
|
||||
if force:
|
||||
command = get_commands_to_remove_members(proposed, existing, module)
|
||||
commands.append(command)
|
||||
|
||||
command = get_commands_to_add_members(proposed, existing, force, module)
|
||||
commands.append(command)
|
||||
|
||||
mode_command = get_commands_if_mode_change(proposed, existing, group, mode, force, module)
|
||||
commands.insert(0, mode_command)
|
||||
|
||||
if min_links:
|
||||
command = get_commands_min_links(existing, proposed, group, min_links, module)
|
||||
commands.append(command)
|
||||
|
||||
return commands
|
||||
|
||||
|
||||
def state_absent(module, existing, proposed):
|
||||
commands = []
|
||||
group = str(module.params['group'])
|
||||
commands.append(['no interface port-channel{0}'.format(group)])
|
||||
return commands
|
||||
|
||||
|
||||
def main():
|
||||
argument_spec = dict(
|
||||
group=dict(required=True, type='str'),
|
||||
mode=dict(required=False, choices=['on', 'active', 'passive'], default='on', type='str'),
|
||||
min_links=dict(required=False, default=None, type='str'),
|
||||
members=dict(required=False, default=None, type='list'),
|
||||
force=dict(required=False, default='false', type='str', choices=['true', 'false']),
|
||||
state=dict(required=False, choices=['absent', 'present'], default='present'),
|
||||
)
|
||||
|
||||
argument_spec.update(nxos_argument_spec)
|
||||
|
||||
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
|
||||
|
||||
warnings = list()
|
||||
results = dict(changed=False, warnings=warnings)
|
||||
|
||||
group = str(module.params['group'])
|
||||
mode = module.params['mode']
|
||||
min_links = module.params['min_links']
|
||||
members = module.params['members']
|
||||
state = module.params['state']
|
||||
|
||||
if str(module.params['force']).lower() == 'true':
|
||||
force = True
|
||||
elif module.params['force'] == 'false':
|
||||
force = False
|
||||
|
||||
if ((min_links or mode) and
|
||||
(not members and state == 'present')):
|
||||
module.fail_json(msg='"members" is required when state=present and '
|
||||
'"min_links" or "mode" are provided')
|
||||
|
||||
args = [
|
||||
'group',
|
||||
'members',
|
||||
'min_links',
|
||||
'mode'
|
||||
]
|
||||
|
||||
existing, interface_exist = get_existing(module, args)
|
||||
proposed = dict((k, v) for k, v in module.params.items()
|
||||
if v is not None and k in args)
|
||||
|
||||
commands = []
|
||||
|
||||
if state == 'absent' and existing:
|
||||
commands = state_absent(module, existing, proposed)
|
||||
elif state == 'present':
|
||||
commands = state_present(module, existing, proposed, interface_exist, force, warnings)
|
||||
|
||||
cmds = flatten_list(commands)
|
||||
if cmds:
|
||||
if module.check_mode:
|
||||
module.exit_json(**results)
|
||||
else:
|
||||
load_config(module, cmds)
|
||||
results['changed'] = True
|
||||
if 'configure' in cmds:
|
||||
cmds.pop(0)
|
||||
|
||||
results['commands'] = cmds
|
||||
module.exit_json(**results)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
@@ -1,542 +1,14 @@
|
|||
#!/usr/bin/python
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# Copyright: Ansible Project
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'network'}
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: nxos_switchport
|
||||
extends_documentation_fragment: nxos
|
||||
version_added: "2.1"
|
||||
deprecated:
|
||||
removed_in: "2.9"
|
||||
why: Replaced with generic version.
|
||||
alternative: Use M(nxos_l2_interface) instead.
|
||||
short_description: Manages Layer 2 switchport interfaces.
|
||||
description:
|
||||
- Manages Layer 2 interfaces
|
||||
author: Jason Edelman (@jedelman8)
|
||||
notes:
|
||||
- Tested against NXOSv 7.3.(0)D1(1) on VIRL
|
||||
- When C(state=absent), VLANs can be added/removed from trunk links and
|
||||
the existing access VLAN can be 'unconfigured' to just having VLAN 1
|
||||
on that interface.
|
||||
- When working with trunks VLANs the keywords add/remove are always sent
|
||||
in the `switchport trunk allowed vlan` command. Use verbose mode to see
|
||||
commands sent.
|
||||
- When C(state=unconfigured), the interface is returned to its default
Layer 2 configuration, i.e. vlan 1 in access mode.
|
||||
options:
|
||||
interface:
|
||||
description:
|
||||
- Full name of the interface, i.e. Ethernet1/1.
|
||||
mode:
|
||||
description:
|
||||
- Mode for the Layer 2 port.
|
||||
choices: ['access','trunk']
|
||||
access_vlan:
|
||||
description:
|
||||
- If C(mode=access), used as the access VLAN ID.
|
||||
native_vlan:
|
||||
description:
|
||||
- If C(mode=trunk), used as the trunk native VLAN ID.
|
||||
trunk_vlans:
|
||||
description:
|
||||
- If C(mode=trunk), used as the VLAN range to ADD or REMOVE
|
||||
from the trunk.
|
||||
aliases:
|
||||
- trunk_add_vlans
|
||||
state:
|
||||
description:
|
||||
- Manage the state of the resource.
|
||||
default: present
|
||||
choices: ['present','absent', 'unconfigured']
|
||||
trunk_allowed_vlans:
|
||||
description:
|
||||
- if C(mode=trunk), these are the only VLANs that will be
|
||||
configured on the trunk, i.e. "2-10,15".
|
||||
version_added: 2.2
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
- name: Ensure Eth1/5 is in its default switchport state
|
||||
nxos_switchport:
|
||||
interface: eth1/5
|
||||
state: unconfigured
|
||||
|
||||
- name: Ensure Eth1/5 is configured for access vlan 20
|
||||
nxos_switchport:
|
||||
interface: eth1/5
|
||||
mode: access
|
||||
access_vlan: 20
|
||||
|
||||
- name: Ensure Eth1/5 only has vlans 5-10 as trunk vlans
|
||||
nxos_switchport:
|
||||
interface: eth1/5
|
||||
mode: trunk
|
||||
native_vlan: 10
|
||||
trunk_vlans: 5-10
|
||||
|
||||
- name: Ensure eth1/5 is a trunk port and ensure 2-50 are being tagged (doesn't mean others aren't also being tagged)
|
||||
nxos_switchport:
|
||||
interface: eth1/5
|
||||
mode: trunk
|
||||
native_vlan: 10
|
||||
trunk_vlans: 2-50
|
||||
|
||||
- name: Ensure these VLANs are not being tagged on the trunk
|
||||
nxos_switchport:
|
||||
interface: eth1/5
|
||||
mode: trunk
|
||||
trunk_vlans: 51-4094
|
||||
state: absent
|
||||
'''
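# A rough equivalent of the access-vlan example above with the replacement
# nxos_l2_interface module (a sketch only; option names are assumed from its
# documented interface):
#
# - nxos_l2_interface:
#     name: eth1/5
#     mode: access
#     access_vlan: 20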
|
||||
|
||||
RETURN = '''
|
||||
commands:
|
||||
description: command string sent to the device
|
||||
returned: always
|
||||
type: list
|
||||
sample: ["interface eth1/5", "switchport access vlan 20"]
|
||||
'''
|
||||
|
||||
from ansible.module_utils.network.nxos.nxos import load_config, run_commands
|
||||
from ansible.module_utils.network.nxos.nxos import get_capabilities, nxos_argument_spec
|
||||
from ansible.module_utils.network.nxos.nxos import get_interface_type
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
|
||||
|
||||
def get_interface_mode(interface, module):
|
||||
"""Gets current mode of interface: layer2 or layer3
|
||||
Args:
|
||||
device (Device): This is the device object of an NX-API enabled device
|
||||
using the Device class within device.py
|
||||
interface (string): full name of interface, i.e. Ethernet1/1,
|
||||
loopback10, port-channel20, vlan20
|
||||
Returns:
|
||||
str: 'layer2' or 'layer3'
|
||||
"""
|
||||
command = 'show interface {0} | json'.format(interface)
|
||||
intf_type = get_interface_type(interface)
|
||||
mode = 'unknown'
|
||||
interface_table = {}
|
||||
|
||||
try:
|
||||
body = run_commands(module, [command])[0]
|
||||
interface_table = body['TABLE_interface']['ROW_interface']
|
||||
except (KeyError, AttributeError, IndexError):
|
||||
return mode
|
||||
|
||||
if interface_table:
|
||||
# HACK FOR NOW
|
||||
if intf_type in ['ethernet', 'portchannel']:
|
||||
mode = str(interface_table.get('eth_mode', 'layer3'))
|
||||
if mode in ['access', 'trunk']:
|
||||
mode = 'layer2'
|
||||
if mode == 'routed':
|
||||
mode = 'layer3'
|
||||
elif intf_type == 'loopback' or intf_type == 'svi':
|
||||
mode = 'layer3'
|
||||
return mode
|
||||
|
||||
|
||||
def interface_is_portchannel(interface, module):
|
||||
"""Checks to see if an interface is part of portchannel bundle
|
||||
Args:
|
||||
interface (str): full name of interface, i.e. Ethernet1/1
|
||||
Returns:
|
||||
True/False based on if interface is a member of a portchannel bundle
|
||||
"""
|
||||
intf_type = get_interface_type(interface)
|
||||
|
||||
if intf_type == 'ethernet':
|
||||
command = 'show interface {0} | json'.format(interface)
|
||||
try:
|
||||
body = run_commands(module, [command])[0]
|
||||
interface_table = body['TABLE_interface']['ROW_interface']
|
||||
except (KeyError, AttributeError, IndexError):
|
||||
interface_table = None
|
||||
|
||||
if interface_table:
|
||||
state = interface_table.get('eth_bundle')
|
||||
if state:
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
return False
|
||||
|
||||
|
||||
def get_switchport(port, module):
|
||||
"""Gets current config of L2 switchport
|
||||
Args:
|
||||
device (Device): This is the device object of an NX-API enabled device
|
||||
using the Device class within device.py
|
||||
port (str): full name of interface, i.e. Ethernet1/1
|
||||
Returns:
|
||||
dictionary with k/v pairs for L2 vlan config
|
||||
"""
|
||||
|
||||
command = 'show interface {0} switchport | json'.format(port)
|
||||
|
||||
try:
|
||||
body = run_commands(module, [command])[0]
|
||||
sp_table = body['TABLE_interface']['ROW_interface']
|
||||
except (KeyError, AttributeError, IndexError):
|
||||
sp_table = None
|
||||
|
||||
if sp_table:
|
||||
key_map = {
|
||||
"interface": "interface",
|
||||
"oper_mode": "mode",
|
||||
"switchport": "switchport",
|
||||
"access_vlan": "access_vlan",
|
||||
"access_vlan_name": "access_vlan_name",
|
||||
"native_vlan": "native_vlan",
|
||||
"native_vlan_name": "native_vlan_name",
|
||||
"trunk_vlans": "trunk_vlans"
|
||||
}
|
||||
sp = apply_key_map(key_map, sp_table)
|
||||
return sp
|
||||
|
||||
else:
|
||||
return {}
|
||||
|
||||
|
||||
def remove_switchport_config_commands(interface, existing, proposed, module):
|
||||
mode = proposed.get('mode')
|
||||
commands = []
|
||||
command = None
|
||||
|
||||
if mode == 'access':
|
||||
av_check = existing.get('access_vlan') == proposed.get('access_vlan')
|
||||
if av_check:
|
||||
command = 'no switchport access vlan {0}'.format(existing.get('access_vlan'))
|
||||
commands.append(command)
|
||||
|
||||
elif mode == 'trunk':
|
||||
|
||||
# Supported Remove Scenarios for trunk_vlans_list
|
||||
# 1) Existing: 1,2,3 Proposed: 1,2,3 - Remove all
|
||||
# 2) Existing: 1,2,3 Proposed: 1,2 - Remove 1,2 Leave 3
|
||||
# 3) Existing: 1,2,3 Proposed: 2,3 - Remove 2,3 Leave 1
|
||||
# 4) Existing: 1,2,3 Proposed: 4,5,6 - None removed.
|
||||
# 5) Existing: None Proposed: 1,2,3 - None removed.
|
||||
|
||||
existing_vlans = existing.get('trunk_vlans_list')
|
||||
proposed_vlans = proposed.get('trunk_vlans_list')
|
||||
vlans_to_remove = set(proposed_vlans).intersection(existing_vlans)
|
||||
|
||||
if vlans_to_remove:
|
||||
proposed_allowed_vlans = proposed.get('trunk_allowed_vlans')
|
||||
remove_trunk_allowed_vlans = proposed.get('trunk_vlans', proposed_allowed_vlans)
|
||||
command = 'switchport trunk allowed vlan remove {0}'.format(remove_trunk_allowed_vlans)
|
||||
commands.append(command)
|
||||
|
||||
native_check = existing.get('native_vlan') == proposed.get('native_vlan')
|
||||
if native_check and proposed.get('native_vlan'):
|
||||
command = 'no switchport trunk native vlan {0}'.format(existing.get('native_vlan'))
|
||||
commands.append(command)
|
||||
|
||||
if commands:
|
||||
commands.insert(0, 'interface ' + interface)
|
||||
return commands
|
||||
|
||||
|
||||
def get_switchport_config_commands(interface, existing, proposed, module):
|
||||
"""Gets commands required to config a given switchport interface
|
||||
"""
|
||||
|
||||
proposed_mode = proposed.get('mode')
|
||||
existing_mode = existing.get('mode')
|
||||
commands = []
|
||||
command = None
|
||||
|
||||
if proposed_mode != existing_mode:
|
||||
if proposed_mode == 'trunk':
|
||||
command = 'switchport mode trunk'
|
||||
elif proposed_mode == 'access':
|
||||
command = 'switchport mode access'
|
||||
|
||||
if command:
|
||||
commands.append(command)
|
||||
|
||||
if proposed_mode == 'access':
|
||||
av_check = str(existing.get('access_vlan')) == str(proposed.get('access_vlan'))
|
||||
if not av_check:
|
||||
command = 'switchport access vlan {0}'.format(proposed.get('access_vlan'))
|
||||
commands.append(command)
|
||||
|
||||
elif proposed_mode == 'trunk':
|
||||
tv_check = existing.get('trunk_vlans_list') == proposed.get('trunk_vlans_list')
|
||||
|
||||
if not tv_check:
|
||||
if proposed.get('allowed'):
|
||||
command = 'switchport trunk allowed vlan {0}'.format(proposed.get('trunk_allowed_vlans'))
|
||||
commands.append(command)
|
||||
|
||||
else:
|
||||
existing_vlans = existing.get('trunk_vlans_list')
|
||||
proposed_vlans = proposed.get('trunk_vlans_list')
|
||||
vlans_to_add = set(proposed_vlans).difference(existing_vlans)
|
||||
if vlans_to_add:
|
||||
command = 'switchport trunk allowed vlan add {0}'.format(proposed.get('trunk_vlans'))
|
||||
commands.append(command)
|
||||
|
||||
native_check = str(existing.get('native_vlan')) == str(proposed.get('native_vlan'))
|
||||
if not native_check and proposed.get('native_vlan'):
|
||||
command = 'switchport trunk native vlan {0}'.format(proposed.get('native_vlan'))
|
||||
commands.append(command)
|
||||
|
||||
if commands:
|
||||
commands.insert(0, 'interface ' + interface)
|
||||
return commands
|
||||
|
||||
|
||||
def is_switchport_default(existing):
|
||||
"""Determines if switchport has a default config based on mode
|
||||
Args:
|
||||
existing (dict): existing switchport configuration from Ansible mod
|
||||
Returns:
|
||||
boolean: True if switchport has OOB Layer 2 config, i.e.
|
||||
vlan 1 and trunk all and mode is access
|
||||
"""
|
||||
|
||||
c1 = str(existing['access_vlan']) == '1'
|
||||
c2 = str(existing['native_vlan']) == '1'
|
||||
c3 = existing['trunk_vlans'] == '1-4094'
|
||||
c4 = existing['mode'] == 'access'
|
||||
|
||||
default = c1 and c2 and c3 and c4
|
||||
|
||||
return default
|
||||
|
||||
|
||||
def default_switchport_config(interface):
|
||||
commands = []
|
||||
commands.append('interface ' + interface)
|
||||
commands.append('switchport mode access')
|
||||
commands.append('switchport access vlan 1')
|
||||
commands.append('switchport trunk native vlan 1')
|
||||
commands.append('switchport trunk allowed vlan all')
|
||||
return commands
|
||||
|
||||
|
||||
def vlan_range_to_list(vlans):
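# Expand an NX-OS style VLAN range string such as "2-10,15" into a numerically
# sorted list of individual VLAN IDs (returned as strings).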
|
||||
result = []
|
||||
if vlans:
|
||||
for part in vlans.split(','):
|
||||
if part == 'none':
|
||||
break
|
||||
if '-' in part:
|
||||
a, b = part.split('-')
|
||||
a, b = int(a), int(b)
|
||||
result.extend(range(a, b + 1))
|
||||
else:
|
||||
a = int(part)
|
||||
result.append(a)
|
||||
return numerical_sort(result)
|
||||
return result
|
||||
|
||||
|
||||
def get_list_of_vlans(module):
|
||||
|
||||
command = 'show vlan | json'
|
||||
vlan_list = []
|
||||
|
||||
try:
|
||||
body = run_commands(module, [command])[0]
|
||||
vlan_table = body['TABLE_vlanbrief']['ROW_vlanbrief']
|
||||
except (KeyError, AttributeError, IndexError):
|
||||
return []
|
||||
|
||||
if isinstance(vlan_table, list):
|
||||
for vlan in vlan_table:
|
||||
vlan_list.append(str(vlan['vlanshowbr-vlanid-utf']))
|
||||
else:
|
||||
vlan_list.append('1')
|
||||
|
||||
return vlan_list
|
||||
|
||||
|
||||
def numerical_sort(string_int_list):
|
||||
"""Sorts list of strings/integers that are digits in numerical order.
|
||||
"""
|
||||
|
||||
as_int_list = []
|
||||
as_str_list = []
|
||||
for vlan in string_int_list:
|
||||
as_int_list.append(int(vlan))
|
||||
as_int_list.sort()
|
||||
for vlan in as_int_list:
|
||||
as_str_list.append(str(vlan))
|
||||
return as_str_list
|
||||
|
||||
|
||||
def apply_key_map(key_map, table):
|
||||
new_dict = {}
|
||||
for key, value in table.items():
|
||||
new_key = key_map.get(key)
|
||||
if new_key:
|
||||
new_dict[new_key] = value
|
||||
return new_dict
|
||||
|
||||
|
||||
def apply_value_map(value_map, resource):
|
||||
for key, value in value_map.items():
|
||||
resource[key] = value[resource.get(key)]
|
||||
return resource
|
||||
|
||||
|
||||
def flatten_list(command_lists):
|
||||
flat_command_list = []
|
||||
for command in command_lists:
|
||||
if isinstance(command, list):
|
||||
flat_command_list.extend(command)
|
||||
else:
|
||||
flat_command_list.append(command)
|
||||
return flat_command_list
|
||||
|
||||
|
||||
def main():
|
||||
|
||||
argument_spec = dict(
|
||||
interface=dict(required=True, type='str'),
|
||||
mode=dict(choices=['access', 'trunk'], required=False),
|
||||
access_vlan=dict(type='str', required=False),
|
||||
native_vlan=dict(type='str', required=False),
|
||||
trunk_vlans=dict(type='str', aliases=['trunk_add_vlans'], required=False),
|
||||
trunk_allowed_vlans=dict(type='str', required=False),
|
||||
state=dict(choices=['absent', 'present', 'unconfigured'], default='present')
|
||||
)
|
||||
|
||||
argument_spec.update(nxos_argument_spec)
|
||||
|
||||
module = AnsibleModule(argument_spec=argument_spec,
|
||||
mutually_exclusive=[['access_vlan', 'trunk_vlans'],
|
||||
['access_vlan', 'native_vlan'],
|
||||
['access_vlan', 'trunk_allowed_vlans']],
|
||||
supports_check_mode=True)
|
||||
|
||||
warnings = list()
|
||||
commands = []
|
||||
results = {'changed': False}
|
||||
|
||||
interface = module.params['interface']
|
||||
mode = module.params['mode']
|
||||
access_vlan = module.params['access_vlan']
|
||||
state = module.params['state']
|
||||
trunk_vlans = module.params['trunk_vlans']
|
||||
native_vlan = module.params['native_vlan']
|
||||
trunk_allowed_vlans = module.params['trunk_allowed_vlans']
|
||||
|
||||
args = dict(interface=interface, mode=mode, access_vlan=access_vlan,
|
||||
native_vlan=native_vlan, trunk_vlans=trunk_vlans,
|
||||
trunk_allowed_vlans=trunk_allowed_vlans)
|
||||
|
||||
proposed = dict((k, v) for k, v in args.items() if v is not None)
|
||||
|
||||
interface = interface.lower()
|
||||
|
||||
if mode == 'access' and state == 'present' and not access_vlan:
|
||||
module.fail_json(msg='access_vlan param is required when mode=access && state=present')
|
||||
|
||||
if mode == 'trunk' and access_vlan:
|
||||
module.fail_json(msg='access_vlan param not supported when using mode=trunk')
|
||||
|
||||
current_mode = get_interface_mode(interface, module)
|
||||
|
||||
# Current mode will return layer3, layer2, or unknown
|
||||
if current_mode == 'unknown' or current_mode == 'layer3':
|
||||
module.fail_json(msg='Ensure interface is configured to be a L2'
|
||||
'\nport first before using this module. You can use'
|
||||
'\nthe nxos_interface module for this.')
|
||||
|
||||
if interface_is_portchannel(interface, module):
|
||||
module.fail_json(msg='Cannot change L2 config on physical '
|
||||
'\nport because it is in a portchannel. '
|
||||
'\nYou should update the portchannel config.')
|
||||
|
||||
# existing will never be null for Eth intfs as there is always a default
|
||||
existing = get_switchport(interface, module)
|
||||
|
||||
# Safeguard check
|
||||
# If there isn't an existing, something is wrong per previous comment
|
||||
if not existing:
|
||||
module.fail_json(msg='Make sure you are using the FULL interface name')
|
||||
|
||||
if trunk_vlans or trunk_allowed_vlans:
|
||||
if trunk_vlans:
|
||||
trunk_vlans_list = vlan_range_to_list(trunk_vlans)
|
||||
elif trunk_allowed_vlans:
|
||||
trunk_vlans_list = vlan_range_to_list(trunk_allowed_vlans)
|
||||
proposed['allowed'] = True
|
||||
|
||||
existing_trunks_list = vlan_range_to_list((existing['trunk_vlans']))
|
||||
|
||||
existing['trunk_vlans_list'] = existing_trunks_list
|
||||
proposed['trunk_vlans_list'] = trunk_vlans_list
|
||||
|
||||
current_vlans = get_list_of_vlans(module)
|
||||
|
||||
if state == 'present':
|
||||
if access_vlan and access_vlan not in current_vlans:
|
||||
module.fail_json(msg='You are trying to configure a VLAN'
|
||||
' on an interface that\ndoes not exist on the '
|
||||
' switch yet!', vlan=access_vlan)
|
||||
elif native_vlan and native_vlan not in current_vlans:
|
||||
module.fail_json(msg='You are trying to configure a VLAN'
|
||||
' on an interface that\ndoes not exist on the '
|
||||
' switch yet!', vlan=native_vlan)
|
||||
else:
|
||||
command = get_switchport_config_commands(interface, existing, proposed, module)
|
||||
commands.append(command)
|
||||
elif state == 'unconfigured':
|
||||
is_default = is_switchport_default(existing)
|
||||
if not is_default:
|
||||
command = default_switchport_config(interface)
|
||||
commands.append(command)
|
||||
elif state == 'absent':
|
||||
command = remove_switchport_config_commands(interface, existing, proposed, module)
|
||||
commands.append(command)
|
||||
|
||||
if trunk_vlans or trunk_allowed_vlans:
|
||||
existing.pop('trunk_vlans_list')
|
||||
proposed.pop('trunk_vlans_list')
|
||||
|
||||
cmds = flatten_list(commands)
|
||||
|
||||
if cmds:
|
||||
if module.check_mode:
|
||||
module.exit_json(changed=True, commands=cmds)
|
||||
else:
|
||||
results['changed'] = True
|
||||
load_config(module, cmds)
|
||||
if 'configure' in cmds:
|
||||
cmds.pop(0)
|
||||
|
||||
results['commands'] = cmds
|
||||
results['warnings'] = warnings
|
||||
|
||||
module.exit_json(**results)
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
@@ -3,335 +3,15 @@
|
|||
#
|
||||
# Ansible module to manage PaloAltoNetworks Firewall
|
||||
# (c) 2016, techbizdev <techbizdev@paloaltonetworks.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: panos_nat_policy
|
||||
short_description: create a policy NAT rule
|
||||
description:
|
||||
- Create a policy nat rule. Keep in mind that we can either end up configuring source NAT, destination NAT, or both. Instead of splitting it
|
||||
into two we will make a fair attempt to determine which one the user wants.
|
||||
author: "Luigi Mori (@jtschichold), Ivan Bojer (@ivanbojer)"
|
||||
version_added: "2.3"
|
||||
requirements:
|
||||
- pan-python
|
||||
deprecated:
|
||||
alternative: Use M(panos_nat_rule) instead.
|
||||
removed_in: '2.9'
|
||||
why: This module depended on outdated and old SDK, use M(panos_nat_rule) instead.
|
||||
options:
|
||||
ip_address:
|
||||
description:
|
||||
- IP address (or hostname) of PAN-OS device
|
||||
required: true
|
||||
password:
|
||||
description:
|
||||
- password for authentication
|
||||
required: true
|
||||
username:
|
||||
description:
|
||||
- username for authentication
|
||||
default: "admin"
|
||||
rule_name:
|
||||
description:
|
||||
- name of the SNAT rule
|
||||
required: true
|
||||
from_zone:
|
||||
description:
|
||||
- list of source zones
|
||||
required: true
|
||||
to_zone:
|
||||
description:
|
||||
- destination zone
|
||||
required: true
|
||||
source:
|
||||
description:
|
||||
- list of source addresses
|
||||
default: ["any"]
|
||||
destination:
|
||||
description:
|
||||
- list of destination addresses
|
||||
default: ["any"]
|
||||
service:
|
||||
description:
|
||||
- service
|
||||
default: "any"
|
||||
snat_type:
|
||||
description:
|
||||
- type of source translation
|
||||
snat_address:
|
||||
description:
|
||||
- snat translated address
|
||||
snat_interface:
|
||||
description:
|
||||
- snat interface
|
||||
snat_interface_address:
|
||||
description:
|
||||
- snat interface address
|
||||
snat_bidirectional:
|
||||
description:
|
||||
- bidirectional flag
|
||||
type: bool
|
||||
default: 'no'
|
||||
dnat_address:
|
||||
description:
|
||||
- dnat translated address
|
||||
dnat_port:
|
||||
description:
|
||||
- dnat translated port
|
||||
override:
|
||||
description:
|
||||
- attempt to override rule if one with the same name already exists
|
||||
type: bool
|
||||
default: 'no'
|
||||
commit:
|
||||
description:
|
||||
- commit if changed
|
||||
type: bool
|
||||
default: 'yes'
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
# Create a source and destination nat rule
|
||||
- name: create nat SSH221 rule for 10.0.1.101
|
||||
panos_nat:
|
||||
ip_address: "192.168.1.1"
|
||||
password: "admin"
|
||||
rule_name: "Web SSH"
|
||||
from_zone: ["external"]
|
||||
to_zone: "external"
|
||||
source: ["any"]
|
||||
destination: ["10.0.0.100"]
|
||||
service: "service-tcp-221"
|
||||
snat_type: "dynamic-ip-and-port"
|
||||
snat_interface: "ethernet1/2"
|
||||
dnat_address: "10.0.1.101"
|
||||
dnat_port: "22"
|
||||
commit: False
|
||||
'''
|
||||
|
||||
RETURN = '''
|
||||
# Default return values
|
||||
'''
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils._text import to_native
|
||||
|
||||
try:
|
||||
import pan.xapi
|
||||
from pan.xapi import PanXapiError
|
||||
|
||||
HAS_LIB = True
|
||||
except ImportError:
|
||||
HAS_LIB = False
|
||||
|
||||
_NAT_XPATH = "/config/devices/entry[@name='localhost.localdomain']" + \
|
||||
"/vsys/entry[@name='vsys1']" + \
|
||||
"/rulebase/nat/rules/entry[@name='%s']"
|
||||
|
||||
|
||||
def nat_rule_exists(xapi, rule_name):
|
||||
xapi.get(_NAT_XPATH % rule_name)
|
||||
e = xapi.element_root.find('.//entry')
|
||||
if e is None:
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def dnat_xml(m, dnat_address, dnat_port):
|
||||
if dnat_address is None and dnat_port is None:
|
||||
return None
|
||||
|
||||
exml = ["<destination-translation>"]
|
||||
if dnat_address is not None:
|
||||
exml.append("<translated-address>%s</translated-address>" %
|
||||
dnat_address)
|
||||
if dnat_port is not None:
|
||||
exml.append("<translated-port>%s</translated-port>" %
|
||||
dnat_port)
|
||||
exml.append('</destination-translation>')
|
||||
|
||||
return ''.join(exml)
|
||||
|
||||
|
||||
def snat_xml(m, snat_type, snat_address, snat_interface,
|
||||
snat_interface_address, snat_bidirectional):
|
||||
if snat_type == 'static-ip':
|
||||
if snat_address is None:
|
||||
m.fail_json(msg="snat_address should be speicified "
|
||||
"for snat_type static-ip")
|
||||
|
||||
exml = ["<source-translation>", "<static-ip>"]
|
||||
if snat_bidirectional:
|
||||
exml.append('<bi-directional>%s</bi-directional>' % 'yes')
|
||||
else:
|
||||
exml.append('<bi-directional>%s</bi-directional>' % 'no')
|
||||
exml.append('<translated-address>%s</translated-address>' %
|
||||
snat_address)
|
||||
exml.append('</static-ip>')
|
||||
exml.append('</source-translation>')
|
||||
elif snat_type == 'dynamic-ip-and-port':
|
||||
exml = ["<source-translation>",
|
||||
"<dynamic-ip-and-port>"]
|
||||
if snat_interface is not None:
|
||||
exml = exml + [
|
||||
"<interface-address>",
|
||||
"<interface>%s</interface>" % snat_interface]
|
||||
if snat_interface_address is not None:
|
||||
exml.append("<ip>%s</ip>" % snat_interface_address)
|
||||
exml.append("</interface-address>")
|
||||
elif snat_address is not None:
|
||||
exml.append("<translated-address>")
|
||||
for t in snat_address:
|
||||
exml.append("<member>%s</member>" % t)
|
||||
exml.append("</translated-address>")
|
||||
else:
|
||||
m.fail_json(msg="no snat_interface or snat_address "
|
||||
"specified for snat_type dynamic-ip-and-port")
|
||||
exml.append('</dynamic-ip-and-port>')
|
||||
exml.append('</source-translation>')
|
||||
else:
|
||||
m.fail_json(msg="unknown snat_type %s" % snat_type)
|
||||
|
||||
return ''.join(exml)
|
||||
|
||||
|
||||
def add_nat(xapi, module, rule_name, from_zone, to_zone,
|
||||
source, destination, service, dnatxml=None, snatxml=None):
|
||||
exml = []
|
||||
if dnatxml:
|
||||
exml.append(dnatxml)
|
||||
if snatxml:
|
||||
exml.append(snatxml)
|
||||
|
||||
exml.append("<to><member>%s</member></to>" % to_zone)
|
||||
|
||||
exml.append("<from>")
|
||||
exml = exml + ["<member>%s</member>" % e for e in from_zone]
|
||||
exml.append("</from>")
|
||||
|
||||
exml.append("<source>")
|
||||
exml = exml + ["<member>%s</member>" % e for e in source]
|
||||
exml.append("</source>")
|
||||
|
||||
exml.append("<destination>")
|
||||
exml = exml + ["<member>%s</member>" % e for e in destination]
|
||||
exml.append("</destination>")
|
||||
|
||||
exml.append("<service>%s</service>" % service)
|
||||
|
||||
exml.append("<nat-type>ipv4</nat-type>")
|
||||
|
||||
exml = ''.join(exml)
|
||||
|
||||
xapi.set(xpath=_NAT_XPATH % rule_name, element=exml)
|
||||
|
||||
return True
|
||||
|
||||
|
||||
def main():
|
||||
argument_spec = dict(
|
||||
ip_address=dict(required=True),
|
||||
password=dict(required=True, no_log=True),
|
||||
username=dict(default='admin'),
|
||||
rule_name=dict(required=True),
|
||||
from_zone=dict(type='list', required=True),
|
||||
to_zone=dict(required=True),
|
||||
source=dict(type='list', default=["any"]),
|
||||
destination=dict(type='list', default=["any"]),
|
||||
service=dict(default="any"),
|
||||
snat_type=dict(),
|
||||
snat_address=dict(),
|
||||
snat_interface=dict(),
|
||||
snat_interface_address=dict(),
|
||||
snat_bidirectional=dict(default=False),
|
||||
dnat_address=dict(),
|
||||
dnat_port=dict(),
|
||||
override=dict(type='bool', default=False),
|
||||
commit=dict(type='bool', default=True)
|
||||
)
|
||||
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)
|
||||
|
||||
if module._name == 'panos_nat_policy':
|
||||
module.deprecate("The 'panos_nat_policy' module is being renamed 'panos_nat_rule'", version=2.9)
|
||||
|
||||
if not HAS_LIB:
|
||||
module.fail_json(msg='pan-python is required for this module')
|
||||
|
||||
ip_address = module.params["ip_address"]
|
||||
password = module.params["password"]
|
||||
username = module.params['username']
|
||||
|
||||
xapi = pan.xapi.PanXapi(
|
||||
hostname=ip_address,
|
||||
api_username=username,
|
||||
api_password=password
|
||||
)
|
||||
|
||||
rule_name = module.params['rule_name']
|
||||
from_zone = module.params['from_zone']
|
||||
to_zone = module.params['to_zone']
|
||||
source = module.params['source']
|
||||
destination = module.params['destination']
|
||||
service = module.params['service']
|
||||
|
||||
snat_type = module.params['snat_type']
|
||||
snat_address = module.params['snat_address']
|
||||
snat_interface = module.params['snat_interface']
|
||||
snat_interface_address = module.params['snat_interface_address']
|
||||
snat_bidirectional = module.params['snat_bidirectional']
|
||||
|
||||
dnat_address = module.params['dnat_address']
|
||||
dnat_port = module.params['dnat_port']
|
||||
commit = module.params['commit']
|
||||
|
||||
override = module.params["override"]
|
||||
if not override and nat_rule_exists(xapi, rule_name):
|
||||
module.exit_json(changed=False, msg="rule exists")
|
||||
|
||||
try:
|
||||
changed = add_nat(
|
||||
xapi,
|
||||
module,
|
||||
rule_name,
|
||||
from_zone,
|
||||
to_zone,
|
||||
source,
|
||||
destination,
|
||||
service,
|
||||
dnatxml=dnat_xml(module, dnat_address, dnat_port),
|
||||
snatxml=snat_xml(module, snat_type, snat_address,
|
||||
snat_interface, snat_interface_address,
|
||||
snat_bidirectional)
|
||||
)
|
||||
|
||||
if changed and commit:
|
||||
xapi.commit(cmd="<commit></commit>", sync=True, interval=1)
|
||||
|
||||
module.exit_json(changed=changed, msg="okey dokey")
|
||||
|
||||
except PanXapiError as exc:
|
||||
module.fail_json(msg=to_native(exc))
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
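As the description above says, panos_nat_policy infers whether to configure source NAT, destination NAT, or both from the parameters supplied: dnat_xml() and snat_xml() each return an XML fragment only when their parameters are present, and add_nat() sends whatever fragments were produced. A condensed, hedged sketch of that selection logic (not the removed module itself; the XML bodies are elided):

# Condensed sketch of the SNAT/DNAT selection shown above: a rule may carry
# source translation, destination translation, or both, depending on input.
def build_translation_fragments(params):
    fragments = []
    if params.get('dnat_address') or params.get('dnat_port'):
        fragments.append('<destination-translation>...</destination-translation>')
    if params.get('snat_type'):
        fragments.append('<source-translation>...</source-translation>')
    return fragments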
@ -3,507 +3,15 @@
|
|||
#
|
||||
# Ansible module to manage PaloAltoNetworks Firewall
|
||||
# (c) 2016, techbizdev <techbizdev@paloaltonetworks.com>
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
|
||||
|
||||
ANSIBLE_METADATA = {'metadata_version': '1.1',
|
||||
'status': ['deprecated'],
|
||||
'status': ['removed'],
|
||||
'supported_by': 'community'}
|
||||
|
||||
|
||||
DOCUMENTATION = '''
|
||||
---
|
||||
module: panos_security_policy
|
||||
short_description: Create security rule policy on PanOS devices.
|
||||
description:
|
||||
- Security policies allow you to enforce rules and take action, and can be as
|
||||
general or specific as needed. The policy rules are compared against the
|
||||
incoming traffic in sequence, and because the first rule that matches the
|
||||
traffic is applied, the more specific rules must precede the more general ones.
|
||||
author: "Ivan Bojer (@ivanbojer)"
|
||||
version_added: "2.3"
|
||||
deprecated:
|
||||
alternative: Use M(panos_security_rule) instead.
|
||||
removed_in: '2.9'
|
||||
why: This module depended on an outdated SDK. In 2.4, use M(panos_security_rule) instead.
|
||||
requirements:
|
||||
- pan-python can be obtained from PyPI U(https://pypi.org/project/pan-python/)
|
||||
- pandevice can be obtained from PyPI U(https://pypi.org/project/pandevice/)
|
||||
notes:
|
||||
- Checkmode is not supported.
|
||||
- Panorama is supported.
|
||||
options:
|
||||
ip_address:
|
||||
description:
|
||||
- IP address (or hostname) of PAN-OS device being configured.
|
||||
required: true
|
||||
username:
|
||||
description:
|
||||
- Username credentials to use for auth unless I(api_key) is set.
|
||||
default: "admin"
|
||||
password:
|
||||
description:
|
||||
- Password credentials to use for auth unless I(api_key) is set.
|
||||
required: true
|
||||
api_key:
|
||||
description:
|
||||
- API key that can be used instead of I(username)/I(password) credentials.
|
||||
rule_name:
|
||||
description:
|
||||
- Name of the security rule.
|
||||
required: true
|
||||
rule_type:
|
||||
description:
|
||||
- Type of security rule (version 6.1 of PanOS and above).
|
||||
default: "universal"
|
||||
description:
|
||||
description:
|
||||
- Description for the security rule.
|
||||
tag:
|
||||
description:
|
||||
- Administrative tags that can be added to the rule. Note, tags must be already defined.
|
||||
from_zone:
|
||||
description:
|
||||
- List of source zones.
|
||||
default: "any"
|
||||
to_zone:
|
||||
description:
|
||||
- List of destination zones.
|
||||
default: "any"
|
||||
source:
|
||||
description:
|
||||
- List of source addresses.
|
||||
default: "any"
|
||||
source_user:
|
||||
description:
|
||||
- Use users to enforce policy for individual users or a group of users.
|
||||
default: "any"
|
||||
hip_profiles:
|
||||
description: >
|
||||
If you are using GlobalProtect with host information profile (HIP) enabled, you can also base the policy
on information collected by GlobalProtect. For example, the user access level can be determined from the
HIP that notifies the firewall about the user's local configuration.
|
||||
default: "any"
|
||||
destination:
|
||||
description:
|
||||
- List of destination addresses.
|
||||
default: "any"
|
||||
application:
|
||||
description:
|
||||
- List of applications.
|
||||
default: "any"
|
||||
service:
|
||||
description:
|
||||
- List of services.
|
||||
default: "application-default"
|
||||
log_start:
|
||||
description:
|
||||
- Whether to log at session start.
|
||||
log_end:
|
||||
description:
|
||||
- Whether to log at session end.
|
||||
default: true
|
||||
action:
|
||||
description:
|
||||
- Action to apply once a rule matches.
|
||||
default: "allow"
|
||||
group_profile:
|
||||
description: >
|
||||
Security profile group that is already defined in the system. This property supersedes antivirus,
|
||||
vulnerability, spyware, url_filtering, file_blocking, data_filtering, and wildfire_analysis properties.
|
||||
antivirus:
|
||||
description:
|
||||
- Name of the already defined antivirus profile.
|
||||
vulnerability:
|
||||
description:
|
||||
- Name of the already defined vulnerability profile.
|
||||
spyware:
|
||||
description:
|
||||
- Name of the already defined spyware profile.
|
||||
url_filtering:
|
||||
description:
|
||||
- Name of the already defined url_filtering profile.
|
||||
file_blocking:
|
||||
description:
|
||||
- Name of the already defined file_blocking profile.
|
||||
data_filtering:
|
||||
description:
|
||||
- Name of the already defined data_filtering profile.
|
||||
wildfire_analysis:
|
||||
description:
|
||||
- Name of the already defined wildfire_analysis profile.
|
||||
devicegroup:
|
||||
description: >
|
||||
Device groups are used for the Panorama interaction with Firewall(s). The group must exist on Panorama.
If the device group is not defined, we assume that we are contacting a Firewall.
|
||||
commit:
|
||||
description:
|
||||
- Commit configuration if changed.
|
||||
default: true
|
||||
'''
|
||||
|
||||
EXAMPLES = '''
|
||||
- name: permit ssh to 1.1.1.1
|
||||
panos_security_policy:
|
||||
ip_address: '10.5.172.91'
|
||||
username: 'admin'
|
||||
password: 'paloalto'
|
||||
rule_name: 'SSH permit'
|
||||
description: 'SSH rule test'
|
||||
from_zone: ['public']
|
||||
to_zone: ['private']
|
||||
source: ['any']
|
||||
source_user: ['any']
|
||||
destination: ['1.1.1.1']
|
||||
category: ['any']
|
||||
application: ['ssh']
|
||||
service: ['application-default']
|
||||
hip_profiles: ['any']
|
||||
action: 'allow'
|
||||
commit: false
|
||||
|
||||
- name: Allow HTTP multimedia only from CDNs
|
||||
panos_security_policy:
|
||||
ip_address: '10.5.172.91'
|
||||
username: 'admin'
|
||||
password: 'paloalto'
|
||||
rule_name: 'HTTP Multimedia'
|
||||
description: 'Allow HTTP multimedia only to host at 1.1.1.1'
|
||||
from_zone: ['public']
|
||||
to_zone: ['private']
|
||||
source: ['any']
|
||||
source_user: ['any']
|
||||
destination: ['1.1.1.1']
|
||||
category: ['content-delivery-networks']
|
||||
application: ['http-video', 'http-audio']
|
||||
service: ['service-http', 'service-https']
|
||||
hip_profiles: ['any']
|
||||
action: 'allow'
|
||||
commit: false
|
||||
|
||||
- name: more complex fictitious rule that uses profiles
|
||||
panos_security_policy:
|
||||
ip_address: '10.5.172.91'
|
||||
username: 'admin'
|
||||
password: 'paloalto'
|
||||
rule_name: 'Allow HTTP w profile'
|
||||
log_start: false
|
||||
log_end: true
|
||||
action: 'allow'
|
||||
antivirus: 'default'
|
||||
vulnerability: 'default'
|
||||
spyware: 'default'
|
||||
url_filtering: 'default'
|
||||
wildfire_analysis: 'default'
|
||||
commit: false
|
||||
|
||||
- name: deny all
|
||||
panos_security_policy:
|
||||
ip_address: '10.5.172.91'
|
||||
username: 'admin'
|
||||
password: 'paloalto'
|
||||
rule_name: 'DenyAll'
|
||||
log_start: true
|
||||
log_end: true
|
||||
action: 'deny'
|
||||
rule_type: 'interzone'
|
||||
commit: false
|
||||
|
||||
# permit ssh to 1.1.1.1 using panorama and pushing the configuration to firewalls
|
||||
# that are defined in 'DeviceGroupA' device group
|
||||
- name: permit ssh to 1.1.1.1 through Panorama
|
||||
panos_security_policy:
|
||||
ip_address: '10.5.172.92'
|
||||
password: 'paloalto'
|
||||
rule_name: 'SSH permit'
|
||||
description: 'SSH rule test'
|
||||
from_zone: ['public']
|
||||
to_zone: ['private']
|
||||
source: ['any']
|
||||
source_user: ['any']
|
||||
destination: ['1.1.1.1']
|
||||
category: ['any']
|
||||
application: ['ssh']
|
||||
service: ['application-default']
|
||||
hip_profiles: ['any']
|
||||
action: 'allow'
|
||||
devicegroup: 'DeviceGroupA'
|
||||
'''
|
||||
|
||||
RETURN = '''
|
||||
# Default return values
|
||||
'''
|
||||
|
||||
from ansible.module_utils.basic import AnsibleModule
|
||||
from ansible.module_utils._text import to_native
|
||||
|
||||
try:
|
||||
import pan.xapi
|
||||
from pan.xapi import PanXapiError
|
||||
import pandevice
|
||||
import pandevice.firewall
|
||||
import pandevice.panorama
|
||||
import pandevice.objects
|
||||
import pandevice.policies
|
||||
|
||||
HAS_LIB = True
|
||||
except ImportError:
|
||||
HAS_LIB = False
|
||||
|
||||
|
||||
def security_rule_exists(device, sec_rule):
|
||||
if isinstance(device, pandevice.firewall.Firewall):
|
||||
rule_base = pandevice.policies.Rulebase.refreshall(device)
|
||||
elif isinstance(device, pandevice.panorama.Panorama):
|
||||
# look for only pre-rulebase ATM
|
||||
rule_base = pandevice.policies.PreRulebase.refreshall(device)
|
||||
|
||||
match_check = ['name', 'description', 'group_profile', 'antivirus', 'vulnerability',
|
||||
'spyware', 'url_filtering', 'file_blocking', 'data_filtering',
|
||||
'wildfire_analysis', 'type', 'action', 'tag', 'log_start', 'log_end']
|
||||
list_check = ['tozone', 'fromzone', 'source', 'source_user', 'destination', 'category',
|
||||
'application', 'service', 'hip_profiles']
|
||||
|
||||
change_check = False
|
||||
if rule_base:
|
||||
rule_base = rule_base[0]
|
||||
security_rules = rule_base.findall(pandevice.policies.SecurityRule)
|
||||
if security_rules:
|
||||
for r in security_rules:
|
||||
if r.name == sec_rule.name:
|
||||
change_check = True
|
||||
for check in match_check:
|
||||
propose_check = getattr(sec_rule, check, None)
|
||||
current_check = getattr(r, check, None)
|
||||
if propose_check != current_check:
|
||||
return True
|
||||
for check in list_check:
|
||||
propose_check = getattr(sec_rule, check, [])
|
||||
current_check = getattr(r, check, [])
|
||||
if set(propose_check) != set(current_check):
|
||||
return True
|
||||
if change_check:
|
||||
return 'no_change'
|
||||
return False
|
||||
|
||||
|
||||
def create_security_rule(**kwargs):
|
||||
security_rule = pandevice.policies.SecurityRule(
|
||||
name=kwargs['rule_name'],
|
||||
description=kwargs['description'],
|
||||
tozone=kwargs['to_zone'],
|
||||
fromzone=kwargs['from_zone'],
|
||||
source=kwargs['source'],
|
||||
source_user=kwargs['source_user'],
|
||||
destination=kwargs['destination'],
|
||||
category=kwargs['category'],
|
||||
application=kwargs['application'],
|
||||
service=kwargs['service'],
|
||||
hip_profiles=kwargs['hip_profiles'],
|
||||
log_start=kwargs['log_start'],
|
||||
log_end=kwargs['log_end'],
|
||||
type=kwargs['rule_type'],
|
||||
action=kwargs['action'])
|
||||
|
||||
if 'tag' in kwargs:
|
||||
security_rule.tag = kwargs['tag']
|
||||
|
||||
# profile settings
|
||||
if 'group_profile' in kwargs:
|
||||
security_rule.group = kwargs['group_profile']
|
||||
else:
|
||||
if 'antivirus' in kwargs:
|
||||
security_rule.virus = kwargs['antivirus']
|
||||
if 'vulnerability' in kwargs:
|
||||
security_rule.vulnerability = kwargs['vulnerability']
|
||||
if 'spyware' in kwargs:
|
||||
security_rule.spyware = kwargs['spyware']
|
||||
if 'url_filtering' in kwargs:
|
||||
security_rule.url_filtering = kwargs['url_filtering']
|
||||
if 'file_blocking' in kwargs:
|
||||
security_rule.file_blocking = kwargs['file_blocking']
|
||||
if 'data_filtering' in kwargs:
|
||||
security_rule.data_filtering = kwargs['data_filtering']
|
||||
if 'wildfire_analysis' in kwargs:
|
||||
security_rule.wildfire_analysis = kwargs['wildfire_analysis']
|
||||
|
||||
return security_rule
|
||||
|
||||
|
||||
def add_security_rule(device, sec_rule, rule_exist):
|
||||
if isinstance(device, pandevice.firewall.Firewall):
|
||||
rule_base = pandevice.policies.Rulebase.refreshall(device)
|
||||
elif isinstance(device, pandevice.panorama.Panorama):
|
||||
# look for only pre-rulebase ATM
|
||||
rule_base = pandevice.policies.PreRulebase.refreshall(device)
|
||||
|
||||
if rule_exist:
|
||||
return False
|
||||
if rule_base:
|
||||
rule_base = rule_base[0]
|
||||
|
||||
rule_base.add(sec_rule)
|
||||
sec_rule.create()
|
||||
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
|
||||
def _commit(device, device_group=None):
|
||||
"""
|
||||
:param device: either firewall or panorama
|
||||
:param device_group: panorama device group or if none then 'all'
|
||||
:return: True if successful
|
||||
"""
|
||||
result = device.commit(sync=True)
|
||||
|
||||
if isinstance(device, pandevice.panorama.Panorama):
|
||||
result = device.commit_all(sync=True, sync_all=True, devicegroup=device_group)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def main():
|
||||
argument_spec = dict(
|
||||
ip_address=dict(required=True),
|
||||
password=dict(no_log=True),
|
||||
username=dict(default='admin'),
|
||||
api_key=dict(no_log=True),
|
||||
rule_name=dict(required=True),
|
||||
description=dict(default=''),
|
||||
tag=dict(),
|
||||
to_zone=dict(type='list', default=['any']),
|
||||
from_zone=dict(type='list', default=['any']),
|
||||
source=dict(type='list', default=["any"]),
|
||||
source_user=dict(type='list', default=['any']),
|
||||
destination=dict(type='list', default=["any"]),
|
||||
category=dict(type='list', default=['any']),
|
||||
application=dict(type='list', default=['any']),
|
||||
service=dict(type='list', default=['application-default']),
|
||||
hip_profiles=dict(type='list', default=['any']),
|
||||
group_profile=dict(),
|
||||
antivirus=dict(),
|
||||
vulnerability=dict(),
|
||||
spyware=dict(),
|
||||
url_filtering=dict(),
|
||||
file_blocking=dict(),
|
||||
data_filtering=dict(),
|
||||
wildfire_analysis=dict(),
|
||||
log_start=dict(type='bool', default=False),
|
||||
log_end=dict(type='bool', default=True),
|
||||
rule_type=dict(default='universal'),
|
||||
action=dict(default='allow'),
|
||||
devicegroup=dict(),
|
||||
commit=dict(type='bool', default=True)
|
||||
)
|
||||
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False,
|
||||
required_one_of=[['api_key', 'password']])
|
||||
|
||||
if module._name == 'panos_security_policy':
|
||||
module.deprecate("The 'panos_security_policy' module is being renamed 'panos_security_rule'", version=2.9)
|
||||
|
||||
if not HAS_LIB:
|
||||
module.fail_json(msg='Missing required pan-python and pandevice modules.')
|
||||
|
||||
ip_address = module.params["ip_address"]
|
||||
password = module.params["password"]
|
||||
username = module.params['username']
|
||||
api_key = module.params['api_key']
|
||||
rule_name = module.params['rule_name']
|
||||
description = module.params['description']
|
||||
tag = module.params['tag']
|
||||
from_zone = module.params['from_zone']
|
||||
to_zone = module.params['to_zone']
|
||||
source = module.params['source']
|
||||
source_user = module.params['source_user']
|
||||
destination = module.params['destination']
|
||||
category = module.params['category']
|
||||
application = module.params['application']
|
||||
service = module.params['service']
|
||||
hip_profiles = module.params['hip_profiles']
|
||||
log_start = module.params['log_start']
|
||||
log_end = module.params['log_end']
|
||||
rule_type = module.params['rule_type']
|
||||
action = module.params['action']
|
||||
|
||||
group_profile = module.params['group_profile']
|
||||
antivirus = module.params['antivirus']
|
||||
vulnerability = module.params['vulnerability']
|
||||
spyware = module.params['spyware']
|
||||
url_filtering = module.params['url_filtering']
|
||||
file_blocking = module.params['file_blocking']
|
||||
data_filtering = module.params['data_filtering']
|
||||
wildfire_analysis = module.params['wildfire_analysis']
|
||||
|
||||
devicegroup = module.params['devicegroup']
|
||||
|
||||
commit = module.params['commit']
|
||||
|
||||
if devicegroup:
|
||||
device = pandevice.panorama.Panorama(ip_address, username, password, api_key=api_key)
|
||||
dev_grps = device.refresh_devices()
|
||||
|
||||
for grp in dev_grps:
|
||||
if grp.name == devicegroup:
|
||||
break
|
||||
module.fail_json(msg=' \'%s\' device group not found in Panorama. Is the name correct?' % devicegroup)
|
||||
else:
|
||||
device = pandevice.firewall.Firewall(ip_address, username, password, api_key=api_key)
|
||||
|
||||
sec_rule = create_security_rule(
|
||||
rule_name=rule_name,
|
||||
description=description,
|
||||
tag=tag,
|
||||
from_zone=from_zone,
|
||||
to_zone=to_zone,
|
||||
source=source,
|
||||
source_user=source_user,
|
||||
destination=destination,
|
||||
category=category,
|
||||
application=application,
|
||||
service=service,
|
||||
hip_profiles=hip_profiles,
|
||||
group_profile=group_profile,
|
||||
antivirus=antivirus,
|
||||
vulnerability=vulnerability,
|
||||
spyware=spyware,
|
||||
url_filtering=url_filtering,
|
||||
file_blocking=file_blocking,
|
||||
data_filtering=data_filtering,
|
||||
wildfire_analysis=wildfire_analysis,
|
||||
log_start=log_start,
|
||||
log_end=log_end,
|
||||
rule_type=rule_type,
|
||||
action=action
|
||||
)
|
||||
|
||||
rule_exist = security_rule_exists(device, sec_rule)
|
||||
if rule_exist is True:
|
||||
module.fail_json(msg='Rule with the same name but different objects exists.')
|
||||
try:
|
||||
changed = add_security_rule(device, sec_rule, rule_exist)
|
||||
except PanXapiError as exc:
|
||||
module.fail_json(msg=to_native(exc))
|
||||
|
||||
if changed and commit:
|
||||
result = _commit(device, devicegroup)
|
||||
|
||||
module.exit_json(changed=changed, msg="okey dokey")
|
||||
from ansible.module_utils.common.removed import removed_module
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
removed_module(removed_in='2.9')
|
||||
|
|
|
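security_rule_exists() above treats a rule with a matching name as changed when any scalar attribute differs, or when any list attribute differs as a set (member order is ignored). A minimal sketch of that comparison, independent of pandevice:

# Minimal sketch of the comparison used above: scalar fields must match exactly,
# list fields are compared as sets so that ordering does not matter.
def rule_differs(existing, proposed, scalar_fields, list_fields):
    for field in scalar_fields:
        if getattr(existing, field, None) != getattr(proposed, field, None):
            return True
    for field in list_fields:
        if set(getattr(existing, field, None) or []) != set(getattr(proposed, field, None) or []):
            return True
    return False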
@ -1,5 +1,18 @@
|
|||
async_wrapper
|
||||
accelerate
|
||||
aos_asn_pool
|
||||
aos_blueprint
|
||||
aos_blueprint_param
|
||||
aos_blueprint_virtnet
|
||||
aos_device
|
||||
aos_external_router
|
||||
aos_ip_pool
|
||||
aos_logical_device
|
||||
aos_logical_device_map
|
||||
aos_login
|
||||
aos_rack_type
|
||||
aos_template
|
||||
azure
|
||||
cl_bond
|
||||
cl_bridge
|
||||
cl_img_install
|
||||
|
@ -7,15 +20,23 @@ cl_interface
|
|||
cl_interface_policy
|
||||
cl_license
|
||||
cl_ports
|
||||
cs_nic
|
||||
docker
|
||||
ec2_ami_find
|
||||
ec2_ami_search
|
||||
ec2_facts
|
||||
ec2_vpc
|
||||
nxos_mtu
|
||||
s3
|
||||
azure
|
||||
cs_nic
|
||||
ec2_remote_facts
|
||||
ec2_vpc
|
||||
kubernetes
|
||||
netscaler
|
||||
win_msi
|
||||
nxos_ip_interface
|
||||
nxos_mtu
|
||||
nxos_portchannel
|
||||
nxos_switchport
|
||||
oc
|
||||
os_server_actions
|
||||
panos_nat_policy
|
||||
panos_security_policy
|
||||
s3
|
||||
vsphere_guest
|
||||
win_msi
|
||||
|
|
|
@ -1,5 +1,3 @@
|
|||
lib/ansible/modules/cloud/amazon/_ec2_ami_find.py E322
|
||||
lib/ansible/modules/cloud/amazon/_ec2_ami_find.py E323
|
||||
lib/ansible/modules/cloud/amazon/aws_api_gateway.py E322
|
||||
lib/ansible/modules/cloud/amazon/aws_application_scaling_policy.py E322
|
||||
lib/ansible/modules/cloud/amazon/aws_application_scaling_policy.py E326
|
||||
|
@ -315,9 +313,6 @@ lib/ansible/modules/clustering/consul_kv.py E322
|
|||
lib/ansible/modules/clustering/consul_kv.py E324
|
||||
lib/ansible/modules/clustering/consul_session.py E322
|
||||
lib/ansible/modules/clustering/etcd3.py E326
|
||||
lib/ansible/modules/clustering/k8s/_kubernetes.py E322
|
||||
lib/ansible/modules/clustering/k8s/_kubernetes.py E323
|
||||
lib/ansible/modules/clustering/k8s/_kubernetes.py E324
|
||||
lib/ansible/modules/clustering/znode.py E326
|
||||
lib/ansible/modules/commands/command.py E322
|
||||
lib/ansible/modules/commands/command.py E323
|
||||
|
@ -583,7 +578,6 @@ lib/ansible/modules/network/netvisor/pn_vrouterbgp.py E324
|
|||
lib/ansible/modules/network/netvisor/pn_vrouterif.py E324
|
||||
lib/ansible/modules/network/netvisor/pn_vrouterif.py E326
|
||||
lib/ansible/modules/network/netvisor/pn_vrouterlbif.py E324
|
||||
lib/ansible/modules/network/nxos/_nxos_portchannel.py E324
|
||||
lib/ansible/modules/network/nxos/nxos_aaa_server.py E326
|
||||
lib/ansible/modules/network/nxos/nxos_acl.py E326
|
||||
lib/ansible/modules/network/nxos/nxos_bgp.py E324
|
||||
|
@ -614,10 +608,6 @@ lib/ansible/modules/network/ordnance/ordnance_config.py E324
|
|||
lib/ansible/modules/network/ordnance/ordnance_facts.py E322
|
||||
lib/ansible/modules/network/ordnance/ordnance_facts.py E324
|
||||
lib/ansible/modules/network/ovs/openvswitch_bridge.py E326
|
||||
lib/ansible/modules/network/panos/_panos_nat_policy.py E324
|
||||
lib/ansible/modules/network/panos/_panos_nat_policy.py E335
|
||||
lib/ansible/modules/network/panos/_panos_security_policy.py E322
|
||||
lib/ansible/modules/network/panos/_panos_security_policy.py E324
|
||||
lib/ansible/modules/network/panos/panos_check.py E324
|
||||
lib/ansible/modules/network/panos/panos_match_rule.py E324
|
||||
lib/ansible/modules/network/panos/panos_match_rule.py E326
|
||||
|
|
|
@ -1,79 +0,0 @@
|
|||
# (c) 2016 Red Hat Inc.
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
# Make coding more python3-ish
|
||||
from __future__ import (absolute_import, division, print_function)
|
||||
__metaclass__ = type
|
||||
|
||||
from units.compat.mock import patch
|
||||
from ansible.modules.network.nxos import _nxos_ip_interface
|
||||
from .nxos_module import TestNxosModule, load_fixture, set_module_args
|
||||
|
||||
|
||||
class TestNxosIPInterfaceModule(TestNxosModule):
|
||||
|
||||
module = _nxos_ip_interface
|
||||
|
||||
def setUp(self):
|
||||
super(TestNxosIPInterfaceModule, self).setUp()
|
||||
|
||||
self.mock_get_interface_mode = patch(
|
||||
'ansible.modules.network.nxos._nxos_ip_interface.get_interface_mode')
|
||||
self.get_interface_mode = self.mock_get_interface_mode.start()
|
||||
|
||||
self.mock_send_show_command = patch(
|
||||
'ansible.modules.network.nxos._nxos_ip_interface.send_show_command')
|
||||
self.send_show_command = self.mock_send_show_command.start()
|
||||
|
||||
self.mock_load_config = patch('ansible.modules.network.nxos._nxos_ip_interface.load_config')
|
||||
self.load_config = self.mock_load_config.start()
|
||||
|
||||
self.mock_get_capabilities = patch('ansible.modules.network.nxos._nxos_ip_interface.get_capabilities')
|
||||
self.get_capabilities = self.mock_get_capabilities.start()
|
||||
self.get_capabilities.return_value = {'network_api': 'cliconf'}
|
||||
|
||||
def tearDown(self):
|
||||
super(TestNxosIPInterfaceModule, self).tearDown()
|
||||
self.mock_get_interface_mode.stop()
|
||||
self.mock_send_show_command.stop()
|
||||
self.mock_load_config.stop()
|
||||
self.mock_get_capabilities.stop()
|
||||
|
||||
def load_fixtures(self, commands=None, device=''):
|
||||
self.get_interface_mode.return_value = 'layer3'
|
||||
self.send_show_command.return_value = [load_fixture('', '_nxos_ip_interface.cfg')]
|
||||
self.load_config.return_value = None
|
||||
|
||||
def test_nxos_ip_interface_ip_present(self):
|
||||
set_module_args(dict(interface='eth2/1', addr='1.1.1.2', mask=8))
|
||||
result = self.execute_module(changed=True)
|
||||
self.assertEqual(result['commands'],
|
||||
['interface eth2/1',
|
||||
'no ip address 192.0.2.1/8',
|
||||
'ip address 1.1.1.2/8'])
|
||||
|
||||
def test_nxos_ip_interface_ip_idempotent(self):
|
||||
set_module_args(dict(interface='eth2/1', addr='192.0.2.1', mask=8))
|
||||
result = self.execute_module(changed=False)
|
||||
self.assertEqual(result['commands'], [])
|
||||
|
||||
def test_nxos_ip_interface_ip_absent(self):
|
||||
set_module_args(dict(interface='eth2/1', state='absent',
|
||||
addr='192.0.2.1', mask=8))
|
||||
result = self.execute_module(changed=True)
|
||||
self.assertEqual(result['commands'],
|
||||
['interface eth2/1', 'no ip address 192.0.2.1/8'])
|
|
@ -1,67 +0,0 @@
|
|||
# (c) 2016 Red Hat Inc.
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
# Make coding more python3-ish
|
||||
from __future__ import (absolute_import, division, print_function)
|
||||
__metaclass__ = type
|
||||
|
||||
from units.compat.mock import patch
|
||||
from ansible.modules.network.nxos import _nxos_portchannel
|
||||
from .nxos_module import TestNxosModule, set_module_args
|
||||
|
||||
|
||||
class TestNxosPortchannelModule(TestNxosModule):
|
||||
|
||||
module = _nxos_portchannel
|
||||
|
||||
def setUp(self):
|
||||
super(TestNxosPortchannelModule, self).setUp()
|
||||
|
||||
self.mock_run_commands = patch('ansible.modules.network.nxos._nxos_portchannel.run_commands')
|
||||
self.run_commands = self.mock_run_commands.start()
|
||||
|
||||
self.mock_load_config = patch('ansible.modules.network.nxos._nxos_portchannel.load_config')
|
||||
self.load_config = self.mock_load_config.start()
|
||||
|
||||
self.mock_get_config = patch('ansible.modules.network.nxos._nxos_portchannel.get_config')
|
||||
self.get_config = self.mock_get_config.start()
|
||||
|
||||
self.mock_get_capabilities = patch('ansible.modules.network.nxos._nxos_portchannel.get_capabilities')
|
||||
self.get_capabilities = self.mock_get_capabilities.start()
|
||||
self.get_capabilities.return_value = {'network_api': 'cliconf'}
|
||||
|
||||
def tearDown(self):
|
||||
super(TestNxosPortchannelModule, self).tearDown()
|
||||
self.mock_run_commands.stop()
|
||||
self.mock_load_config.stop()
|
||||
self.mock_get_config.stop()
|
||||
self.mock_get_capabilities.stop()
|
||||
|
||||
def load_fixtures(self, commands=None, device=''):
|
||||
self.load_config.return_value = None
|
||||
|
||||
def test_nxos_portchannel(self):
|
||||
set_module_args(dict(group='99',
|
||||
members=['Ethernet2/1', 'Ethernet2/2'],
|
||||
mode='active',
|
||||
state='present'))
|
||||
result = self.execute_module(changed=True)
|
||||
self.assertEqual(result['commands'], ['interface port-channel99',
|
||||
'interface Ethernet2/1',
|
||||
'channel-group 99 mode active',
|
||||
'interface Ethernet2/2',
|
||||
'channel-group 99 mode active'])
|
|
@ -1,80 +0,0 @@
|
|||
# (c) 2016 Red Hat Inc.
|
||||
#
|
||||
# This file is part of Ansible
|
||||
#
|
||||
# Ansible is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# Ansible is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
# Make coding more python3-ish
|
||||
from __future__ import (absolute_import, division, print_function)
|
||||
__metaclass__ = type
|
||||
|
||||
from units.compat.mock import patch
|
||||
from ansible.modules.network.nxos import _nxos_switchport
|
||||
from .nxos_module import TestNxosModule, load_fixture, set_module_args
|
||||
|
||||
|
||||
class TestNxosSwitchportModule(TestNxosModule):
|
||||
|
||||
module = _nxos_switchport
|
||||
|
||||
def setUp(self):
|
||||
super(TestNxosSwitchportModule, self).setUp()
|
||||
|
||||
self.mock_run_commands = patch('ansible.modules.network.nxos._nxos_switchport.run_commands')
|
||||
self.run_commands = self.mock_run_commands.start()
|
||||
|
||||
self.mock_load_config = patch('ansible.modules.network.nxos._nxos_switchport.load_config')
|
||||
self.load_config = self.mock_load_config.start()
|
||||
|
||||
self.mock_get_capabilities = patch('ansible.modules.network.nxos._nxos_switchport.get_capabilities')
|
||||
self.get_capabilities = self.mock_get_capabilities.start()
|
||||
self.get_capabilities.return_value = {'network_api': 'cliconf'}
|
||||
|
||||
def tearDown(self):
|
||||
super(TestNxosSwitchportModule, self).tearDown()
|
||||
self.mock_run_commands.stop()
|
||||
self.mock_load_config.stop()
|
||||
self.mock_get_capabilities.stop()
|
||||
|
||||
def load_fixtures(self, commands=None, device=''):
|
||||
def load_from_file(*args, **kwargs):
|
||||
module, commands = args
|
||||
output = list()
|
||||
for command in commands:
|
||||
filename = str(command).split(' | ')[0].replace(' ', '_')
|
||||
filename = filename.replace('2/1', '')
|
||||
output.append(load_fixture('_nxos_switchport', filename))
|
||||
return output
|
||||
|
||||
self.run_commands.side_effect = load_from_file
|
||||
self.load_config.return_value = None
|
||||
|
||||
def test_nxos_switchport_present(self):
|
||||
set_module_args(dict(interface='Ethernet2/1', mode='access', access_vlan=1, state='present'))
|
||||
result = self.execute_module(changed=True)
|
||||
self.assertEqual(result['commands'], ['interface ethernet2/1', 'switchport access vlan 1'])
|
||||
|
||||
def test_nxos_switchport_unconfigured(self):
|
||||
set_module_args(dict(interface='Ethernet2/1', state='unconfigured'))
|
||||
result = self.execute_module(changed=True)
|
||||
self.assertEqual(result['commands'], ['interface ethernet2/1',
|
||||
'switchport mode access',
|
||||
'switch access vlan 1',
|
||||
'switchport trunk native vlan 1',
|
||||
'switchport trunk allowed vlan all'])
|
||||
|
||||
def test_nxos_switchport_absent(self):
|
||||
set_module_args(dict(interface='Ethernet2/1', mode='access', access_vlan=3, state='absent'))
|
||||
result = self.execute_module(changed=False)
|
||||
self.assertEqual(result['commands'], [])
|