Backport test infra fixes and updates to stable-2.5. (#46992)

* Fix unit test parametrize order on Python 3.5.

(cherry picked from commit 53b230ca74)

* Fix ansible-test unit test execution. (#45772)

* Fix ansible-test units requirements install.
* Run unit tests as unprivileged user under Docker.

(cherry picked from commit 379a7f4f5a)

* Run unit tests in parallel. (#45812)

(cherry picked from commit abe8e4c9e8)

* Minor fixes for unit test delegation.

(cherry picked from commit be199cfe90)

* add support for opening shell on remote Windows host (#43919)

* add support for opening shell on remote Windows host

* added arg completion and fix sanity check

* remove unneeded arg

(cherry picked from commit 6ca4ea0c1f)

* Block network access for unit tests in docker.

(cherry picked from commit 99cac99cbc)

* Make ansible-test available in the bin directory. (#45876)

(cherry picked from commit f3d1f9544b)

* Support comments in ansible-test flat files.

(cherry picked from commit 5a3000af19)
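The comment support surfaces later in this diff as a `read_lines_without_comments` helper used by the completion-file loaders. A minimal sketch of how such a reader could behave; the exact signature and semantics in ansible-test may differ:

```python
def read_lines_without_comments(path, remove_blank_lines=False):
    """Return lines from a flat file with '#' comments stripped.

    Illustrative sketch only; the real ansible-test helper may differ.
    """
    with open(path, 'r') as file_obj:
        lines = file_obj.read().splitlines()

    # drop everything from the first '#' onward, then surrounding whitespace
    lines = [line.split('#')[0].strip() for line in lines]

    if remove_blank_lines:
        lines = [line for line in lines if line]

    return lines
```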

* Fix incorrect use of subprocess.CalledProcessError (#45890)

(cherry picked from commit 24dd87bd0a)

* Improve ansible-test match error handling.

(cherry picked from commit 2056c981ae)

* Improve error handling for docs-build test.

(cherry picked from commit 2148999048)

* Bug fixes and cleanup for ansible-test. (#45991)

* Remove unused imports.
* Clean up ConfigParser usage in ansible-test.
* Fix bare except statements in ansible-test.
* Miscellaneous cleanup from PyCharm inspections.
* Enable pylint no-self-use for ansible-test.
* Remove obsolete pylint ignores for Python 3.7.
* Fix shellcheck issues under newer shellcheck.
* Use newer path for ansible-test.
* Fix issues in code-smell tests.

(cherry picked from commit ac492476e5)

* Fix integration test library search path.

This prevents tests from loading modules outside the source tree,
which could result in testing the wrong module if a system-wide
install is present, or custom modules exist.

(cherry picked from commit d603cd41fe)
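The fix amounts to pinning the test environment so module lookups cannot escape the source tree; the diff further down sets `ANSIBLE_LIBRARY` to `/dev/null` for this reason. A hedged sketch of the idea (`build_test_environment` and its parameter are illustrative names, not the actual ansible-test API):

```python
import os

def build_test_environment(source_root):
    """Build an environment that keeps Ansible module lookups in the tree.

    Illustrative sketch: pointing ANSIBLE_LIBRARY at /dev/null prevents a
    system-wide install or stray custom modules from shadowing the source.
    """
    env = dict(os.environ)
    env.update(
        ANSIBLE_LIBRARY='/dev/null',  # no extra module search path
        PYTHONPATH=os.path.abspath(os.path.join(source_root, 'lib')),
    )
    return env
```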

* Update default container to version 1.2.0.

(cherry picked from commit d478a4c3f6)
(cherry picked from commit 21c4eb8db5)

* Fix ansible-test docker python version handling.

This removes the old name based version detection behavior and
uses versions defined in the docker completion file instead, as
the new containers do not follow the old naming scheme.

(cherry picked from commit 54937ba784)
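The completion file pairs each container alias with `key=value` settings, as in the `docker.txt` entries later in this diff (e.g. `python=3`). A hypothetical parser for one such entry, to show the format; it is not the actual ansible-test code:

```python
def parse_docker_completion(line):
    """Parse one docker completion entry into a dict (illustrative only).

    Example entry:
    "default name=quay.io/ansible/default-test-container:1.3.0 python=3"
    """
    parts = line.split()
    name, settings = parts[0], parts[1:]
    # each setting is key=value; split on the first '=' only
    config = dict(setting.split('=', 1) for setting in settings)
    config['alias'] = name
    return config
```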

* Reduce noise in docs-build test failures.

(cherry picked from commit 4085d01617)

* Fix ansible-test encoding issues for exceptions.

(cherry picked from commit 0d7a156319)

* Fix ansible-test multi-group smoke test handling. (#46363)

* Fix ansible-test smoke tests across groups.
* Fix ansible-test list arg defaults.
* Fix ansible-test require and exclude delegation.
* Fix detection of Windows specific changes.
* Add minimal Windows testing for Python 3.7.

(cherry picked from commit e53390b3b1)

* Use default-test-container version 1.3.0.

(cherry picked from commit 6d9be66418)

* Add file exists check in integration-aliases test.

(cherry picked from commit 33a8be9109)

* Improve ansible-test environment checking between tests. (#46459)

* Add unified diff output to environment validation.

This makes it easier to see where the environment changed.

* Compare Python interpreters by version to pip shebangs.

This helps expose cases where pip executables use a different
Python interpreter than is expected.

* Query `pip.__version__` instead of using `pip --version`.

This is a much faster way to query the pip version. It also more
closely matches how we invoke pip within ansible-test.

* Remove redundant environment scan between tests.

This reuses the environment scan from the end of the previous test
as the basis for comparison during the next test.

(cherry picked from commit 0dc7f38787)
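Taken together, these changes amount to snapshotting the environment once per test and diffing snapshots. A rough sketch under those assumptions; `scan_environment` and `environment_diff` are illustrative names, not the real ansible-test functions:

```python
import difflib
import subprocess
import sys

def scan_environment(python_bin=sys.executable):
    """Capture a minimal environment snapshot as a list of lines.

    Queries pip.__version__ in-process, matching the faster approach
    described above, instead of running the `pip --version` entry point.
    """
    code = 'import pip; print(pip.__version__)'
    pip_version = subprocess.check_output([python_bin, '-c', code]).decode().strip()
    return [
        'python %s' % sys.version.split()[0],
        'pip %s' % pip_version,
    ]

def environment_diff(before, after):
    """Return a unified diff between two snapshots for easy comparison."""
    return list(difflib.unified_diff(before, after, fromfile='before', tofile='after', lineterm=''))
```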

* Add symlinks sanity test. (#46467)

* Add symlinks sanity test.
* Replace legacy test symlinks with actual content.
* Remove dir symlink from template_jinja2_latest.
* Update import test to use generated library dir.
* Fix copy test symlink setup.

(cherry picked from commit e2b6047514)
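The new sanity test enforces that symlinks resolve to files that exist inside the tree (the rule is documented in a new file later in this diff). A simplified sketch of such a check; the real test is more thorough:

```python
import os

def find_broken_symlinks(root):
    """Find symlinks under root that do not resolve to an existing path.

    Simplified sketch of a symlinks sanity check, not the actual
    ansible-test implementation.
    """
    broken = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            # os.path.exists() follows the link, so a dangling target fails
            if os.path.islink(path) and not os.path.exists(path):
                broken.append(path)
    return broken
```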

* Fix parametrize warning in unit tests.

(cherry picked from commit 1a28898a00)

* Update MANIFEST.in (#46502)

* Update MANIFEST.in:

- Remove unnecessary prune.
- Include files needed by tests.
- Exclude botmeta sanity test.

These changes permit sanity tests to pass on sdist output.

(cherry picked from commit cbb49f66ec)

* Fix unit tests which modify the source tree. (#45763)

* Fix CNOS unit test log usage.
* Use temp dir for Galaxy unit tests.
* Write to temp files in interfaces_file unit test.
* Fix log placement in netapp_e_ldap unit test.

(cherry picked from commit 0686450cae)

* Fix ansible-test custom docker image traceback.

(cherry picked from commit 712ad9ed64)

* ansible-test: Create public key when creating Windows targets (#43760)

* ansible-test: Create public key when creating Windows targets

* Changed to always set SSH Key for Windows hosts

(cherry picked from commit adc0efe10c)

* Fix and re-enable sts_assume_role integration tests (#46026)

* Fix the STS assume role error message assertion when the role to assume does not exist.

(cherry picked from commit 18dc928e28)

* Fix ACI unit test on Python 3.7.0.

The previous logic was only needed for pre-release versions of 3.7.

(cherry picked from commit c0bf9815c9)

* Remove placeboify from unit tests that are not calling AWS (i.e. creating a recording) (#45754)

(cherry picked from commit 2167ce6cb6)

* Update sanity test ignore entries.
This commit is contained in:
Matt Clay 2018-10-13 10:44:11 -07:00 committed by Matt Davis
parent f9c4da4cdf
commit ca13e678ae
80 changed files with 964 additions and 386 deletions

@@ -1,12 +1,16 @@
prune ticket_stubs
prune hacking
include README.rst COPYING
include SYMLINK_CACHE.json
include requirements.txt
include .coveragerc
include .yamllint
include shippable.yml
include tox.ini
include bin/ansible-test
include examples/hosts
include examples/ansible.cfg
include examples/scripts/ConfigureRemotingForAnsible.ps1
include examples/scripts/upgrade_to_ps3.ps1
recursive-include lib/ansible/module_utils/powershell *
recursive-include lib/ansible/modules *
recursive-include lib/ansible/galaxy/data *

@@ -117,7 +117,7 @@ ifneq ($(REPOTAG),)
endif
# ansible-test parameters
ANSIBLE_TEST ?= test/runner/ansible-test
ANSIBLE_TEST ?= bin/ansible-test
TEST_FLAGS ?=
# ansible-test units parameters (make test / make test-py3)

bin/ansible-test (new executable file)

@@ -0,0 +1,15 @@
#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK
"""Primary entry point for ansible-test."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import sys
if __name__ == '__main__':
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(os.path.dirname(__file__)), 'test', 'runner')))
import lib.cli
lib.cli.main()

@@ -0,0 +1,6 @@
Sanity Tests » symlinks
=======================
Symbolic links are only permitted for files that exist, to ensure proper tarball generation during a release.
If other types of symlinks are needed for tests they must be created as part of the test.

@@ -47,7 +47,7 @@ FULL_PATH=$($PYTHON_BIN -c "import os; print(os.path.realpath('$HACKING_DIR'))")
export ANSIBLE_HOME="$(dirname "$FULL_PATH")"
PREFIX_PYTHONPATH="$ANSIBLE_HOME/lib"
PREFIX_PATH="$ANSIBLE_HOME/bin:$ANSIBLE_HOME/test/runner"
PREFIX_PATH="$ANSIBLE_HOME/bin"
PREFIX_MANPATH="$ANSIBLE_HOME/docs/man"
expr "$PYTHONPATH" : "${PREFIX_PYTHONPATH}.*" > /dev/null || prepend_path PYTHONPATH "$PREFIX_PYTHONPATH"

@@ -5,7 +5,7 @@ set HACKING_DIR (dirname (status -f))
set FULL_PATH (python -c "import os; print(os.path.realpath('$HACKING_DIR'))")
set ANSIBLE_HOME (dirname $FULL_PATH)
set PREFIX_PYTHONPATH $ANSIBLE_HOME/lib
set PREFIX_PATH $ANSIBLE_HOME/bin $ANSIBLE_HOME/test/runner
set PREFIX_PATH $ANSIBLE_HOME/bin
set PREFIX_MANPATH $ANSIBLE_HOME/docs/man
# set quiet flag

@@ -1 +0,0 @@
/tmp/ansible-test-abs-link

@@ -1 +0,0 @@
/tmp/ansible-test-abs-link-dir

@@ -1 +0,0 @@
/tmp/ansible-test-link-dir/out_of_tree_circle

@@ -1 +0,0 @@
../subdir2/subdir3

@@ -9,15 +9,25 @@
local_temp_dir: '{{ tempfile_result.stdout }}'
# output_dir is hardcoded in test/runner/lib/executor.py and created there
remote_dir: '{{ output_dir }}'
symlinks:
ansible-test-abs-link: /tmp/ansible-test-abs-link
ansible-test-abs-link-dir: /tmp/ansible-test-abs-link-dir
circles: ../
invalid: invalid
invalid2: ../invalid
out_of_tree_circle: /tmp/ansible-test-link-dir/out_of_tree_circle
subdir3: ../subdir2/subdir3
- file: path={{local_temp_dir}} state=directory
name: ensure temp dir exists
# file cannot do this properly, use command instead
- name: Create circular symbolic link
command: ln -s ../ circles
- name: Create symbolic link
command: "ln -s '{{ item.value }}' '{{ item.key }}'"
args:
chdir: '{{role_path}}/files/subdir/subdir1'
warn: no
with_dict: "{{ symlinks }}"
- name: Create remote unprivileged remote user
user:
@@ -55,11 +65,12 @@
state: absent
connection: local
- name: Remove circular symbolic link
- name: Remove symbolic link
file:
path: '{{ role_path }}/files/subdir/subdir1/circles'
path: '{{ role_path }}/files/subdir/subdir1/{{ item.key }}'
state: absent
connection: local
with_dict: "{{ symlinks }}"
- name: Remote unprivileged remote user
user:

@@ -290,14 +290,14 @@
assert:
that:
- 'result.failed'
- "'Not authorized to perform sts:AssumeRole' in result.msg"
- "'Access denied' in result.msg"
when: result.module_stderr is not defined
- name: assert assume not existing sts role
assert:
that:
- 'result.failed'
- "'Not authorized to perform sts:AssumeRole' in result.module_stderr"
- "'Access denied' in result.module_stderr"
when: result.module_stderr is defined
# ============================================================

@@ -1 +0,0 @@
../../template

@@ -18,4 +18,7 @@ source "${MYTMPDIR}/jinja2/bin/activate"
pip install -U jinja2
ANSIBLE_ROLES_PATH="$(dirname "$(pwd)")"
export ANSIBLE_ROLES_PATH
ansible-playbook -i ../../inventory main.yml -e @../../integration_config.yml -v "$@"

@@ -1 +0,0 @@
../../integration/targets/setup_ec2

@@ -0,0 +1,2 @@
---
resource_prefix: 'ansible-testing-'

@@ -0,0 +1,119 @@
---
# ============================================================
- name: test with no parameters
action: "{{module_name}}"
register: result
ignore_errors: true
- name: assert failure when called with no parameters
assert:
that:
- 'result.failed'
- 'result.msg == "missing required arguments: name"'
# ============================================================
- name: test with only name
action: "{{module_name}} name={{ec2_key_name}}"
register: result
ignore_errors: true
- name: assert failure when called with only 'name'
assert:
that:
- 'result.failed'
- 'result.msg == "Either region or ec2_url must be specified"'
# ============================================================
- name: test invalid region parameter
action: "{{module_name}} name='{{ec2_key_name}}' region='asdf querty 1234'"
register: result
ignore_errors: true
- name: assert invalid region parameter
assert:
that:
- 'result.failed'
- 'result.msg.startswith("value of region must be one of:")'
# ============================================================
- name: test valid region parameter
action: "{{module_name}} name='{{ec2_key_name}}' region='{{ec2_region}}'"
register: result
ignore_errors: true
- name: assert valid region parameter
assert:
that:
- 'result.failed'
- 'result.msg.startswith("No handler was ready to authenticate.")'
# ============================================================
- name: test environment variable EC2_REGION
action: "{{module_name}} name='{{ec2_key_name}}'"
environment:
EC2_REGION: '{{ec2_region}}'
register: result
ignore_errors: true
- name: assert environment variable EC2_REGION
assert:
that:
- 'result.failed'
- 'result.msg.startswith("No handler was ready to authenticate.")'
# ============================================================
- name: test invalid ec2_url parameter
action: "{{module_name}} name='{{ec2_key_name}}'"
environment:
EC2_URL: bogus.example.com
register: result
ignore_errors: true
- name: assert invalid ec2_url parameter
assert:
that:
- 'result.failed'
- 'result.msg.startswith("No handler was ready to authenticate.")'
# ============================================================
- name: test valid ec2_url parameter
action: "{{module_name}} name='{{ec2_key_name}}'"
environment:
EC2_URL: '{{ec2_url}}'
register: result
ignore_errors: true
- name: assert valid ec2_url parameter
assert:
that:
- 'result.failed'
- 'result.msg.startswith("No handler was ready to authenticate.")'
# ============================================================
- name: test credentials from environment
action: "{{module_name}} name='{{ec2_key_name}}'"
environment:
EC2_REGION: '{{ec2_region}}'
EC2_ACCESS_KEY: bogus_access_key
EC2_SECRET_KEY: bogus_secret_key
register: result
ignore_errors: true
- name: assert ec2_key with valid ec2_url
assert:
that:
- 'result.failed'
- '"EC2ResponseError: 401 Unauthorized" in result.msg'
# ============================================================
- name: test credential parameters
action: "{{module_name}} name='{{ec2_key_name}}' ec2_region='{{ec2_region}}' ec2_access_key=bogus_access_key ec2_secret_key=bogus_secret_key"
register: result
ignore_errors: true
- name: assert credential parameters
assert:
that:
- 'result.failed'
- '"EC2ResponseError: 401 Unauthorized" in result.msg'

@@ -0,0 +1,3 @@
---
ec2_url: ec2.amazonaws.com
ec2_region: us-east-1

@@ -1 +0,0 @@
../../integration/targets/setup_sshkey

@@ -0,0 +1,55 @@
# (c) 2014, James Laska <jlaska@ansible.com>
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
- name: create a temp file
tempfile:
state: file
register: sshkey_file
tags:
- prepare
- name: generate sshkey
shell: echo 'y' | ssh-keygen -P '' -f {{ sshkey_file.path }}
tags:
- prepare
- name: create another temp file
tempfile:
state: file
register: another_sshkey_file
tags:
- prepare
- name: generate another_sshkey
shell: echo 'y' | ssh-keygen -P '' -f {{ another_sshkey_file.path }}
tags:
- prepare
- name: record fingerprint
shell: openssl rsa -in {{ sshkey_file.path }} -pubout -outform DER 2>/dev/null | openssl md5 -c
register: fingerprint
tags:
- prepare
- name: set facts for future roles
set_fact:
sshkey: '{{ sshkey_file.path }}'
key_material: "{{ lookup('file', sshkey_file.path ~ '.pub') }}"
another_key_material: "{{ lookup('file', another_sshkey_file.path ~ '.pub') }}"
fingerprint: '{{ fingerprint.stdout.split()[1] }}'
tags:
- prepare

@@ -1 +0,0 @@
test.py

test/runner/ansible-test (new executable file)

@@ -0,0 +1,8 @@
#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK
"""Legacy entry point for ansible-test. The preferred version is in the bin directory."""
import lib.cli
if __name__ == '__main__':
lib.cli.main()

@@ -1,11 +1,11 @@
default name=quay.io/ansible/default-test-container:1.0.0
default name=quay.io/ansible/default-test-container:1.3.0 python=3
centos6 name=quay.io/ansible/centos6-test-container:1.4.0 seccomp=unconfined
centos7 name=quay.io/ansible/centos7-test-container:1.4.0 seccomp=unconfined
fedora24 name=quay.io/ansible/fedora24-test-container:1.4.0 seccomp=unconfined
fedora25 name=quay.io/ansible/fedora25-test-container:1.4.0 seccomp=unconfined
fedora26py3 name=quay.io/ansible/fedora26py3-test-container:1.4.0
fedora27py3 name=quay.io/ansible/fedora27py3-test-container:1.4.0
fedora26py3 name=quay.io/ansible/fedora26py3-test-container:1.4.0 python=3
fedora27py3 name=quay.io/ansible/fedora27py3-test-container:1.4.0 python=3
opensuse42.3 name=quay.io/ansible/opensuse42.3-test-container:1.4.0 seccomp=unconfined
ubuntu1404 name=quay.io/ansible/ubuntu1404-test-container:1.4.0 seccomp=unconfined
ubuntu1604 name=quay.io/ansible/ubuntu1604-test-container:1.4.0 seccomp=unconfined
ubuntu1604py3 name=quay.io/ansible/ubuntu1604py3-test-container:1.4.0 seccomp=unconfined
ubuntu1604py3 name=quay.io/ansible/ubuntu1604py3-test-container:1.4.0 seccomp=unconfined python=3

@@ -245,10 +245,10 @@ def find_executable(executable):
:rtype: str
"""
self = os.path.abspath(__file__)
path = os.environ.get('PATH', os.defpath)
path = os.environ.get('PATH', os.path.defpath)
seen_dirs = set()
for path_dir in path.split(os.pathsep):
for path_dir in path.split(os.path.pathsep):
if path_dir in seen_dirs:
continue

@@ -25,8 +25,8 @@ def ansible_environment(args, color=True):
ansible_path = os.path.join(os.getcwd(), 'bin')
if not path.startswith(ansible_path + os.pathsep):
path = ansible_path + os.pathsep + path
if not path.startswith(ansible_path + os.path.pathsep):
path = ansible_path + os.path.pathsep + path
if isinstance(args, IntegrationConfig):
ansible_config = 'test/integration/%s.cfg' % args.command
@@ -41,6 +41,7 @@ def ansible_environment(args, color=True):
ANSIBLE_DEPRECATION_WARNINGS='false',
ANSIBLE_HOST_KEY_CHECKING='false',
ANSIBLE_CONFIG=os.path.abspath(ansible_config),
ANSIBLE_LIBRARY='/dev/null',
PYTHONPATH=os.path.abspath('lib'),
PAGER='/bin/cat',
PATH=path,

test/runner/test.py → test/runner/lib/cli.py (executable file → normal file)

@@ -1,5 +1,3 @@
#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK
"""Test runner for all Ansible tests."""
from __future__ import absolute_import, print_function
@@ -15,6 +13,7 @@ from lib.util import (
raw_command,
get_docker_completion,
generate_pip_command,
read_lines_without_comments,
)
from lib.delegation import (
@@ -73,7 +72,7 @@ import lib.cover
def main():
"""Main program function."""
try:
git_root = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..'))
git_root = os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', '..'))
os.chdir(git_root)
initialize_cloud_plugins()
sanity_init()
@@ -104,10 +103,10 @@ def main():
display.review_warnings()
except ApplicationWarning as ex:
display.warning(str(ex))
display.warning(u'%s' % ex)
exit(0)
except ApplicationError as ex:
display.error(str(ex))
display.error(u'%s' % ex)
exit(1)
except KeyboardInterrupt:
exit(2)
@@ -182,6 +181,11 @@ def parse_args():
nargs='*',
help='test the specified target').completer = complete_target
test.add_argument('--include',
metavar='TARGET',
action='append',
help='include the specified target').completer = complete_target
test.add_argument('--exclude',
metavar='TARGET',
action='append',
@@ -274,6 +278,11 @@ def parse_args():
default='all',
help='target to run when all tests are needed')
integration.add_argument('--changed-all-mode',
metavar='MODE',
choices=('default', 'include', 'exclude'),
help='include/exclude behavior with --changed-all-target: %(choices)s')
integration.add_argument('--list-targets',
action='store_true',
help='list matching targets instead of running tests')
@@ -347,6 +356,10 @@ def parse_args():
action='store_true',
help='collect tests but do not execute them')
units.add_argument('--requirements-mode',
choices=('only', 'skip'),
help=argparse.SUPPRESS)
add_extra_docker_options(units, integration=False)
sanity = subparsers.add_parser('sanity',
@@ -576,7 +589,7 @@ def add_environments(parser, tox_version=False, tox_only=False):
environments.add_argument('--remote',
metavar='PLATFORM',
default=None,
help='run from a remote instance').completer = complete_remote
help='run from a remote instance').completer = complete_remote_shell if parser.prog.endswith(' shell') else complete_remote
remote = parser.add_argument_group(title='remote arguments')
@@ -697,8 +710,23 @@ def complete_remote(prefix, parsed_args, **_):
"""
del parsed_args
with open('test/runner/completion/remote.txt', 'r') as completion_fd:
images = completion_fd.read().splitlines()
images = read_lines_without_comments('test/runner/completion/remote.txt', remove_blank_lines=True)
return [i for i in images if i.startswith(prefix)]
def complete_remote_shell(prefix, parsed_args, **_):
"""
:type prefix: unicode
:type parsed_args: any
:rtype: list[str]
"""
del parsed_args
images = read_lines_without_comments('test/runner/completion/remote.txt', remove_blank_lines=True)
# 2008 doesn't support SSH so we do not add to the list of valid images
images.extend(["windows/%s" % i for i in read_lines_without_comments('test/runner/completion/windows.txt', remove_blank_lines=True) if i != '2008'])
return [i for i in images if i.startswith(prefix)]
@@ -722,8 +750,7 @@ def complete_windows(prefix, parsed_args, **_):
:type parsed_args: any
:rtype: list[str]
"""
with open('test/runner/completion/windows.txt', 'r') as completion_fd:
images = completion_fd.read().splitlines()
images = read_lines_without_comments('test/runner/completion/windows.txt', remove_blank_lines=True)
return [i for i in images if i.startswith(prefix) and (not parsed_args.windows or i not in parsed_args.windows)]
@@ -734,8 +761,7 @@ def complete_network_platform(prefix, parsed_args, **_):
:type parsed_args: any
:rtype: list[str]
"""
with open('test/runner/completion/network.txt', 'r') as completion_fd:
images = completion_fd.read().splitlines()
images = read_lines_without_comments('test/runner/completion/network.txt', remove_blank_lines=True)
return [i for i in images if i.startswith(prefix) and (not parsed_args.platform or i not in parsed_args.platform)]
@@ -776,7 +802,3 @@ def complete_sanity_test(prefix, parsed_args, **_):
tests = sorted(t.name for t in sanity_get_tests())
return [i for i in tests if i.startswith(prefix)]
if __name__ == '__main__':
main()

@@ -17,6 +17,7 @@ from lib.util import (
display,
SubprocessError,
is_shippable,
ConfigParser,
)
from lib.http import (
@@ -34,13 +35,6 @@ from lib.docker_util import (
get_docker_container_id,
)
try:
# noinspection PyPep8Naming
import ConfigParser as configparser
except ImportError:
# noinspection PyUnresolvedReferences
import configparser
class CsCloudProvider(CloudProvider):
"""CloudStack cloud provider plugin. Sets up cloud resources before delegation."""
@@ -119,7 +113,7 @@
def _setup_static(self):
"""Configure CloudStack tests for use with static configuration."""
parser = configparser.RawConfigParser()
parser = ConfigParser()
parser.read(self.config_static_path)
self.endpoint = parser.get('cloudstack', 'endpoint')
@@ -211,7 +205,7 @@
containers = bridge['Containers']
container = [containers[container] for container in containers if containers[container]['Name'] == self.DOCKER_SIMULATOR_NAME][0]
return re.sub(r'/[0-9]+$', '', container['IPv4Address'])
except:
except Exception:
display.error('Failed to process the following docker network inspect output:\n%s' %
json.dumps(networks, indent=4, sort_keys=True))
raise

@@ -6,9 +6,7 @@ from __future__ import absolute_import, print_function
import os
from lib.util import (
ApplicationError,
display,
is_shippable,
)
from lib.cloud import (
@@ -16,9 +14,6 @@ from lib.cloud import (
CloudEnvironment,
)
from lib.core_ci import (
AnsibleCoreCI, )
class GcpCloudProvider(CloudProvider):
"""GCP cloud provider plugin. Sets up cloud resources before delegation."""

@@ -4,20 +4,13 @@ from __future__ import absolute_import, print_function
import os
import time
try:
# noinspection PyPep8Naming
import ConfigParser as configparser
except ImportError:
# noinspection PyUnresolvedReferences
import configparser
from lib.util import (
display,
ApplicationError,
is_shippable,
run_command,
generate_password,
SubprocessError,
ConfigParser,
)
from lib.cloud import (
@@ -27,15 +20,6 @@ from lib.cloud import (
from lib.core_ci import (
AnsibleCoreCI,
InstanceConnection,
)
from lib.manage_ci import (
ManagePosixCI,
)
from lib.http import (
HttpClient,
)
@@ -219,7 +203,7 @@ class TowerConfig(object):
:type path: str
:rtype: TowerConfig
"""
parser = configparser.RawConfigParser()
parser = ConfigParser()
parser.read(path)
keys = (

@@ -21,13 +21,6 @@ from lib.docker_util import (
get_docker_container_id,
)
try:
# noinspection PyPep8Naming
import ConfigParser as configparser
except ImportError:
# noinspection PyUnresolvedReferences
import configparser
class VcenterProvider(CloudProvider):
"""VMware vcenter/esx plugin. Sets up cloud resources for tests."""

@@ -71,6 +71,7 @@ class EnvironmentConfig(CommonConfig):
self.python_version = self.python or '.'.join(str(i) for i in sys.version_info[:2])
self.delegate = self.tox or self.docker or self.remote
self.delegate_args = [] # type: list[str]
if self.delegate:
self.requirements = True
@@ -104,9 +105,9 @@ class TestConfig(EnvironmentConfig):
self.coverage = args.coverage # type: bool
self.coverage_label = args.coverage_label # type: str
self.include = args.include # type: list [str]
self.exclude = args.exclude # type: list [str]
self.require = args.require # type: list [str]
self.include = args.include or [] # type: list [str]
self.exclude = args.exclude or [] # type: list [str]
self.require = args.require or [] # type: list [str]
self.changed = args.changed # type: bool
self.tracked = args.tracked # type: bool
@@ -179,6 +180,7 @@ class IntegrationConfig(TestConfig):
self.continue_on_error = args.continue_on_error # type: bool
self.debug_strategy = args.debug_strategy # type: bool
self.changed_all_target = args.changed_all_target # type: str
self.changed_all_mode = args.changed_all_mode # type: str
self.list_targets = args.list_targets # type: bool
self.tags = args.tags
self.skip_tags = args.skip_tags
@@ -237,6 +239,13 @@ class UnitsConfig(TestConfig):
self.collect_only = args.collect_only # type: bool
self.requirements_mode = args.requirements_mode if 'requirements_mode' in args else ''
if self.requirements_mode == 'only':
self.requirements = True
elif self.requirements_mode == 'skip':
self.requirements = False
class CoverageConfig(EnvironmentConfig):
"""Configuration for the coverage command."""

@@ -121,12 +121,11 @@ class AnsibleCoreCI(object):
self.path = "%s-%s" % (self.path, region)
self.endpoints = AWS_ENDPOINTS[region],
self.ssh_key = SshKey(args)
if self.platform == 'windows':
self.ssh_key = None
self.port = 5986
else:
self.ssh_key = SshKey(args)
self.port = 22
elif self.provider == 'parallels':
self.endpoints = self._get_parallels_endpoints()

@@ -76,7 +76,7 @@ def command_coverage_combine(args):
try:
original.read_file(coverage_file)
except Exception as ex: # pylint: disable=locally-disabled, broad-except
display.error(str(ex))
display.error(u'%s' % ex)
continue
for filename in original.measured_files():

@@ -33,6 +33,7 @@ from lib.core_ci import (
from lib.manage_ci import (
ManagePosixCI,
ManageWindowsCI,
)
from lib.util import (
@@ -51,6 +52,8 @@ from lib.docker_util import (
docker_rm,
docker_run,
docker_available,
docker_network_disconnect,
get_docker_networks,
)
from lib.cloud import (
@@ -140,7 +143,7 @@ def delegate_tox(args, exclude, require, integration_targets):
tox.append('--')
cmd = generate_command(args, os.path.abspath('test/runner/test.py'), options, exclude, require)
cmd = generate_command(args, os.path.abspath('bin/ansible-test'), options, exclude, require)
if not args.python:
cmd += ['--python', version]
@@ -192,7 +195,7 @@ def delegate_docker(args, exclude, require, integration_targets):
'--docker-util': 1,
}
cmd = generate_command(args, '/root/ansible/test/runner/test.py', options, exclude, require)
cmd = generate_command(args, '/root/ansible/bin/ansible-test', options, exclude, require)
if isinstance(args, TestConfig):
if args.coverage and not args.coverage_label:
@@ -274,6 +277,34 @@
if isinstance(args, UnitsConfig) and not args.python:
cmd += ['--python', 'default']
# run unit tests unprivileged to prevent stray writes to the source tree
# also disconnect from the network once requirements have been installed
if isinstance(args, UnitsConfig):
writable_dirs = [
'/root/ansible/.pytest_cache',
]
docker_exec(args, test_id, ['mkdir', '-p'] + writable_dirs)
docker_exec(args, test_id, ['chmod', '777'] + writable_dirs)
docker_exec(args, test_id, ['find', '/root/ansible/test/results/', '-type', 'd', '-exec', 'chmod', '777', '{}', '+'])
docker_exec(args, test_id, ['chmod', '755', '/root'])
docker_exec(args, test_id, ['chmod', '644', '/root/ansible/%s' % args.metadata_path])
docker_exec(args, test_id, ['useradd', 'pytest', '--create-home'])
docker_exec(args, test_id, cmd + ['--requirements-mode', 'only'], options=cmd_options)
networks = get_docker_networks(args, test_id)
for network in networks:
docker_network_disconnect(args, test_id, network)
cmd += ['--requirements-mode', 'skip']
cmd_options += ['--user', 'pytest']
try:
docker_exec(args, test_id, cmd, options=cmd_options)
finally:
@@ -324,30 +355,35 @@ def delegate_remote(args, exclude, require, integration_targets):
core_ci.wait()
options = {
'--remote': 1,
}
if platform == 'windows':
# Windows doesn't need the ansible-test fluff, just run the SSH command
manage = ManageWindowsCI(core_ci)
cmd = ['powershell.exe']
else:
options = {
'--remote': 1,
}
cmd = generate_command(args, 'ansible/test/runner/test.py', options, exclude, require)
cmd = generate_command(args, 'ansible/bin/ansible-test', options, exclude, require)
if httptester_id:
cmd += ['--inject-httptester']
if httptester_id:
cmd += ['--inject-httptester']
if isinstance(args, TestConfig):
if args.coverage and not args.coverage_label:
cmd += ['--coverage-label', 'remote-%s-%s' % (platform, version)]
if isinstance(args, TestConfig):
if args.coverage and not args.coverage_label:
cmd += ['--coverage-label', 'remote-%s-%s' % (platform, version)]
if isinstance(args, IntegrationConfig):
if not args.allow_destructive:
cmd.append('--allow-destructive')
if isinstance(args, IntegrationConfig):
if not args.allow_destructive:
cmd.append('--allow-destructive')
# remote instances are only expected to have a single python version available
if isinstance(args, UnitsConfig) and not args.python:
cmd += ['--python', 'default']
# remote instances are only expected to have a single python version available
if isinstance(args, UnitsConfig) and not args.python:
cmd += ['--python', 'default']
manage = ManagePosixCI(core_ci)
manage = ManagePosixCI(core_ci)
manage.setup()
if isinstance(args, IntegrationConfig):
cloud_platforms = get_cloud_providers(args)
@@ -358,8 +394,9 @@ def delegate_remote(args, exclude, require, integration_targets):
manage.ssh(cmd, ssh_options)
success = True
finally:
manage.ssh('rm -rf /tmp/results && cp -a ansible/test/results /tmp/results && chmod -R a+r /tmp/results')
manage.download('/tmp/results', 'test')
if platform != 'windows':
manage.ssh('rm -rf /tmp/results && cp -a ansible/test/results /tmp/results && chmod -R a+r /tmp/results')
manage.download('/tmp/results', 'test')
finally:
if args.remote_terminate == 'always' or (args.remote_terminate == 'success' and success):
core_ci.stop()
@@ -421,6 +458,8 @@ def filter_options(args, argv, options, exclude, require):
'--changed-from': 1,
'--changed-path': 1,
'--metadata': 1,
'--exclude': 1,
'--require': 1,
})
elif isinstance(args, SanityConfig):
options.update({
@@ -445,6 +484,9 @@ def filter_options(args, argv, options, exclude, require):
yield arg
for arg in args.delegate_args:
yield arg
for target in exclude:
yield '--exclude'
yield target


@@ -67,6 +67,17 @@ def get_docker_container_ip(args, container_id):
return ipaddress
def get_docker_networks(args, container_id):
"""
:param args: EnvironmentConfig
:param container_id: str
:rtype: list[str]
"""
results = docker_inspect(args, container_id)
networks = sorted(results[0]['NetworkSettings']['Networks'])
return networks
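As a standalone illustration of what the new `get_docker_networks` helper extracts, here is a sketch using a hand-built stand-in for `docker inspect` output (the network names below are hypothetical examples, not real ansible-test values):

```python
# Sketch of the lookup performed by get_docker_networks: `docker inspect`
# returns a list of objects whose NetworkSettings.Networks dict is keyed
# by network name, so sorted() over it yields the network names.
inspect_results = [
    {'NetworkSettings': {'Networks': {'bridge': {}, 'ansible-test': {}}}},
]

networks = sorted(inspect_results[0]['NetworkSettings']['Networks'])
print(networks)  # → ['ansible-test', 'bridge']
```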
def docker_pull(args, image):
"""
:type args: EnvironmentConfig
@@ -161,8 +172,17 @@ def docker_inspect(args, container_id):
except SubprocessError as ex:
try:
return json.loads(ex.stdout)
except:
raise ex # pylint: disable=locally-disabled, raising-bad-type
except Exception:
raise ex
def docker_network_disconnect(args, container_id, network):
"""
:param args: EnvironmentConfig
:param container_id: str
:param network: str
"""
docker_command(args, ['network', 'disconnect', network, container_id], capture=True)
def docker_network_inspect(args, network):
@@ -180,8 +200,8 @@ def docker_network_inspect(args, network):
except SubprocessError as ex:
try:
return json.loads(ex.stdout)
except:
raise ex # pylint: disable=locally-disabled, raising-bad-type
except Exception:
raise ex
def docker_exec(args, container_id, cmd, options=None, capture=False, stdin=None, stdout=None):


@@ -12,7 +12,10 @@ import time
import textwrap
import functools
import pipes
import sys
import hashlib
import difflib
import filecmp
import lib.pytar
import lib.thread
@@ -49,6 +52,9 @@ from lib.util import (
raw_command,
get_coverage_path,
get_available_port,
generate_pip_command,
find_python,
get_docker_completion,
)
from lib.docker_util import (
@@ -148,9 +154,10 @@ def create_shell_command(command):
return cmd
def install_command_requirements(args):
def install_command_requirements(args, python_version=None):
"""
:type args: EnvironmentConfig
:type python_version: str | None
"""
generate_egg_info(args)
@@ -168,7 +175,10 @@ def install_command_requirements(args):
if args.junit:
packages.append('junit-xml')
pip = args.pip_command
if not python_version:
python_version = args.python_version
pip = generate_pip_command(find_python(python_version))
commands = [generate_pip_install(pip, args.command, packages=packages)]
@@ -534,6 +544,7 @@ def command_windows_integration(args):
instance.result.stop()
# noinspection PyUnusedLocal
def windows_init(args, internal_targets): # pylint: disable=locally-disabled, unused-argument
"""
:type args: WindowsIntegrationConfig
@@ -642,8 +653,19 @@ def command_integration_filter(args, targets, init_callback=None):
"""
targets = tuple(target for target in targets if 'hidden/' not in target.aliases)
changes = get_changes_filter(args)
require = (args.require or []) + changes
exclude = (args.exclude or [])
# special behavior when the --changed-all-target target is selected based on changes
if args.changed_all_target in changes:
# act as though the --changed-all-target target was in the include list
if args.changed_all_mode == 'include' and args.changed_all_target not in args.include:
args.include.append(args.changed_all_target)
args.delegate_args += ['--include', args.changed_all_target]
# act as though the --changed-all-target target was in the exclude list
elif args.changed_all_mode == 'exclude' and args.changed_all_target not in args.exclude:
args.exclude.append(args.changed_all_target)
require = args.require + changes
exclude = args.exclude
internal_targets = walk_internal_targets(targets, args.include, exclude, require)
environment_exclude = get_integration_filter(args, internal_targets)
@@ -666,7 +688,7 @@ def command_integration_filter(args, targets, init_callback=None):
cloud_init(args, internal_targets)
if args.delegate:
raise Delegate(require=changes, exclude=exclude, integration_targets=internal_targets)
raise Delegate(require=require, exclude=exclude, integration_targets=internal_targets)
install_command_requirements(args)
@@ -721,6 +743,8 @@ def command_integration_filtered(args, targets, all_targets):
results = {}
current_environment = None # type: EnvironmentDescription | None
for target in targets_iter:
if args.start_at and not found:
found = target.name == args.start_at
@@ -737,7 +761,8 @@ def command_integration_filtered(args, targets, all_targets):
cloud_environment = get_cloud_environment(args, target)
original_environment = EnvironmentDescription(args)
original_environment = current_environment if current_environment else EnvironmentDescription(args)
current_environment = None
display.info('>>> Environment Description\n%s' % original_environment, verbosity=3)
@@ -796,9 +821,11 @@ def command_integration_filtered(args, targets, all_targets):
display.verbosity = args.verbosity = 6
start_time = time.time()
original_environment.validate(target.name, throw=True)
current_environment = EnvironmentDescription(args)
end_time = time.time()
EnvironmentDescription.check(original_environment, current_environment, target.name, throw=True)
results[target.name]['validation_seconds'] = int(end_time - start_time)
passed.append(target)
@@ -1124,7 +1151,7 @@ def command_units(args):
:type args: UnitsConfig
"""
changes = get_changes_filter(args)
require = (args.require or []) + changes
require = args.require + changes
include, exclude = walk_external_targets(walk_units_targets(), args.include, args.exclude, require)
if not include:
@@ -1133,8 +1160,6 @@ def command_units(args):
if args.delegate:
raise Delegate(require=changes)
install_command_requirements(args)
version_commands = []
for version in SUPPORTED_PYTHON_VERSIONS:
@@ -1142,12 +1167,16 @@ def command_units(args):
if args.python and version != args.python_version:
continue
if args.requirements_mode != 'skip':
install_command_requirements(args, version)
env = ansible_environment(args)
cmd = [
'pytest',
'--boxed',
'-r', 'a',
'-n', 'auto',
'--color',
'yes' if args.color else 'no',
'--junit-xml',
@@ -1167,6 +1196,9 @@ def command_units(args):
version_commands.append((version, cmd, env))
if args.requirements_mode == 'only':
sys.exit()
for version, command, env in version_commands:
display.info('Unit test with Python %s' % version)
@@ -1444,15 +1476,9 @@ def get_integration_docker_filter(args, targets):
display.warning('Excluding tests marked "%s" which require --docker-privileged to run under docker: %s'
% (skip.rstrip('/'), ', '.join(skipped)))
docker_image = args.docker.split('@')[0] # strip SHA for proper tag comparison
python_version = 2 # images are expected to default to python 2 unless otherwise specified
if docker_image.endswith('py3'):
python_version = 3 # docker images ending in 'py3' are expected to default to python 3
if docker_image.endswith(':default'):
python_version = 3 # docker images tagged 'default' are expected to default to python 3
python_version = int(get_docker_completion().get(args.docker_raw, {}).get('python', str(python_version)))
if args.python: # specifying a numeric --python option overrides the default python
if args.python.startswith('3'):
@@ -1515,27 +1541,84 @@ class EnvironmentDescription(object):
self.data = {}
return
warnings = []
versions = ['']
versions += SUPPORTED_PYTHON_VERSIONS
versions += list(set(v.split('.')[0] for v in SUPPORTED_PYTHON_VERSIONS))
python_paths = dict((v, find_executable('python%s' % v, required=False)) for v in sorted(versions))
python_versions = dict((v, self.get_version([python_paths[v], '-V'])) for v in sorted(python_paths) if python_paths[v])
pip_paths = dict((v, find_executable('pip%s' % v, required=False)) for v in sorted(versions))
pip_versions = dict((v, self.get_version([pip_paths[v], '--version'])) for v in sorted(pip_paths) if pip_paths[v])
program_versions = dict((v, self.get_version([python_paths[v], 'test/runner/versions.py'], warnings)) for v in sorted(python_paths) if python_paths[v])
pip_interpreters = dict((v, self.get_shebang(pip_paths[v])) for v in sorted(pip_paths) if pip_paths[v])
known_hosts_hash = self.get_hash(os.path.expanduser('~/.ssh/known_hosts'))
for version in sorted(versions):
self.check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings)
for warning in warnings:
display.warning(warning, unique=True)
self.data = dict(
python_paths=python_paths,
python_versions=python_versions,
pip_paths=pip_paths,
pip_versions=pip_versions,
program_versions=program_versions,
pip_interpreters=pip_interpreters,
known_hosts_hash=known_hosts_hash,
warnings=warnings,
)
@staticmethod
def check_python_pip_association(version, python_paths, pip_paths, pip_interpreters, warnings):
"""
:type version: str
:param python_paths: dict[str, str]
:param pip_paths: dict[str, str]
:param pip_interpreters: dict[str, str]
:param warnings: list[str]
"""
python_label = 'Python%s' % (' %s' % version if version else '')
pip_path = pip_paths.get(version)
python_path = python_paths.get(version)
if not python_path and not pip_path:
# neither python nor pip is present for this version
return
if not python_path:
warnings.append('A %s interpreter was not found, yet a matching pip was found at "%s".' % (python_label, pip_path))
return
if not pip_path:
warnings.append('A %s interpreter was found at "%s", yet a matching pip was not found.' % (python_label, python_path))
return
pip_shebang = pip_interpreters.get(version)
match = re.search(r'#!\s*(?P<command>[^\s]+)', pip_shebang)
if not match:
warnings.append('A %s pip was found at "%s", but it does not have a valid shebang: %s' % (python_label, pip_path, pip_shebang))
return
pip_interpreter = os.path.realpath(match.group('command'))
python_interpreter = os.path.realpath(python_path)
if pip_interpreter == python_interpreter:
return
try:
identical = filecmp.cmp(pip_interpreter, python_interpreter)
except OSError:
identical = False
if identical:
return
warnings.append('A %s pip was found at "%s", but it uses interpreter "%s" instead of "%s".' % (
python_label, pip_path, pip_interpreter, python_interpreter))
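The shebang parsing used by `check_python_pip_association` can be exercised standalone; a small sketch, with a hypothetical pip script shebang as input:

```python
import re

# Extract the interpreter command from a pip script's shebang line, as the
# association check above does (the shebang string here is a made-up example).
pip_shebang = '#!/usr/bin/python2.7'
match = re.search(r'#!\s*(?P<command>[^\s]+)', pip_shebang)
print(match.group('command'))  # → /usr/bin/python2.7
```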
def __str__(self):
"""
:rtype: str
@@ -1550,18 +1633,40 @@ class EnvironmentDescription(object):
"""
current = EnvironmentDescription(self.args)
original_json = str(self)
return self.check(self, current, target_name, throw)
@staticmethod
def check(original, current, target_name, throw):
"""
:type original: EnvironmentDescription
:type current: EnvironmentDescription
:type target_name: str
:type throw: bool
:rtype: bool
"""
original_json = str(original)
current_json = str(current)
if original_json == current_json:
return True
unified_diff = '\n'.join(difflib.unified_diff(
a=original_json.splitlines(),
b=current_json.splitlines(),
fromfile='original.json',
tofile='current.json',
lineterm='',
))
message = ('Test target "%s" has changed the test environment!\n'
'If these changes are necessary, they must be reverted before the test finishes.\n'
'>>> Original Environment\n'
'%s\n'
'>>> Current Environment\n'
'%s' % (target_name, original_json, current_json))
'%s\n'
'>>> Environment Diff\n'
'%s'
% (target_name, original_json, current_json, unified_diff))
if throw:
raise ApplicationError(message)
@@ -1571,17 +1676,19 @@ class EnvironmentDescription(object):
return False
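The environment diff added to the failure message above comes from `difflib.unified_diff`; a self-contained sketch of the same call, using two fabricated environment descriptions:

```python
import difflib

# Two made-up environment descriptions, differing in one python path.
original_json = '{\n  "python_paths": {"2.7": "/usr/bin/python2.7"}\n}'
current_json = '{\n  "python_paths": {"2.7": "/usr/local/bin/python2.7"}\n}'

unified_diff = '\n'.join(difflib.unified_diff(
    a=original_json.splitlines(),
    b=current_json.splitlines(),
    fromfile='original.json',
    tofile='current.json',
    lineterm='',
))

print(unified_diff)  # the changed path shows up as a -/+ line pair
```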
@staticmethod
def get_version(command):
def get_version(command, warnings):
"""
:type command: list[str]
:rtype: str
:type warnings: list[str]
:rtype: list[str]
"""
try:
stdout, stderr = raw_command(command, capture=True, cmd_verbosity=2)
except SubprocessError:
except SubprocessError as ex:
warnings.append(u'%s' % ex)
return None # all failures are equal, we don't care why it failed, only that it did
return (stdout or '').strip() + (stderr or '').strip()
return [line.strip() for line in ((stdout or '').strip() + (stderr or '').strip()).splitlines()]
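To illustrate the new `get_version` return shape (a list of stripped lines instead of one concatenated string), here is a sketch using sample `python -V` output rather than a real subprocess call:

```python
# Combined stdout/stderr from a version command, split into stripped lines,
# mirroring the expression in get_version above (sample output only).
stdout, stderr = 'Python 2.7.15\n', ''
lines = [line.strip() for line in ((stdout or '').strip() + (stderr or '').strip()).splitlines()]
print(lines)  # → ['Python 2.7.15']
```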
@staticmethod
def get_shebang(path):
@@ -1590,7 +1697,7 @@ class EnvironmentDescription(object):
:rtype: str
"""
with open(path) as script_fd:
return script_fd.readline()
return script_fd.readline().strip()
@staticmethod
def get_hash(path):


@@ -32,6 +32,22 @@ class ManageWindowsCI(object):
:type core_ci: AnsibleCoreCI
"""
self.core_ci = core_ci
self.ssh_args = ['-i', self.core_ci.ssh_key.key]
ssh_options = dict(
BatchMode='yes',
StrictHostKeyChecking='no',
UserKnownHostsFile='/dev/null',
ServerAliveInterval=15,
ServerAliveCountMax=4,
)
for ssh_option in sorted(ssh_options):
self.ssh_args += ['-o', '%s=%s' % (ssh_option, ssh_options[ssh_option])]
def setup(self):
"""Used in delegate_remote to setup the host, no action is required for Windows."""
pass
def wait(self):
"""Wait for instance to respond to ansible ping."""
@@ -59,6 +75,24 @@ class ManageWindowsCI(object):
raise ApplicationError('Timeout waiting for %s/%s instance %s.' %
(self.core_ci.platform, self.core_ci.version, self.core_ci.instance_id))
def ssh(self, command, options=None):
"""
:type command: str | list[str]
:type options: list[str] | None
"""
if not options:
options = []
if isinstance(command, list):
command = ' '.join(pipes.quote(c) for c in command)
run_command(self.core_ci.args,
['ssh', '-tt', '-q'] + self.ssh_args +
options +
['-p', '22',
'%s@%s' % (self.core_ci.connection.username, self.core_ci.connection.hostname)] +
[command])
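The new `ssh()` helper joins list commands with `pipes.quote()`; `shlex.quote` is the Python 3 name for the same function. A minimal sketch of that quoting step, with a made-up command:

```python
# Join a command list into a single shell-safe string, as ssh() does above.
# shlex.quote is the Python 3 equivalent of the pipes.quote used in the diff.
from shlex import quote

command = ['echo', 'hello world']  # fabricated example command
quoted = ' '.join(quote(c) for c in command)
print(quoted)  # → echo 'hello world'
```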
class ManageNetworkCI(object):
"""Manage access to a network instance provided by Ansible Core CI."""


@@ -37,6 +37,7 @@ class DefaultTarFilter(TarFilter):
'.tox',
'.git',
'.idea',
'.pytest_cache',
'__pycache__',
'ansible.egg-info',
)


@@ -15,9 +15,10 @@ from lib.util import (
run_command,
import_plugins,
load_plugins,
parse_to_dict,
parse_to_list_of_dict,
ABC,
is_binary_file,
read_lines_without_comments,
)
from lib.ansible_util import (
@@ -57,7 +58,7 @@ def command_sanity(args):
:type args: SanityConfig
"""
changes = get_changes_filter(args)
require = (args.require or []) + changes
require = args.require + changes
targets = SanityTargets(args.include, args.exclude, require)
if not targets.include:
@@ -134,8 +135,8 @@ def collect_code_smell_tests():
"""
:rtype: tuple[SanityCodeSmellTest]
"""
with open('test/sanity/code-smell/skip.txt', 'r') as skip_fd:
skip_tests = skip_fd.read().splitlines()
skip_file = 'test/sanity/code-smell/skip.txt'
skip_tests = read_lines_without_comments(skip_file, remove_blank_lines=True)
paths = glob.glob('test/sanity/code-smell/*')
paths = sorted(p for p in paths if os.access(p, os.X_OK) and os.path.isfile(p) and os.path.basename(p) not in skip_tests)
@@ -304,7 +305,7 @@ class SanityCodeSmellTest(SanityTest):
if stdout and not stderr:
if pattern:
matches = [parse_to_dict(pattern, line) for line in stdout.splitlines()]
matches = parse_to_list_of_dict(pattern, stdout)
messages = [SanityMessage(
message=m['message'],


@@ -15,6 +15,7 @@ from lib.util import (
SubprocessError,
display,
intercept_command,
read_lines_without_comments,
)
from lib.ansible_util import (
@@ -35,8 +36,8 @@ class AnsibleDocTest(SanityMultipleVersion):
:type python_version: str
:rtype: TestResult
"""
with open('test/sanity/ansible-doc/skip.txt', 'r') as skip_fd:
skip_modules = set(skip_fd.read().splitlines())
skip_file = 'test/sanity/ansible-doc/skip.txt'
skip_modules = set(read_lines_without_comments(skip_file, remove_blank_lines=True))
modules = sorted(set(m for i in targets.include_external for m in i.modules) -
set(m for i in targets.exclude_external for m in i.modules) -


@@ -2,7 +2,6 @@
from __future__ import absolute_import, print_function
import os
import re
from lib.sanity import (
SanityMultipleVersion,
@@ -17,6 +16,8 @@ from lib.util import (
run_command,
display,
find_python,
read_lines_without_comments,
parse_to_list_of_dict,
)
from lib.config import (
@@ -37,12 +38,10 @@ class CompileTest(SanityMultipleVersion):
:type python_version: str
:rtype: TestResult
"""
# optional list of regex patterns to exclude from tests
skip_file = 'test/sanity/compile/python%s-skip.txt' % python_version
if os.path.exists(skip_file):
with open(skip_file, 'r') as skip_fd:
skip_paths = skip_fd.read().splitlines()
skip_paths = read_lines_without_comments(skip_file)
else:
skip_paths = []
@@ -73,7 +72,7 @@ class CompileTest(SanityMultipleVersion):
pattern = r'^(?P<path>[^:]*):(?P<line>[0-9]+):(?P<column>[0-9]+): (?P<message>.*)$'
results = [re.search(pattern, line).groupdict() for line in stdout.splitlines()]
results = parse_to_list_of_dict(pattern, stdout)
results = [SanityMessage(
message=r['message'],
@@ -87,6 +86,9 @@ class CompileTest(SanityMultipleVersion):
for path in skip_paths:
line += 1
if not path:
continue
if not os.path.exists(path):
# Keep files out of the list which no longer exist in the repo.
results.append(SanityMessage(


@@ -2,7 +2,6 @@
from __future__ import absolute_import, print_function
import os
import re
from lib.sanity import (
SanityMultipleVersion,
@@ -19,6 +18,9 @@ from lib.util import (
remove_tree,
display,
find_python,
read_lines_without_comments,
parse_to_list_of_dict,
make_dirs,
)
from lib.ansible_util import (
@@ -43,9 +45,8 @@ class ImportTest(SanityMultipleVersion):
:type python_version: str
:rtype: TestResult
"""
with open('test/sanity/import/skip.txt', 'r') as skip_fd:
skip_paths = skip_fd.read().splitlines()
skip_file = 'test/sanity/import/skip.txt'
skip_paths = read_lines_without_comments(skip_file, remove_blank_lines=True)
skip_paths_set = set(skip_paths)
paths = sorted(
@@ -81,9 +82,24 @@ class ImportTest(SanityMultipleVersion):
if not args.explain:
os.symlink(os.path.abspath('test/sanity/import/importer.py'), importer_path)
# create a minimal python library
python_path = os.path.abspath('test/runner/.tox/import/lib')
ansible_path = os.path.join(python_path, 'ansible')
ansible_init = os.path.join(ansible_path, '__init__.py')
ansible_link = os.path.join(ansible_path, 'module_utils')
if not args.explain:
make_dirs(ansible_path)
with open(ansible_init, 'w'):
pass
if not os.path.exists(ansible_link):
os.symlink('../../../../../../lib/ansible/module_utils', ansible_link)
# activate the virtual environment
env['PATH'] = '%s:%s' % (virtual_environment_bin, env['PATH'])
env['PYTHONPATH'] = os.path.abspath('test/sanity/import/lib')
env['PYTHONPATH'] = python_path
# make sure coverage is available in the virtual environment if needed
if args.coverage:
@@ -112,7 +128,7 @@ class ImportTest(SanityMultipleVersion):
pattern = r'^(?P<path>[^:]*):(?P<line>[0-9]+):(?P<column>[0-9]+): (?P<message>.*)$'
results = [re.search(pattern, line).groupdict() for line in ex.stdout.splitlines()]
results = parse_to_list_of_dict(pattern, ex.stdout)
results = [SanityMessage(
message=r['message'],
@@ -121,7 +137,7 @@ class ImportTest(SanityMultipleVersion):
column=int(r['column']),
) for r in results]
results = [result for result in results if result.path not in skip_paths]
results = [result for result in results if result.path not in skip_paths_set]
if results:
return SanityFailure(self.name, messages=results, python_version=python_version)


@@ -4,6 +4,7 @@ from __future__ import absolute_import, print_function
import json
import textwrap
import re
import os
from lib.sanity import (
SanitySingleVersion,
@@ -36,6 +37,8 @@ from lib.util import (
class IntegrationAliasesTest(SanitySingleVersion):
"""Sanity test to evaluate integration test aliases."""
SHIPPABLE_YML = 'shippable.yml'
DISABLED = 'disabled/'
UNSTABLE = 'unstable/'
UNSUPPORTED = 'unsupported/'
@@ -86,7 +89,7 @@ class IntegrationAliasesTest(SanitySingleVersion):
:rtype: list[str]
"""
if not self._shippable_yml_lines:
with open('shippable.yml', 'r') as shippable_yml_fd:
with open(self.SHIPPABLE_YML, 'r') as shippable_yml_fd:
self._shippable_yml_lines = shippable_yml_fd.read().splitlines()
return self._shippable_yml_lines
@@ -143,6 +146,12 @@ class IntegrationAliasesTest(SanitySingleVersion):
if args.explain:
return SanitySuccess(self.name)
if not os.path.isfile(self.SHIPPABLE_YML):
return SanityFailure(self.name, messages=[SanityMessage(
message='file missing',
path=self.SHIPPABLE_YML,
)])
results = dict(
comments=[],
labels={},


@@ -15,6 +15,8 @@ from lib.util import (
SubprocessError,
display,
run_command,
read_lines_without_comments,
parse_to_list_of_dict,
)
from lib.config import (
@@ -37,17 +39,14 @@ class Pep8Test(SanitySingleVersion):
:type targets: SanityTargets
:rtype: TestResult
"""
with open(PEP8_SKIP_PATH, 'r') as skip_fd:
skip_paths = skip_fd.read().splitlines()
skip_paths = read_lines_without_comments(PEP8_SKIP_PATH)
legacy_paths = read_lines_without_comments(PEP8_LEGACY_PATH)
with open(PEP8_LEGACY_PATH, 'r') as legacy_fd:
legacy_paths = legacy_fd.read().splitlines()
legacy_ignore_file = 'test/sanity/pep8/legacy-ignore.txt'
legacy_ignore = set(read_lines_without_comments(legacy_ignore_file, remove_blank_lines=True))
with open('test/sanity/pep8/legacy-ignore.txt', 'r') as ignore_fd:
legacy_ignore = set(ignore_fd.read().splitlines())
with open('test/sanity/pep8/current-ignore.txt', 'r') as ignore_fd:
current_ignore = sorted(ignore_fd.read().splitlines())
current_ignore_file = 'test/sanity/pep8/current-ignore.txt'
current_ignore = sorted(read_lines_without_comments(current_ignore_file, remove_blank_lines=True))
skip_paths_set = set(skip_paths)
legacy_paths_set = set(legacy_paths)
@@ -82,7 +81,7 @@ class Pep8Test(SanitySingleVersion):
if stdout:
pattern = '^(?P<path>[^:]*):(?P<line>[0-9]+):(?P<column>[0-9]+): (?P<code>[WE][0-9]{3}) (?P<message>.*)$'
results = [re.search(pattern, line).groupdict() for line in stdout.splitlines()]
results = parse_to_list_of_dict(pattern, stdout)
else:
results = []
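`parse_to_list_of_dict` is not shown in this diff; judging from the list comprehension it replaces, it is assumed to apply the pattern per line and collect `groupdict()` results. A hedged sketch of that assumed behavior, using fabricated pep8 output:

```python
import re

# Assumed behavior of parse_to_list_of_dict, matching the code it replaces:
# apply the pattern to each line and collect the named groups.
pattern = r'^(?P<path>[^:]*):(?P<line>[0-9]+):(?P<column>[0-9]+): (?P<code>[WE][0-9]{3}) (?P<message>.*)$'
stdout = 'lib/ansible/example.py:10:1: E302 expected 2 blank lines, found 1'  # fabricated pep8 output

results = [re.search(pattern, line).groupdict() for line in stdout.splitlines()]
print(results[0]['code'], results[0]['line'])  # → E302 10
```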
@@ -106,6 +105,9 @@ class Pep8Test(SanitySingleVersion):
for path in legacy_paths:
line += 1
if not path:
continue
if not os.path.exists(path):
# Keep files out of the list which no longer exist in the repo.
errors.append(SanityMessage(
@@ -133,6 +135,9 @@ class Pep8Test(SanitySingleVersion):
for path in skip_paths:
line += 1
if not path:
continue
if not os.path.exists(path):
# Keep files out of the list which no longer exist in the repo.
errors.append(SanityMessage(


@@ -18,6 +18,7 @@ from lib.util import (
SubprocessError,
run_command,
find_executable,
read_lines_without_comments,
)
from lib.config import (
@@ -41,30 +42,31 @@ class PslintTest(SanitySingleVersion):
:type targets: SanityTargets
:rtype: TestResult
"""
with open(PSLINT_SKIP_PATH, 'r') as skip_fd:
skip_paths = skip_fd.read().splitlines()
skip_paths = read_lines_without_comments(PSLINT_SKIP_PATH)
invalid_ignores = []
with open(PSLINT_IGNORE_PATH, 'r') as ignore_fd:
ignore_entries = ignore_fd.read().splitlines()
ignore = collections.defaultdict(dict)
line = 0
ignore_entries = read_lines_without_comments(PSLINT_IGNORE_PATH)
ignore = collections.defaultdict(dict)
line = 0
for ignore_entry in ignore_entries:
line += 1
for ignore_entry in ignore_entries:
line += 1
if ' ' not in ignore_entry:
invalid_ignores.append((line, 'Invalid syntax'))
continue
if not ignore_entry:
continue
path, code = ignore_entry.split(' ', 1)
if ' ' not in ignore_entry:
invalid_ignores.append((line, 'Invalid syntax'))
continue
if not os.path.exists(path):
invalid_ignores.append((line, 'Remove "%s" since it does not exist' % path))
continue
path, code = ignore_entry.split(' ', 1)
ignore[path][code] = line
if not os.path.exists(path):
invalid_ignores.append((line, 'Remove "%s" since it does not exist' % path))
continue
ignore[path][code] = line
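The reordered ignore-file parsing above (skip blanks first, then validate syntax, then split) can be exercised on its own; a sketch with fabricated entries, omitting the `os.path.exists()` check that needs a real working tree:

```python
import collections

# Standalone sketch of the ignore-entry loop above (fabricated entries,
# existence check omitted).
ignore_entries = [
    'test/example.ps1 PSAvoidUsingCmdletAliases',
    '',            # blank lines are skipped
    'no-space',    # entries without a space are invalid syntax
]

invalid_ignores = []
ignore = collections.defaultdict(dict)

for line, ignore_entry in enumerate(ignore_entries, 1):
    if not ignore_entry:
        continue
    if ' ' not in ignore_entry:
        invalid_ignores.append((line, 'Invalid syntax'))
        continue
    path, code = ignore_entry.split(' ', 1)
    ignore[path][code] = line

print(dict(ignore))      # → {'test/example.ps1': {'PSAvoidUsingCmdletAliases': 1}}
print(invalid_ignores)   # → [(3, 'Invalid syntax')]
```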
paths = sorted(i.path for i in targets.include if os.path.splitext(i.path)[1] in ('.ps1', '.psm1', '.psd1') and i.path not in skip_paths)
@@ -138,6 +140,9 @@ class PslintTest(SanitySingleVersion):
for path in skip_paths:
line += 1
if not path:
continue
if not os.path.exists(path):
# Keep files out of the list which no longer exist in the repo.
errors.append(SanityMessage(


@@ -6,11 +6,6 @@ import json
import os
import datetime
try:
import ConfigParser as configparser
except ImportError:
import configparser
from lib.sanity import (
SanitySingleVersion,
SanityMessage,
@@ -23,7 +18,8 @@ from lib.util import (
SubprocessError,
run_command,
display,
find_executable,
read_lines_without_comments,
ConfigParser,
)
from lib.executor import (
@@ -69,43 +65,44 @@ class PylintTest(SanitySingleVersion):
display.warning('Skipping pylint on unsupported Python version %s.' % args.python_version)
return SanitySkipped(self.name)
with open(PYLINT_SKIP_PATH, 'r') as skip_fd:
skip_paths = skip_fd.read().splitlines()
skip_paths = read_lines_without_comments(PYLINT_SKIP_PATH)
invalid_ignores = []
supported_versions = set(SUPPORTED_PYTHON_VERSIONS) - set(UNSUPPORTED_PYTHON_VERSIONS)
supported_versions = set([v.split('.')[0] for v in supported_versions]) | supported_versions
with open(PYLINT_IGNORE_PATH, 'r') as ignore_fd:
ignore_entries = ignore_fd.read().splitlines()
ignore = collections.defaultdict(dict)
line = 0
ignore_entries = read_lines_without_comments(PYLINT_IGNORE_PATH)
ignore = collections.defaultdict(dict)
line = 0
for ignore_entry in ignore_entries:
line += 1
for ignore_entry in ignore_entries:
line += 1
if ' ' not in ignore_entry:
invalid_ignores.append((line, 'Invalid syntax'))
if not ignore_entry:
continue
if ' ' not in ignore_entry:
invalid_ignores.append((line, 'Invalid syntax'))
continue
path, code = ignore_entry.split(' ', 1)
if not os.path.exists(path):
invalid_ignores.append((line, 'Remove "%s" since it does not exist' % path))
continue
if ' ' in code:
code, version = code.split(' ', 1)
if version not in supported_versions:
invalid_ignores.append((line, 'Invalid version: %s' % version))
continue
path, code = ignore_entry.split(' ', 1)
if version != args.python_version and version != args.python_version.split('.')[0]:
continue # ignore version specific entries for other versions
if not os.path.exists(path):
invalid_ignores.append((line, 'Remove "%s" since it does not exist' % path))
continue
if ' ' in code:
code, version = code.split(' ', 1)
if version not in supported_versions:
invalid_ignores.append((line, 'Invalid version: %s' % version))
continue
if version != args.python_version and version != args.python_version.split('.')[0]:
continue # ignore version specific entries for other versions
ignore[path][code] = line
ignore[path][code] = line
skip_paths_set = set(skip_paths)
@@ -193,6 +190,9 @@ class PylintTest(SanitySingleVersion):
for path in skip_paths:
line += 1
if not path:
continue
if not os.path.exists(path):
# Keep files out of the list which no longer exist in the repo.
errors.append(SanityMessage(
@@ -240,7 +240,7 @@ class PylintTest(SanitySingleVersion):
if not os.path.exists(rcfile):
rcfile = 'test/sanity/pylint/config/default'
parser = configparser.SafeConfigParser()
parser = ConfigParser()
parser.read(rcfile)
if parser.has_section('ansible-test'):
@@ -263,7 +263,7 @@ class PylintTest(SanitySingleVersion):
] + paths
env = ansible_environment(args)
env['PYTHONPATH'] += '%s%s' % (os.pathsep, self.plugin_dir)
env['PYTHONPATH'] += '%s%s' % (os.path.pathsep, self.plugin_dir)
if paths:
try:


@@ -14,9 +14,9 @@ from lib.sanity import (
from lib.util import (
SubprocessError,
run_command,
parse_to_dict,
parse_to_list_of_dict,
display,
find_executable,
read_lines_without_comments,
)
from lib.config import (
@@ -40,8 +40,8 @@ class RstcheckTest(SanitySingleVersion):
display.warning('Skipping rstcheck on unsupported Python version %s.' % args.python_version)
return SanitySkipped(self.name)
with open('test/sanity/rstcheck/ignore-substitutions.txt', 'r') as ignore_fd:
ignore_substitutions = sorted(set(ignore_fd.read().splitlines()))
ignore_file = 'test/sanity/rstcheck/ignore-substitutions.txt'
ignore_substitutions = sorted(set(read_lines_without_comments(ignore_file, remove_blank_lines=True)))
paths = sorted(i.path for i in targets.include if os.path.splitext(i.path)[1] in ('.rst',))
@@ -71,7 +71,7 @@ class RstcheckTest(SanitySingleVersion):
pattern = r'^(?P<path>[^:]*):(?P<line>[0-9]+): \((?P<level>INFO|WARNING|ERROR|SEVERE)/[0-4]\) (?P<message>.*)$'
results = [parse_to_dict(pattern, line) for line in stderr.splitlines()]
results = parse_to_list_of_dict(pattern, stderr)
results = [SanityMessage(
message=r['message'],


@@ -19,6 +19,7 @@ from lib.sanity import (
from lib.util import (
SubprocessError,
run_command,
read_lines_without_comments,
)
from lib.config import (
@@ -34,11 +35,11 @@ class ShellcheckTest(SanitySingleVersion):
:type targets: SanityTargets
:rtype: TestResult
"""
with open('test/sanity/shellcheck/skip.txt', 'r') as skip_fd:
skip_paths = set(skip_fd.read().splitlines())
skip_file = 'test/sanity/shellcheck/skip.txt'
skip_paths = set(read_lines_without_comments(skip_file, remove_blank_lines=True))
with open('test/sanity/shellcheck/exclude.txt', 'r') as exclude_fd:
exclude = set(exclude_fd.read().splitlines())
exclude_file = 'test/sanity/shellcheck/exclude.txt'
exclude = set(read_lines_without_comments(exclude_file, remove_blank_lines=True))
paths = sorted(i.path for i in targets.include if os.path.splitext(i.path)[1] == '.sh' and i.path not in skip_paths)


@@ -17,6 +17,7 @@ from lib.util import (
SubprocessError,
display,
run_command,
read_lines_without_comments,
)
from lib.ansible_util import (
@@ -44,9 +45,7 @@ class ValidateModulesTest(SanitySingleVersion):
:type targets: SanityTargets
:rtype: TestResult
"""
with open(VALIDATE_SKIP_PATH, 'r') as skip_fd:
skip_paths = skip_fd.read().splitlines()
skip_paths = read_lines_without_comments(VALIDATE_SKIP_PATH)
skip_paths_set = set(skip_paths)
env = ansible_environment(args, color=False)
@@ -65,21 +64,23 @@ class ValidateModulesTest(SanitySingleVersion):
invalid_ignores = []
with open(VALIDATE_IGNORE_PATH, 'r') as ignore_fd:
ignore_entries = ignore_fd.read().splitlines()
ignore = collections.defaultdict(dict)
line = 0
ignore_entries = read_lines_without_comments(VALIDATE_IGNORE_PATH)
ignore = collections.defaultdict(dict)
line = 0
for ignore_entry in ignore_entries:
line += 1
for ignore_entry in ignore_entries:
line += 1
if ' ' not in ignore_entry:
invalid_ignores.append((line, 'Invalid syntax'))
continue
if not ignore_entry:
continue
path, code = ignore_entry.split(' ', 1)
if ' ' not in ignore_entry:
invalid_ignores.append((line, 'Invalid syntax'))
continue
ignore[path][code] = line
path, code = ignore_entry.split(' ', 1)
ignore[path][code] = line
if args.base_branch:
cmd.extend([
@@ -139,9 +140,14 @@ class ValidateModulesTest(SanitySingleVersion):
confidence=calculate_confidence(VALIDATE_IGNORE_PATH, line, args.metadata) if args.metadata.changes else None,
))
line = 0
for path in skip_paths:
line += 1
if not path:
continue
if not os.path.exists(path):
# Keep files out of the list which no longer exist in the repo.
errors.append(SanityMessage(


@@ -58,7 +58,8 @@ class YamllintTest(SanitySingleVersion):
return SanitySuccess(self.name)
def test_paths(self, args, paths):
@staticmethod
def test_paths(args, paths):
"""
:type args: SanityConfig
:type paths: list[str]


@@ -12,6 +12,7 @@ import sys
from lib.util import (
ApplicationError,
read_lines_without_comments,
)
MODULE_EXTENSIONS = '.py', '.ps1'
@@ -31,7 +32,7 @@ def find_target_completion(target_func, prefix):
matches = walk_completion_targets(targets, prefix, short)
return matches
except Exception as ex: # pylint: disable=locally-disabled, broad-except
return [str(ex)]
return [u'%s' % ex]
def walk_completion_targets(targets, prefix, short=False):
@@ -509,8 +510,8 @@ class IntegrationTarget(CompletionTarget):
# static_aliases
try:
with open(os.path.join(path, 'aliases'), 'r') as aliases_file:
static_aliases = tuple(aliases_file.read().splitlines())
aliases_path = os.path.join(path, 'aliases')
static_aliases = tuple(read_lines_without_comments(aliases_path, remove_blank_lines=True))
except IOError as ex:
if ex.errno != errno.ENOENT:
raise


@@ -30,7 +30,7 @@ class WrappedThread(threading.Thread):
Run action and capture results or exception.
Do not override. Do not call directly. Executed by the start() method.
"""
# noinspection PyBroadException
# noinspection PyBroadException, PyPep8
try:
self._result.put((self.action(), None))
except: # pylint: disable=locally-disabled, bare-except


@@ -5,7 +5,6 @@ from __future__ import absolute_import, print_function
import atexit
import contextlib
import errno
import filecmp
import fcntl
import inspect
import json
@@ -32,6 +31,13 @@ except ImportError:
from abc import ABCMeta
ABC = ABCMeta('ABC', (), {})
try:
# noinspection PyCompatibility
from ConfigParser import SafeConfigParser as ConfigParser
except ImportError:
# noinspection PyCompatibility
from configparser import ConfigParser
DOCKER_COMPLETION = {}
coverage_path = '' # pylint: disable=locally-disabled, invalid-name
@ -42,8 +48,7 @@ def get_docker_completion():
:rtype: dict[str, str]
"""
if not DOCKER_COMPLETION:
with open('test/runner/completion/docker.txt', 'r') as completion_fd:
images = completion_fd.read().splitlines()
images = read_lines_without_comments('test/runner/completion/docker.txt', remove_blank_lines=True)
DOCKER_COMPLETION.update(dict(kvp for kvp in [parse_docker_completion(i) for i in images] if kvp))
@ -81,6 +86,23 @@ def remove_file(path):
os.remove(path)
def read_lines_without_comments(path, remove_blank_lines=False):
"""
:type path: str
:type remove_blank_lines: bool
:rtype: list[str]
"""
with open(path, 'r') as path_fd:
lines = path_fd.read().splitlines()
lines = [re.sub(r' *#.*$', '', line) for line in lines]
if remove_blank_lines:
lines = [line for line in lines if line]
return lines
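The new comment-stripping helper can be exercised in isolation. A minimal sketch follows; the function body is copied from the hunk above, while the sample flat-file contents are made up for illustration:

```python
import re
import tempfile

def read_lines_without_comments(path, remove_blank_lines=False):
    """Read lines from path, stripping trailing '#' comments (and the spaces before them)."""
    with open(path, 'r') as path_fd:
        lines = path_fd.read().splitlines()

    lines = [re.sub(r' *#.*$', '', line) for line in lines]

    if remove_blank_lines:
        lines = [line for line in lines if line]

    return lines

# hypothetical flat file, like test/runner/completion/docker.txt
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('default  # preferred image\n\nubuntu1604\n')
    sample = f.name

print(read_lines_without_comments(sample))                           # ['default', '', 'ubuntu1604']
print(read_lines_without_comments(sample, remove_blank_lines=True))  # ['default', 'ubuntu1604']
```

A line that is only a comment becomes blank after stripping, which is why callers that build completion lists pass `remove_blank_lines=True`.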
def find_executable(executable, cwd=None, path=None, required=True):
"""
:type executable: str
@ -101,10 +123,10 @@ def find_executable(executable, cwd=None, path=None, required=True):
match = executable
else:
if path is None:
path = os.environ.get('PATH', os.defpath)
path = os.environ.get('PATH', os.path.defpath)
if path:
path_dirs = path.split(os.pathsep)
path_dirs = path.split(os.path.pathsep)
seen_dirs = set()
for path_dir in path_dirs:
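The PATH handling above can be sketched as a tiny standalone lookup. This is a simplified assumption-laden sketch, not the real helper: `find_executable` in the hunk also handles `cwd`, a `required` flag, and the `seen_dirs` deduplication, and the `which` name here is hypothetical.

```python
import os

def which(executable, path=None):
    """Minimal sketch of the PATH search performed by find_executable."""
    if path is None:
        path = os.environ.get('PATH', os.defpath)

    for path_dir in path.split(os.pathsep):
        candidate = os.path.join(path_dir, executable)
        if os.access(candidate, os.X_OK):
            return candidate

    return None

print(which('sh'))  # e.g. '/bin/sh' on most Unix systems
```

The stable-2.5 change in the hunk only swaps `os.defpath`/`os.pathsep` for the equivalent `os.path.defpath`/`os.path.pathsep` attributes; both spellings resolve to the same values.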
@ -181,7 +203,7 @@ def intercept_command(args, cmd, target_name, capture=False, env=None, data=None
coverage_file = os.path.abspath(os.path.join(inject_path, '..', 'output', '%s=%s=%s=%s=coverage' % (
args.command, target_name, args.coverage_label or 'local-%s' % version, 'python-%s' % version)))
env['PATH'] = inject_path + os.pathsep + env['PATH']
env['PATH'] = inject_path + os.path.pathsep + env['PATH']
env['ANSIBLE_TEST_PYTHON_VERSION'] = version
env['ANSIBLE_TEST_PYTHON_INTERPRETER'] = interpreter
@ -368,7 +390,7 @@ def common_environment():
"""Common environment used for executing all programs."""
env = dict(
LC_ALL='en_US.UTF-8',
PATH=os.environ.get('PATH', os.defpath),
PATH=os.environ.get('PATH', os.path.defpath),
)
required = (
@ -721,18 +743,27 @@ def docker_qualify_image(name):
return config.get('name', name)
def parse_to_dict(pattern, value):
def parse_to_list_of_dict(pattern, value):
"""
:type pattern: str
:type value: str
:return: dict[str, str]
:return: list[dict[str, str]]
"""
match = re.search(pattern, value)
matched = []
unmatched = []
if match is None:
raise Exception('Pattern "%s" did not match value: %s' % (pattern, value))
for line in value.splitlines():
match = re.search(pattern, line)
return match.groupdict()
if match:
matched.append(match.groupdict())
else:
unmatched.append(line)
if unmatched:
raise Exception('Pattern "%s" did not match values:\n%s' % (pattern, '\n'.join(unmatched)))
return matched
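The `parse_to_dict` to `parse_to_list_of_dict` rewrite above changes the contract from "match the whole value once" to "match every line, and report all unmatched lines at once". A standalone sketch (function body copied from the hunk, sample input hypothetical):

```python
import re

def parse_to_list_of_dict(pattern, value):
    """Apply pattern to each line of value; return one groupdict per line."""
    matched = []
    unmatched = []

    for line in value.splitlines():
        match = re.search(pattern, line)

        if match:
            matched.append(match.groupdict())
        else:
            unmatched.append(line)

    if unmatched:
        raise Exception('Pattern "%s" did not match values:\n%s' % (pattern, '\n'.join(unmatched)))

    return matched

# hypothetical multi-line command output
output = 'name=foo id=1\nname=bar id=2'
print(parse_to_list_of_dict(r'^name=(?P<name>\w+) id=(?P<id>\d+)$', output))
# [{'name': 'foo', 'id': '1'}, {'name': 'bar', 'id': '2'}]
```

Collecting all unmatched lines before raising makes the error message show every offending line instead of failing on the first one.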
def get_available_port():


@ -17,6 +17,7 @@ if [ ! -f /usr/bin/virtualenv ] && [ -f /usr/bin/virtualenv-3 ]; then
fi
# Improve prompts on remote host for interactive use.
# shellcheck disable=SC1117
cat << EOF > ~/.bashrc
alias ls='ls --color=auto'
export PS1='\[\e]0;\u@\h: \w\a\]\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '


@ -76,6 +76,7 @@ if [ ! -f "${HOME}/.ssh/id_rsa.pub" ]; then
fi
# Improve prompts on remote host for interactive use.
# shellcheck disable=SC1117
cat << EOF > ~/.bashrc
alias ls='ls -G'
export PS1='\[\e]0;\u@\h: \w\a\]\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '

test/runner/versions.py Executable file

@ -0,0 +1,15 @@
#!/usr/bin/env python
"""Show python and pip versions."""
import os
import sys
try:
import pip
except ImportError:
pip = None
print(sys.version)
if pip:
print('pip %s from %s' % (pip.__version__, os.path.dirname(pip.__file__)))


@ -3,18 +3,34 @@
import os
import re
import subprocess
import sys
def main():
base_dir = os.getcwd() + os.sep
base_dir = os.getcwd() + os.path.sep
docs_dir = os.path.abspath('docs/docsite')
cmd = ['make', 'singlehtmldocs']
sphinx = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=docs_dir)
stdout, stderr = sphinx.communicate()
stdout = stdout.decode('utf-8')
stderr = stderr.decode('utf-8')
if sphinx.returncode != 0:
raise subprocess.CalledProcessError(sphinx.returncode, cmd, output=stdout, stderr=stderr)
sys.stderr.write("Command '%s' failed with status code: %d\n" % (' '.join(cmd), sphinx.returncode))
if stdout.strip():
stdout = simplify_stdout(stdout)
sys.stderr.write("--> Standard Output\n")
sys.stderr.write("%s\n" % stdout.strip())
if stderr.strip():
sys.stderr.write("--> Standard Error\n")
sys.stderr.write("%s\n" % stderr.strip())
sys.exit(1)
with open('docs/docsite/rst_warnings', 'r') as warnings_fd:
output = warnings_fd.read().strip()
@ -97,5 +113,40 @@ def main():
print('test/sanity/code-smell/docs-build.py:0:0: remove `%s` from the `ignore_codes` list as it is no longer needed' % code)
def simplify_stdout(value):
"""Simplify output by omitting earlier 'rendering: ...' messages."""
lines = value.strip().splitlines()
rendering = []
keep = []
def truncate_rendering():
"""Keep last rendering line (if any) with a message about omitted lines as needed."""
if not rendering:
return
notice = rendering[-1]
if len(rendering) > 1:
notice += ' (%d previous rendering line(s) omitted)' % (len(rendering) - 1)
keep.append(notice)
rendering[:] = []
for line in lines:
if line.startswith('rendering: '):
rendering.append(line)
continue
truncate_rendering()
keep.append(line)
truncate_rendering()
result = '\n'.join(keep)
return result
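The rendering-collapse logic can be checked standalone. The function body below is copied from the hunk above; the sample Sphinx output is made up:

```python
def simplify_stdout(value):
    """Simplify output by omitting earlier 'rendering: ...' messages."""
    lines = value.strip().splitlines()

    rendering = []
    keep = []

    def truncate_rendering():
        """Keep last rendering line (if any) with a message about omitted lines as needed."""
        if not rendering:
            return

        notice = rendering[-1]

        if len(rendering) > 1:
            notice += ' (%d previous rendering line(s) omitted)' % (len(rendering) - 1)

        keep.append(notice)
        rendering[:] = []

    for line in lines:
        if line.startswith('rendering: '):
            rendering.append(line)
            continue

        truncate_rendering()
        keep.append(line)

    truncate_rendering()

    result = '\n'.join(keep)
    return result

# hypothetical sphinx stdout
sample = 'rendering: a.rst\nrendering: b.rst\nrendering: c.rst\nwarning: something'
print(simplify_stdout(sample))
# rendering: c.rst (2 previous rendering line(s) omitted)
# warning: something
```

Only consecutive `rendering:` runs are collapsed, so any warnings interleaved between them keep their original position in the output.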
if __name__ == '__main__':
main()


@ -1,7 +1,6 @@
#!/usr/bin/env python
import os
import re
import sys


@ -0,0 +1,4 @@
{
"always": true,
"output": "path-message"
}


@ -0,0 +1,35 @@
#!/usr/bin/env python
import os
def main():
skip_dirs = set([
'.tox',
])
for root, dirs, files in os.walk('.'):
for skip_dir in skip_dirs:
if skip_dir in dirs:
dirs.remove(skip_dir)
if root == '.':
root = ''
elif root.startswith('./'):
root = root[2:]
for file in files:
path = os.path.join(root, file)
if not os.path.exists(path):
print('%s: broken symlinks are not allowed' % path)
for directory in dirs:
path = os.path.join(root, directory)
if os.path.islink(path):
print('%s: symlinks to directories are not allowed' % path)
if __name__ == '__main__':
main()


@ -1 +0,0 @@
"""Empty placeholder for import sanity test."""


@ -1 +0,0 @@
../../../../../lib/ansible/module_utils


@ -1,7 +1,6 @@
[MESSAGES CONTROL]
disable=
no-self-use,
too-few-public-methods,
too-many-arguments,
too-many-branches,


@ -1,3 +1,4 @@
lib/ansible/module_utils/k8s/inventory.py catching-non-exception
lib/ansible/module_utils/network/iosxr/iosxr.py ansible-format-automatic-specification
lib/ansible/modules/cloud/amazon/aws_api_gateway.py ansible-format-automatic-specification
lib/ansible/modules/cloud/amazon/aws_kms.py ansible-format-automatic-specification
@ -68,49 +69,3 @@ lib/ansible/modules/storage/purestorage/purefa_pg.py ansible-format-automatic-sp
lib/ansible/modules/system/firewalld.py ansible-format-automatic-specification
lib/ansible/plugins/cliconf/junos.py ansible-no-format-on-bytestring
lib/ansible/plugins/cliconf/nxos.py ansible-format-automatic-specification
test/runner/injector/importer.py missing-docstring 3.7
test/runner/injector/injector.py missing-docstring 3.7
test/runner/lib/ansible_util.py missing-docstring 3.7
test/runner/lib/changes.py missing-docstring 3.7
test/runner/lib/classification.py missing-docstring 3.7
test/runner/lib/cloud/__init__.py missing-docstring 3.7
test/runner/lib/cloud/aws.py missing-docstring 3.7
test/runner/lib/cloud/azure.py missing-docstring 3.7
test/runner/lib/cloud/cs.py missing-docstring 3.7
test/runner/lib/cloud/vcenter.py missing-docstring 3.7
test/runner/lib/config.py missing-docstring 3.7
test/runner/lib/core_ci.py missing-docstring 3.7
test/runner/lib/cover.py missing-docstring 3.7
test/runner/lib/delegation.py missing-docstring 3.7
test/runner/lib/delegation.py redefined-variable-type 2.7
test/runner/lib/diff.py missing-docstring 3.7
test/runner/lib/docker_util.py missing-docstring 3.7
test/runner/lib/executor.py missing-docstring 3.7
test/runner/lib/git.py missing-docstring 3.7
test/runner/lib/http.py missing-docstring 3.7
test/runner/lib/import_analysis.py missing-docstring 3.7
test/runner/lib/manage_ci.py missing-docstring 3.7
test/runner/lib/metadata.py missing-docstring 3.7
test/runner/lib/powershell_import_analysis.py missing-docstring 3.7
test/runner/lib/pytar.py missing-docstring 3.7
test/runner/lib/sanity/__init__.py missing-docstring 3.7
test/runner/lib/sanity/ansible_doc.py missing-docstring 3.7
test/runner/lib/sanity/compile.py missing-docstring 3.7
test/runner/lib/sanity/import.py missing-docstring 3.7
test/runner/lib/sanity/pep8.py missing-docstring 3.7
test/runner/lib/sanity/pslint.py missing-docstring 3.7
test/runner/lib/sanity/pylint.py missing-docstring 3.7
test/runner/lib/sanity/rstcheck.py missing-docstring 3.7
test/runner/lib/sanity/sanity_docs.py missing-docstring 3.7
test/runner/lib/sanity/shellcheck.py missing-docstring 3.7
test/runner/lib/sanity/validate_modules.py missing-docstring 3.7
test/runner/lib/sanity/yamllint.py missing-docstring 3.7
test/runner/lib/target.py missing-docstring 3.7
test/runner/lib/test.py missing-docstring 3.7
test/runner/lib/thread.py missing-docstring 3.7
test/runner/lib/util.py missing-docstring 3.7
test/runner/retry.py missing-docstring 3.7
test/runner/shippable.py missing-docstring 3.7
test/runner/test.py missing-docstring 3.7
test/runner/units/test_diff.py missing-docstring 3.7
test/sanity/import/importer.py missing-docstring 3.7


@ -925,10 +925,6 @@ lib/ansible/modules/clustering/k8s/_kubernetes.py E322
lib/ansible/modules/clustering/k8s/_kubernetes.py E323
lib/ansible/modules/clustering/k8s/_kubernetes.py E324
lib/ansible/modules/clustering/k8s/_kubernetes.py E325
lib/ansible/modules/clustering/k8s/k8s_raw.py E321
lib/ansible/modules/clustering/k8s/k8s_scale.py E321
lib/ansible/modules/clustering/openshift/openshift_raw.py E321
lib/ansible/modules/clustering/openshift/openshift_scale.py E321
lib/ansible/modules/clustering/znode.py E324
lib/ansible/modules/clustering/znode.py E325
lib/ansible/modules/clustering/znode.py E326


@ -39,6 +39,9 @@ class TestGalaxy(unittest.TestCase):
'''creating prerequisites for installing a role; setUpClass occurs ONCE whereas setUp occurs with every method tested.'''
# class data for easy viewing: role_dir, role_tar, role_name, role_req, role_path
cls.temp_dir = tempfile.mkdtemp(prefix='ansible-test_galaxy-')
os.chdir(cls.temp_dir)
if os.path.exists("./delete_me"):
shutil.rmtree("./delete_me")
@ -89,6 +92,9 @@ class TestGalaxy(unittest.TestCase):
if os.path.isdir(cls.role_path):
shutil.rmtree(cls.role_path)
os.chdir('/')
shutil.rmtree(cls.temp_dir)
def setUp(self):
self.default_args = ['ansible-galaxy']


@ -24,7 +24,15 @@ def pytest_configure():
coverage_instances.append(obj)
if not coverage_instances:
return
coverage_config = os.environ.get('_ANSIBLE_COVERAGE_CONFIG')
if not coverage_config:
return
cov = coverage.Coverage(config_file=coverage_config)
coverage_instances.append(cov)
else:
cov = None
os_exit = os._exit
@ -36,3 +44,6 @@ def pytest_configure():
os_exit(*args, **kwargs)
os._exit = coverage_exit
if cov:
cov.start()


@ -55,21 +55,23 @@ class TestAnsibleModuleLogSmokeTest:
class TestAnsibleModuleLogSyslog:
"""Test the AnsibleModule Log Method"""
PY2_OUTPUT_DATA = {
u'Text string': b'Text string',
u'Toshio くらとみ non-ascii test': u'Toshio くらとみ non-ascii test'.encode('utf-8'),
b'Byte string': b'Byte string',
u'Toshio くらとみ non-ascii test'.encode('utf-8'): u'Toshio くらとみ non-ascii test'.encode('utf-8'),
b'non-utf8 :\xff: test': b'non-utf8 :\xff: test'.decode('utf-8', 'replace').encode('utf-8'),
}
PY2_OUTPUT_DATA = [
(u'Text string', b'Text string'),
(u'Toshio くらとみ non-ascii test', u'Toshio くらとみ non-ascii test'.encode('utf-8')),
(b'Byte string', b'Byte string'),
(u'Toshio くらとみ non-ascii test'.encode('utf-8'), u'Toshio くらとみ non-ascii test'.encode('utf-8')),
(b'non-utf8 :\xff: test', b'non-utf8 :\xff: test'.decode('utf-8', 'replace').encode('utf-8')),
]
PY3_OUTPUT_DATA = {
u'Text string': u'Text string',
u'Toshio くらとみ non-ascii test': u'Toshio くらとみ non-ascii test',
b'Byte string': u'Byte string',
u'Toshio くらとみ non-ascii test'.encode('utf-8'): u'Toshio くらとみ non-ascii test',
b'non-utf8 :\xff: test': b'non-utf8 :\xff: test'.decode('utf-8', 'replace')
}
PY3_OUTPUT_DATA = [
(u'Text string', u'Text string'),
(u'Toshio くらとみ non-ascii test', u'Toshio くらとみ non-ascii test'),
(b'Byte string', u'Byte string'),
(u'Toshio くらとみ non-ascii test'.encode('utf-8'), u'Toshio くらとみ non-ascii test'),
(b'non-utf8 :\xff: test', b'non-utf8 :\xff: test'.decode('utf-8', 'replace')),
]
OUTPUT_DATA = PY3_OUTPUT_DATA if PY3 else PY2_OUTPUT_DATA
@pytest.mark.parametrize('no_log, stdin', (product((True, False), [{}])), indirect=['stdin'])
def test_no_log(self, am, mocker, no_log):
@ -85,8 +87,7 @@ class TestAnsibleModuleLogSyslog:
# pylint bug: https://github.com/PyCQA/pylint/issues/511
@pytest.mark.parametrize('msg, param, stdin',
((m, p, {}) for m, p in
(PY3_OUTPUT_DATA.items() if PY3 else PY2_OUTPUT_DATA.items())), # pylint: disable=undefined-variable
((m, p, {}) for m, p in OUTPUT_DATA), # pylint: disable=undefined-variable
indirect=['stdin'])
def test_output_matches(self, am, mocker, msg, param):
"""Check that log messages are sent correctly"""
@ -101,13 +102,13 @@ class TestAnsibleModuleLogSyslog:
class TestAnsibleModuleLogJournal:
"""Test the AnsibleModule Log Method"""
OUTPUT_DATA = {
u'Text string': u'Text string',
u'Toshio くらとみ non-ascii test': u'Toshio くらとみ non-ascii test',
b'Byte string': u'Byte string',
u'Toshio くらとみ non-ascii test'.encode('utf-8'): u'Toshio くらとみ non-ascii test',
b'non-utf8 :\xff: test': b'non-utf8 :\xff: test'.decode('utf-8', 'replace')
}
OUTPUT_DATA = [
(u'Text string', u'Text string'),
(u'Toshio くらとみ non-ascii test', u'Toshio くらとみ non-ascii test'),
(b'Byte string', u'Byte string'),
(u'Toshio くらとみ non-ascii test'.encode('utf-8'), u'Toshio くらとみ non-ascii test'),
(b'non-utf8 :\xff: test', b'non-utf8 :\xff: test'.decode('utf-8', 'replace')),
]
@pytest.mark.parametrize('no_log, stdin', (product((True, False), [{}])), indirect=['stdin'])
def test_no_log(self, am, mocker, no_log):
@ -127,7 +128,7 @@ class TestAnsibleModuleLogJournal:
# pylint bug: https://github.com/PyCQA/pylint/issues/511
@pytest.mark.parametrize('msg, param, stdin',
((m, p, {}) for m, p in OUTPUT_DATA.items()), # pylint: disable=undefined-variable
((m, p, {}) for m, p in OUTPUT_DATA), # pylint: disable=undefined-variable
indirect=['stdin'])
def test_output_matches(self, am, mocker, msg, param):
journal_send = mocker.patch('systemd.journal.send')


@ -260,8 +260,6 @@ class AciRest(unittest.TestCase):
error_text = to_native(u"Unable to parse output as XML, see 'raw' output. None (line 0)", errors='surrogate_or_strict')
elif PY2:
error_text = "Unable to parse output as XML, see 'raw' output. Document is empty, line 1, column 1 (line 1)"
elif sys.version_info >= (3, 7):
error_text = to_native(u"Unable to parse output as XML, see 'raw' output. None (line 0)", errors='surrogate_or_strict')
else:
error_text = "Unable to parse output as XML, see 'raw' output. Document is empty, line 1, column 1 (<string>, line 1)"


@ -73,8 +73,8 @@ HOW_MANY_DOTS = (
'PostgreSQL does not support column with more than 4 dots'),
)
VALID_QUOTES = ((test, VALID[test]) for test in VALID)
INVALID_QUOTES = ((test[0], test[1], INVALID[test]) for test in INVALID)
VALID_QUOTES = ((test, VALID[test]) for test in sorted(VALID))
INVALID_QUOTES = ((test[0], test[1], INVALID[test]) for test in sorted(INVALID))
@pytest.mark.parametrize("identifier, quoted_identifier", VALID_QUOTES)


@ -915,7 +915,7 @@ PRIVACY_POLICY_URL="http://www.intel.com/privacy"
]
@pytest.mark.parametrize("stdin, testcase", product([{}], TESTSETS), ids=lambda x: x['name'], indirect=['stdin'])
@pytest.mark.parametrize("stdin, testcase", product([{}], TESTSETS), ids=lambda x: x.get('name'), indirect=['stdin'])
def test_distribution_version(am, mocker, testcase):
"""tests the distribution parsing code of the Facts class


@ -85,19 +85,19 @@ URLS = {
}
@pytest.mark.parametrize('url, is_ssh_url', ((k, v['is_ssh_url']) for k, v in URLS.items()))
@pytest.mark.parametrize('url, is_ssh_url', ((k, URLS[k]['is_ssh_url']) for k in sorted(URLS)))
def test_is_ssh_url(url, is_ssh_url):
assert known_hosts.is_ssh_url(url) == is_ssh_url
@pytest.mark.parametrize('url, fqdn, port', ((k, v['get_fqdn'], v['port']) for k, v in URLS.items()))
@pytest.mark.parametrize('url, fqdn, port', ((k, URLS[k]['get_fqdn'], URLS[k]['port']) for k in sorted(URLS)))
def test_get_fqdn_and_port(url, fqdn, port):
assert known_hosts.get_fqdn_and_port(url) == (fqdn, port)
@pytest.mark.parametrize('fqdn, port, add_host_key_cmd, stdin',
((v['get_fqdn'], v['port'], v['add_host_key_cmd'], {})
for v in URLS.values() if v['is_ssh_url']),
((URLS[k]['get_fqdn'], URLS[k]['port'], URLS[k]['add_host_key_cmd'], {})
for k in sorted(URLS) if URLS[k]['is_ssh_url']),
indirect=['stdin'])
def test_add_host_key(am, mocker, fqdn, port, add_host_key_cmd):
get_bin_path = mocker.MagicMock()


@ -118,7 +118,7 @@ def test_get_nonexistent_stack(placeboify):
assert cfn_module.get_stack_facts(connection, 'ansible-test-nonexist') is None
def test_missing_template_body(placeboify):
def test_missing_template_body():
m = FakeModule()
with pytest.raises(Exception, message='Expected module to fail with no template') as exc_info:
cfn_module.create_stack(


@ -180,33 +180,33 @@ def test_delete_pipeline(placeboify, maybe_sleep):
assert changed is True
def test_build_unique_id_different(placeboify, maybe_sleep):
def test_build_unique_id_different():
m = FakeModule(**{'name': 'ansible-unittest-1', 'description': 'test-unique-id'})
m2 = FakeModule(**{'name': 'ansible-unittest-1', 'description': 'test-unique-id-different'})
assert data_pipeline.build_unique_id(m) != data_pipeline.build_unique_id(m2)
def test_build_unique_id_same(placeboify, maybe_sleep):
def test_build_unique_id_same():
m = FakeModule(**{'name': 'ansible-unittest-1', 'description': 'test-unique-id', 'tags': {'ansible': 'test'}})
m2 = FakeModule(**{'name': 'ansible-unittest-1', 'description': 'test-unique-id', 'tags': {'ansible': 'test'}})
assert data_pipeline.build_unique_id(m) == data_pipeline.build_unique_id(m2)
def test_build_unique_id_obj(placeboify, maybe_sleep):
def test_build_unique_id_obj():
# check that the object can be different and the unique id should be the same; should be able to modify objects
m = FakeModule(**{'name': 'ansible-unittest-1', 'objects': [{'first': 'object'}]})
m2 = FakeModule(**{'name': 'ansible-unittest-1', 'objects': [{'second': 'object'}]})
assert data_pipeline.build_unique_id(m) == data_pipeline.build_unique_id(m2)
def test_format_tags(placeboify, maybe_sleep):
def test_format_tags():
unformatted_tags = {'key1': 'val1', 'key2': 'val2', 'key3': 'val3'}
formatted_tags = data_pipeline.format_tags(unformatted_tags)
for tag_set in formatted_tags:
assert unformatted_tags[tag_set['key']] == tag_set['value']
def test_format_empty_tags(placeboify, maybe_sleep):
def test_format_empty_tags():
unformatted_tags = {}
formatted_tags = data_pipeline.format_tags(unformatted_tags)
assert formatted_tags == []


@ -23,6 +23,8 @@ import io
import inspect
from shutil import copyfile, move
import difflib
import tempfile
import shutil
class AnsibleFailJson(Exception):
@ -169,22 +171,26 @@ class TestInterfacesFileModule(unittest.TestCase):
}
for testname, options_list in testcases.items():
for testfile in self.getTestFiles():
path = os.path.join(fixture_path, testfile)
lines, ifaces = interfaces_file.read_interfaces_file(module, path)
backupp = module.backup_local(path)
options = options_list[0]
for state in ['present', 'absent']:
fail_json_iterations = []
options['state'] = state
try:
_, lines = interfaces_file.setInterfaceOption(module, lines, options['iface'], options['option'], options['value'], options['state'])
except AnsibleFailJson as e:
fail_json_iterations.append("fail_json message: %s\noptions:\n%s" %
(str(e), json.dumps(options, sort_keys=True, indent=4, separators=(',', ': '))))
interfaces_file.write_changes(module, [d['line'] for d in lines if 'line' in d], path)
with tempfile.NamedTemporaryFile() as temp_file:
src_path = os.path.join(fixture_path, testfile)
path = temp_file.name
shutil.copy(src_path, path)
lines, ifaces = interfaces_file.read_interfaces_file(module, path)
backupp = module.backup_local(path)
options = options_list[0]
for state in ['present', 'absent']:
fail_json_iterations = []
options['state'] = state
try:
_, lines = interfaces_file.setInterfaceOption(module, lines,
options['iface'], options['option'], options['value'], options['state'])
except AnsibleFailJson as e:
fail_json_iterations.append("fail_json message: %s\noptions:\n%s" %
(str(e), json.dumps(options, sort_keys=True, indent=4, separators=(',', ': '))))
interfaces_file.write_changes(module, [d['line'] for d in lines if 'line' in d], path)
self.compareStringWithFile("\n=====\n".join(fail_json_iterations), "%s_%s.exceptions.txt" % (testfile, testname))
self.compareStringWithFile("\n=====\n".join(fail_json_iterations), "%s_%s.exceptions.txt" % (testfile, testname))
self.compareInterfacesLinesToFile(lines, testfile, "%s_%s" % (testfile, testname))
self.compareInterfacesToFile(ifaces, testfile, "%s_%s.json" % (testfile, testname))
self.compareFileToBackup(path, backupp)
self.compareInterfacesLinesToFile(lines, testfile, "%s_%s" % (testfile, testname))
self.compareInterfacesToFile(ifaces, testfile, "%s_%s.json" % (testfile, testname))
self.compareFileToBackup(path, backupp)


@ -13,9 +13,11 @@ target="shippable/${cloud}/group${group}/"
stage="${S:-prod}"
changed_all_target="shippable/${cloud}/smoketest/"
if [ "${group}" == "1" ]; then
# only run smoketest tests for group1
changed_all_target="shippable/${cloud}/smoketest/"
changed_all_mode="include"
if ! ansible-test integration "${changed_all_target}" --list-targets > /dev/null 2>&1; then
# no smoketest tests are available for this cloud
@ -23,10 +25,10 @@ if [ "${group}" == "1" ]; then
fi
else
# smoketest tests already covered by group1
changed_all_target="none"
changed_all_mode="exclude"
fi
# shellcheck disable=SC2086
ansible-test integration --color -v --retry-on-error "${target}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
--remote-terminate always --remote-stage "${stage}" \
--docker --python "${python}" --changed-all-target "${changed_all_target}"
--docker --python "${python}" --changed-all-target "${changed_all_target}" --changed-all-mode "${changed_all_mode}"


@ -23,10 +23,10 @@ if [ -d /home/shippable/cache/ ]; then
ls -la /home/shippable/cache/
fi
which python
command -v python
python -V
which pip
command -v pip
pip --version
pip list --disable-pip-version-check


@ -19,6 +19,7 @@ python_versions=(
2.6
3.5
3.6
3.7
2.7
)
@ -26,7 +27,7 @@ python_versions=(
single_version=2012-R2
# shellcheck disable=SC2086
ansible-test windows-integration "${target}" --explain ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} 2>&1 \
ansible-test windows-integration --explain ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} 2>&1 \
| { grep ' windows-integration: .* (targeted)$' || true; } > /tmp/windows.txt
if [ -s /tmp/windows.txt ] || [ "${CHANGED:+$CHANGED}" == "" ]; then
@ -54,6 +55,7 @@ fi
for version in "${python_versions[@]}"; do
changed_all_target="all"
changed_all_mode="default"
if [ "${version}" == "2.7" ]; then
# smoketest tests for python 2.7
@ -61,12 +63,13 @@ for version in "${python_versions[@]}"; do
# with change detection enabled run tests for anything changed
# use the smoketest tests for any change that triggers all tests
ci="${target}"
changed_all_target="shippable/windows/smoketest/"
if [ "${target}" == "shippable/windows/group1/" ]; then
# only run smoketest tests for group1
changed_all_target="shippable/windows/smoketest/"
changed_all_mode="include"
else
# smoketest tests already covered by group1
changed_all_target="none"
changed_all_mode="exclude"
fi
else
# without change detection enabled run entire test group
@ -88,7 +91,7 @@ for version in "${python_versions[@]}"; do
# shellcheck disable=SC2086
ansible-test windows-integration --color -v --retry-on-error "${ci}" ${COVERAGE:+"$COVERAGE"} ${CHANGED:+"$CHANGED"} ${UNSTABLE:+"$UNSTABLE"} \
"${platforms[@]}" --changed-all-target "${changed_all_target}" \
"${platforms[@]}" --changed-all-target "${changed_all_target}" --changed-all-mode "${changed_all_mode}" \
--docker default --python "${version}" \
--remote-terminate "${terminate}" --remote-stage "${stage}" --remote-provider "${provider}"
done


@ -21,6 +21,7 @@ passenv =
[pytest]
xfail_strict = true
cache_dir = .pytest_cache
[flake8]
# These are things that the devs don't agree make the code more readable