Compare commits


371 commits

Author SHA1 Message Date
Brian Coca
3b5d7400de Merge pull request #15230 from someword/15228
Only set distribution_release on CoreOS nodes that autoupdate.  Fixes…
2016-03-31 17:46:52 -04:00
Derek Olsen
39b16f294c Only set distribution_release on CoreOS nodes that autoupdate. Fixes #15228 2016-03-31 14:39:55 -07:00
James Cammarata
7de237c5a1 New release v2.0.0.2-1 2016-01-14 17:17:42 -05:00
James Cammarata
6c85e2ed3c Fix typo in 0b86aa6 2016-01-14 16:51:31 -05:00
James Cammarata
92136d8d84 Hack to work around callback API change for v2_playbook_on_start 2016-01-14 16:51:31 -05:00
Toshio Kuratomi
42e66c3511 Add python-setuptools to the requirements for running ansible as
python-setuptools contains the egginfo needed to make pkg_resources
work.
2016-01-14 16:51:31 -05:00
James Cammarata
a50d1ea756 New release v2.0.0.1-1 2016-01-12 17:53:30 -05:00
Toshio Kuratomi
98a79c595d Update extras submodule ref. taiga_issue being left out was a packaging bug 2016-01-12 12:15:21 -08:00
sebastianneubauer
d78dbe9730 added galaxy data
not tested, but something like this seems to be missing in the Manifest.in
2016-01-12 12:26:54 -05:00
James Cammarata
607850e676 re-adding the dummy debian changelog entry for packaging 2016-01-12 12:21:29 -05:00
James Cammarata
2eb59bedcf New release v2.0.0.0-1 2016-01-12 08:43:33 -05:00
James Cammarata
14bf6b016c Split up comma-separated tags properly
Fixes #13795
2016-01-12 08:18:37 -05:00
Peter Sprygada
28fecc9ce1 bugfix in nxos shared module for including defaults 2016-01-11 22:49:01 -05:00
Peter Sprygada
2c647f18e5 bugfix in ios shared module for including defaults 2016-01-11 22:48:54 -05:00
Peter Sprygada
4e087bb14f bugfix in eos shared module for including defaults 2016-01-11 22:48:43 -05:00
nitzmahone
1f7fbf29f9 updated new windows module list in changelog 2016-01-11 16:24:30 -08:00
Peter Sprygada
0681c2fd8f deletes nxapi from shared modules
The nxapi module has been superseded by the nxos shared module and is no longer needed. This commit removes (deletes) nxapi from module_utils.  All custom modules that have used nxapi should be using nxos instead.
2016-01-10 15:13:18 -05:00
Peter Sprygada
4a6235e320 adds network config file parser to shared modules
This commit adds a new shared module that parses network device configuration
files.  It is used to build modules that work with the various supported
network device operating systems
2016-01-10 15:12:45 -05:00
Peter Sprygada
94f13c7271 adds shared module shell for creating cli based transports
This commit adds a new shared module shell that is used to build connections
to network devices that operate in a CLI environment.  This commit supersedes
the issh.py and cli.py commits and removes them from module_utils.
2016-01-10 15:12:29 -05:00
Peter Sprygada
93eb60161c initial add of openswitch shared module
This commit adds a new shared module openswitch for building modules that
work with OpenSwitch.  This shared module supports connectivity to
OpenSwitch devices over SSH, CLI or REST.  It also adds an openswitch
documentation fragment for use in modules
2016-01-10 15:12:15 -05:00
Peter Sprygada
2f1fc85002 adds shared module nxos for building cisco nxos modules
This commit refactors the nxapi into a new shared module nxos that supports
connectivity over both ssh (cli) and nxapi.  It supersedes the nxapi shared
module and removes it from module_utils.  This commit also adds a
documentation fragment supporting the nxos shared module
2016-01-10 15:12:05 -05:00
Peter Sprygada
91f35363a2 adds new iosxr shared module for developing modules that work with IOS XR devices
This commit adds a new shared module for working with Cisco IOS XR devices over
CLI (SSH).  It also provides a documentation fragment for the common arguments
provided by the iosxr module.
2016-01-10 15:11:55 -05:00
Peter Sprygada
f76633b347 updates the ios shared module with new shell
This update refactors the ios shared module to use the new shell shared
library instead of issh and cli.  It also adds the ios documentation
fragment to be used when building ios based modules.
2016-01-10 15:11:37 -05:00
Peter Sprygada
9c9bfb962e initial add of eos shared module
This adds a shared module for communicating with Arista EOS devices over
SSH (cli) or JSON-RPC (eapi).  This modules replaces the eapi.py module
previously added to module_utils.  This commit includes a documentation
fragment that describes the eos common arguments
2016-01-10 15:11:08 -05:00
Brian Coca
0e27fa4540 restructure vars_prompt and fix regression
pushed it to use the existing prompt from display and moved the vars prompt code there also for uniformity
changed vars_prompt to check extra vars vs the empty play.vars to restore 1.9 behaviour
simplified the code as it didn't need to check for syntax again (the tqm is made None prior based on that)
fixes #13770
2016-01-08 17:56:13 -05:00
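To illustrate the restored behaviour, a minimal vars_prompt sketch (the play and variable names are hypothetical); with the 1.9 behaviour back, supplying the variable via --extra-vars skips the prompt:

```
# hypothetical play: running with -e release_version=1.4 suppresses the prompt
- hosts: all
  vars_prompt:
    - name: release_version
      prompt: "Product release version"
      private: no
  tasks:
    - debug: msg="Deploying {{ release_version }}"
```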
Matt Martz
3a972c9170 Restore ability for a module to specify WANT_JSON 2016-01-08 12:49:08 -05:00
Brian Coca
91c7691a92 noted that regex_escape was added in 2.0
fixes #13759
2016-01-07 19:02:08 -05:00
James Cammarata
502ad88506 New release v2.0.0-0.9.rc4 2016-01-07 13:41:47 -05:00
Toshio Kuratomi
80f0380312 Fix typo 2016-01-07 06:01:44 -08:00
Toshio Kuratomi
a8f160d2fd More fixes for unicode handling in the connection plugins.
Tested that ssh, docker, local, lxc-libvirt, chroot all work with the
updated unicode integration test.
2016-01-07 06:01:34 -08:00
Brian Coca
6be48dd14c Revert "Show version without supplying a dummy action"
This reverts commit 21775d7866.
Parsing before action will fail if one of the action specific options is used
As per issue #13743
2016-01-07 08:29:43 -05:00
James Cammarata
fcd074d40f Merge branch 'ktosiek-fix-playbook-hanging' into stable-2.0 2016-01-06 14:01:35 -05:00
muffl0n
21775d7866 Show version without supplying a dummy action
fixes #12004
parsing x2 does not seem to break anything
2016-01-06 11:54:19 -05:00
Abhijit Menon-Sen
ab536c8aa8 Strip string terms before templating
The earlier code did call terms.strip(), but ignored the return value
instead of passing that in to templar.template(). Clearly an oversight.
2016-01-06 11:25:55 -05:00
James Cammarata
93f37f5969 Don't drop noops from task counting code in linear strategy 2016-01-06 09:58:20 -05:00
Tomasz Kontusz
ec3b7b7de8 linear strategy: don't look at tasks from the next block 2016-01-05 21:49:08 +01:00
Toshio Kuratomi
7a4914aa9b Fix exception catching to be importable on python3 2016-01-05 12:04:22 -08:00
Brian Coca
9ca5da82ff move hostvars.vars to vars
this fixes duplication under hostvars and exposes all vars in the vars dict
which makes dynamic reference possible on 'non hostvars'
2016-01-05 14:36:21 -05:00
Brian Coca
3d608ef9fa simplified diff handling in callback
no need for the copy or other complexity
2016-01-05 14:27:33 -05:00
Brian Coca
c14e099dd7 now handles 'non file diffs'
this allows modules to pass back a 'diff' dict and it will still show using the file interface
2016-01-05 14:27:33 -05:00
Toshio Kuratomi
7e318e8398 Update extras submodule ref 2016-01-05 07:52:44 -08:00
Toshio Kuratomi
add2e9cbd1 Fix problems with non-ascii values passed as part of the command to connection plugins
@drybjed discovered this with non-ascii environment variables and
command line arguments to script and raw module.
2016-01-05 07:46:24 -08:00
Abhijit Menon-Sen
9f93c9c84b Clean up debug logging around _low_level_execute_command
We were logging the command to be executed many times, which made debug
logs very hard to read. Now we do it only once.

Also makes the logged ssh command line cut-and-paste-able (the lack of
which has confused a number of people by now; the problem being that we
pass the command as a single argument to execve(), so it doesn't need an
extra level of quoting as it does when you try to run it by hand).
2016-01-05 07:46:13 -08:00
nitzmahone
b4e0b5503c move core submodule pointer 2016-01-04 16:10:29 -08:00
Michael Scherer
5536ddd118 Do not set 'changed' to True when using group_by
Since group_by is not changing in any way to the remote
system, there is no change. This also makes things more consistent
with the set_fact plugin.
2016-01-04 16:23:25 -05:00
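A hedged sketch of typical group_by usage (the key name is illustrative); since the module only rewrites in-memory inventory, reporting changed: false is the accurate behaviour:

```
- hosts: all
  tasks:
    # creates groups like os_Debian or os_RedHat without touching the remote system
    - group_by: key=os_{{ ansible_distribution }}
```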
Yannig Perré
021ed1aa8b Replace to_string by to_unicode.
Fix https://github.com/ansible/ansible/issues/13707
2016-01-01 08:18:18 -08:00
Brian Coca
a703f3a6d2 added newer vars to 'reset_vars'
these vars pass back info to the task about the connection
moved to their own block at start at file for readability and
added the newer standard vars
2015-12-28 12:50:04 -05:00
Brian Coca
f7ea8b32a9 minor fix to become docs 2015-12-28 10:25:35 -05:00
Stephen Medina
ba7d2db8ad clarify idempotence explanation
Small typo; wasn't sure what to replace it with.
2015-12-28 10:24:55 -05:00
Brian Coca
b50ed10a84 updated release cycle to 4 months instead of 2 2015-12-27 14:17:58 -05:00
Toshio Kuratomi
b4fae25a96 CHANGELOG entry for bigip* validate_certs change 2015-12-25 12:24:19 -08:00
Toshio Kuratomi
6142736946 Oops, core needs to stay on stable-2.0 2015-12-25 12:16:18 -08:00
Toshio Kuratomi
25fc4217df Update submodule refs 2015-12-25 12:15:49 -08:00
Toshio Kuratomi
d71e8fb870 bigip changes as requested by bcoca and abadger:
* Fix to error if validate_cert is True and python doesn't support it.
* Only globally disable certificate checking if really needed.  Use
  bigip verify parameter if available instead.
* Remove public disable certificate function to make it less likely
  people will attempt to reuse that
2015-12-25 12:15:24 -08:00
Rene Moser
369ed9feed cloudstack: test_cs_instance: more integration tests
cloudstack: extend test_cs_instance addressing recovering

cloudstack: test_cs_instance: add tests for using display_name as identifier.
2015-12-23 17:48:49 -08:00
Toshio Kuratomi
ebd3b35d02 Update submodule refs 2015-12-23 14:29:36 -08:00
Toshio Kuratomi
82df9041e7 Going to do this in the connection plugin
Revert "Transform the command we pass to subprocess into a byte string in _low_level-exec_command"

This reverts commit 6d76cb40c5.
2015-12-23 13:28:42 -08:00
Toshio Kuratomi
dd59fc176e Going to do this in the connection plugin
Revert "Fix make tests-py3 on devel. Fix for https://github.com/ansible/ansible/issues/13638."

This reverts commit 725e40c5e6.
2015-12-23 13:28:42 -08:00
Toshio Kuratomi
9af054addf Going to do this in the connection plugin
Revert "Convert to bytes later so that make_become_command can jsut operate on text type."

This reverts commit bfc082fb07.
2015-12-23 13:28:42 -08:00
Brian Coca
7f29cb9dc6 corrected role path search order
the unfraking was matching roles in current dir as it always returns a full path,
pushed to the bottom as match of last resort
fixes #13645
2015-12-23 15:15:58 -05:00
Brian Coca
2786908bac fixed tests to follow new invocation structure
also added maxdiff setting to see issues clearly when they happen
2015-12-23 11:45:50 -05:00
Brian Coca
43331d8c31 better module error handling
* now module errors clearly state msg=MODULE FAILURE
* module's stdout and stderr go into module_stdout and module_stderr keys
which only appear during parsing failure
* invocation module_args are deleted from results provided by action
plugin as errors can keep us from overwriting and then disclosing info that
was meant to be kept hidden due to no_log
* fixed invocation module_args set by basic.py as it was creating different
keys than the invocation in the action plugin base.
* results now merge
2015-12-23 10:59:01 -05:00
Brian Coca
579a2ff739 fix no_log disclosure when using aliases 2015-12-22 17:17:06 -05:00
Toshio Kuratomi
bfc082fb07 Convert to bytes later so that make_become_command can just operate on text type.
Conflicts:
	lib/ansible/plugins/action/__init__.py
2015-12-22 08:33:36 -08:00
Yannig Perré
725e40c5e6 Fix make tests-py3 on devel. Fix for https://github.com/ansible/ansible/issues/13638. 2015-12-22 08:33:36 -08:00
Monty Taylor
8216a659fa Also convert ints to bool for type=bool 2015-12-22 10:25:25 -05:00
James Cammarata
a2120a3d63 Version bump for 2.0.0-0.8.rc3 2015-12-21 17:59:27 -05:00
Toshio Kuratomi
6d76cb40c5 Transform the command we pass to subprocess into a byte string in _low_level-exec_command 2015-12-21 13:55:06 -08:00
James Cammarata
b0e7ea78af Actually disable parallel makes for integration runner 2015-12-21 16:12:27 -05:00
Toshio Kuratomi
6a0d2116b8 Delete new_inventory from stable-2.0 as it isn't used there.
leaving in devel for now as some people are planning on using it as
a starting place to update inventory in 2.1
2015-12-21 13:11:16 -08:00
James Cammarata
37908735d4 Dropping instance size back down since we're not doing parallel builds 2015-12-21 15:55:16 -05:00
James Cammarata
c0248873da Integration test runner tweaks 2015-12-21 15:53:19 -05:00
James Cammarata
75695f5c70 Kick up the integration runner test image size 2015-12-21 15:53:19 -05:00
James Cammarata
0a6bc57fa5 Parallelize make command for integration test runner
Also adds a new var, used by the prepare_tests role, to prevent it from
deleting the temp test directory at the start of each play to avoid any
potential race conditions
2015-12-21 14:09:02 -05:00
Yannig Perré
1d18964daa Merge role params into variables separately from other variables
Fixes #13617
2015-12-21 13:55:17 -05:00
Branko Majic
8eb3963cd9 Adding documentation for the 'dig' lookup (#13126). 2015-12-21 13:49:49 -05:00
Brian Coca
c605cd37f6 allow for non standard hostnames
* Changed parse_addresses to throw exceptions instead of passing None
* Switched callers to trap and pass through the original values.
* Added very verbose notice
* Look at deprecating this and possibly validate at plugin instead
fixes #13608
2015-12-21 13:42:47 -05:00
Brian Coca
e1698fb4bf role search path clarified 2015-12-21 13:16:48 -05:00
James Cammarata
81f09f3fbd Disable docker test for Fedora, due to broken packaging 2015-12-21 10:58:49 -05:00
James Cammarata
b77d834239 Uncomment docker test for stable-2.0 2015-12-21 10:58:35 -05:00
James Cammarata
7607e55a5a Save output of integration test results to files we can archive 2015-12-21 10:58:02 -05:00
James Cammarata
d512717cd2 Fix logic in PlayIterator when inserting tasks during rescue/always
Because the fail_state is potentially non-zero in these block sections,
the prior logic led to included tasks not being inserted at all.

Related issue: #13605
2015-12-21 10:57:24 -05:00
Matt Clay
ce17557b84 Fixed import typo for memcache module in tests.
The typo caused the test for the memcached cache plugin to be skipped
even when the necessary memcache python module was installed.
2015-12-21 10:27:36 -05:00
Toshio Kuratomi
80e109e1ad And change the task a little more since different shlex versions are handling the quotes differently 2015-12-20 11:52:50 -08:00
Toshio Kuratomi
fdc562e3c3 Fix test playbook syntax 2015-12-20 11:47:17 -08:00
Toshio Kuratomi
1dcfd7ba02 Since the velox test server seems to be using iptables to drop requests from aws, test via a different website instead 2015-12-20 11:38:54 -08:00
James Cammarata
9cfa2d7e28 Fixing bugs in conditional testing with until and some integration runner tweaks 2015-12-20 12:45:33 -05:00
Toshio Kuratomi
5bd01c09d6 Troubleshooting has reduced us to this 2015-12-20 12:45:28 -05:00
Toshio Kuratomi
61bd0e1310 Fix the fedora host detection 2015-12-20 12:45:09 -05:00
Toshio Kuratomi
bb1047c483 What is going on here 2015-12-20 12:45:03 -05:00
James Cammarata
c2a29a01a2 Removing update all for test deps, it didn't fix the problem 2015-12-20 12:44:20 -05:00
James Cammarata
6a1ebaa1bc Fix typo in integration test runner role 2015-12-20 12:44:10 -05:00
Toshio Kuratomi
1d1a04008e Update mysql setup to handle installing mysql with dnf too. 2015-12-20 12:43:47 -05:00
James Cammarata
1cde02058f Make integration tests run in parallel with async 2015-12-20 12:41:53 -05:00
Toshio Kuratomi
48675550bf Try updating the centos7 image to a newer version (trying to resolve issue being unable to connect to some webservers) 2015-12-20 09:13:03 -08:00
Toshio Kuratomi
3955ea5a8a Fixes for tests that assumed yum as package manager for systems that
have dnf
2015-12-20 08:11:05 -08:00
Toshio Kuratomi
454c8ff5b8 Switch from yum to package when installing sudo so that dnf is handled as well 2015-12-20 08:10:58 -08:00
Toshio Kuratomi
ba0b4afb6b Revert "removed invocation info as it is not no_log aware"
This reverts commit 6127a8585e.
2015-12-19 15:51:29 -08:00
Toshio Kuratomi
76507bd567 Make return invocation information so that our sanitized copy will take precedence over what the executor knows. 2015-12-19 15:51:22 -08:00
Toshio Kuratomi
d2bf615780 Fix unittests for return of invocation from fail_json and exit_json 2015-12-19 15:50:43 -08:00
Brian Coca
3840fb267c corrected service detection in docker versions
now if PID 1 is bash it falls back to tool detection
2015-12-19 16:16:28 -05:00
Toshio Kuratomi
fbccfc8b61 Add package module to squash list 2015-12-19 13:01:48 -08:00
Toshio Kuratomi
2d179b376b Remove args from get_name() as we can't tell if any of the args are no_log 2015-12-19 11:51:51 -08:00
Toshio Kuratomi
a0add9dda5 Comment to explain why we strip _ansible_notify specially 2015-12-19 11:32:31 -08:00
Toshio Kuratomi
de9517dcc8 Also need redhat-rpm-config to compile pycrypto 2015-12-19 11:27:55 -08:00
Toshio Kuratomi
bef2b70eae Make sure that yum is present on redhat family systems (makes things also work on fedora systems where dnf is the default) 2015-12-19 11:24:27 -08:00
Brian Coca
df08ba37fe removed invocation info as it is not no_log aware
This was added in 1.9 and 2.0 tried to copy, but since it cannot
obey no_log restrictions I commented it out. I did not remove it as
it is still very useful for module invocation debugging.
2015-12-19 11:45:59 -05:00
Toshio Kuratomi
6dc42e376f update submodule refs 2015-12-18 22:18:25 -08:00
Toshio Kuratomi
9689f00bb1 Add a Fedora latest host into the mix 2015-12-18 22:15:29 -08:00
James Cammarata
2f0e8d9654 Make integration runner ec2 add_hosts use valid host names 2015-12-18 16:04:54 -08:00
Toshio Kuratomi
6d9235e36d Fix the fedora host detection 2015-12-18 15:43:39 -08:00
Toshio Kuratomi
eb606cc18b What is going on here 2015-12-18 15:43:32 -08:00
Toshio Kuratomi
85ef768d51 Bugfix the fedora 23 install task 2015-12-18 14:39:27 -08:00
Toshio Kuratomi
457e32128f Ubuntu images with hvm ssd 2015-12-18 14:39:20 -08:00
Toshio Kuratomi
e8ce341f9c Fedora 23 needs to have python2 packages installed 2015-12-18 14:08:43 -08:00
Toshio Kuratomi
87214005d2 Make tests that use kennethreitz retry. 2015-12-18 11:51:16 -08:00
Brian Coca
dc0a4e74d6 added note about add_hosts 2015-12-18 13:57:58 -05:00
Brian Coca
43409fa16c added missing features to changelog 2015-12-18 12:15:08 -05:00
James Cammarata
b912923ef2 Do a full yum update to make sure packages are the latest version
For the deps setup of integration tests, as we sometimes see odd
errors we can't reproduce, which may be related to slightly out of
date package dependencies.
2015-12-18 08:51:57 -08:00
James Cammarata
46bc8253e1 Add awk to integration test deps list 2015-12-18 08:51:42 -08:00
Toshio Kuratomi
1f70fc6424 Add state=latest to pip install of pycrypto 2015-12-18 08:51:34 -08:00
James Cammarata
ed4ad5f6fb Add ca-certificates update to the integration deps playbook 2015-12-18 08:51:25 -08:00
Toshio Kuratomi
69eb22c652 Install an updated version of pycrypto on Ubuntu12 from pip 2015-12-18 08:51:14 -08:00
Toshio Kuratomi
dca880d04a kennetreitz.org times out but www.kennethreitz.org is fine 2015-12-18 08:34:07 -08:00
Toshio Kuratomi
0c6364c771 debug line needs var not msg 2015-12-18 08:33:55 -08:00
James Cammarata
c9cf07e3b5 Fixing bugs with {changed,failed}_when and until with registered vars
* Saving of the registered variable was occurring after the tests for
  changed/failed_when.
* Each of the above fields and until were being post_validated too early,
  so variables which were not defined at that time were causing task
  failures.

Fixes #13591
2015-12-18 11:08:06 -05:00
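A hedged sketch of the affected pattern (the command and match strings are illustrative); the registered result must be stored before changed_when/failed_when and until are evaluated:

```
- command: /usr/bin/check_service    # illustrative command
  register: result
  until: result.stdout.find("ready") != -1
  retries: 5
  delay: 10
  changed_when: "'restarted' in result.stdout"
```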
James Cammarata
bcd66059ae Use --source instead of -e for awk in integration Makefile 2015-12-17 20:53:43 -05:00
Toshio Kuratomi
72e6dbcd12 Fix get_url tests in light of distros backporting SNI support 2015-12-17 17:53:38 -08:00
James Cammarata
184c2985fc Consolidating package lines for virtualenv install in test deps integration 2015-12-17 17:46:12 -05:00
James Cammarata
4abbe8a989 Moving apt cache update to top to ensure cache is updated before deps installed 2015-12-17 17:46:12 -05:00
James Cammarata
ce159ce2eb Switch virtualenv dep installation from pip to package manager 2015-12-17 17:46:12 -05:00
James Cammarata
fdb94b9a7a Adding pip install of virtualenv to test deps integration role 2015-12-17 17:45:59 -05:00
Chris Meyers
be0e826091 remove .gitignore 2015-12-17 14:18:12 -08:00
Chris Meyers
f12ed5eba8 symbolic link role for testing 2015-12-17 14:18:06 -08:00
Chris Meyers
b1d5e9ff3c removed ansible_python_interpreter
* added missed renames of ansible_deps to ansible_test_deps
* removed accidental inventory.dynamic file
* modified README for ansible_test_deps role
2015-12-17 14:17:58 -08:00
Chris Meyers
dcb732b416 rename role ansible_deps to ansible_test_deps 2015-12-17 14:17:49 -08:00
Chris Meyers
ad6ec9610e playbook that Ansible jenkins runs moved into core
The playbook is already running in jenkins and works. This moves the
assets into core for ease of maintenance going forward.
2015-12-17 14:17:41 -08:00
Chris Meyers
fcceac5ff2 trigger jenkins integration tests 2015-12-17 14:17:35 -08:00
Toshio Kuratomi
0a319a28f2 Update submodule for mysql_user fix 2015-12-17 13:47:07 -08:00
James Cammarata
17fe156016 Fixing a mistake from tweaking list stuff too much
Use the action only if the task name is not set
2015-12-17 16:34:03 -05:00
James Cammarata
5824a6d9bb Further tweaks to the output format of list tasks/tags 2015-12-17 16:31:02 -05:00
James Cammarata
59cac03d3b Make --list-tasks respect tags
Also makes the output closer to the appearance of v1

Fixes #13260
2015-12-17 16:11:00 -05:00
Toshio Kuratomi
cc2e0c1e3d Update submodule to pull in mysqldump fix 2015-12-17 13:02:11 -08:00
Toshio Kuratomi
624556eb16 Add recent bugfixes to changelog 2015-12-17 10:35:24 -08:00
Toshio Kuratomi
64737c80f9 Update submodule refs 2015-12-17 10:21:50 -08:00
Brian Coca
5e9d182229 changed test to use filter for accurate reporting 2015-12-17 12:28:17 -05:00
James Cammarata
1f7ed5d8ee Fix handling of environment inheritance, and template each inherited env
Environments were not being templated individually, so a variable environment
value was causing the exception regarding dicts to be hit. Also, environments
as inherited were coming through with the tasks listed first, followed by the
parents, so they were being merged backwards. Reversing the list of environments
fixed this.
2015-12-17 09:46:41 -05:00
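A hedged sketch of the case this fixes (the variable and package names are illustrative); a variable used as the environment value is now templated instead of raising the dict exception:

```
- hosts: all
  vars:
    proxy_env:                        # illustrative variable
      http_proxy: http://proxy.example.com:8080
  tasks:
    - apt: name=curl state=present
      environment: "{{ proxy_env }}"  # previously hit the exception regarding dicts
```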
James Cammarata
f41dd578b3 Make sure we're using the original host when processing include results
Also fixes a bug where we were passing an incorrect number of parameters to
_do_handler_run() when processing an include file in a handler task/block.

Fixes #13560
2015-12-16 19:13:03 -05:00
James Cammarata
407d76b8d5 Fixing template integration test for python 2.6 versions
No longer immediately fall back to to_json if simplejson is not installed
2015-12-16 18:24:12 -05:00
Toshio Kuratomi
d2c43c421f Update core module ref for mysql_user fix 2015-12-16 14:10:00 -08:00
James Cammarata
6c509b5364 Updating the porting guide to note the complex args/bare vars change
Related to #13518
2015-12-16 16:37:55 -05:00
Brian Coca
4bce9947e5 fixed formating issues with rst 2015-12-16 16:37:55 -05:00
Toshio Kuratomi
4ad7135690 Add first draft of porting guide for 2.0 2015-12-16 16:37:55 -05:00
James Cammarata
bf9f6ce09c All variables in complex args again
Also updates the CHANGELOG to note the slight change: args are no longer
allowed to be bare variables

Fixes #13518
2015-12-16 16:37:55 -05:00
Brian Coca
84a6b543a9 ensure additional args are a dict
They should already be templated before hitting this point?
otherwise the class needs the task variables passed into it.
relates to #13518 and  https://groups.google.com/forum/#!msg/ansible-project/esoBcphHwHs/Ghgi_M-MCQAJ
2015-12-16 15:32:26 -05:00
chouseknecht
6cf7b792a3 Fix docs. The search command works with both galaxy.ansible.com and galaxy-qa.ansible.com. 2015-12-16 15:11:45 -05:00
chouseknecht
abec9331d8 Fixed documentation typos and bits that needed clarification. Fixed missing spaces in VALID_ACTIONS. 2015-12-16 15:11:41 -05:00
chouseknecht
ba16f2ae95 Add a section to intro_configuration for Galaxy. 2015-12-16 15:11:34 -05:00
chouseknecht
1e605d727c Define and handle ignore_certs correctly. Preserve search term order. Tweak to Galaxy docsite. 2015-12-16 15:11:30 -05:00
chouseknecht
6c48d52b4a Make sure it is clear that new commands require using the Galaxy 2.0 Beta site. 2015-12-16 15:11:26 -05:00
chouseknecht
8bb598daa2 Updated ansible-galaxy man page. Removed -b option for import. 2015-12-16 15:11:22 -05:00
chouseknecht
21edd6a5ab Fix overloaded options. Show an error when no action given. Don't show a helpful list of commands and descriptions. 2015-12-16 15:11:05 -05:00
Brian Coca
e10e03f4d9 added releases doc 2015-12-16 15:10:48 -05:00
nitzmahone
d5bdea0bab fix plugin loading for Windows modules
force plugin loader to only consider .py files, since that's the only place docs can live ATM...
2015-12-16 11:47:52 -08:00
Toshio Kuratomi
cd1bdaa61d Update submodule refs for mysql refactor 2015-12-16 11:14:39 -08:00
Jonathan Mainguy
9f69b6b585 Add shared connection code for mysql modules 2015-12-16 11:06:21 -08:00
James Cammarata
9e52a7c769 Attempt at fixing strategy unit test failures on py2.6 and py3 2015-12-16 13:57:13 -05:00
Toshio Kuratomi
8fd438000a Update url to site that has an invalid certificate 2015-12-16 09:47:56 -08:00
James Cammarata
54ce8327cb Disabling docker test for stable-2.0 due to versioning issues 2015-12-16 11:24:29 -05:00
James Cammarata
82519ab2a6 Preserve the cumulative path for checking includes which have parents
Otherwise, each relative include path is checked on its own, rather
than in relation to the (possibly relative) path of its parent, meaning
includes multiple levels deep may fail to find the correct (or any) file.

Fixes #13472
2015-12-16 11:23:33 -05:00
Toshio Kuratomi
5761c333a6 Update submodule ref to fix docs build 2015-12-16 08:13:54 -08:00
Toshio Kuratomi
72b852a814 Update submodule refs 2015-12-16 08:03:34 -08:00
Toshio Kuratomi
897569c1ca Conditionally create the CustomHTTPSConnection class only if we have the required baseclasses.
Fixes #11918
2015-12-16 08:02:11 -08:00
Toshio Kuratomi
8e2cb2abb2 Fixes for proxy on RHEL5 2015-12-16 08:02:03 -08:00
Toshio Kuratomi
b8d9b106de First attempt to fix https certificate errors through a proxy with python-2.7.9+
Fixes #12549
2015-12-16 08:01:56 -08:00
David
f68c0043de Fix typo 2015-12-16 10:53:10 -05:00
Brian Coca
48fb0cdecb debug now validates its params
simplified var handling
made default message the same as in pre 2.0
fixes #13532
2015-12-16 10:42:30 -05:00
James Cammarata
28602cddb0 Use the original host rather than the serialized one when processing results
Fixes #13526
Fixes #13564
Fixes #13566
2015-12-16 01:50:14 -05:00
Yannig Perré
5011d63f7e Fix a part of python 3 tests (make tests-py3, see https://github.com/ansible/ansible/issues/13553 for more details). 2015-12-15 07:57:05 -08:00
Brian Coca
5ab25e2dfe clean debug output to match prev versions 2015-12-15 09:28:39 -05:00
Brian Coca
6e1bb6c87d renamed ssh.py shared module file to clarify 2015-12-15 08:44:43 -05:00
Peter Sprygada
ac7dc6bd81 the ssh shared module will try to use keys if the password is not supplied
The current ssh shared module forces only password based authentication.  This
change will allow the ssh module to use keys if a password is not provided.
2015-12-15 08:39:17 -05:00
Michael Scherer
b6dac26224 Make module_utils.known_hosts.get_fqdn work on ipv6 2015-12-14 11:10:02 -08:00
Michael Scherer
a7a3a34987 Add tests for ansible.module_utils.known_hosts 2015-12-14 11:09:49 -08:00
Toshio Kuratomi
6df187da84 Fix for template module not creating a file that was not present when force=false 2015-12-14 10:57:34 -08:00
Monty Taylor
cf95619229 Optionally only use UUIDs for openstack hosts on duplicates
The OpenStack inventory lists hostnames as the UUIDs because hostnames
are not guaranteed to be unique on OpenStack. However, for the common
case, this is just confusing.

The new behavior is a visible change, so make it an opt-in via config.

Only turn the hostnames to UUIDs if there are duplicate hostnames.
2015-12-14 13:27:33 -05:00
Monty Taylor
17a37844d4 Fix the refresh flag in openstack inventory
Refresh will update the dogpile cache from shade, but doesn't cause
the ansible side json cache to be invalidated. It's a simple oversight.
2015-12-14 13:27:33 -05:00
gp
0697c6e59d Fix typo in galaxy.rst
Fix typo
2015-12-14 09:15:15 -08:00
Toshio Kuratomi
66be9d06c4 Minor: Correct typo pyhton => python 2015-12-14 08:51:33 -08:00
Toshio Kuratomi
62b2c9eb4e Update submodule refs 2015-12-14 08:34:49 -08:00
Jonathan Mainguy
887319f95c add tests for encrypted hash mysql_user 2015-12-14 08:02:11 -08:00
James Cammarata
fc233bf3f4 Use an octal representation that works from 2.4->3+ for known_hosts 2015-12-14 10:44:22 -05:00
James Cammarata
8dcba63022 Fixing up some non-py3 things for unit tests 2015-12-14 10:44:22 -05:00
Hans-Joachim Kliemeck
60f5fe76a1 use default settings from ansible.cfg 2015-12-14 09:19:15 -05:00
James Cammarata
0b66ec0ddd Cleanup strategy tests broken by new forking strategy 2015-12-14 03:07:51 -05:00
James Cammarata
e7bf204db4 A few tweaks to improve new forking code 2015-12-14 03:07:48 -05:00
James Cammarata
c8e6461dee Changing the way workers are forked 2015-12-14 01:16:55 -05:00
Brian Coca
3214ef8832 added that ansible-pull is now shallow to changelog 2015-12-13 12:18:28 -05:00
Robin Roth
6a5444c2fe add --full flag to ansible-pull man page
add --full flag that was added in #13502
2015-12-13 12:17:36 -05:00
Usman Ehtesham Gul
d4e06695fd Fix Doc mistake
Fix Doc mistake in ansible/docsite/rst/playbooks_variables.rst
2015-12-13 09:50:45 -05:00
Robin Roth
d369e3ec72 fix whitespace 2015-12-13 09:37:48 -05:00
Robin Roth
bff8db3c8f make shallow clone the default for ansible-pull 2015-12-13 09:37:48 -05:00
Robin Roth
8c6010cc86 add depth option to ansible-pull
Allows shallow checkouts in ansible-pull by adding `--depth 1` (or higher number)
2015-12-13 09:37:48 -05:00
Brian Coca
13ffd3ee2d include all packaging in tarball
not just the rpm spec file
2015-12-13 00:34:56 -05:00
Brian Coca
f9bff8e488 simplified skippy
thanks agaffney!
2015-12-12 17:51:25 -05:00
Brian Coca
6d83bf8ca9 changed shell delimiters for csh
fixes #13459
2015-12-12 16:13:20 -05:00
Brian Coca
17f06dd310 removed unused imports in galaxy/cli 2015-12-12 13:43:46 -05:00
Jan Warchoł
82f893d65a Explain how 'run_once' interacts with 'serial' 2015-12-12 00:33:21 -05:00
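A hedged sketch of the interaction (hosts and command are illustrative): combined with serial, a run_once task fires once per batch, not once per play:

```
- hosts: webservers
  serial: 2                       # process hosts in batches of two
  tasks:
    - command: /opt/app/migrate   # illustrative command
      run_once: true              # runs on one host in each batch
```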
Brian Coca
40b55eda4c avoid set to unique hosts to preserve order
switched to using a list comp and a set to keep hosts unique while preserving expected order
fixes #13522
2015-12-11 15:40:48 -05:00
Brian Coca
7678ab05d8 removed merge conflict 2015-12-11 15:10:48 -05:00
James Cammarata
5b801afa38 Don't mark hosts failed if they've moved to a rescue portion of a block
Fixes #13521
2015-12-11 14:58:38 -05:00
Brian Coca
71cc791677 narrow down exception catching in block builds
this was obscuring other errors and should have always been narrow scope
2015-12-11 13:13:19 -05:00
Brian Coca
3d1b3e2bb8 removed 'bare' example in environment
now shows how to use explicit templating
2015-12-11 09:35:19 -05:00
Brian Coca
b0e5556d9a fix make complaint when git is not installed 2015-12-10 21:50:11 -05:00
James Cammarata
75e6fb30d5 Fixing up docker integration tests a bit 2015-12-10 13:10:40 -05:00
Toshio Kuratomi
9b5ec8f359 Update submodule ref 2015-12-10 08:10:13 -08:00
Toshio Kuratomi
2dbcd8fc9e become_pass needs to be bytes when it is passed to ssh.
Fixes #13240
2015-12-10 07:30:20 -08:00
Charles Paul
82789df231 allow custom callbacks with adhoc cli for scripting
missing import of CallbackBase
2015-12-10 07:05:41 -08:00
James Cammarata
3fafc6d655 Missed one place we were appending the incorrectly escaped item to raw params 2015-12-09 17:59:06 -05:00
Toshio Kuratomi
9b864811f8 Note that handlers inside of includes are not possible at the moment 2015-12-09 13:53:04 -08:00
nitzmahone
ba62a8d559 Windows doc updates 2015-12-09 16:36:23 -05:00
Toshio Kuratomi
b793fbc60a Remove the funcd connection plugin 2015-12-09 13:30:37 -08:00
nitzmahone
7e6d546632 added winrm CP notes to changelog 2015-12-09 15:50:37 -05:00
Brian Coca
45d2f59858 changed deprecation to removal warning 2015-12-09 12:39:11 -08:00
Toshio Kuratomi
dd6e10ce43 Update submodule refs 2015-12-09 12:11:30 -08:00
James Cammarata
6694e97a31 Make on_file_diff callback item-aware 2015-12-09 14:52:06 -05:00
Brian Coca
699bb847a0 reenabled --tree for ansible adhoc command
previous fix to avoid callbacks now conflicted with the tree option
which is implemented as a callback in 2.0
2015-12-09 10:14:43 -08:00
Brian Coca
c02ca22894 adhoc avoids callbacks by default as it did before
Previous emptying of whitelist only affected callbacks that were
constructed for need whitelist. This now works for all callbacks.
2015-12-09 10:03:39 -08:00
Toshio Kuratomi
1f9d66ef60 Clarify language of delegate_facts documentation 2015-12-09 08:46:49 -08:00
Toshio Kuratomi
8f6f2fc920 Code smell test for specifying both required and default in FieldAttributes 2015-12-09 08:46:49 -08:00
Brian Coca
4d7710806e attribute defaults that are containers are a copy
This is a simpler way to prevent persistent containers across instances
of classes that use field attributes
2015-12-09 08:42:14 -08:00
Brian Coca
c2665da3e8 removed unused 'pattern' from ansible.cfg
also moved the config param to a 'deprecated' list in constants.py
added TODO for producing a deprecation warning for such vars
2015-12-09 08:42:14 -08:00
Brian Coca
186259eb80 removed default from hosts to make it required
prevents writing a play w/o a hosts entry which would default to
all/empty
2015-12-09 08:42:14 -08:00
Brian Coca
e571bf94d4 Revert "avoid persistent containers in attribute defaults"
This reverts commit 87969868d4.
found better way to do it
2015-12-09 08:42:14 -08:00
Brian Coca
565b5097da clarified warning from tree callback 2015-12-09 08:40:56 -08:00
chouseknecht
0a6f8b2bf3 Galaxy 2.0 2015-12-09 11:30:05 -05:00
Brian Coca
f649dc71e7 avoid persistent containers in attribute defaults
moved from the field attribute declaration and created a placeholder
which then is resolved in the field attribute class.

this is to avoid unwanted persistence of the defaults across objects which introduces
stealth bugs when multiple objects of the same kind are used in succession while
not overriding the default values.
2015-12-09 11:11:49 -05:00
Brian Coca
d93e1f4ccf added delegate_facts docs 2015-12-08 14:18:11 -08:00
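A hedged sketch of the documented directive (group names are illustrative); with delegate_facts: true, facts gathered while delegating are stored on the delegated host rather than on inventory_hostname:

```
- hosts: app_servers
  tasks:
    - setup:                                  # gather facts
      delegate_to: "{{ item }}"
      delegate_facts: true                    # apply facts to item, not the current host
      with_items: "{{ groups['dbservers'] }}" # illustrative group
```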
Brian Coca
9eb9f55a31 keep string type filters as strings
now we don't try to convert types if using a filter that outputs a specifically formatted string
made list of filters configurable
2015-12-08 12:52:44 -08:00
David L Ballenger
dbcfce03d2 Add ssh_host support for MacOSX El Capitan.
OS X El Capitan moved the /etc/ssh_* files into /etc/ssh/. This fix
adds a distribution version check for Darwin to set the keydir
appropriately on El Capitan and later.
2015-12-08 15:36:49 -05:00
Jeremy Audet
b3cfb630dc Make "make webdocs" compatible with Python 3
The `webdocs` make target fails under Python 3. It fails due to a variety of
syntax errors, such as the use of `except Foo, e` and `print 'foo'`. Fix #13463
by making code compatible with both Python 2 and 3.
2015-12-08 15:31:35 -05:00
James Cammarata
058e02137a Preserve original token when appending to _raw_params in parse_kv
Fixes #13311
2015-12-08 15:06:01 -05:00
Brian Coca
666cb07614 fixed typo in tree callback, added default dir
this would allow it to work with playbooks also
2015-12-08 11:59:51 -08:00
Brian Coca
df04955572 updated with delegate_facts directive 2015-12-08 11:53:36 -08:00
James Cammarata
422092b8bc Fix typo from 5ae850c 2015-12-08 14:34:37 -05:00
James Cammarata
5ae850c3b2 Make fact delegating configurable, defaulting to 1.x behavior 2015-12-08 14:00:17 -05:00
Brian Coca
7c8e1b41bb Revert "Fix always_run support in the action plugin for template when copying"
This reverts commit 45670eff81.
2015-12-08 09:26:24 -08:00
Brian Coca
43bfd16666 have always_run override check mode for a task
Fixes #13418
2015-12-08 09:26:13 -08:00
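A hedged example (the command is illustrative): with this fix, always_run lets a task execute even when the playbook runs under --check:

```
- command: /usr/bin/git describe --tags   # illustrative read-only command
  register: app_version
  always_run: yes                         # overrides check mode for this task
```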
James Cammarata
45670eff81 Fix always_run support in the action plugin for template when copying
Fixes #13418
2015-12-08 11:55:35 -05:00
James Cammarata
50e5b0f8e9 Merge pull request #13467 from bcoca/adhoc_callbk_fix
adhoc does not load plugins by default
2015-12-08 11:28:20 -05:00
Brian Coca
e69064d0fc Merge pull request #13451 from bcoca/doas_fix
fixed doas getting stuck when needing passwords
2015-12-08 11:13:01 -05:00
Peter Sprygada
1aa775196b adds new device argument to nxapi command arguments
The device argument allows a dict of nxapi parameters to be passed to
the module, simplifying how the nxapi parameters are supplied
2015-12-08 10:36:47 -05:00
Brian Coca
b07451eef8 adhoc does not load plugins by default
reimplemented feature from 1.x which kept additional callbacks from
polluting adhoc unless specifically asked for through configuration.
2015-12-08 06:37:15 -08:00
James Cammarata
cc98528ecb Version bump for 2.0.0-0.7.rc2 2015-12-07 13:13:37 -05:00
Brian Coca
f241c70740 corrected usage of ec2.py's profile option
this was never introduced into ansible-playbook though the docs
stated otherwise. We still explain how to use the env var to get the
same result.
2015-12-07 09:56:05 -08:00
Yannig Perré
2ed2c12f60 Fix issue when var name is the same as content.
See https://github.com/ansible/ansible/issues/13453 for more details.
2015-12-07 10:08:13 -05:00
Nils Steinger
d85b8adba6 More meaningful string representation for meta tasks (like 'noop' and 'flush_handlers') 2015-12-07 10:08:13 -05:00
Brian Coca
8e16b481d0 added extract filter to changelog 2015-12-07 10:08:13 -05:00
Peter Sprygada
d89dbf19fb bugfix for ios.py shared module argument creation
This patch fixes a bug in module_utils/ios.py where the wrong shared
module arguments are being generated.  This bug prevented the shared module
from operating correctly.  This patch should be generally applied.
2015-12-07 06:59:43 -08:00
Toshio Kuratomi
40c01f3739 Use self.args when we parse arguments so that the arguments can be constructed manually 2015-12-06 22:19:11 -08:00
Toshio Kuratomi
71ffa5abdc Add representers so we can output yaml for all the types we read in from yaml 2015-12-06 22:19:02 -08:00
Brian Coca
0533e0bc96 fixed doas getting stuck when needing passwords
Also adjusted test to match new doas become output
fixes #13449
2015-12-06 00:35:28 -05:00
Luca Berruti
3974b13a5a Make no_target_syslog consistent.
no_target_syslog = False --> do log on target
2015-12-05 16:14:36 -05:00
Brian Coca
d04d5bf0d5 only set become defaults at last possible moment
tasks were overriding the commandline with their defaults, not with the
explicit setting; removed the setting of defaults from task init and
pushed it down to the play context at the last possible moment.
fixes #13362
2015-12-05 15:59:51 -05:00
Brian Coca
8a733d990f simplified get_hosts code to have 1 return point 2015-12-05 10:13:09 -05:00
Nils Steinger
895fc48700 Remove duplicates from host list *before* caching it
Ansible previously added hosts to the host list multiple times for commands
like `ansible -i 'localhost,' -c local -m ping 'localhost,localhost'
--list-hosts`.
8d5f36a fixed the obvious error, but still added the un-deduplicated list to a
cache, so all future invocations of get_hosts() would retrieve a
non-deduplicated list.
This caused problems down the line: For some reason, Ansible only ever
schedules "flush_handlers" tasks (instead of scheduling any actual tasks from
the playbook) for hosts that are contained in the host lists multiple times.
This probably happens because the host states are stored in a dictionary
indexed by the hostnames, so a duplicate hostname would cause the state to be
overwritten by subsequent invocations of … something.
2015-12-05 10:13:09 -05:00
Brian Coca
2068ff8926 updated pull location in changelog
it was in between the backslash description and the example
2015-12-05 01:48:41 -05:00
sam-at-github
da6670cca4 Add full stop to make sentence make sense. Touch paragraph while at it. 2015-12-04 21:35:39 -05:00
Brice
d0337a8928 comment examples in default hosts file 2015-12-04 16:02:26 -08:00
Toshio Kuratomi
4c21d58f4c Transform exceptions into ansible messages via to_unicode instead of str to avoid tracebacks.
Fixes #13385
2015-12-04 11:52:07 -08:00
Florian Haas
86ca0bf3b1 Correct connection type returned by libvirt_lxc inventory script
The correct connection type for LXC containers managed via libvirt is
libvirt_lxc, not lxc.
2015-12-04 11:00:37 -08:00
James Cammarata
627576a955 Adding a uuid field so we can track host equality across serialization too 2015-12-04 13:34:10 -05:00
Toshio Kuratomi
3aa4db5083 Update submodule refs 2015-12-04 09:59:30 -08:00
James Cammarata
cd76552724 Changing up how host (in)equality is checked
Fixes #13397
2015-12-04 12:58:16 -05:00
Brian Coca
f630e140d2 fixed ansible-pull broken options
* sudo was not working, now it supports full become
* now the default checkout dir works, not only when one is specified
* paths for checkout dir get expanded
* fixed limit options for playbook
* added verbose and debug info
2015-12-03 20:51:51 -08:00
Brian Coca
c03b8ef0c2 return unique list of hosts 2015-12-03 19:44:31 -08:00
Brian Coca
46718ac3f4 reverted to previous pull checkout dir behaviour
This fixes bugs with not finding plays when not specifying checkout dir
Also makes it backwards compatible
2015-12-03 19:44:31 -08:00
Brian Coca
3e5c7c540b corrected playbook path, reformatted options help
the last just to make the help consistent and readable
2015-12-03 18:24:10 -08:00
Brian Coca
7950f09d19 Now and/or shell expressions depend on shell plugin
This should fix issues with fish shell users as && and || are
not valid syntax, fish uses actual 'and' and 'or' programs.
Also updated to allow for fish backticks, pushed quotes to subshell;
fish seems to handle spaces w/o them.
Lastly, removed encompassing subshell () for fish compatibility.
fixes #13199
2015-12-03 16:43:02 -08:00
Toshio Kuratomi
f8911adbbc For now, skip tests of module_utils/basic functions that are failing on
py3 (these are only run on the target hosts, not on the controller).
2015-12-03 14:26:25 -08:00
James Cammarata
6aa1b6d9b1 Properly compare object references for Hosts when adding new ones
Fixes #13397
2015-12-03 15:27:10 -05:00
James Cammarata
013ace9ab2 fix sorting of groups for host vars
Fixes #13371
2015-12-03 14:23:14 -05:00
James Cammarata
0d0ed35ba4 Properly default remote_user for delegated-to hosts
Fixes #13323
2015-12-03 11:33:11 -05:00
Toshio Kuratomi
b9fbfaf64e Also some unicode tests for return_values() 2015-12-02 21:12:33 -08:00
Toshio Kuratomi
2c5c7b54f6 Add some test data that has unicode values 2015-12-02 21:12:26 -08:00
Toshio Kuratomi
c1aeda59bd Don't compare or merge str with unicode
Fixes #13387
2015-12-02 21:12:18 -08:00
Brian Coca
0f813fd76a updated docs for 2.0 api 2015-12-02 12:11:58 -08:00
James Cammarata
38c11e2239 Default msg param to AnsibleError to avoid serialization problems 2015-12-02 14:18:13 -05:00
James Cammarata
ed4a06d8ef Don't use play vars in HostVars
Fixes #13398
2015-12-02 14:18:13 -05:00
Toshio Kuratomi
65f4cbf487 Fix template test results on python2.6 2015-12-02 10:34:14 -08:00
muffl0n
48a3922d56 Add example for regex_replace using named groups 2015-12-02 09:44:26 -08:00
Matt Martz
ca838d75e3 Get v2_playbook_on_start working
* Move self._tqm.load_callbacks() earlier to ensure that v2_on_playbook_start can fire
* Pass the playbook instance to v2_on_playbook_start
* Add a _file_name instance attribute to the playbook
2015-12-02 12:41:18 -05:00
Abhijit Menon-Sen
f339184e29 Use CLI.expand_tilde also for the vault --output file 2015-12-02 09:24:36 -08:00
Brian Coca
91f71b0ace added remote environment var setting to changelog 2015-12-02 09:09:35 -08:00
Abhijit Menon-Sen
f2f310472f Make module_lang default to whatever LANG is set to on the control node 2015-12-02 09:07:26 -08:00
Matt Martz
de7dc5d07f Catch additional assertion errors for load_list_of_blocks 2015-12-02 09:04:59 -08:00
Brian Coca
ae5cfb2898 better error on invalid task lists 2015-12-02 08:14:37 -08:00
James Cammarata
381409140e Minor tweak and comment addition to 974a0ce3 2015-12-02 09:10:20 -05:00
Christoph Dittmann
be92f909ee Update debug messages and comments
The comment was taken literally from lib/plugins/strategy/linear.py and
makes no sense in free.py where we have no noop tasks.

Also update the debug messages.
2015-12-02 09:00:27 -05:00
Christoph Dittmann
974a0ce3fb Fix issue #13370
all_blocks is referenced after the loop over included_files, so it needs
to be initialized before this loop, not inside.
2015-12-02 09:00:26 -05:00
Christoph Dittmann
1f1febaa0d Let PlayIterator.add_tasks accept empty task lists
PlayIterator.add_tasks raised an error when trying to add an empty task
list.  This was the root cause of ansible issue #13370.
2015-12-02 09:00:26 -05:00
Brian Coca
3c25ae2e10 updated new module list
added missing modules and fixed alphabetical ordering
2015-12-01 23:53:43 -08:00
Brian Coca
d9218ce33f reformatted test, changed big assert to with_items
much easier to see the individual condition that causes the failure
when using with_items and evaluating each part of the assert individually
2015-12-01 21:26:36 -08:00
Brian Coca
346a9fe87d unconditionally set vars on init to avoid issues with var precedence 2015-12-01 21:25:43 -08:00
Peter Sprygada
5b5c6c4f47 fixes a syntax issue with module_utils/eapi.py
This patch fixes an issue with the common args dict in the eapi shared
module.  This patch is required for the eapi shared module to be properly
imported and therefore should be applied to all instances.
2015-12-01 20:46:11 -08:00
Peter Sprygada
02d059271c initial add of ssh shared module.
This ssh shared module is used for building modules that require an
interactive shell environment such as those required for connecting
to network devices
2015-12-01 19:15:41 -08:00
Peter Sprygada
a6771b2255 adds module create function for eapi.py shared module
This commit changes the way modules create an instance of AnsibleModule to
now use a common function, eapi_module.  This function will now automatically
append the common argument spec to the module argument_spec.  Module
arguments can override common module arguments
2015-12-01 19:14:38 -08:00
Peter Sprygada
a9e8b54246 initial add of the ios shared module
This adds shared module support for building modules that connect to Cisco
IOS devices.  It builds on the module_utils/ssh.py shared module.
2015-12-01 19:05:14 -08:00
Brian Coca
d2108e9ff3 fixed signature for init on callbacks
also removed passing display to base class which already handles this
2015-12-01 14:07:47 -08:00
nitzmahone
59dadc4f6b allow shell plugin to affect remote module filename
Fix for 13368, added get_remote_filename to shell plugins, powershell version appends .ps1 if necessary, base shell plugin no-ops
2015-12-01 14:02:01 -08:00
Brian Coca
927d28e5d5 added pull's code sig verification to changelog 2015-12-01 09:55:06 -08:00
Toshio Kuratomi
a61718cfc5 Revert "Note that su now works with local connection"
This reverts commit 93ef35e6a9.

bcoca already added this
2015-12-01 09:49:08 -08:00
Toshio Kuratomi
93ef35e6a9 Note that su now works with local connection 2015-12-01 09:14:10 -08:00
Toshio Kuratomi
b0e22d7701 _connect no longer takes a port argument 2015-12-01 09:12:55 -08:00
Brian Coca
1b7db6316e updated changelog to show su now works with local 2015-12-01 09:11:30 -08:00
Brian Coca
ca8c6e8e1c ignore password flags in become conflict check
since all the --ask pass options end up triggering the same code
and are functionally equivalent, ignore them when it comes to checking
privilege escalation conflicts. This allows using -K when --become-method=su
and so on.
2015-12-01 08:57:33 -08:00
Brian Coca
204e27ca66 avoid inheritance issues with default=dict declaration at class level
this should avoid the issue of subsequent plays not prompting for a var
prompted for in a previous play.
fixes #13363
2015-11-30 15:12:53 -08:00
James Cammarata
f96730003b Also make sure remote_user is defaulted correctly for delegated hosts
Fixes #13323
2015-11-30 16:16:11 -05:00
Toshio Kuratomi
4f3f79d37b Call the function :-)
Fixes #13330
2015-11-30 12:34:55 -08:00
James Cammarata
89f0207007 Ensure port is (re)set for delegated-to hosts
Fixes #13265
2015-11-30 14:41:05 -05:00
Brian Coca
f8ed1c003a fixed typo 2015-11-30 09:21:26 -08:00
Brian Coca
c5cd908c33 allow for bad stdout return from make temp dir command
fixes #13359
2015-11-30 09:20:24 -08:00
Brian Coca
958da26d18 corrected become_methods class variable in winrm
This should now correctly react when using become with winrm
fixes #13331
2015-11-30 08:35:33 -08:00
James Cammarata
a5d6be6dd2 Make sure run_once tasks properly set variables for all active hosts
Fixes #13267
2015-11-30 11:28:28 -05:00
James Cammarata
7af506e7cf Use text_type instead of unicode 2015-11-30 11:28:25 -05:00
James Cammarata
3a0f2475b2 Make sure the uuid in vars is string 2015-11-30 10:27:21 -05:00
James Cammarata
2db3f12027 Re-implement lookup wantlist
Fixes #13285
2015-11-29 23:45:14 -05:00
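A hedged sketch of the restored option (the lookup and glob are illustrative); lookup() normally joins multiple results into a comma-separated string, while wantlist=True returns a real list:

```
- debug:
    msg: "{{ lookup('fileglob', '/etc/*.conf', wantlist=True) }}"  # a list, not 'a.conf,b.conf'
```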
Yannig Perré
bb52b45ea0 Do not copy variable_manager each time. Instead, keep host and local variable_manager in sync.
Fix https://github.com/ansible/ansible/issues/13221
2015-11-29 23:15:01 -05:00
James Cammarata
4114a3097f Tweak location of stats callback execution and properly relocate stats output code 2015-11-28 14:02:50 -05:00
Monty Taylor
73a269f9a5 Put in trap for args being None
_normalize_old_style_args can return None. If it does, the loop
"for args in args" blows up.
2015-11-28 13:44:44 -05:00
James Cammarata
737e467b8a Trigger on_stats just once, not once for each play
Fixes #13271
2015-11-28 13:37:02 -05:00
Abhijit Menon-Sen
cac0eea291 Explicitly accept become_success in awaiting_prompt state
If we request escalation with a password, we start in expecting_prompt
state. If the escalation then succeeds without the password, i.e., the
become_success response arrives, we must explicitly move into the next
state (awaiting_escalation, which immediately goes into ready_to_send),
so that we no longer try to apply the timeout.

Otherwise, we would leak the success notification and eventually
timeout. But if the module response did arrive before the timeout
expired, the "process has already exited" test would do the right
thing by accident (which is why it didn't fail more often).

Fixes #13289
2015-11-28 10:22:35 -05:00
James Cammarata
54843d88ee Re-adding role_name/role_uuid variables 2015-11-28 10:00:42 -05:00
Yannig Perré
47651e6c22 More restrictive test against variable name to allow setting variables starting with _. 2015-11-28 10:00:38 -05:00
Yannig Perré
9e6ec4c6b0 Switch parameter validation to after parsing in order to be more consistent between old and new style. 2015-11-28 10:00:34 -05:00
Raphael Badin
6457f88aab Fix missing word in developing_modules.rst 2015-11-28 10:00:25 -05:00
dizzler
e210da3659 Fix typo in modules_core.rst 2015-11-28 10:00:22 -05:00
René Moser
e0ecaac90d changelog: minor formating fix 2015-11-28 10:00:18 -05:00
Brian Coca
dbedcd3538 avoids prompting for vars during syntax check
fixes #13319
2015-11-27 11:46:32 -08:00
Kerim Satirli
4e6442fd19 removes editorial
I feel that Ansible is above the "my hosted Git community is better than yours" discussion and thus removed the editorial around Bitbucket
2015-11-27 11:15:57 -08:00
Chris Church
126249d69a Add assertions for ansible_date_time in setup result. 2015-11-27 00:49:02 -05:00
Toshio Kuratomi
056372690f Do not double transform to unicode 2015-11-25 07:56:06 -08:00
Charles Paul
9cee982a62 fixing errors with utf-8 values
removing utf-8 stanza

changing cast to binary_type instead

using to_unicode
2015-11-25 07:55:52 -08:00
Brian Coca
b69942a6d2 added missing : 2015-11-25 10:57:55 -08:00
Brian Coca
d9858ee73a added missing events to base class 2015-11-25 10:57:55 -08:00
Brian Coca
64bcab9253 fixes to fetch action module
* now only runs remote checksum when needed (fixes #12290)
* unified return points to simplify program flow
2015-11-25 10:57:04 -08:00
James Cammarata
fc4326dc0c Fix ssh state issues by simply assuming it's never connected 2015-11-24 12:01:42 -05:00
James Cammarata
92ea5c9f7b Properly check for prompting state when re-using ssh connection
Fixes #13278
2015-11-24 09:10:51 -05:00
Yannig Perré
90021104d5 Use to_unicode instead of str() 2015-11-23 16:49:00 -05:00
Yannig Perré
8bd5abaf1e Allow debug var parameter to accept a list or dict. Fix https://github.com/ansible/ansible/issues/13252 2015-11-23 16:49:00 -05:00
Guido Günther
1ab60564ae Add integration tests for zypper
Modeled after the yum tests but also tests local package installations
as fixed with PR#1256.

This depends on PRs #1256, #1261 and #1262 in ansible-modules-extra.
2015-11-23 15:22:29 -05:00
Brano Zarnovican
af2e94e3c7 test_hg fix: remove reference to "head"
ERROR! error while evaluating conditional: head.stat.isreg

This is a remnant from the earlier change 50e5d81777,
which removed the stat on the head file.
2015-11-23 14:34:21 -05:00
Brano Zarnovican
5378a6003a test_svn fix: remove hardcoded "~/ansible_testing/svn" path 2015-11-23 14:33:48 -05:00
Sebastien Couture
2859933a79 We should give pipes.quote() a string every time 2015-11-23 14:27:56 -05:00
James Cammarata
42bffeec7c Template (and include vars) PlaybookInclude paths
Fixes #13249
2015-11-23 14:04:14 -05:00
Chris Church
9a8e95bff3 Modify task executor to reuse connection inside a loop. Fix WinRM connection to set _connected properly and display when remote shell is opened/closed. Add integration test using raw + with_items. 2015-11-23 14:04:00 -05:00
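A hedged sketch of the loop pattern that integration test exercises (the items are illustrative); the executor now reuses the opened connection across iterations instead of reconnecting per item:

```
- raw: echo {{ item }}   # each iteration reuses the already-open remote shell
  with_items:
    - one
    - two
    - three
```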
René Moser
54ec2a0b84 docsite: cloudstack: fix missing quotes in example 2015-11-23 14:03:55 -05:00
Brian Coca
1e9e6339d1 marked spot that should send per item results 2015-11-23 14:03:51 -05:00
Toshio Kuratomi
5bc3efe34b Update submodule refs 2015-11-23 09:02:12 -08:00
Chris Church
7d19ad82eb Recommend using pywinrm >= 0.1.1 from PyPI instead of GitHub version. 2015-11-20 15:40:34 -08:00
Gilles Cornu
c1bb3aea06 Documentation: Update the Vagrant Guide
This is an attempt to solve #7665.

Revert the change applied by f56a6e0951
(#12310), as the inventory generated by Vagrant still rely on the legacy
`_ssh` setting names for backwards compatibility reasons.
See also https://github.com/mitchellh/vagrant/issues/6570
2015-11-21 14:03:24 -08:00
Toshio Kuratomi
011df4ad24 Update docker_login so docs work 2015-11-20 13:59:02 -08:00
Toshio Kuratomi
c71ef9e3d7 Fix non-module plugins picking up files that did not end in .py.
This was caused by accessing the cache using the passed in mod_type
rather than the suffix that we calculate with knowledge of whether this
is a module or non-module plugin.
2015-11-20 13:51:29 -08:00
Toshio Kuratomi
e9ed190f7b Update submodule refs 2015-11-20 12:45:27 -08:00
Toshio Kuratomi
fe090b0fb9 Add docker_login module to the changelog 2015-11-20 12:44:48 -08:00
Toshio Kuratomi
50a924bb04 Docker cp sets file ownership to root:root so we can't use it.
Fixes #13219
2015-11-20 07:54:07 -08:00
Toshio Kuratomi
1e968d34cb Simplify code a little 2015-11-19 09:56:58 -08:00
Joern Heissler
c99fffa936 Use ansible_host in synchronize module
Fixes #13073
2015-11-19 09:56:49 -08:00
nitzmahone
f2225395f9 winrm error handling tweaks 2015-11-19 09:58:33 -05:00
nitzmahone
c33f60435b fast winrm put_file without size restrictions 2015-11-19 09:58:28 -05:00
James Cammarata
ab8bb57f5e Version bump for 2.0.0-0.6.rc1 2015-11-19 09:15:45 -05:00
224 changed files with 6176 additions and 2210 deletions

.travis.yml

@@ -24,6 +24,7 @@ script:
- ./test/code-smell/replace-urlopen.sh .
- ./test/code-smell/use-compat-six.sh lib
- ./test/code-smell/boilerplate.sh
- ./test/code-smell/required-and-default-attributes.sh
- if test x"$TOXENV" != x'py24' ; then tox ; fi
- if test x"$TOXENV" = x'py24' ; then python2.4 -V && python2.4 -m compileall -fq -x 'module_utils/(a10|rax|openstack|ec2|gce).py' lib/ansible/module_utils ; fi
#- make -C docsite all

CHANGELOG.md

@@ -2,6 +2,15 @@ Ansible Changes By Release
==========================
## 2.0 "Over the Hills and Far Away" - ACTIVE DEVELOPMENT
## 2.1 TBD - ACTIVE DEVELOPMENT
####New Modules:
* cloudstack: cs_volume
####New Filters:
* extract
## 2.0 "Over the Hills and Far Away"
###Major Changes:
@ -24,10 +33,12 @@ Ansible Changes By Release
by setting the `ANSIBLE_NULL_REPRESENTATION` environment variable.
* Added `meta: refresh_inventory` to force rereading the inventory in a play.
This re-executes inventory scripts, but does not force them to ignore any cache they might use.
* Now when you delegate an action that returns ansible_facts, these facts will be applied to the delegated host, unlike before when they were applied to the current host.
* New delegate_facts directive, a boolean that allows you to apply facts to the delegated host (true/yes) instead of the inventory_hostname (no/false) which is the default and previous behaviour.
* local connections now work with 'su' as a privilege escalation method
* New ssh configuration variables(`ansible_ssh_common_args`, `ansible_ssh_extra_args`) can be used to configure a
per-group or per-host ssh ProxyCommand or set any other ssh options.
`ansible_ssh_extra_args` is used to set options that are accepted only by ssh (not sftp or scp, which have their own analogous settings).
* ansible-pull can now verify the code it runs when using git as a source repository, using git's code signing and verification features.
* Backslashes used when specifying parameters in jinja2 expressions in YAML dicts sometimes needed to be escaped twice.
This has been fixed so that escaping once works. Here's an example of how playbooks need to be modified:
@ -71,9 +82,38 @@ newline being stripped you can change your playbook like this:
"msg": "Testing some things"
```
* When specifying complex args as a variable, the variable must use the full jinja2
variable syntax ('{{var_name}}') - bare variable names there are no longer accepted.
In fact, even specifying args with variables has been deprecated, and will not be
allowed in future versions:
```
---
- hosts: localhost
connection: local
gather_facts: false
vars:
my_dirs:
- { path: /tmp/3a, state: directory, mode: 0755 }
- { path: /tmp/3b, state: directory, mode: 0700 }
tasks:
- file:
args: "{{item}}"
with_items: my_dirs
```
* The bigip\* networking modules have a new parameter, validate_certs. When
True (the default) the module will validate any hosts it connects to against
the TLS certificates it presents when run on new enough python versions. If
the python version is too old to validate certificates or you used certificates
that cannot be validated against available CAs you will need to add
validate_certs=no to your playbook for those tasks.
###Plugins
* Rewritten dnf module that should be faster and less prone to encountering bugs in cornercases
* WinRM connection plugin passes all vars named `ansible_winrm_*` to the underlying pywinrm client. This allows, for instance, `ansible_winrm_server_cert_validation=ignore` to be used with newer versions of pywinrm to disable certificate validation on Python 2.7.9+.
* WinRM connection plugin put_file is significantly faster and no longer has file size limitations.
####Deprecated Modules (new ones in parens):
@ -94,23 +134,30 @@ newline being stripped you can change your playbook like this:
* amazon: ec2_eni
* amazon: ec2_eni_facts
* amazon: ec2_remote_facts
* amazon: ec2_vpc_igw
* amazon: ec2_vpc_net
* amazon: ec2_vpc_route_table
* amazon: ec2_vpc_route_table_facts
* amazon: ec2_vpc_subnet
* amazon: ec2_vpc_subnet_facts
* amazon: ec2_win_password
* amazon: ecs_cluster
* amazon: ecs_task
* amazon: ecs_taskdefinition
* amazon: elasticache_subnet_group
* amazon: elasticache_subnet_group_facts
* amazon: iam
* amazon: iam_cert
* amazon: iam_policy
* amazon: route53_zone
* amazon: route53_facts
* amazon: route53_health_check
* amazon: route53_zone
* amazon: sts_assume_role
* amazon: s3_bucket
* amazon: s3_lifecycle
* amazon: s3_logging
* amazon: sqs_queue
* amazon: sns_topic
* amazon: sts_assume_role
* apk
* bigip_gtm_wide_ip
* bundler
@ -151,24 +198,29 @@ newline being stripped you can change your playbook like this:
* cloudstack: cs_template
* cloudstack: cs_user
* cloudstack: cs_vmsnapshot
* cronvar
* datadog_monitor
* deploy_helper
* docker: docker_login
* dpkg_selections
* elasticsearch_plugin
* expect
* find
* google: gce_tag
* hall
* ipify_facts
* iptables
* libvirt: virt_net
* libvirt: virt_pool
* maven_artifact
* openstack: os_ironic
* openstack: os_ironic_node
* openstack: os_auth
* openstack: os_client_config
* openstack: os_floating_ip
* openstack: os_image
* openstack: os_image_facts
* openstack: os_floating_ip
* openstack: os_ironic
* openstack: os_ironic_node
* openstack: os_keypair
* openstack: os_network
* openstack: os_network_facts
* openstack: os_nova_flavor
@ -184,6 +236,7 @@ newline being stripped you can change your playbook like this:
* openstack: os_server_volume
* openstack: os_subnet
* openstack: os_subnet_facts
* openstack: os_user
* openstack: os_user_group
* openstack: os_volume
* openvswitch_db
@ -194,14 +247,15 @@ newline being stripped you can change your playbook like this:
* profitbricks: profitbricks
* profitbricks: profitbricks_datacenter
* profitbricks: profitbricks_nic
* profitbricks: profitbricks_snapshot
* profitbricks: profitbricks_volume
* profitbricks: profitbricks_volume_attachments
* proxmox
* proxmox_template
* profitbricks: profitbricks_snapshot
* proxmox: proxmox
* proxmox: proxmox_template
* puppet
* pushover
* pushbullet
* rax: rax_clb_ssl
* rax: rax_mon_alarm
* rax: rax_mon_check
* rax: rax_mon_entity
@ -211,6 +265,7 @@ newline being stripped you can change your playbook like this:
* rabbitmq_exchange
* rabbitmq_queue
* selinux_permissive
* sendgrid
* sensu_check
* sensu_subscription
* seport
@ -222,21 +277,24 @@ newline being stripped you can change your playbook like this:
* vertica_role
* vertica_schema
* vertica_user
* vmware: vmware_datacenter
* vmware: vca_fw
* vmware: vca_nat
* vmware: vmware_cluster
* vmware: vmware_datacenter
* vmware: vmware_dns_config
* vmware: vmware_dvs_host
* vmware: vmware_dvs_portgroup
* vmware: vmware_dvswitch
* vmware: vmware_host
* vmware: vmware_vmkernel_ip_config
* vmware: vmware_migrate_vmk
* vmware: vmware_portgroup
* vmware: vmware_target_canonical_facts
* vmware: vmware_vm_facts
* vmware: vmware_vm_vss_dvs_migrate
* vmware: vmware_vmkernel
* vmware: vmware_vmkernel_ip_config
* vmware: vmware_vsan_cluster
* vmware: vmware_vswitch
* vmware: vca_fw
* vmware: vca_nat
* vmware: vsphere_copy
* webfaction_app
* webfaction_db
@ -244,17 +302,22 @@ newline being stripped you can change your playbook like this:
* webfaction_mailbox
* webfaction_site
* win_acl
* win_dotnet_ngen
* win_environment
* win_firewall_rule
* win_package
* win_scheduled_task
* win_iis_virtualdirectory
* win_iis_webapplication
* win_iis_webapppool
* win_iis_webbinding
* win_iis_website
* win_lineinfile
* win_nssm
* win_package
* win_regedit
* win_scheduled_task
* win_unzip
* win_updates
* win_webpicmd
* xenserver_facts
* zabbix_host
* zabbix_hostmacro
@ -318,9 +381,21 @@ newline being stripped you can change your playbook like this:
* Lookup, vars and action plugin pathing has been normalized, all now follow the same sequence to find relative files.
* We do not ignore the explicitly set login user for ssh when it matches the 'current user' anymore, this allows overriding .ssh/config when it is set
explicitly. Leaving it unset will still use the same user and respect .ssh/config. This also means ansible_ssh_user can now return a None value.
* Handling of undefined variables has changed. In most places they will now raise an error instead of silently injecting an empty string. Use the default filter if you want to approximate the old behaviour::
* environment variables passed to remote shells now default to 'controller' settings, with fallback to en_us.UTF8 which was the previous default.
* add_hosts is much stricter about host name and will prevent invalid names from being added.
* ansible-pull now defaults to doing shallow checkouts with git; use `--full` to return to the previous behaviour.
* random cows are more random
* when: now gets the registered var after the first iteration, making it possible to break out of item loops
* Handling of undefined variables has changed. In most places they will now raise an error instead of silently injecting an empty string. Use the default filter if you want to approximate the old behaviour:
```
- debug: msg="The error message was: {{error_code |default('') }}"
```
* The yum module's detection of installed packages has been made more robust by
using /usr/bin/rpm in cases where it would have used repoquery before.
* The pip module now properly reports changes when packages are coming from a VCS.
* Fixes for retrieving files over https when a CONNECT-only proxy is in the middle.
## 1.9.4 "Dancing In the Street" - Oct 9, 2015

View file

@ -4,12 +4,14 @@ prune ticket_stubs
prune packaging
prune test
prune hacking
include README.md packaging/rpm/ansible.spec COPYING
include README.md COPYING
include examples/hosts
include examples/ansible.cfg
include lib/ansible/module_utils/powershell.ps1
recursive-include lib/ansible/modules *
recursive-include lib/ansible/galaxy/data *
recursive-include docs *
recursive-include packaging *
include Makefile
include VERSION
include MANIFEST.in

View file

@ -44,7 +44,7 @@ GIT_HASH := $(shell git log -n 1 --format="%h")
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD | sed 's/[-_.\/]//g')
GITINFO = .$(GIT_HASH).$(GIT_BRANCH)
else
GITINFO = ''
GITINFO = ""
endif
ifeq ($(shell echo $(OS) | egrep -c 'Darwin|FreeBSD|OpenBSD'),1)

View file

@ -55,3 +55,4 @@ Ansible was created by [Michael DeHaan](https://github.com/mpdehaan) (michael.de
Ansible is sponsored by [Ansible, Inc](http://ansible.com)

View file

@ -4,11 +4,12 @@ Ansible Releases at a Glance
Active Development
++++++++++++++++++
2.0 "Over the Hills and Far Away" - in progress
2.1 "TBD" - in progress
Released
++++++++
2.0.0 "Over the Hills and Far Away" 01-12-2015
1.9.4 "Dancing In the Streets" 10-09-2015
1.9.3 "Dancing In the Streets" 09-03-2015
1.9.2 "Dancing In the Streets" 06-24-2015

View file

@ -1 +1 @@
2.0.0 0.5.beta3
2.0.0.2 1

View file

@ -60,6 +60,7 @@ if __name__ == '__main__':
try:
display = Display()
display.debug("starting run")
sub = None
try:

View file

@ -27,11 +27,11 @@ result['all'] = {}
pipe = Popen(['virsh', '-q', '-c', 'lxc:///', 'list', '--name', '--all'], stdout=PIPE, universal_newlines=True)
result['all']['hosts'] = [x[:-1] for x in pipe.stdout.readlines()]
result['all']['vars'] = {}
result['all']['vars']['ansible_connection'] = 'lxc'
result['all']['vars']['ansible_connection'] = 'libvirt_lxc'
if len(sys.argv) == 2 and sys.argv[1] == '--list':
print(json.dumps(result))
elif len(sys.argv) == 3 and sys.argv[1] == '--host':
print(json.dumps({'ansible_connection': 'lxc'}))
print(json.dumps({'ansible_connection': 'libvirt_lxc'}))
else:
print("Need an argument, either --list or --host <host>")

View file

@ -32,6 +32,13 @@
# all of them and present them as one contiguous inventory.
#
# See the adjacent openstack.yml file for an example config file
# There are two ansible inventory specific options that can be set in
# the inventory section.
# expand_hostvars controls whether or not the inventory will make extra API
# calls to fill out additional information about each server
# use_hostnames changes the behavior from registering every host with its UUID
# and making a group of its hostname to only doing this if the
# hostname in question has more than one server
import argparse
import collections
@ -51,7 +58,7 @@ import shade.inventory
CONFIG_FILES = ['/etc/ansible/openstack.yaml']
def get_groups_from_server(server_vars):
def get_groups_from_server(server_vars, namegroup=True):
groups = []
region = server_vars['region']
@ -76,7 +83,8 @@ def get_groups_from_server(server_vars):
groups.append(extra_group)
groups.append('instance-%s' % server_vars['id'])
groups.append(server_vars['name'])
if namegroup:
groups.append(server_vars['name'])
for key in ('flavor', 'image'):
if 'name' in server_vars[key]:
@ -94,9 +102,9 @@ def get_groups_from_server(server_vars):
return groups
def get_host_groups(inventory):
def get_host_groups(inventory, refresh=False):
(cache_file, cache_expiration_time) = get_cache_settings()
if is_cache_stale(cache_file, cache_expiration_time):
if is_cache_stale(cache_file, cache_expiration_time, refresh=refresh):
groups = to_json(get_host_groups_from_cloud(inventory))
open(cache_file, 'w').write(groups)
else:
@ -106,23 +114,44 @@ def get_host_groups(inventory):
def get_host_groups_from_cloud(inventory):
groups = collections.defaultdict(list)
firstpass = collections.defaultdict(list)
hostvars = {}
for server in inventory.list_hosts():
list_args = {}
if hasattr(inventory, 'extra_config'):
use_hostnames = inventory.extra_config['use_hostnames']
list_args['expand'] = inventory.extra_config['expand_hostvars']
else:
use_hostnames = False
for server in inventory.list_hosts(**list_args):
if 'interface_ip' not in server:
continue
for group in get_groups_from_server(server):
groups[group].append(server['id'])
hostvars[server['id']] = dict(
ansible_ssh_host=server['interface_ip'],
openstack=server,
)
firstpass[server['name']].append(server)
for name, servers in firstpass.items():
if len(servers) == 1 and use_hostnames:
server = servers[0]
hostvars[name] = dict(
ansible_ssh_host=server['interface_ip'],
openstack=server)
for group in get_groups_from_server(server, namegroup=False):
groups[group].append(server['name'])
else:
for server in servers:
server_id = server['id']
hostvars[server_id] = dict(
ansible_ssh_host=server['interface_ip'],
openstack=server)
for group in get_groups_from_server(server, namegroup=True):
groups[group].append(server_id)
groups['_meta'] = {'hostvars': hostvars}
return groups
def is_cache_stale(cache_file, cache_expiration_time):
def is_cache_stale(cache_file, cache_expiration_time, refresh=False):
''' Determines if cache file has expired, or if it is still valid '''
if refresh:
return True
if os.path.isfile(cache_file):
mod_time = os.path.getmtime(cache_file)
current_time = time.time()
@ -169,14 +198,24 @@ def main():
try:
config_files = os_client_config.config.CONFIG_FILES + CONFIG_FILES
shade.simple_logging(debug=args.debug)
inventory = shade.inventory.OpenStackInventory(
inventory_args = dict(
refresh=args.refresh,
config_files=config_files,
private=args.private,
)
if hasattr(shade.inventory.OpenStackInventory, 'extra_config'):
inventory_args.update(dict(
config_key='ansible',
config_defaults={
'use_hostnames': False,
'expand_hostvars': True,
}
))
inventory = shade.inventory.OpenStackInventory(**inventory_args)
if args.list:
output = get_host_groups(inventory)
output = get_host_groups(inventory, refresh=args.refresh)
elif args.host:
output = to_json(inventory.get_host(args.host))
print(output)

View file

@ -26,3 +26,6 @@ clouds:
username: stack
password: stack
project_name: stack
ansible:
use_hostnames: True
expand_hostvars: False

View file

@ -12,7 +12,7 @@ ansible-galaxy - manage roles using galaxy.ansible.com
SYNOPSIS
--------
ansible-galaxy [init|info|install|list|remove] [--help] [options] ...
ansible-galaxy [delete|import|info|init|install|list|login|remove|search|setup] [--help] [options] ...
DESCRIPTION
@ -20,7 +20,7 @@ DESCRIPTION
*Ansible Galaxy* is a shared repository for Ansible roles.
The ansible-galaxy command can be used to manage these roles,
or by creating a skeleton framework for roles you'd like to upload to Galaxy.
or for creating a skeleton framework for roles you'd like to upload to Galaxy.
COMMON OPTIONS
--------------
@ -29,7 +29,6 @@ COMMON OPTIONS
Show a help message related to the given sub-command.
INSTALL
-------
@ -145,6 +144,204 @@ The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
SEARCH
------
The *search* sub-command returns a filtered list of roles found on the remote
server.
USAGE
~~~~~
$ ansible-galaxy search [options] [searchterm1 searchterm2]
OPTIONS
~~~~~~~
*--galaxy-tags*::
Provide a comma separated list of Galaxy Tags on which to filter.
*--platforms*::
Provide a comma separated list of Platforms on which to filter.
*--author*::
Specify the username of a Galaxy contributor on which to filter.
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
INFO
----
The *info* sub-command shows detailed information for a specific role.
Details returned about the role include information from the local copy
as well as information from galaxy.ansible.com.
USAGE
~~~~~
$ ansible-galaxy info [options] role_name[, version]
OPTIONS
~~~~~~~
*-p* 'ROLES_PATH', *--roles-path=*'ROLES_PATH'::
The path to the directory containing your roles. The default is the *roles_path*
configured in your *ansible.cfg* file (/etc/ansible/roles if not configured)
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
LOGIN
-----
The *login* sub-command is used to authenticate with galaxy.ansible.com.
Authentication is required to use the import, delete and setup commands.
It will authenticate the user, retrieve a token from Galaxy, and store it
in the user's home directory.
USAGE
~~~~~
$ ansible-galaxy login [options]
The *login* sub-command prompts for a *GitHub* username and password. It does
NOT send your password to Galaxy. It actually authenticates with GitHub and
creates a personal access token. It then sends the personal access token to
Galaxy, which in turn verifies that you are you and returns a Galaxy access
token. After authentication completes the *GitHub* personal access token is
destroyed.
If you do not wish to use your GitHub password, or if you have two-factor
authentication enabled with GitHub, use the *--github-token* option to pass a
personal access token that you create. Log into GitHub, go to Settings and
click on Personal Access Token to create a token.
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
*--github-token*::
Authenticate using a *GitHub* personal access token rather than a password.
IMPORT
------
Import a role from *GitHub* to galaxy.ansible.com. Requires that the user first
authenticate with galaxy.ansible.com using the *login* subcommand.
USAGE
~~~~~
$ ansible-galaxy import [options] github_user github_repo
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
*--branch*::
Provide a specific branch to import. When a branch is not specified the
branch found in meta/main.yml is used. If no branch is specified in
meta/main.yml, the repo's default branch (usually master) is used.
DELETE
------
The *delete* sub-command will delete a role from galaxy.ansible.com. Requires
that the user first authenticate with galaxy.ansible.com using the *login* subcommand.
USAGE
~~~~~
$ ansible-galaxy delete [options] github_user github_repo
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
SETUP
-----
The *setup* sub-command creates an integration point for *Travis CI*, enabling
galaxy.ansible.com to receive notifications from *Travis* on build completion.
Requires that the user first authenticate with galaxy.ansible.com using the
*login* subcommand.
USAGE
~~~~~
$ ansible-galaxy setup [options] source github_user github_repo secret
* Use *travis* as the source value. In the future additional source values may
be added.
* Provide your *Travis* user token as the secret. The token is not stored by
galaxy.ansible.com. A hash is created using github_user, github_repo
and your token. The hash value is what actually gets stored.
OPTIONS
~~~~~~~
*-c*, *--ignore-certs*::
Ignore TLS certificate errors.
*-s*, *--server*::
Override the default server https://galaxy.ansible.com.
--list::
Show your configured integrations. Provides the ID of each integration,
which can be used with the --remove option.
--remove::
Remove a specific integration. Provide the ID of the integration to
be removed.
AUTHOR
------

View file

@ -95,6 +95,10 @@ Force running of playbook even if unable to update playbook repository. This
can be useful, for example, to enforce run-time state when a network
connection may not always be up or possible.
*--full*::
Do a full clone of the repository. By default ansible-pull will do a shallow clone based on the last revision.
*-h*, *--help*::
Show the help message and exit.

View file

@ -15,6 +15,7 @@
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import print_function
__docformat__ = 'restructuredtext'
@ -24,9 +25,9 @@ import traceback
try:
from sphinx.application import Sphinx
except ImportError:
print "#################################"
print "Dependency missing: Python Sphinx"
print "#################################"
print("#################################")
print("Dependency missing: Python Sphinx")
print("#################################")
sys.exit(1)
import os
@ -40,7 +41,7 @@ class SphinxBuilder(object):
"""
Run the DocCommand.
"""
print "Creating html documentation ..."
print("Creating html documentation ...")
try:
buildername = 'html'
@ -69,10 +70,10 @@ class SphinxBuilder(object):
app.builder.build_all()
except ImportError, ie:
except ImportError:
traceback.print_exc()
except Exception, ex:
print >> sys.stderr, "FAIL! exiting ... (%s)" % ex
except Exception as ex:
print("FAIL! exiting ... (%s)" % ex, file=sys.stderr)
def build_docs(self):
self.app.builder.build_all()
@ -83,9 +84,9 @@ def build_rst_docs():
if __name__ == '__main__':
if '-h' in sys.argv or '--help' in sys.argv:
print "This script builds the html documentation from rst/asciidoc sources.\n"
print " Run 'make docs' to build everything."
print " Run 'make viewdocs' to build and then preview in a web browser."
print("This script builds the html documentation from rst/asciidoc sources.\n")
print(" Run 'make docs' to build everything.")
print(" Run 'make viewdocs' to build and then preview in a web browser.")
sys.exit(0)
build_rst_docs()
@ -93,4 +94,4 @@ if __name__ == '__main__':
if "view" in sys.argv:
import webbrowser
if not webbrowser.open('htmlout/index.html'):
print >> sys.stderr, "Could not open on your webbrowser."
print("Could not open on your webbrowser.", file=sys.stderr)

View file

@ -1,5 +1,5 @@
Ansible Privilege Escalation
++++++++++++++++++++++++++++
Become (Privilege Escalation)
+++++++++++++++++++++++++++++
Ansible can use existing privilege escalation systems to allow a user to execute tasks as another.
@ -7,17 +7,17 @@ Ansible can use existing privilege escalation systems to allow a user to execute
Become
``````
Before 1.9 Ansible mostly allowed the use of sudo and a limited use of su to allow a login/remote user to become a different user
and execute tasks, create resources with the 2nd user's permissions. As of 1.9 'become' supersedes the old sudo/su, while still
being backwards compatible. This new system also makes it easier to add other privilege escalation tools like pbrun (Powerbroker),
pfexec and others.
Before 1.9 Ansible mostly allowed the use of `sudo` and a limited use of `su` to allow a login/remote user to become a different user
and execute tasks, create resources with the 2nd user's permissions. As of 1.9 `become` supersedes the old sudo/su, while still
being backwards compatible. This new system also makes it easier to add other privilege escalation tools like `pbrun` (Powerbroker),
`pfexec` and others.
New directives
--------------
become
equivalent to adding 'sudo:' or 'su:' to a play or task, set to 'true'/'yes' to activate privilege escalation
equivalent to adding `sudo:` or `su:` to a play or task, set to 'true'/'yes' to activate privilege escalation
become_user
equivalent to adding 'sudo_user:' or 'su_user:' to a play or task, set to user with desired privileges

View file

@ -11,6 +11,7 @@ Learn how to build modules of your own in any language, and also how to extend A
developing_modules
developing_plugins
developing_test_pr
developing_releases
Developers will also likely be interested in the fully-discoverable REST API in :doc:`tower`. It's great for embedding Ansible in all manner of applications.

View file

@ -17,11 +17,67 @@ This chapter discusses the Python API.
.. _python_api:
Python API
----------
The Python API is very powerful, and is how the ansible CLI and ansible-playbook
are implemented.
are implemented. In version 2.0 the core of Ansible was rewritten, and the API was redesigned along with it.
.. _python_api_20:
Python API 2.0
--------------
In 2.0 things get a bit more complicated to start, but you end up with much more discrete and readable classes::
#!/usr/bin/python2
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars import VariableManager
from ansible.inventory import Inventory
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
Options = namedtuple('Options', ['connection','module_path', 'forks', 'remote_user', 'private_key_file', 'ssh_common_args', 'ssh_extra_args', 'sftp_extra_args', 'scp_extra_args', 'become', 'become_method', 'become_user', 'verbosity', 'check'])
# initialize needed objects
variable_manager = VariableManager()
loader = DataLoader()
options = Options(connection='local', module_path='/path/to/mymodules', forks=100, remote_user=None, private_key_file=None, ssh_common_args=None, ssh_extra_args=None, sftp_extra_args=None, scp_extra_args=None, become=None, become_method=None, become_user=None, verbosity=None, check=False)
passwords = dict(vault_pass='secret')
# create inventory and pass to var manager
inventory = Inventory(loader=loader, variable_manager=variable_manager, host_list='localhost')
variable_manager.set_inventory(inventory)
# create play with tasks
play_source = dict(
name = "Ansible Play",
hosts = 'localhost',
gather_facts = 'no',
tasks = [ dict(action=dict(module='debug', args=dict(msg='Hello Galaxy!'))) ]
)
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
# actually run it
tqm = None
try:
tqm = TaskQueueManager(
inventory=inventory,
variable_manager=variable_manager,
loader=loader,
options=options,
passwords=passwords,
stdout_callback='default',
)
result = tqm.run(play)
finally:
if tqm is not None:
tqm.cleanup()
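The integer returned by ``tqm.run()`` follows the same convention as the CLI exit codes, with 0 meaning success; a minimal check after the run could look like this (a sketch, assuming you only care about overall success)::

    # tqm.run() returns an integer status code; 0 means the play
    # completed without failed or unreachable hosts.
    if result != 0:
        print("Play did not complete successfully (rc=%d)" % result)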
.. _python_api_old:
Python API pre 2.0
------------------
It's pretty simple::
@ -51,7 +107,7 @@ expressed in the :doc:`modules` documentation.::
A module can return any type of JSON data it wants, so Ansible can
be used as a framework to rapidly build powerful applications and scripts.
.. _detailed_api_example:
.. _detailed_api_old_example:
Detailed API Example
````````````````````
@ -87,9 +143,9 @@ The following script prints out the uptime information for all hosts::
for (hostname, result) in results['dark'].items():
print "%s >>> %s" % (hostname, result)
Advanced programmers may also wish to read the source to ansible itself, for
it uses the Runner() API (with all available options) to implement the
command line tools ``ansible`` and ``ansible-playbook``.
Advanced programmers may also wish to read the source to ansible itself,
for it uses the API (with all available options) to implement the ``ansible``
command line tools (``lib/ansible/cli/``).
.. seealso::

View file

@ -219,7 +219,7 @@ this, just have the module return a `ansible_facts` key, like so, along with oth
}
These 'facts' will be available to all statements called after that module (but not before) in the playbook.
A good idea might be make a module called 'site_facts' and always call it at the top of each playbook, though
A good idea might be to make a module called 'site_facts' and always call it at the top of each playbook, though
we're always open to improving the selection of core facts in Ansible as well.
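As a minimal illustration, the JSON a fact-returning module emits simply carries an `ansible_facts` key alongside its usual fields (the fact names below are made up)::

    # Sketch of a module result that returns facts; the fact names
    # here are hypothetical.
    import json

    result = {
        "changed": False,
        "ansible_facts": {
            "site_datacenter": "us-east-1",
            "site_role": "webserver",
        },
    }
    print(json.dumps(result))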
.. _common_module_boilerplate:

View file

@ -1,55 +1,60 @@
Ansible Galaxy
++++++++++++++
"Ansible Galaxy" can either refer to a website for sharing and downloading Ansible roles, or a command line tool that helps work with roles.
"Ansible Galaxy" can either refer to a website for sharing and downloading Ansible roles, or a command line tool for managing and creating roles.
.. contents:: Topics
The Website
```````````
The website `Ansible Galaxy <https://galaxy.ansible.com>`_, is a free site for finding, downloading, rating, and reviewing all kinds of community developed Ansible roles and can be a great way to get a jumpstart on your automation projects.
The website `Ansible Galaxy <https://galaxy.ansible.com>`_ is a free site for finding, downloading, and sharing community developed Ansible roles. Downloading roles from Galaxy is a great way to jumpstart your automation projects.
You can sign up with social auth and use the download client 'ansible-galaxy' which is included in Ansible 1.4.2 and later.
Access the Galaxy web site using GitHub OAuth, and to install roles use the 'ansible-galaxy' command line tool included in Ansible 1.4.2 and later.
Read the "About" page on the Galaxy site for more information.
The ansible-galaxy command line tool
````````````````````````````````````
The command line ansible-galaxy has many different subcommands.
The ansible-galaxy command has many different sub-commands for managing roles both locally and at `galaxy.ansible.com <https://galaxy.ansible.com>`_.
.. note::
The search, login, import, delete, and setup commands in the Ansible 2.0 version of ansible-galaxy require access to the
2.0 Beta release of the Galaxy web site available at `https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_.
Use the ``--server`` option to access the beta site. For example::
$ ansible-galaxy search --server https://galaxy-qa.ansible.com mysql --author geerlingguy
Additionally, you can define a server in ansible.cfg::
[galaxy]
server=https://galaxy-qa.ansible.com
Installing Roles
----------------
The most obvious is downloading roles from the Ansible Galaxy website::
The most obvious use of the ansible-galaxy command is downloading roles from `the Ansible Galaxy website <https://galaxy.ansible.com>`_::
ansible-galaxy install username.rolename
.. _galaxy_cli_roles_path:
$ ansible-galaxy install username.rolename
roles_path
===============
==========
You can specify a particular directory where you want the downloaded roles to be placed::
ansible-galaxy install username.role -p ~/Code/ansible_roles/
$ ansible-galaxy install username.role -p ~/Code/ansible_roles/
This can be useful if you have a master folder that contains ansible galaxy roles shared across several projects. The default is the roles_path configured in your ansible.cfg file (/etc/ansible/roles if not configured).
Building out Role Scaffolding
-----------------------------
It can also be used to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires::
ansible-galaxy init rolename
Installing Multiple Roles From A File
-------------------------------------
=====================================
To install multiple roles, the ansible-galaxy CLI can be fed a requirements file. All versions of ansible allow the following syntax for installing roles from the Ansible Galaxy website::
ansible-galaxy install -r requirements.txt
$ ansible-galaxy install -r requirements.txt
Where the requirements.txt looks like::
@ -64,7 +69,7 @@ To request specific versions (tags) of a role, use this syntax in the roles file
Available versions will be listed on the Ansible Galaxy webpage for that role.
Advanced Control over Role Requirements Files
---------------------------------------------
=============================================
For more advanced control over where to download roles from, including support for remote repositories, Ansible 1.8 and later support a new YAML format for the role requirements file, which must end in a 'yml' extension. It works like this::
@ -77,14 +82,14 @@ And here's an example showing some specific version downloads from multiple sour
# from galaxy
- src: yatesr.timezone
# from github
# from GitHub
- src: https://github.com/bennojoy/nginx
# from github installing to a relative path
# from GitHub installing to a relative path
- src: https://github.com/bennojoy/nginx
path: vagrant/roles/
# from github, overriding the name and specifying a specific tag
# from GitHub, overriding the name and specifying a specific tag
- src: https://github.com/bennojoy/nginx
version: master
name: nginx_role
@ -93,15 +98,15 @@ And here's an example showing some specific version downloads from multiple sour
- src: https://some.webserver.example.com/files/master.tar.gz
name: http-role
# from bitbucket, if bitbucket happens to be operational right now :)
# from Bitbucket
- src: git+http://bitbucket.org/willthames/git-ansible-galaxy
version: v1.4
# from bitbucket, alternative syntax and caveats
# from Bitbucket, alternative syntax and caveats
- src: http://bitbucket.org/willthames/hg-ansible-galaxy
scm: hg
# from gitlab or other git-based scm
# from GitLab or other git-based scm
- src: git@gitlab.company.com:mygroup/ansible-base.git
scm: git
version: 0.1.0
@ -121,3 +126,283 @@ Roles pulled from galaxy work as with other SCM sourced roles above. To download
`irc.freenode.net <http://irc.freenode.net>`_
#ansible IRC chat channel
Building Role Scaffolding
-------------------------
Use the init command to initialize the base structure of a new role, saving time on creating the various directories and main.yml files a role requires::
$ ansible-galaxy init rolename
The above will create the following directory structure in the current working directory:
::
README.md
.travis.yml
defaults/
main.yml
files/
handlers/
main.yml
meta/
main.yml
templates/
tests/
inventory
test.yml
vars/
main.yml
.. note::
.travis.yml and tests/ are new in Ansible 2.0
If a directory matching the name of the role already exists in the current working directory, the init command will result in an error. To ignore the error, use the --force option. Force will create the above subdirectories and files, replacing anything that matches.
Search for Roles
----------------
The search command provides for querying the Galaxy database, allowing for searching by tags, platforms, author and multiple keywords. For example:
::
$ ansible-galaxy search elasticsearch --author geerlingguy
The search command will return a list of the first 1000 results matching your search:
::
Found 2 roles matching your search:
Name Description
---- -----------
geerlingguy.elasticsearch Elasticsearch for Linux.
geerlingguy.elasticsearch-curator Elasticsearch curator for Linux.
.. note::
The format of results pictured here is new in Ansible 2.0.
Get More Information About a Role
---------------------------------
Use the info command to view more detail about a specific role:
::
$ ansible-galaxy info username.role_name
This returns everything found in Galaxy for the role:
::
Role: username.rolename
description: Installs and configures a thing, a distributed, highly available NoSQL thing.
active: True
commit: c01947b7bc89ebc0b8a2e298b87ab416aed9dd57
commit_message: Adding travis
commit_url: https://github.com/username/repo_name/commit/c01947b7bc89ebc0b8a2e298b87ab
company: My Company, Inc.
created: 2015-12-08T14:17:52.773Z
download_count: 1
forks_count: 0
github_branch:
github_repo: repo_name
github_user: username
id: 6381
is_valid: True
issue_tracker_url:
license: Apache
min_ansible_version: 1.4
modified: 2015-12-08T18:43:49.085Z
namespace: username
open_issues_count: 0
path: /Users/username/projects/roles
scm: None
src: username.repo_name
stargazers_count: 0
travis_status_url: https://travis-ci.org/username/repo_name.svg?branch=master
version:
watchers_count: 1
List Installed Roles
--------------------
The list command shows the name and version of each role installed in roles_path.
::
$ ansible-galaxy list
- chouseknecht.role-install_mongod, master
- chouseknecht.test-role-1, v1.0.2
- chrismeyersfsu.role-iptables, master
- chrismeyersfsu.role-required_vars, master
Remove an Installed Role
------------------------
The remove command will delete a role from roles_path:
::
$ ansible-galaxy remove username.rolename
Authenticate with Galaxy
------------------------
To use the import, delete and setup commands, authentication with Galaxy is required. The login command will authenticate the user, retrieve a token from Galaxy, and store it in the user's home directory.
::
$ ansible-galaxy login
We need your Github login to identify you.
This information will not be sent to Galaxy, only to api.github.com.
The password will not be displayed.
Use --github-token if you do not want to enter your password.
Github Username: dsmith
Password for dsmith:
Succesfully logged into Galaxy as dsmith
As depicted above, the login command prompts for a GitHub username and password. It does NOT send your password to Galaxy. It actually authenticates with GitHub and creates a personal access token. It then sends the personal access token to Galaxy, which in turn verifies that you are you and returns a Galaxy access token. After authentication completes the GitHub personal access token is destroyed.
If you do not wish to use your GitHub password, or if you have two-factor authentication enabled with GitHub, use the --github-token option to pass a personal access token that you create. Log into GitHub, go to Settings and click on Personal Access Token to create a token.
.. note::
The login command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Import a Role
-------------
Roles can be imported using ansible-galaxy. The import command expects that the user previously authenticated with Galaxy using the login command.
Import any GitHub repo you have access to:
::
$ ansible-galaxy import github_user github_repo
By default the command will wait for the role to be imported by Galaxy, displaying the results as the import progresses:
::
Successfully submitted import request 41
Starting import 41: role_name=myrole repo=githubuser/ansible-role-repo ref=
Retrieving Github repo githubuser/ansible-role-repo
Accessing branch: master
Parsing and validating meta/main.yml
Parsing galaxy_tags
Parsing platforms
Adding dependencies
Parsing and validating README.md
Adding repo tags as role versions
Import completed
Status SUCCESS : warnings=0 errors=0
Use the --branch option to import a specific branch. If not specified, the default branch for the repo will be used.
If the --no-wait option is present, the command will not wait for results. Results of the most recent import for any of your roles are available on the Galaxy web site under My Imports.
.. note::
The import command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Delete a Role
-------------
Remove a role from the Galaxy web site using the delete command. You can delete any role that you have access to in GitHub. The delete command expects that the user previously authenticated with Galaxy using the login command.
::
$ ansible-galaxy delete github_user github_repo
This only removes the role from Galaxy. It does not impact the actual GitHub repo.
.. note::
The delete command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
Setup Travis Integrations
--------------------------
Using the setup command you can enable notifications from `travis <http://travis-ci.org>`_. The setup command expects that the user previously authenticated with Galaxy using the login command.
::
$ ansible-galaxy setup travis github_user github_repo xxxtravistokenxxx
Added integration for travis github_user/github_repo
The setup command requires your Travis token. The Travis token is not stored in Galaxy. It is used along with the GitHub username and repo to create a hash as described in `the Travis documentation <https://docs.travis-ci.com/user/notifications/>`_. The calculated hash is stored in Galaxy and used to verify notifications received from Travis.
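As a rough sketch of that scheme (an illustration based on the Travis documentation, not Galaxy's actual code), the stored value can be reproduced like this::

    # Hash of "github_user/github_repo" concatenated with the Travis
    # token, per the Travis webhook docs; a sketch, not Galaxy's code.
    import hashlib

    def travis_auth_hash(github_user, github_repo, travis_token):
        payload = "%s/%s%s" % (github_user, github_repo, travis_token)
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()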
The setup command enables Galaxy to respond to notifications. Follow the `Travis getting started guide <https://docs.travis-ci.com/user/getting-started/>`_ to enable the Travis build process for the role repository.
When you create your .travis.yml file, add the following to cause Travis to notify Galaxy when a build completes:
::
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/
.. note::
The setup command in Ansible 2.0 requires using the Galaxy 2.0 Beta site. Use the ``--server`` option to access
`https://galaxy-qa.ansible.com <https://galaxy-qa.ansible.com>`_. You can also add a *server* definition in the [galaxy]
section of your ansible.cfg file.
List Travis Integrations
========================
Use the --list option to display your Travis integrations:
::
$ ansible-galaxy setup --list
ID Source Repo
---------- ---------- ----------
2 travis github_user/github_repo
1 travis github_user/github_repo
Remove Travis Integrations
==========================
Use the --remove option to disable and remove a Travis integration:
::
$ ansible-galaxy setup --remove ID
Provide the ID of the integration you want disabled. Use the --list option to get the ID.

View file

@ -178,8 +178,8 @@ Now to the fun part. We create a playbook to create our infrastructure we call i
- name: ensure firewall ports opened
cs_firewall:
ip_address: {{ public_ip }}
port: {{ item.port }}
ip_address: "{{ public_ip }}"
port: "{{ item.port }}"
cidr: "{{ item.cidr | default('0.0.0.0/0') }}"
with_items: cs_firewall
when: public_ip is defined

View file

@ -6,12 +6,13 @@ Using Vagrant and Ansible
Introduction
````````````
Vagrant is a tool to manage virtual machine environments, and allows you to
configure and use reproducible work environments on top of various
virtualization and cloud platforms. It also has integration with Ansible as a
provisioner for these virtual machines, and the two tools work together well.
`Vagrant <http://vagrantup.com/>`_ is a tool to manage virtual machine
environments, and allows you to configure and use reproducible work
environments on top of various virtualization and cloud platforms.
It also has integration with Ansible as a provisioner for these virtual
machines, and the two tools work together well.
This guide will describe how to use Vagrant and Ansible together.
This guide will describe how to use Vagrant 1.7+ and Ansible together.
If you're not familiar with Vagrant, you should visit `the documentation
<http://docs.vagrantup.com/v2/>`_.
@ -27,54 +28,48 @@ Vagrant Setup
The first step once you've installed Vagrant is to create a ``Vagrantfile``
and customize it to suit your needs. This is covered in detail in the Vagrant
documentation, but here is a quick example:
.. code-block:: bash
$ mkdir vagrant-test
$ cd vagrant-test
$ vagrant init precise32 http://files.vagrantup.com/precise32.box
This will create a file called Vagrantfile that you can edit to suit your
needs. The default Vagrantfile has a lot of comments. Here is a simplified
example that includes a section to use the Ansible provisioner:
documentation, but here is a quick example that includes a section to use the
Ansible provisioner to manage a single machine:
.. code-block:: ruby
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "precise32"
config.vm.box_url = "http://files.vagrantup.com/precise32.box"
config.vm.network :public_network
# This guide is optimized for Vagrant 1.7 and above.
# Although versions 1.6.x should behave very similarly, it is recommended
# to upgrade instead of disabling the requirement below.
Vagrant.require_version ">= 1.7.0"
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
end
Vagrant.configure(2) do |config|
config.vm.box = "ubuntu/trusty64"
# Disable the new default behavior introduced in Vagrant 1.7, to
# ensure that all Vagrant machines will use the same SSH key pair.
# See https://github.com/mitchellh/vagrant/issues/5005
config.ssh.insert_key = false
config.vm.provision "ansible" do |ansible|
ansible.verbose = "v"
ansible.playbook = "playbook.yml"
end
end
The Vagrantfile has a lot of options, but these are the most important ones.
Notice the ``config.vm.provision`` section that refers to an Ansible playbook
called ``playbook.yml`` in the same directory as the Vagrantfile. Vagrant runs
the provisioner once the virtual machine has booted and is ready for SSH
called ``playbook.yml`` in the same directory as the ``Vagrantfile``. Vagrant
runs the provisioner once the virtual machine has booted and is ready for SSH
access.
There are a lot of Ansible options you can configure in your ``Vagrantfile``.
Visit the `Ansible Provisioner documentation
<http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ for more
information.
.. code-block:: bash
$ vagrant up
This will start the VM and run the provisioning playbook.
This will start the VM, and run the provisioning playbook (on the first VM
startup).
There are a lot of Ansible options you can configure in your Vagrantfile. Some
particularly useful options are ``ansible.extra_vars``, ``ansible.sudo`` and
``ansible.sudo_user``, and ``ansible.host_key_checking`` which you can disable
to avoid SSH connection problems to new virtual machines.
Visit the `Ansible Provisioner documentation
<http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ for more
information.
To re-run a playbook on an existing VM, just run:
@ -82,7 +77,19 @@ To re-run a playbook on an existing VM, just run:
$ vagrant provision
This will re-run the playbook.
This will re-run the playbook against the existing VM.
Note that having the ``ansible.verbose`` option enabled will instruct Vagrant
to show the full ``ansible-playbook`` command used behind the scene, as
illustrated by this example:
.. code-block:: bash
$ PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --private-key=/home/someone/.vagrant.d/insecure_private_key --user=vagrant --connection=ssh --limit='machine1' --inventory-file=/home/someone/coding-in-a-project/.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
This information can be quite useful to debug integration issues and can also
be used to manually execute Ansible from a shell, as explained in the next
section.
.. _running_ansible:
@ -90,44 +97,58 @@ Running Ansible Manually
````````````````````````
Sometimes you may want to run Ansible manually against the machines. This is
pretty easy to do.
faster than kicking ``vagrant provision`` and pretty easy to do.
Vagrant automatically creates an inventory file for each Vagrant machine in
the same directory located under ``.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory``.
It configures the inventory file according to the SSH tunnel that Vagrant
automatically creates, and executes ``ansible-playbook`` with the correct
username and SSH key options to allow access. A typical automatically-created
inventory file may look something like this:
With our ``Vagrantfile`` example, Vagrant automatically creates an Ansible
inventory file in ``.vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory``.
This inventory is configured according to the SSH tunnel that Vagrant
automatically creates. A typical automatically-created inventory file for a
single machine environment may look something like this:
.. code-block:: none
# Generated by Vagrant
machine ansible_host=127.0.0.1 ansible_port=2222
.. include:: ansible_ssh_changes_note.rst
default ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222
If you want to run Ansible manually, you will want to make sure to pass
``ansible`` or ``ansible-playbook`` commands the correct arguments for the
username (usually ``vagrant``) and the SSH key (since Vagrant 1.7.0, this will be something like
``.vagrant/machines/[machine name]/[provider]/private_key``), and the autogenerated inventory file.
``ansible`` or ``ansible-playbook`` commands the correct arguments, at least
for the *username*, the *SSH private key* and the *inventory*.
Here is an example:
Here is an example using the Vagrant global insecure key (``config.ssh.insert_key``
must be set to ``false`` in your ``Vagrantfile``):
.. code-block:: bash
$ ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant playbook.yml
Note: Vagrant versions prior to 1.7.0 will use the private key located at ``~/.vagrant.d/insecure_private_key.``
$ ansible-playbook --private-key=~/.vagrant.d/insecure_private_key -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
Here is a second example using the random private key that Vagrant 1.7+
automatically configures for each new VM (each key is stored in a path like
``.vagrant/machines/[machine name]/[provider]/private_key``):
.. code-block:: bash
$ ansible-playbook --private-key=.vagrant/machines/default/virtualbox/private_key -u vagrant -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory playbook.yml
Advanced Usages
```````````````
The "Tips and Tricks" chapter of the `Ansible Provisioner documentation
<http://docs.vagrantup.com/v2/provisioning/ansible.html>`_ provides detailed information about more advanced Ansible features like:
- how to execute a playbook in parallel in a multi-machine environment
- how to integrate a local ``ansible.cfg`` configuration file
.. seealso::
`Vagrant Home <http://www.vagrantup.com/>`_
The Vagrant homepage with downloads
`Vagrant Documentation <http://docs.vagrantup.com/v2/>`_
Vagrant Documentation
`Ansible Provisioner <http://docs.vagrantup.com/v2/provisioning/ansible.html>`_
The Vagrant documentation for the Ansible provisioner
:doc:`playbooks`
An introduction to playbooks
`Vagrant Home <http://www.vagrantup.com/>`_
The Vagrant homepage with downloads
`Vagrant Documentation <http://docs.vagrantup.com/v2/>`_
Vagrant Documentation
`Ansible Provisioner <http://docs.vagrantup.com/v2/provisioning/ansible.html>`_
The Vagrant documentation for the Ansible provisioner
`Vagrant Issue Tracker <https://github.com/mitchellh/vagrant/issues?q=is%3Aopen+is%3Aissue+label%3Aprovisioners%2Fansible>`_
The open issues for the Ansible provisioner in the Vagrant project
:doc:`playbooks`
An introduction to playbooks

View file

@ -112,7 +112,7 @@ For example, using double rather than single quotes in the above example would
evaluate the variable on the box you were on.
So far we've been demoing simple command execution, but most Ansible modules usually do not work like
simple scripts. They make the remote system look like you state, and run the commands necessary to
simple scripts. They make the remote system look like a state, and run the commands necessary to
get it there. This is commonly referred to as 'idempotence', and is a core design goal of Ansible.
However, we also recognize that running arbitrary commands is equally important, so Ansible easily supports both.
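The idea behind idempotence is easy to sketch in a few lines of Python: look at the current state, change it only if it differs from the desired state, and report whether anything changed (a simplified illustration, not how any particular module is written)::

    # A minimal sketch of an idempotent operation: act only when the
    # observed state differs from the desired one.
    import os

    def ensure_directory(path):
        if os.path.isdir(path):
            return {"changed": False, "path": path}
        os.makedirs(path)
        return {"changed": True, "path": path}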

View file

@ -897,3 +897,19 @@ The normal behaviour is for operations to copy the existing context or use the u
The default list is: nfs,vboxsf,fuse,ramfs::
special_context_filesystems = nfs,vboxsf,fuse,ramfs,myspecialfs
Galaxy Settings
---------------
The following options can be set in the [galaxy] section of ansible.cfg:
server
======
Override the default Galaxy server value of https://galaxy.ansible.com. Useful if you have a hosted version of the Galaxy web app or want to point to the testing site https://galaxy-qa.ansible.com. It does not work against private, hosted repos, which Galaxy can use for fetching and installing roles.
ignore_certs
============
If set to *yes*, ansible-galaxy will not validate TLS certificates. Handy for testing against a server with a self-signed certificate.

View file

@ -111,9 +111,8 @@ If you use boto profiles to manage multiple AWS accounts, you can pass ``--profi
aws_access_key_id = <prod access key>
aws_secret_access_key = <prod secret key>
You can then run ``ec2.py --profile prod`` to get the inventory for the prod account, or run playbooks with: ``ansible-playbook -i 'ec2.py --profile prod' myplaybook.yml``.
Alternatively, use the ``AWS_PROFILE`` variable - e.g. ``AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml``
You can then run ``ec2.py --profile prod`` to get the inventory for the prod account; note that this option is not supported by ``ansible-playbook``, though.
Instead, use the ``AWS_PROFILE`` environment variable - e.g. ``AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml``
Since each region requires its own API call, if you are only using a small set of regions, feel free to edit ``ec2.ini`` and list only the regions you are interested in. There are other config options in ``ec2.ini`` including cache control, and destination variables.

View file

@ -27,12 +27,11 @@ What Version To Pick?
`````````````````````
Because it runs so easily from source and does not require any installation of software on remote
machines, many users will actually track the development version.
machines, many users will actually track the development version.
Ansible's release cycles are usually about two months long. Due to this
short release cycle, minor bugs will generally be fixed in the next release versus maintaining
backports on the stable branch. Major bugs will still have maintenance releases when needed, though
these are infrequent.
Ansible's release cycles are usually about four months long. Due to this short release cycle,
minor bugs will generally be fixed in the next release versus maintaining backports on the stable branch.
Major bugs will still have maintenance releases when needed, though these are infrequent.
If you are wishing to run the latest released version of Ansible and you are running Red Hat Enterprise Linux (TM), CentOS, Fedora, Debian, or Ubuntu, we recommend using the OS package manager.

View file

@ -26,12 +26,12 @@ Installing on the Control Machine
On a Linux control machine::
pip install https://github.com/diyan/pywinrm/archive/master.zip#egg=pywinrm
pip install "pywinrm>=0.1.1"
Active Directory Support
++++++++++++++++++++++++
If you wish to connect to domain accounts published through Active Directory (as opposed to local accounts created on the remote host), you will need to install the "python-kerberos" module and the MIT krb5 libraries it depends on.
If you wish to connect to domain accounts published through Active Directory (as opposed to local accounts created on the remote host), you will need to install the "python-kerberos" module on the Ansible control host (and the MIT krb5 libraries it depends on). The Ansible control host also requires a properly configured computer account in Active Directory.
Installing python-kerberos dependencies
---------------------------------------
@ -131,7 +131,9 @@ To test this, ping the windows host you want to control by name then use the ip
If you get different hostnames back than the name you originally pinged, speak to your active directory administrator and get them to check that DNS Scavenging is enabled and that DNS and DHCP are updating each other.
Check your ansible controller's clock is synchronised with your domain controller. Kerberos is time sensitive and a little clock drift can cause tickets not be granted.
Ensure that the Ansible controller has a properly configured computer account in the domain.
Check your Ansible controller's clock is synchronised with your domain controller. Kerberos is time sensitive and a little clock drift can cause tickets not to be granted.
Check you are using the real fully qualified domain name for the domain. Sometimes domains are commonly known to users by aliases. To check this run:
@ -165,6 +167,8 @@ In group_vars/windows.yml, define the following inventory variables::
ansible_password: SecretPasswordGoesHere
ansible_port: 5986
ansible_connection: winrm
# The following is necessary for Python 2.7.9+ when using default WinRM self-signed certificates:
ansible_winrm_server_cert_validation: ignore
Although Ansible is mostly an SSH-oriented system, Windows management will not happen over SSH (`yet <http://blogs.msdn.com/b/powershell/archive/2015/06/03/looking-forward-microsoft-support-for-secure-shell-ssh.aspx>`_).
@ -189,6 +193,7 @@ Since 2.0, the following custom inventory variables are also supported for addit
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint. Ansible uses ``/wsman`` by default.
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos authentication. If the username contains ``@``, Ansible will use the part of the username after ``@`` by default.
* ``ansible_winrm_transport``: Specify one or more transports as a comma-separated list. By default, Ansible will use ``kerberos,plaintext`` if the ``kerberos`` module is installed and a realm is defined, otherwise ``plaintext``.
* ``ansible_winrm_server_cert_validation``: Specify the server certificate validation mode (``ignore`` or ``validate``). Ansible defaults to ``validate`` on Python 2.7.9 and higher, which will result in certificate validation errors against the Windows self-signed certificates. Unless verifiable certificates have been configured on the WinRM listeners, this should be set to ``ignore``.
* ``ansible_winrm_*``: Any additional keyword arguments supported by ``winrm.Protocol`` may be provided.
.. _windows_system_prep:
@ -221,7 +226,7 @@ Getting to PowerShell 3.0 or higher
PowerShell 3.0 or higher is needed for most provided Ansible modules for Windows, and is also required to run the above setup script. Note that PowerShell 3.0 is only supported on Windows 7 SP1, Windows Server 2008 SP1, and later releases of Windows.
Looking at an ansible checkout, copy the `examples/scripts/upgrade_to_ps3.ps1 <https://github.com/cchurch/ansible/blob/devel/examples/scripts/upgrade_to_ps3.ps1>`_ script onto the remote host and run a PowerShell console as an administrator. You will now be running PowerShell 3 and can try connectivity again using the win_ping technique referenced above.
Looking at an Ansible checkout, copy the `examples/scripts/upgrade_to_ps3.ps1 <https://github.com/cchurch/ansible/blob/devel/examples/scripts/upgrade_to_ps3.ps1>`_ script onto the remote host and run a PowerShell console as an administrator. You will now be running PowerShell 3 and can try connectivity again using the win_ping technique referenced above.
.. _what_windows_modules_are_available:
@ -248,10 +253,10 @@ Note there are a few other Ansible modules that don't start with "win" that also
Developers: Supported modules and how it works
``````````````````````````````````````````````
Developing ansible modules are covered in a `later section of the documentation <http://docs.ansible.com/developing_modules.html>`_, with a focus on Linux/Unix.
What if you want to write Windows modules for ansible though?
Developing Ansible modules is covered in a `later section of the documentation <http://docs.ansible.com/developing_modules.html>`_, with a focus on Linux/Unix.
What if you want to write Windows modules for Ansible though?
For Windows, ansible modules are implemented in PowerShell. Skim those Linux/Unix module development chapters before proceeding.
For Windows, Ansible modules are implemented in PowerShell. Skim those Linux/Unix module development chapters before proceeding.
Windows modules live in a "windows/" subfolder in the Ansible "library/" subtree. For example, if a module is named
"library/windows/win_ping", there will be embedded documentation in the "win_ping" file, and the actual PowerShell code will live in a "win_ping.ps1" file. Take a look at the sources and this will make more sense.
@ -351,7 +356,7 @@ form of new modules, tweaks to existing modules, documentation, or something els
:doc:`developing_modules`
How to write modules
:doc:`playbooks`
Learning ansible's configuration management language
Learning Ansible's configuration management language
`List of Windows Modules <http://docs.ansible.com/list_of_windows_modules.html>`_
Windows specific module list, all implemented in PowerShell
`Mailing List <http://groups.google.com/group/ansible-project>`_

View file

@ -8,6 +8,6 @@ The source of these modules is hosted on GitHub in the `ansible-modules-core <ht
If you believe you have found a bug in a core module and are already running the latest stable or development version of Ansible, first look in the `issue tracker at github.com/ansible/ansible-modules-core <http://github.com/ansible/ansible-modules-core>`_ to see if a bug has already been filed. If not, we would be grateful if you would file one.
Should you have a question rather than a bug report, inquries are welcome on the `ansible-project google group <https://groups.google.com/forum/#!forum/ansible-project>`_ or on Ansible's "#ansible" channel, located on irc.freenode.net. Development oriented topics should instead use the similar `ansible-devel google group <https://groups.google.com/forum/#!forum/ansible-devel>`_.
Should you have a question rather than a bug report, inquiries are welcome on the `ansible-project google group <https://groups.google.com/forum/#!forum/ansible-project>`_ or on Ansible's "#ansible" channel, located on irc.freenode.net. Development oriented topics should instead use the similar `ansible-devel google group <https://groups.google.com/forum/#!forum/ansible-devel>`_.
Documentation updates for these modules can also be edited directly in the module itself and by submitting a pull request to the module source code, just look for the "DOCUMENTATION" block in the source tree.

View file

@ -130,6 +130,29 @@ Here is an example::
Note that you must have passphrase-less SSH keys or an ssh-agent configured for this to work; otherwise rsync
will need to ask for a passphrase.
.. _delegate_facts:
Delegated facts
```````````````
.. versionadded:: 2.0
By default, any facts gathered by a delegated task are assigned to the `inventory_hostname` (the current host) instead of the host which actually produced the facts (the delegated-to host).
In 2.0, the directive `delegate_facts` may be set to `True` to assign the task's gathered facts to the delegated host instead of the current one::
- hosts: app_servers
tasks:
- name: gather facts from db servers
setup:
delegate_to: "{{item}}"
delegate_facts: True
with_items: "{{groups['dbservers'}}"
The above will gather facts for the machines in the dbservers group and assign the facts to those machines and not to app_servers.
This way you can look up `hostvars['dbhost1']['default_ipv4_addresses'][0]` even though dbservers were not part of the play, or were left out by using `--limit`.
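A minimal sketch of consuming those facts afterwards (``dbhost1`` here is a hypothetical host in the ``dbservers`` group)::

    - hosts: app_servers
      tasks:
        - debug: msg="db address is {{ hostvars['dbhost1']['default_ipv4_addresses'][0] }}"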
.. _run_once:
Run Once
@ -159,13 +182,18 @@ This can be optionally paired with "delegate_to" to specify an individual host t
delegate_to: web01.example.org
When "run_once" is not used with "delegate_to" it will execute on the first host, as defined by inventory,
in the group(s) of hosts targeted by the play. e.g. webservers[0] if the play targeted "hosts: webservers".
in the group(s) of hosts targeted by the play - e.g. webservers[0] if the play targeted "hosts: webservers".
This approach is similar, although more concise and cleaner than applying a conditional to a task such as::
This approach is similar to applying a conditional to a task such as::
- command: /opt/application/upgrade_db.py
when: inventory_hostname == webservers[0]
.. note::
When used together with "serial", tasks marked as "run_once" will be run on one host in *each* serial batch.
If it's crucial that the task is run only once regardless of "serial" mode, use the
:code:`inventory_hostname == my_group_name[0]` construct.
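For example, a minimal sketch (hypothetical upgrade task; with ``serial: 2`` the task runs once per batch of two hosts)::

    - hosts: webservers
      serial: 2
      tasks:
        - command: /opt/application/upgrade_db.py
          run_once: true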
.. _local_playbooks:
Local Playbooks

View file

@ -31,7 +31,7 @@ The environment can also be stored in a variable, and accessed like so::
tasks:
- apt: name=cobbler state=installed
environment: proxy_env
environment: "{{proxy_env}}"
You can also use it at a playbook level::
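    # a sketch of the assumed shape (the hunk truncates here); reuses the proxy_env variable from above
    - hosts: all
      environment: "{{ proxy_env }}"
      tasks:
        - apt: name=cobbler state=installed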

View file

@ -514,20 +514,25 @@ To match strings against a regex, use the "match" or "search" filter::
To replace text in a string with regex, use the "regex_replace" filter::
# convert "ansible" to "able"
# convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
.. note:: Prior to Ansible 2.0, if the "regex_replace" filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments),
then you needed to escape backreferences (e.g. ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a regex, use the "regex_escape" filter::
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
To make use of one attribute from each item in a list of complex variables, use the "map" filter (see the `Jinja2 map() docs`_ for more)::
# get a comma-separated list of the mount points (e.g. "/,/mnt/stuff") on a host

View file

@ -386,6 +386,7 @@ won't need them for much else.
* Handler names live in a global namespace.
* If two handler tasks have the same name, only one will run.
`* <https://github.com/ansible/ansible/issues/4943>`_
* You cannot notify a handler that is defined inside of an include
Roles are described later on, but it's worthwhile to point out that:

View file

@ -240,6 +240,112 @@ If you're not using 2.0 yet, you can do something similar with the credstash too
debug: msg="Poor man's credstash lookup! {{ lookup('pipe', 'credstash -r us-west-1 get my-other-password') }}"
.. _dns_lookup:
The DNS Lookup (dig)
````````````````````
.. versionadded:: 1.9.0
.. warning:: This lookup depends on the `dnspython <http://www.dnspython.org/>`_
library.
The ``dig`` lookup runs queries against DNS servers to retrieve DNS records for
a specific name (*FQDN* - fully qualified domain name). It is possible to look up any DNS record in this manner.
There are a couple of different syntaxes that can be used to specify what record
should be retrieved, and for which name. It is also possible to explicitly
specify the DNS server(s) to use for lookups.
In its simplest form, the ``dig`` lookup plugin can be used to retrieve an IPv4
address (DNS ``A`` record) associated with an *FQDN*:
.. note:: If you need to obtain the ``AAAA`` record (IPv6 address), you must
specify the record type explicitly. Syntax for specifying the record
type is described below.
.. note:: The trailing dot in most of the examples listed is purely optional,
   but is specified for the sake of completeness/correctness.
::
- debug: msg="The IPv4 address for example.com. is {{ lookup('dig', 'example.com.')}}"
In addition to the (default) ``A`` record, it is also possible to specify a different
record type that should be queried. This can be done by either passing an
additional parameter of the form ``qtype=TYPE`` to the ``dig`` lookup, or by
appending ``/TYPE`` to the *FQDN* being queried. For example::
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com.', 'qtype=TXT') }}"
- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com./TXT') }}"
If multiple values are associated with the requested record, the results will be
returned as a comma-separated list. In such cases you may want to pass the option
``wantlist=True`` to the plugin, which will result in the record values being
returned as a list over which you can iterate later on::
- debug: msg="One of the MX records for gmail.com. is {{ item }}"
with_items: "{{ lookup('dig', 'gmail.com./MX', wantlist=True) }}"
For reverse DNS lookups (``PTR`` records), you can also use the convenience
syntax ``IP_ADDRESS/PTR``. The following three lines would produce the
same output::
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8/PTR') }}"
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa./PTR') }}"
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa.', 'qtype=PTR') }}"
By default, the lookup will rely on system-wide configured DNS servers for
performing the query. It is also possible to explicitly specify DNS servers to
query using the ``@DNS_SERVER_1,DNS_SERVER_2,...,DNS_SERVER_N`` notation. This
needs to be passed in as an additional parameter to the lookup. For example::
- debug: msg="Querying 8.8.8.8 for IPv4 address for example.com. produces {{ lookup('dig', 'example.com', '@8.8.8.8') }}"
In some cases the DNS records may hold a more complex data structure, or it may
be useful to obtain the results in the form of a dictionary for future
processing. The ``dig`` lookup supports parsing of a number of such records,
with the result being returned as a dictionary. This way it is possible to
easily access such nested data. This return format can be requested by
passing the ``flat=0`` option to the lookup. For example::
- debug: msg="XMPP service for gmail.com. is available at {{ item.target }} on port {{ item.port }}"
with_items: "{{ lookup('dig', '_xmpp-server._tcp.gmail.com./SRV', 'flat=0', wantlist=True) }}"
Take note that due to the way Ansible lookups work, you must pass the
``wantlist=True`` argument to the lookup; otherwise Ansible will report errors.
Currently, dictionary results are supported for the following records:
.. note:: *ALL* is not a record per se; merely the listed fields are available
   for any record results you retrieve in the form of a dictionary.
========== =============================================================================
Record Fields
---------- -----------------------------------------------------------------------------
*ALL* owner, ttl, type
A address
AAAA address
CNAME target
DNAME target
DLV algorithm, digest_type, key_tag, digest
DNSKEY flags, algorithm, protocol, key
DS algorithm, digest_type, key_tag, digest
HINFO cpu, os
LOC latitude, longitude, altitude, size, horizontal_precision, vertical_precision
MX preference, exchange
NAPTR order, preference, flags, service, regexp, replacement
NS target
NSEC3PARAM algorithm, flags, iterations, salt
PTR target
RP mbox, txt
SOA mname, rname, serial, refresh, retry, expire, minimum
SPF strings
SRV priority, weight, port, target
SSHFP algorithm, fp_type, fingerprint
TLSA usage, selector, mtype, cert
TXT strings
========== =============================================================================
.. _more_lookups:
More Lookups

View file

@ -132,7 +132,7 @@ Note that you cannot do variable substitution when including one playbook
inside another.
.. note::
You can not conditionally path the location to an include file,
You cannot conditionally pass the location to an include file,
like you can with 'vars_files'. If you find yourself needing to do
this, consider how you can restructure your playbook to be more
class/role oriented. This is to say you cannot use a 'fact' to
@ -191,11 +191,8 @@ This designates the following behaviors, for each role 'x':
- If roles/x/handlers/main.yml exists, handlers listed therein will be added to the play
- If roles/x/vars/main.yml exists, variables listed therein will be added to the play
- If roles/x/meta/main.yml exists, any role dependencies listed therein will be added to the list of roles (1.3 and later)
- Any copy tasks can reference files in roles/x/files/ without having to path them relatively or absolutely
- Any script tasks can reference scripts in roles/x/files/ without having to path them relatively or absolutely
- Any template tasks can reference files in roles/x/templates/ without having to path them relatively or absolutely
- Any include tasks can reference files in roles/x/tasks/ without having to path them relatively or absolutely
- Any copy, script, template or include tasks (in the role) can reference files in roles/x/files/ without having to path them relatively or absolutely
In Ansible 1.4 and later you can configure a roles_path to search for roles. Use this to check all of your common roles out to one location, and share
them easily between multiple playbook projects. See :doc:`intro_configuration` for details about how to set this up in ansible.cfg.

View file

@ -793,10 +793,10 @@ Basically, anything that goes into "role defaults" (the defaults folder inside t
.. rubric:: Footnotes
.. [1] Tasks in each role will see their own role's defaults tasks outside of roles will the last role's defaults
.. [2] Variables defined in inventory file or provided by dynamic inventory
.. [1] Tasks in each role will see their own role's defaults. Tasks defined outside of a role will see the last role's defaults.
.. [2] Variables defined in inventory file or provided by dynamic inventory.
.. note:: Within a any section, redefining a var will overwrite the previous instance.
.. note:: Within any section, redefining a var will overwrite the previous instance.
If multiple groups have the same variable, the last one loaded wins.
If you define a variable twice in a play's vars: section, the 2nd one wins.
.. note:: The previous describes the default config `hash_behaviour=replace`; switch to 'merge' to only partially overwrite.

View file

@ -0,0 +1,176 @@
Porting Guide
=============
Playbook
--------
* Backslash escapes: When specifying parameters in Jinja2 expressions in YAML
  dicts, backslashes sometimes needed to be escaped twice. This has been fixed
  in 2.0.x so that escaping once works. The following example shows how
  playbooks must be modified::
# Syntax in 1.9.x
- debug:
msg: "{{ 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') }}"
# Syntax in 2.0.x
- debug:
msg: "{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}"
# Output:
"msg": "test1 1\\3"
To make an escaped string that will work on all versions you have two options::
- debug: msg="{{ 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') }}"
This uses key=value escaping, which has not changed. The other option is to check for the Ansible version::
"{{ (ansible_version|version_compare('ge', '2.0'))|ternary( 'test1_junk 1\\3' | regex_replace('(.*)_junk (.*)', '\\1 \\2') , 'test1_junk 1\\\\3' | regex_replace('(.*)_junk (.*)', '\\\\1 \\\\2') ) }}"
* Trailing newline: When a string with a trailing newline was specified in the
  playbook via yaml dict format, the trailing newline was stripped. When
  specified in key=value format, the trailing newlines were kept. In v2, both
  methods of specifying the string will keep the trailing newlines. If you
  relied on the trailing newline being stripped, you can change your playbook
  using the following as an example::
# Syntax in 1.9.x
vars:
message: >
Testing
some things
tasks:
- debug:
msg: "{{ message }}"
# Syntax in 2.0.x
vars:
old_message: >
Testing
some things
message: "{{ old_messsage[:-1] }}"
- debug:
msg: "{{ message }}"
# Output
"msg": "Testing some things"
* When specifying complex args as a variable, the variable must use the full jinja2
  variable syntax ('{{var_name}}') - bare variable names are no longer accepted there.
In fact, even specifying args with variables has been deprecated, and will not be
allowed in future versions::
---
- hosts: localhost
connection: local
gather_facts: false
vars:
my_dirs:
- { path: /tmp/3a, state: directory, mode: 0755 }
- { path: /tmp/3b, state: directory, mode: 0700 }
tasks:
- file:
args: "{{item}}" # <- args here uses the full variable syntax
with_items: my_dirs
* porting task includes
* More dynamic. Corner-case formats that were not supposed to work now do not, as expected.
* variables defined in the yaml dict format https://github.com/ansible/ansible/issues/13324
* templating (variables in playbooks and template lookups) has improved with regard to keeping the original instead of turning everything into a string.
If you need the old behavior, quote the value to pass it around as a string.
* Empty variables and variables set to null in yaml are no longer converted to empty strings. They will retain the value of `None`.
You can restore the old behavior by overriding the `null_representation` setting to an empty string in your config file, or by setting the `ANSIBLE_NULL_REPRESENTATION` environment variable.
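For example, a minimal sketch (``foo`` is a hypothetical variable; ``none`` is a standard Jinja2 test)::

    vars:
      foo: null
    tasks:
      # prints True in 2.0; in 1.9.x foo would have been coerced to ""
      - debug: msg="{{ foo is none }}"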
* Extras callbacks must be whitelisted in ansible.cfg; copying them is no longer necessary, but the whitelisting step must be completed.
* dnf module has been rewritten. Some minor changes in behavior may be observed.
* win_updates has been rewritten and works as expected now.
Deprecated
----------
While all items listed here will show a deprecation warning message, they still work as they did in 1.9.x. Please note that they will be removed in 2.2 (Ansible always waits two major releases to remove a deprecated feature).
* Bare variables in `with_` loops should instead use the “{{var}}” syntax, which helps eliminate ambiguity.
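  For example, a minimal sketch (``somelist`` is a hypothetical list variable)::

    # deprecated bare variable
    with_items: somelist

    # preferred full syntax
    with_items: "{{ somelist }}"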
* The ansible-galaxy text format requirements file. Users should use the YAML format for requirements instead.
* Undefined variables within a `with_` loop's list currently do not interrupt the loop, but they do issue a warning; in the future, they will issue an error.
* Using variables for task parameters is unsafe and will be removed in a future version. For example::
- hosts: localhost
gather_facts: no
vars:
debug_params:
msg: "hello there"
tasks:
- debug: "{{debug_params}}"
* Host patterns should use a comma (,) or colon (:) instead of a semicolon (;) to separate hosts/groups in the pattern.
* Ranges specified in host patterns should use the [x:y] syntax, instead of [x-y].
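  For example, a minimal sketch of both pattern changes (hypothetical group names)::

    # deprecated
    - hosts: webservers;dbservers[0-1]

    # preferred
    - hosts: webservers:dbservers[0:1]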
* Playbooks using privilege escalation should always use “become*” options rather than the old su*/sudo* options.
* The “short form” for vars_prompt is no longer supported.
For example::
vars_prompt:
variable_name: "Prompt string"
* Specifying variables at the top level of a task include statement is no longer supported. For example::
- include: foo.yml
a: 1
Should now be::
- include: foo.yml
args:
a: 1
* Setting any_errors_fatal on a task is no longer supported. This should be set at the play level only.
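  For example, a minimal sketch of the play-level setting (hypothetical task)::

    - hosts: webservers
      any_errors_fatal: true
      tasks:
        - command: /opt/application/upgrade_db.py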
* Bare variables in the `environment` dictionary (for plays/tasks/etc.) are no longer supported. Variables specified there should use the full variable syntax: {{foo}}.
* Tags should no longer be specified with other parameters in a task include. Instead, they should be specified as an option on the task.
For example::
- include: foo.yml tags=a,b,c
Should be::
- include: foo.yml
tags: [a, b, c]
* The first_available_file option on tasks has been deprecated. Users should use the with_first_found option or lookup (first_found, …) plugin.
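  For example, a minimal sketch of the ``with_first_found`` replacement (hypothetical file names)::

    - template: src={{ item }} dest=/etc/myapp.conf
      with_first_found:
        - myapp.{{ ansible_distribution }}.conf
        - myapp.default.conf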
Porting plugins
===============
In ansible-1.9.x, you would generally copy an existing plugin to create a new one. Simply implementing the methods and attributes that the caller of the plugin expected made it a plugin of that type. In ansible-2.0, most plugins are implemented by subclassing a base class for each plugin type. This way the custom plugin does not need to contain methods which are not customized.
Lookup plugins
--------------
* lookup plugins ; import version
Connection plugins
------------------
* connection plugins
Action plugins
--------------
* action plugins
Callback plugins
----------------
* callback plugins
Porting custom scripts
======================
Custom scripts that used the ``ansible.runner.Runner`` API in 1.x have to be ported for 2.x. Please refer to:
https://github.com/ansible/ansible/blob/devel/docsite/rst/developing_api.rst

View file

@ -14,7 +14,6 @@
#inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
#remote_tmp = $HOME/.ansible/tmp
#pattern = *
#forks = 5
#poll_interval = 15
#sudo_user = root
@ -182,7 +181,7 @@
#no_log = False
# prevents logging of tasks, but only on the targets, data is still logged on the master/controller
#no_target_syslog = True
#no_target_syslog = False
# controls the compression level of variables sent to
# worker processes. At the default of 0, no compression

View file

@ -10,35 +10,35 @@
# Ex 1: Ungrouped hosts, specify before any group headers.
green.example.com
blue.example.com
192.168.100.1
192.168.100.10
## green.example.com
## blue.example.com
## 192.168.100.1
## 192.168.100.10
# Ex 2: A collection of hosts belonging to the 'webservers' group
[webservers]
alpha.example.org
beta.example.org
192.168.1.100
192.168.1.110
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
# If you have multiple hosts following a pattern you can specify
# them like this:
www[001:006].example.com
## www[001:006].example.com
# Ex 3: A collection of database servers in the 'dbservers' group
[dbservers]
db01.intranet.mydomain.net
db02.intranet.mydomain.net
10.25.1.56
10.25.1.57
## [dbservers]
##
## db01.intranet.mydomain.net
## db02.intranet.mydomain.net
## 10.25.1.56
## 10.25.1.57
# Here's another example of host ranges, this time there are no
# leading 0s:
db-[99:101]-node.example.com
## db-[99:101]-node.example.com

View file

@ -140,7 +140,7 @@ def list_modules(module_dir, depth=0):
if os.path.isdir(d):
res = list_modules(d, depth + 1)
for key in res.keys():
for key in list(res.keys()):
if key in categories:
categories[key] = merge_hash(categories[key], res[key])
res.pop(key, None)
@ -451,7 +451,7 @@ def main():
categories = list_modules(options.module_dir)
last_category = None
category_names = categories.keys()
category_names = list(categories.keys())
category_names.sort()
category_list_path = os.path.join(options.output_dir, "modules_by_category.rst")

View file

@ -19,5 +19,5 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
__version__ = '2.0.0'
__version__ = '2.0.0.2'
__author__ = 'Ansible, Inc.'

View file

@ -66,7 +66,7 @@ class CLI(object):
LESS_OPTS = 'FRSX' # -F (quit-if-one-screen) -R (allow raw ansi control chars)
# -S (chop long lines) -X (disable termcap init and de-init)
def __init__(self, args):
def __init__(self, args, callback=None):
"""
Base init method for all command line programs
"""
@ -75,6 +75,7 @@ class CLI(object):
self.options = None
self.parser = None
self.action = None
self.callback = callback
def set_action(self):
"""
@ -191,12 +192,9 @@ class CLI(object):
if runas_opts:
# Check for privilege escalation conflicts
if (op.su or op.su_user or op.ask_su_pass) and \
(op.sudo or op.sudo_user or op.ask_sudo_pass) or \
(op.su or op.su_user or op.ask_su_pass) and \
(op.become or op.become_user or op.become_ask_pass) or \
(op.sudo or op.sudo_user or op.ask_sudo_pass) and \
(op.become or op.become_user or op.become_ask_pass):
if (op.su or op.su_user) and (op.sudo or op.sudo_user) or \
(op.su or op.su_user) and (op.become or op.become_user) or \
(op.sudo or op.sudo_user) and (op.become or op.become_user):
self.parser.error("Sudo arguments ('--sudo', '--sudo-user', and '--ask-sudo-pass') "
"and su arguments ('-su', '--su-user', and '--ask-su-pass') "
@ -213,7 +211,7 @@ class CLI(object):
@staticmethod
def base_parser(usage="", output_opts=False, runas_opts=False, meta_opts=False, runtask_opts=False, vault_opts=False, module_opts=False,
async_opts=False, connect_opts=False, subset_opts=False, check_opts=False, inventory_opts=False, epilog=None, fork_opts=False):
async_opts=False, connect_opts=False, subset_opts=False, check_opts=False, inventory_opts=False, epilog=None, fork_opts=False, runas_prompt_opts=False):
''' create an options parser for most ansible scripts '''
# TODO: implement epilog parsing
@ -246,14 +244,15 @@ class CLI(object):
help="specify number of parallel processes to use (default=%s)" % C.DEFAULT_FORKS)
if vault_opts:
parser.add_option('--ask-vault-pass', default=False, dest='ask_vault_pass', action='store_true',
parser.add_option('--ask-vault-pass', default=C.DEFAULT_ASK_VAULT_PASS, dest='ask_vault_pass', action='store_true',
help='ask for vault password')
parser.add_option('--vault-password-file', default=C.DEFAULT_VAULT_PASSWORD_FILE, dest='vault_password_file',
help="vault password file", action="callback", callback=CLI.expand_tilde, type=str)
parser.add_option('--new-vault-password-file', dest='new_vault_password_file',
help="new vault password file for rekey", action="callback", callback=CLI.expand_tilde, type=str)
parser.add_option('--output', default=None, dest='output_file',
help='output file name for encrypt or decrypt; use - for stdout')
help='output file name for encrypt or decrypt; use - for stdout',
action="callback", callback=CLI.expand_tilde, type=str)
if subset_opts:
parser.add_option('-t', '--tags', dest='tags', default='all',
@ -269,10 +268,6 @@ class CLI(object):
if runas_opts:
# priv user defaults to root later on to enable detecting when this option was given here
parser.add_option('-K', '--ask-sudo-pass', default=C.DEFAULT_ASK_SUDO_PASS, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password (deprecated, use become)')
parser.add_option('--ask-su-pass', default=C.DEFAULT_ASK_SU_PASS, dest='ask_su_pass', action='store_true',
help='ask for su password (deprecated, use become)')
parser.add_option("-s", "--sudo", default=C.DEFAULT_SUDO, action="store_true", dest='sudo',
help="run operations with sudo (nopasswd) (deprecated, use become)")
parser.add_option('-U', '--sudo-user', dest='sudo_user', default=None,
@ -289,6 +284,12 @@ class CLI(object):
help="privilege escalation method to use (default=%s), valid choices: [ %s ]" % (C.DEFAULT_BECOME_METHOD, ' | '.join(C.BECOME_METHODS)))
parser.add_option('--become-user', default=None, dest='become_user', type='string',
help='run operations as this user (default=%s)' % C.DEFAULT_BECOME_USER)
if runas_opts or runas_prompt_opts:
parser.add_option('-K', '--ask-sudo-pass', default=C.DEFAULT_ASK_SUDO_PASS, dest='ask_sudo_pass', action='store_true',
help='ask for sudo password (deprecated, use become)')
parser.add_option('--ask-su-pass', default=C.DEFAULT_ASK_SU_PASS, dest='ask_su_pass', action='store_true',
help='ask for su password (deprecated, use become)')
parser.add_option('--ask-become-pass', default=False, dest='become_ask_pass', action='store_true',
help='ask for privilege escalation password')

View file

@ -70,7 +70,7 @@ class AdHocCLI(CLI):
help="module name to execute (default=%s)" % C.DEFAULT_MODULE_NAME,
default=C.DEFAULT_MODULE_NAME)
self.options, self.args = self.parser.parse_args()
self.options, self.args = self.parser.parse_args(self.args[1:])
if len(self.args) != 1:
raise AnsibleOptionsError("Missing target hosts")
@ -158,14 +158,18 @@ class AdHocCLI(CLI):
play_ds = self._play_ds(pattern, self.options.seconds, self.options.poll_interval)
play = Play().load(play_ds, variable_manager=variable_manager, loader=loader)
if self.options.one_line:
if self.callback:
cb = self.callback
elif self.options.one_line:
cb = 'oneline'
else:
cb = 'minimal'
run_tree=False
if self.options.tree:
C.DEFAULT_CALLBACK_WHITELIST.append('tree')
C.TREE_DIR = self.options.tree
run_tree=True
# now create a task queue manager to execute the play
self._tqm = None
@ -177,6 +181,8 @@ class AdHocCLI(CLI):
options=self.options,
passwords=passwords,
stdout_callback=cb,
run_additional_callbacks=C.DEFAULT_LOAD_CALLBACK_PLUGINS,
run_tree=run_tree,
)
result = self._tqm.run(play)
finally:

View file

@ -62,7 +62,7 @@ class DocCLI(CLI):
self.parser.add_option("-s", "--snippet", action="store_true", default=False, dest='show_snippet',
help='Show playbook snippet for specified module(s)')
self.options, self.args = self.parser.parse_args()
self.options, self.args = self.parser.parse_args(self.args[1:])
display.verbosity = self.options.verbosity
def run(self):
@ -90,7 +90,8 @@ class DocCLI(CLI):
for module in self.args:
try:
filename = module_loader.find_plugin(module)
# if the module lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
filename = module_loader.find_plugin(module, mod_type='.py')
if filename is None:
display.warning("module %s not found in %s\n" % (module, DocCLI.print_paths(module_loader)))
continue
@ -167,7 +168,8 @@ class DocCLI(CLI):
if module in module_docs.BLACKLIST_MODULES:
continue
filename = module_loader.find_plugin(module)
# if the module lives in a non-python file (eg, win_X.ps1), require the corresponding python file for docs
filename = module_loader.find_plugin(module, mod_type='.py')
if filename is None:
continue

View file

@ -22,10 +22,10 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import os.path
import sys
import yaml
import time
from collections import defaultdict
from jinja2 import Environment
@ -36,6 +36,8 @@ from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.login import GalaxyLogin
from ansible.galaxy.token import GalaxyToken
from ansible.playbook.role.requirement import RoleRequirement
try:
@ -44,14 +46,12 @@ except ImportError:
from ansible.utils.display import Display
display = Display()
class GalaxyCLI(CLI):
VALID_ACTIONS = ("init", "info", "install", "list", "remove", "search")
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url" )
VALID_ACTIONS = ("delete", "import", "info", "init", "install", "list", "login", "remove", "search", "setup")
def __init__(self, args):
self.api = None
self.galaxy = None
super(GalaxyCLI, self).__init__(args)
@ -67,7 +67,17 @@ class GalaxyCLI(CLI):
self.set_action()
# options specific to actions
if self.action == "info":
if self.action == "delete":
self.parser.set_usage("usage: %prog delete [options] github_user github_repo")
elif self.action == "import":
self.parser.set_usage("usage: %prog import [options] github_user github_repo")
self.parser.add_option('--no-wait', dest='wait', action='store_false', default=True,
help='Don\'t wait for import results.')
self.parser.add_option('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch (usually master)')
self.parser.add_option('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_user/github_repo.')
elif self.action == "info":
self.parser.set_usage("usage: %prog info [options] role_name[,version]")
elif self.action == "init":
self.parser.set_usage("usage: %prog init [options] role_name")
@ -88,31 +98,42 @@ class GalaxyCLI(CLI):
self.parser.set_usage("usage: %prog remove role1 role2 ...")
elif self.action == "list":
self.parser.set_usage("usage: %prog list [role_name]")
elif self.action == "login":
self.parser.set_usage("usage: %prog login [options]")
self.parser.add_option('--github-token', dest='token', default=None,
help='Identify with github token rather than username and password.')
elif self.action == "search":
self.parser.add_option('--platforms', dest='platforms',
help='list of OS platforms to filter by')
self.parser.add_option('--galaxy-tags', dest='tags',
help='list of galaxy tags to filter by')
self.parser.set_usage("usage: %prog search [<search_term>] [--galaxy-tags <galaxy_tag1,galaxy_tag2>] [--platforms platform]")
self.parser.add_option('--author', dest='author',
help='GitHub username')
self.parser.set_usage("usage: %prog search [searchterm1 searchterm2] [--galaxy-tags galaxy_tag1,galaxy_tag2] [--platforms platform1,platform2] [--author username]")
elif self.action == "setup":
self.parser.set_usage("usage: %prog setup [options] source github_user github_repo secret")
self.parser.add_option('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see ID values.')
self.parser.add_option('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
# options that apply to more than one action
if self.action != "init":
if not self.action in ("delete","import","init","login","setup"):
self.parser.add_option('-p', '--roles-path', dest='roles_path', default=C.DEFAULT_ROLES_PATH,
help='The path to the directory containing your roles. '
'The default is the roles_path configured in your '
'ansible.cfg file (/etc/ansible/roles if not configured)')
if self.action in ("info","init","install","search"):
self.parser.add_option('-s', '--server', dest='api_server', default="https://galaxy.ansible.com",
if self.action in ("import","info","init","install","login","search","setup","delete"):
self.parser.add_option('-s', '--server', dest='api_server', default=C.GALAXY_SERVER,
help='The API server destination')
self.parser.add_option('-c', '--ignore-certs', action='store_false', dest='validate_certs', default=True,
self.parser.add_option('-c', '--ignore-certs', action='store_true', dest='ignore_certs', default=False,
help='Ignore SSL certificate validation errors.')
if self.action in ("init","install"):
self.parser.add_option('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role')
# get options, args and galaxy object
self.options, self.args =self.parser.parse_args()
display.verbosity = self.options.verbosity
self.galaxy = Galaxy(self.options)
@ -120,15 +141,13 @@ class GalaxyCLI(CLI):
return True
def run(self):
super(GalaxyCLI, self).run()
# if not offline, get connect to galaxy api
if self.action in ("info","install", "search") or (self.action == 'init' and not self.options.offline):
api_server = self.options.api_server
self.api = GalaxyAPI(self.galaxy, api_server)
if not self.api:
raise AnsibleError("The API server (%s) is not responding, please try again later." % api_server)
if self.action in ("import","info","install","search","login","setup","delete") or \
(self.action == 'init' and not self.options.offline):
self.api = GalaxyAPI(self.galaxy)
self.execute()
@ -188,7 +207,7 @@ class GalaxyCLI(CLI):
"however it will reset any main.yml files that may have\n"
"been modified there already." % role_path)
# create the default README.md
# create default README.md
if not os.path.exists(role_path):
os.makedirs(role_path)
readme_path = os.path.join(role_path, "README.md")
@ -196,9 +215,16 @@ class GalaxyCLI(CLI):
f.write(self.galaxy.default_readme)
f.close()
# create default .travis.yml
travis = Environment().from_string(self.galaxy.default_travis).render()
f = open(os.path.join(role_path, '.travis.yml'), 'w')
f.write(travis)
f.close()
for dir in GalaxyRole.ROLE_DIRS:
dir_path = os.path.join(init_path, role_name, dir)
main_yml_path = os.path.join(dir_path, 'main.yml')
# create the directory if it doesn't exist already
if not os.path.exists(dir_path):
os.makedirs(dir_path)
@ -234,6 +260,20 @@ class GalaxyCLI(CLI):
f.write(rendered_meta)
f.close()
pass
elif dir == "tests":
# create tests/test.yml
inject = dict(
role_name = role_name
)
playbook = Environment().from_string(self.galaxy.default_test).render(inject)
f = open(os.path.join(dir_path, 'test.yml'), 'w')
f.write(playbook)
f.close()
# create tests/inventory
f = open(os.path.join(dir_path, 'inventory'), 'w')
f.write('localhost')
f.close()
elif dir not in ('files','templates'):
# just write a (mostly) empty YAML file for main.yml
f = open(main_yml_path, 'w')
@ -325,7 +365,7 @@ class GalaxyCLI(CLI):
for role in required_roles:
role = RoleRequirement.role_yaml_parse(role)
display.debug('found role %s in yaml file' % str(role))
display.vvv('found role %s in yaml file' % str(role))
if 'name' not in role and 'scm' not in role:
raise AnsibleError("Must specify name or src for role")
roles_left.append(GalaxyRole(self.galaxy, **role))
@ -348,7 +388,7 @@ class GalaxyCLI(CLI):
roles_left.append(GalaxyRole(self.galaxy, rname.strip()))
for role in roles_left:
display.debug('Installing role %s ' % role.name)
display.vvv('Installing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None and not force:
@ -458,21 +498,188 @@ class GalaxyCLI(CLI):
return 0
def execute_search(self):
page_size = 1000
search = None
if len(self.args) > 1:
raise AnsibleOptionsError("At most a single search term is allowed.")
elif len(self.args) == 1:
search = self.args.pop()
response = self.api.search_roles(search, self.options.platforms, self.options.tags)
if len(self.args):
terms = []
for i in range(len(self.args)):
terms.append(self.args.pop())
search = '+'.join(terms[::-1])
if 'count' in response:
display.display("Found %d roles matching your search:\n" % response['count'])
if not search and not self.options.platforms and not self.options.tags and not self.options.author:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=self.options.platforms,
tags=self.options.tags, author=self.options.author, page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color="yellow")
return True
data = ''
if 'results' in response:
for role in response['results']:
data += self._display_role_info(role)
if response['count'] > page_size:
data += ("\nFound %d roles matching your search. Showing first %s.\n" % (response['count'], page_size))
else:
data += ("\nFound %d roles matching your search:\n" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = " %%-%ds %%s\n" % name_len
data +='\n'
data += (format_str % ("Name", "Description"))
data += (format_str % ("----", "-----------"))
for role in response['results']:
data += (format_str % (role['username'] + '.' + role['name'],role['description']))
self.pager(data)
return True
def execute_login(self):
"""
Verify the user's identity via GitHub and retrieve an auth token from Galaxy.
"""
# Authenticate with github and retrieve a token
if self.options.token is None:
login = GalaxyLogin(self.galaxy)
github_token = login.create_github_token()
else:
github_token = self.options.token
galaxy_response = self.api.authenticate(github_token)
if self.options.token is None:
# Remove the token we created
login.remove_github_token()
# Store the Galaxy token
token = GalaxyToken()
token.set(galaxy_response['token'])
display.display("Succesfully logged into Galaxy as %s" % galaxy_response['username'])
return 0
def execute_import(self):
"""
Import a role into Galaxy
"""
colors = {
'INFO': 'normal',
'WARNING': 'yellow',
'ERROR': 'red',
'SUCCESS': 'green',
'FAILED': 'red'
}
if len(self.args) < 2:
raise AnsibleError("Expected a github_username and github_repository. Use --help.")
github_repo = self.args.pop()
github_user = self.args.pop()
if self.options.check_status:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo, reference=self.options.reference)
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user,github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color='yellow')
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'],t['summary_fields']['role']['name']), color='yellow')
display.display(u'\n' + "To properly namespace this role, remove each of the above and re-import %s/%s from scratch" % (github_user,github_repo),
color='yellow')
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not self.options.wait:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'],task[0]['github_repo']))
if self.options.check_status or self.options.wait:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
"""
Setup an integration from Github or Travis
"""
if self.options.setup_list:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color="green")
display.display("---------- ---------- ----------", color="green")
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']),color="green")
return 0
if self.options.remove_id:
# Remove a secret
self.api.remove_secret(self.options.remove_id)
display.display("Secret removed. Integrations using this secret will not longer work.", color="green")
return 0
if len(self.args) < 4:
raise AnsibleError("Missing one or more arguments. Expecting: source github_user github_repo secret")
return 0
secret = self.args.pop()
github_repo = self.args.pop()
github_user = self.args.pop()
source = self.args.pop()
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
"""
Delete a role from galaxy.ansible.com
"""
if len(self.args) < 2:
raise AnsibleError("Missing one or more arguments. Expected: github_user github_repo")
github_repo = self.args.pop()
github_user = self.args.pop()
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id,role.namespace,role.name))
display.display(resp['status'])
return True

View file

@ -30,6 +30,7 @@ from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.executor.playbook_executor import PlaybookExecutor
from ansible.inventory import Inventory
from ansible.parsing.dataloader import DataLoader
from ansible.playbook.play_context import PlayContext
from ansible.utils.vars import load_extra_vars
from ansible.vars import VariableManager
@ -72,7 +73,7 @@ class PlaybookCLI(CLI):
parser.add_option('--start-at-task', dest='start_at_task',
help="start the playbook at the task matching this name")
self.options, self.args = parser.parse_args()
self.options, self.args = parser.parse_args(self.args[1:])
self.parser = parser
@ -152,18 +153,10 @@ class PlaybookCLI(CLI):
for p in results:
display.display('\nplaybook: %s' % p['playbook'])
i = 1
for play in p['plays']:
if play.name:
playname = play.name
else:
playname = '#' + str(i)
msg = "\n PLAY: %s" % (playname)
mytags = set()
if self.options.listtags and play.tags:
mytags = mytags.union(set(play.tags))
msg += ' TAGS: [%s]' % (','.join(mytags))
for idx, play in enumerate(p['plays']):
msg = "\n play #%d (%s): %s" % (idx + 1, ','.join(play.hosts), play.name)
mytags = set(play.tags)
msg += '\tTAGS: [%s]' % (','.join(mytags))
if self.options.listhosts:
playhosts = set(inventory.get_hosts(play.hosts))
@ -173,23 +166,40 @@ class PlaybookCLI(CLI):
display.display(msg)
all_tags = set()
if self.options.listtags or self.options.listtasks:
taskmsg = ' tasks:'
taskmsg = ''
if self.options.listtasks:
taskmsg = ' tasks:\n'
all_vars = variable_manager.get_vars(loader=loader, play=play)
play_context = PlayContext(play=play, options=self.options)
for block in play.compile():
block = block.filter_tagged_tasks(play_context, all_vars)
if not block.has_tasks():
continue
j = 1
for task in block.block:
taskmsg += "\n %s" % task
if self.options.listtags and task.tags:
taskmsg += " TAGS: [%s]" % ','.join(mytags.union(set(task.tags)))
j = j + 1
if task.action == 'meta':
continue
all_tags.update(task.tags)
if self.options.listtasks:
cur_tags = list(mytags.union(set(task.tags)))
cur_tags.sort()
if task.name:
taskmsg += " %s" % task.get_name()
else:
taskmsg += " %s" % task.action
taskmsg += "\tTAGS: [%s]\n" % ', '.join(cur_tags)
if self.options.listtags:
cur_tags = list(mytags.union(all_tags))
cur_tags.sort()
taskmsg += " TASK TAGS: [%s]\n" % ', '.join(cur_tags)
display.display(taskmsg)
i = i + 1
return 0
else:
return results

View file

@ -64,18 +64,24 @@ class PullCLI(CLI):
subset_opts=True,
inventory_opts=True,
module_opts=True,
runas_prompt_opts=True,
)
# options unique to pull
self.parser.add_option('--purge', default=False, action='store_true', help='purge checkout after playbook run')
self.parser.add_option('--purge', default=False, action='store_true',
help='purge checkout after playbook run')
self.parser.add_option('-o', '--only-if-changed', dest='ifchanged', default=False, action='store_true',
help='only run the playbook if the repository has been updated')
self.parser.add_option('-s', '--sleep', dest='sleep', default=None,
help='sleep for random interval (between 0 and n number of seconds) before starting. This is a useful way to disperse git requests')
self.parser.add_option('-f', '--force', dest='force', default=False, action='store_true',
help='run the playbook even if the repository could not be updated')
self.parser.add_option('-d', '--directory', dest='dest', default='~/.ansible/pull', help='directory to checkout repository to')
self.parser.add_option('-U', '--url', dest='url', default=None, help='URL of the playbook repository')
self.parser.add_option('-d', '--directory', dest='dest', default=None,
help='directory to checkout repository to')
self.parser.add_option('-U', '--url', dest='url', default=None,
help='URL of the playbook repository')
self.parser.add_option('--full', dest='fullclone', action='store_true',
help='Do a full clone, instead of a shallow one.')
self.parser.add_option('-C', '--checkout', dest='checkout',
help='branch/tag/commit to checkout. ' 'Defaults to behavior of repository module.')
self.parser.add_option('--accept-host-key', default=False, dest='accept_host_key', action='store_true',
@ -86,7 +92,13 @@ class PullCLI(CLI):
help='verify GPG signature of checked out commit, if it fails abort running the playbook.'
' This needs the corresponding VCS module to support such an operation')
self.options, self.args = self.parser.parse_args()
self.options, self.args = self.parser.parse_args(self.args[1:])
if not self.options.dest:
hostname = socket.getfqdn()
# use a hostname dependent directory, in case of $HOME on nfs
self.options.dest = os.path.join('~/.ansible/pull', hostname)
self.options.dest = os.path.expandvars(os.path.expanduser(self.options.dest))
if self.options.sleep:
try:
@ -119,7 +131,7 @@ class PullCLI(CLI):
node = platform.node()
host = socket.getfqdn()
limit_opts = 'localhost,%s,127.0.0.1' % ','.join(set([host, node, host.split('.')[0], node.split('.')[0]]))
base_opts = '-c local "%s"' % limit_opts
base_opts = '-c local '
if self.options.verbosity > 0:
base_opts += ' -%s' % ''.join([ "v" for x in range(0, self.options.verbosity) ])
@ -130,7 +142,7 @@ class PullCLI(CLI):
else:
inv_opts = self.options.inventory
#TODO: enable more repo modules hg/svn?
#FIXME: enable more repo modules hg/svn?
if self.options.module_name == 'git':
repo_opts = "name=%s dest=%s" % (self.options.url, self.options.dest)
if self.options.checkout:
@ -145,13 +157,17 @@ class PullCLI(CLI):
if self.options.verify:
repo_opts += ' verify_commit=yes'
if not self.options.fullclone:
repo_opts += ' depth=1'
path = module_loader.find_plugin(self.options.module_name)
if path is None:
raise AnsibleOptionsError(("module '%s' not found.\n" % self.options.module_name))
bin_path = os.path.dirname(os.path.abspath(sys.argv[0]))
cmd = '%s/ansible -i "%s" %s -m %s -a "%s"' % (
bin_path, inv_opts, base_opts, self.options.module_name, repo_opts
cmd = '%s/ansible -i "%s" %s -m %s -a "%s" "%s"' % (
bin_path, inv_opts, base_opts, self.options.module_name, repo_opts, limit_opts
)
for ev in self.options.extra_vars:
@ -163,6 +179,8 @@ class PullCLI(CLI):
time.sleep(self.options.sleep)
# RUN the Checkout command
display.debug("running ansible with VCS module to checkout repo")
display.vvvv('EXEC: %s' % cmd)
rc, out, err = run_cmd(cmd, live=True)
if rc != 0:
@ -174,8 +192,7 @@ class PullCLI(CLI):
display.display("Repository has not changed, quitting.")
return 0
playbook = self.select_playbook(path)
playbook = self.select_playbook(self.options.dest)
if playbook is None:
raise AnsibleOptionsError("Could not find a playbook to run.")
@ -187,16 +204,18 @@ class PullCLI(CLI):
cmd += ' -i "%s"' % self.options.inventory
for ev in self.options.extra_vars:
cmd += ' -e "%s"' % ev
if self.options.ask_sudo_pass:
cmd += ' -K'
if self.options.ask_sudo_pass or self.options.ask_su_pass or self.options.become_ask_pass:
cmd += ' --ask-become-pass'
if self.options.tags:
cmd += ' -t "%s"' % self.options.tags
if self.options.limit:
cmd += ' -l "%s"' % self.options.limit
if self.options.subset:
cmd += ' -l "%s"' % self.options.subset
os.chdir(self.options.dest)
# RUN THE PLAYBOOK COMMAND
display.debug("running ansible-playbook to do actual work")
display.debug('EXEC: %s' % cmd)
rc, out, err = run_cmd(cmd, live=True)
if self.options.purge:

View file

@ -69,7 +69,7 @@ class VaultCLI(CLI):
elif self.action == "rekey":
self.parser.set_usage("usage: %prog rekey [options] file_name")
self.options, self.args = self.parser.parse_args()
self.options, self.args = self.parser.parse_args(self.args[1:])
display.verbosity = self.options.verbosity
can_output = ['encrypt', 'decrypt']

View file

@ -120,19 +120,23 @@ DEFAULT_COW_WHITELIST = ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'd
# sections in config file
DEFAULTS='defaults'
# FIXME: add deprecation warning when these get set
#### DEPRECATED VARS ####
# use more sanely named 'inventory'
DEPRECATED_HOST_LIST = get_config(p, DEFAULTS, 'hostfile', 'ANSIBLE_HOSTS', '/etc/ansible/hosts', ispath=True)
# this is not used since 0.5 but people might still have in config
DEFAULT_PATTERN = get_config(p, DEFAULTS, 'pattern', None, None)
# generally configurable things
#### GENERALLY CONFIGURABLE THINGS ####
DEFAULT_DEBUG = get_config(p, DEFAULTS, 'debug', 'ANSIBLE_DEBUG', False, boolean=True)
DEFAULT_HOST_LIST = get_config(p, DEFAULTS,'inventory', 'ANSIBLE_INVENTORY', DEPRECATED_HOST_LIST, ispath=True)
DEFAULT_MODULE_PATH = get_config(p, DEFAULTS, 'library', 'ANSIBLE_LIBRARY', None, ispath=True)
DEFAULT_ROLES_PATH = get_config(p, DEFAULTS, 'roles_path', 'ANSIBLE_ROLES_PATH', '/etc/ansible/roles', ispath=True)
DEFAULT_REMOTE_TMP = get_config(p, DEFAULTS, 'remote_tmp', 'ANSIBLE_REMOTE_TEMP', '$HOME/.ansible/tmp')
DEFAULT_MODULE_NAME = get_config(p, DEFAULTS, 'module_name', None, 'command')
DEFAULT_PATTERN = get_config(p, DEFAULTS, 'pattern', None, '*')
DEFAULT_FORKS = get_config(p, DEFAULTS, 'forks', 'ANSIBLE_FORKS', 5, integer=True)
DEFAULT_MODULE_ARGS = get_config(p, DEFAULTS, 'module_args', 'ANSIBLE_MODULE_ARGS', '')
DEFAULT_MODULE_LANG = get_config(p, DEFAULTS, 'module_lang', 'ANSIBLE_MODULE_LANG', 'en_US.UTF-8')
DEFAULT_MODULE_LANG = get_config(p, DEFAULTS, 'module_lang', 'ANSIBLE_MODULE_LANG', os.getenv('LANG', 'en_US.UTF-8'))
DEFAULT_TIMEOUT = get_config(p, DEFAULTS, 'timeout', 'ANSIBLE_TIMEOUT', 10, integer=True)
DEFAULT_POLL_INTERVAL = get_config(p, DEFAULTS, 'poll_interval', 'ANSIBLE_POLL_INTERVAL', 15, integer=True)
DEFAULT_REMOTE_USER = get_config(p, DEFAULTS, 'remote_user', 'ANSIBLE_REMOTE_USER', None)
@ -159,7 +163,7 @@ DEFAULT_VAR_COMPRESSION_LEVEL = get_config(p, DEFAULTS, 'var_compression_level',
# disclosure
DEFAULT_NO_LOG = get_config(p, DEFAULTS, 'no_log', 'ANSIBLE_NO_LOG', False, boolean=True)
DEFAULT_NO_TARGET_SYSLOG = get_config(p, DEFAULTS, 'no_target_syslog', 'ANSIBLE_NO_TARGET_SYSLOG', True, boolean=True)
DEFAULT_NO_TARGET_SYSLOG = get_config(p, DEFAULTS, 'no_target_syslog', 'ANSIBLE_NO_TARGET_SYSLOG', False, boolean=True)
# selinux
DEFAULT_SELINUX_SPECIAL_FS = get_config(p, 'selinux', 'special_context_filesystems', None, 'fuse, nfs, vboxsf, ramfs', islist=True)
@ -197,7 +201,7 @@ DEFAULT_BECOME_ASK_PASS = get_config(p, 'privilege_escalation', 'become_ask_pa
# the module takes both, bad things could happen.
# In the future we should probably generalize this even further
# (mapping of param: squash field)
DEFAULT_SQUASH_ACTIONS = get_config(p, DEFAULTS, 'squash_actions', 'ANSIBLE_SQUASH_ACTIONS', "apt, yum, pkgng, zypper, dnf", islist=True)
DEFAULT_SQUASH_ACTIONS = get_config(p, DEFAULTS, 'squash_actions', 'ANSIBLE_SQUASH_ACTIONS', "apt, dnf, package, pkgng, yum, zypper", islist=True)
# paths
DEFAULT_ACTION_PLUGIN_PATH = get_config(p, DEFAULTS, 'action_plugins', 'ANSIBLE_ACTION_PLUGINS', '~/.ansible/plugins/action:/usr/share/ansible/plugins/action', ispath=True)
DEFAULT_CACHE_PLUGIN_PATH = get_config(p, DEFAULTS, 'cache_plugins', 'ANSIBLE_CACHE_PLUGINS', '~/.ansible/plugins/cache:/usr/share/ansible/plugins/cache', ispath=True)
@ -255,12 +259,14 @@ ACCELERATE_MULTI_KEY = get_config(p, 'accelerate', 'accelerate_multi_k
PARAMIKO_PTY = get_config(p, 'paramiko_connection', 'pty', 'ANSIBLE_PARAMIKO_PTY', True, boolean=True)
# galaxy related
DEFAULT_GALAXY_URI = get_config(p, 'galaxy', 'server_uri', 'ANSIBLE_GALAXY_SERVER_URI', 'https://galaxy.ansible.com')
GALAXY_SERVER = get_config(p, 'galaxy', 'server', 'ANSIBLE_GALAXY_SERVER', 'https://galaxy.ansible.com')
GALAXY_IGNORE_CERTS = get_config(p, 'galaxy', 'ignore_certs', 'ANSIBLE_GALAXY_IGNORE', False, boolean=True)
# this can be configured to blacklist SCMs, but new ones cannot be added unless the code is also updated
GALAXY_SCMS = get_config(p, 'galaxy', 'scms', 'ANSIBLE_GALAXY_SCMS', 'git, hg', islist=True)
# characters included in auto-generated passwords
DEFAULT_PASSWORD_CHARS = ascii_letters + digits + ".,:-_"
STRING_TYPE_FILTERS = get_config(p, 'jinja2', 'dont_type_filters', 'ANSIBLE_STRING_TYPE_FILTERS', ['string', 'to_json', 'to_nice_json', 'to_yaml', 'ppretty', 'json'], islist=True )
# non-configurable things
MODULE_REQUIRE_ARGS = ['command', 'shell', 'raw', 'script']
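
All of the constants above resolve through the same lookup. In rough outline the precedence is: the named environment variable first, then the value from the loaded ansible.cfg, then the hard-coded default. A minimal sketch of that resolution order; lookup() is a simplified stand-in for get_config, which additionally handles type coercion (boolean=, integer=, islist=) and path expansion (ispath=):

import os

def lookup(parser, section, key, env_var, default):
    # a named environment variable, when set, wins
    if env_var is not None and env_var in os.environ:
        return os.environ[env_var]
    # then a value from the parsed ansible.cfg
    if parser is not None and parser.has_option(section, key):
        return parser.get(section, key)
    # otherwise the hard-coded default
    return default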

View file

@ -44,7 +44,7 @@ class AnsibleError(Exception):
which should be returned by the DataLoader() class.
'''
def __init__(self, message, obj=None, show_content=True):
def __init__(self, message="", obj=None, show_content=True):
# we import this here to prevent an import loop problem,
# since the objects code also imports ansible.errors
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject
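
Defaulting message to an empty string lets callers and subclasses construct the error without a positional argument and attach context via obj alone. A hedged sketch (MyParserError is illustrative, not part of the source):

from ansible.errors import AnsibleError

class MyParserError(AnsibleError):
    def __init__(self, obj=None):
        # relies on the new message="" default; no positional arg needed
        super(MyParserError, self).__init__(obj=obj, show_content=False)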

View file

@ -396,7 +396,8 @@ class PlayIterator:
return None
def _insert_tasks_into_state(self, state, task_list):
if state.fail_state != self.FAILED_NONE:
# if we've failed at all, or if the task list is empty, just return the current state
if state.fail_state != self.FAILED_NONE and state.run_state not in (self.ITERATING_RESCUE, self.ITERATING_ALWAYS) or not task_list:
return state
if state.run_state == self.ITERATING_TASKS:
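
The reworked guard keeps inserting tasks while a block's rescue/always section is iterating, and also short-circuits on an empty task list. Restated as a hypothetical standalone predicate (the constant values are placeholders; the real ones live on PlayIterator):

FAILED_NONE = 0
ITERATING_RESCUE = 2
ITERATING_ALWAYS = 3

def skip_insert(fail_state, run_state, task_list):
    # a prior failure only blocks insertion outside rescue/always handling,
    # and inserting an empty list is always a no-op
    in_handler = run_state in (ITERATING_RESCUE, ITERATING_ALWAYS)
    return (fail_state != FAILED_NONE and not in_handler) or not task_list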

View file

@ -31,7 +31,6 @@ from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.playbook import Playbook
from ansible.template import Templar
from ansible.utils.color import colorize, hostcolor
from ansible.utils.encrypt import do_encrypt
from ansible.utils.unicode import to_unicode
@ -83,6 +82,10 @@ class PlaybookExecutor:
if self._tqm is None: # we are doing a listing
entry = {'playbook': playbook_path}
entry['plays'] = []
else:
# make sure the tqm has callbacks loaded
self._tqm.load_callbacks()
self._tqm.send_callback('v2_playbook_on_start', pb)
i = 1
plays = pb.get_plays()
@ -108,10 +111,12 @@ class PlaybookExecutor:
salt_size = var.get("salt_size", None)
salt = var.get("salt", None)
if vname not in play.vars:
if vname not in self._variable_manager.extra_vars:
self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt, default)
if self._tqm:
self._tqm.send_callback('v2_playbook_on_vars_prompt', vname, private, prompt, encrypt, confirm, salt_size, salt, default)
play.vars[vname] = self._do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default)
play.vars[vname] = display.do_var_prompt(vname, private, prompt, encrypt, confirm, salt_size, salt, default)
else: # we are either in --list-<option> or syntax check
play.vars[vname] = default
# Create a temporary copy of the play here, so we can run post_validate
# on it without the templating changes affecting the original object.
@ -128,8 +133,6 @@ class PlaybookExecutor:
entry['plays'].append(new_play)
else:
# make sure the tqm has callbacks loaded
self._tqm.load_callbacks()
self._tqm._unreachable_hosts.update(self._unreachable_hosts)
# we are actually running plays
@ -171,6 +174,10 @@ class PlaybookExecutor:
if entry:
entrylist.append(entry) # per playbook
# send the stats callback for this playbook
if self._tqm is not None:
self._tqm.send_callback('v2_playbook_on_stats', self._tqm._stats)
# if the last result wasn't zero, break out of the playbook file name loop
if result != 0:
break
@ -186,35 +193,6 @@ class PlaybookExecutor:
display.display("No issues encountered")
return result
# TODO: this stat summary stuff should be cleaned up and moved
# to a new method, if it even belongs here...
display.banner("PLAY RECAP")
hosts = sorted(self._tqm._stats.processed.keys())
for h in hosts:
t = self._tqm._stats.summarize(h)
display.display(u"%s : %s %s %s %s" % (
hostcolor(h, t),
colorize(u'ok', t['ok'], 'green'),
colorize(u'changed', t['changed'], 'yellow'),
colorize(u'unreachable', t['unreachable'], 'red'),
colorize(u'failed', t['failures'], 'red')),
screen_only=True
)
display.display(u"%s : %s %s %s %s" % (
hostcolor(h, t, False),
colorize(u'ok', t['ok'], None),
colorize(u'changed', t['changed'], None),
colorize(u'unreachable', t['unreachable'], None),
colorize(u'failed', t['failures'], None)),
log_only=True
)
display.display("", screen_only=True)
# END STATS STUFF
return result
def _cleanup(self, signum=None, framenum=None):
@ -258,48 +236,3 @@ class PlaybookExecutor:
return serialized_batches
def _do_var_prompt(self, varname, private=True, prompt=None, encrypt=None, confirm=False, salt_size=None, salt=None, default=None):
if sys.__stdin__.isatty():
if prompt and default is not None:
msg = "%s [%s]: " % (prompt, default)
elif prompt:
msg = "%s: " % prompt
else:
msg = 'input for %s: ' % varname
def do_prompt(prompt, private):
if sys.stdout.encoding:
msg = prompt.encode(sys.stdout.encoding)
else:
# when piping the output, or at other times when stdout
# may not be the standard file descriptor, the stdout
# encoding may not be set, so default to something sane
msg = prompt.encode(locale.getpreferredencoding())
if private:
return getpass.getpass(msg)
return raw_input(msg)
if confirm:
while True:
result = do_prompt(msg, private)
second = do_prompt("confirm " + msg, private)
if result == second:
break
display.display("***** VALUES ENTERED DO NOT MATCH ****")
else:
result = do_prompt(msg, private)
else:
result = None
display.warning("Not prompting as we are not in interactive mode")
# if result is false and default is not None
if not result and default is not None:
result = default
if encrypt:
result = do_encrypt(result, encrypt, salt_size, salt)
# handle utf-8 chars
result = to_unicode(result, errors='strict')
return result
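
With the inline PLAY RECAP printing removed, the summary data now reaches consumers only through the v2_playbook_on_stats callback sent above. A minimal sketch of a callback plugin that reproduces the recap (class and plugin names are illustrative):

from ansible.plugins.callback import CallbackBase

class RecapDemo(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'recap_demo'

    def v2_playbook_on_stats(self, stats):
        # stats is the TQM's AggregateStats instance
        for host in sorted(stats.processed.keys()):
            self._display.display("%s : %s" % (host, stats.summarize(host)))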

View file

@ -110,7 +110,7 @@ class ResultProcess(multiprocessing.Process):
# if this task is registering a result, do it now
if result._task.register:
self._send_result(('register_host_var', result._host, result._task.register, clean_copy))
self._send_result(('register_host_var', result._host, result._task, clean_copy))
# send callbacks, execute other options based on the result status
# TODO: this should all be cleaned up and probably moved to a sub-function.

View file

@ -59,14 +59,18 @@ class WorkerProcess(multiprocessing.Process):
for reading later.
'''
def __init__(self, tqm, main_q, rslt_q, hostvars_manager, loader):
def __init__(self, rslt_q, task_vars, host, task, play_context, loader, variable_manager, shared_loader_obj):
super(WorkerProcess, self).__init__()
# takes a task queue manager as the sole param:
self._main_q = main_q
self._rslt_q = rslt_q
self._hostvars = hostvars_manager
self._loader = loader
self._rslt_q = rslt_q
self._task_vars = task_vars
self._host = host
self._task = task
self._play_context = play_context
self._loader = loader
self._variable_manager = variable_manager
self._shared_loader_obj = shared_loader_obj
# dupe stdin, if we have one
self._new_stdin = sys.stdin
@ -97,73 +101,45 @@ class WorkerProcess(multiprocessing.Process):
if HAS_ATFORK:
atfork()
while True:
task = None
try:
#debug("waiting for work")
(host, task, basedir, zip_vars, compressed_vars, play_context, shared_loader_obj) = self._main_q.get(block=False)
try:
# execute the task and build a TaskResult from the result
debug("running TaskExecutor() for %s/%s" % (self._host, self._task))
executor_result = TaskExecutor(
self._host,
self._task,
self._task_vars,
self._play_context,
self._new_stdin,
self._loader,
self._shared_loader_obj,
).run()
if compressed_vars:
job_vars = json.loads(zlib.decompress(zip_vars))
else:
job_vars = zip_vars
debug("done running TaskExecutor() for %s/%s" % (self._host, self._task))
self._host.vars = dict()
self._host.groups = []
task_result = TaskResult(self._host, self._task, executor_result)
job_vars['hostvars'] = self._hostvars.hostvars()
# put the result on the result queue
debug("sending task result")
self._rslt_q.put(task_result)
debug("done sending task result")
debug("there's work to be done! got a task/handler to work on: %s" % task)
except AnsibleConnectionFailure:
self._host.vars = dict()
self._host.groups = []
task_result = TaskResult(self._host, self._task, dict(unreachable=True))
self._rslt_q.put(task_result, block=False)
# because the task queue manager starts workers (forks) before the
# playbook is loaded, set the basedir of the loader inherted by
# this fork now so that we can find files correctly
self._loader.set_basedir(basedir)
# Serializing/deserializing tasks does not preserve the loader attribute,
# since it is passed to the worker during the forking of the process and
# would be wasteful to serialize. So we set it here on the task now, and
# the task handles updating parent/child objects as needed.
task.set_loader(self._loader)
# execute the task and build a TaskResult from the result
debug("running TaskExecutor() for %s/%s" % (host, task))
executor_result = TaskExecutor(
host,
task,
job_vars,
play_context,
self._new_stdin,
self._loader,
shared_loader_obj,
).run()
debug("done running TaskExecutor() for %s/%s" % (host, task))
task_result = TaskResult(host, task, executor_result)
# put the result on the result queue
debug("sending task result")
self._rslt_q.put(task_result)
debug("done sending task result")
except queue.Empty:
time.sleep(0.0001)
except AnsibleConnectionFailure:
except Exception as e:
if not isinstance(e, (IOError, EOFError, KeyboardInterrupt)) or isinstance(e, TemplateNotFound):
try:
if task:
task_result = TaskResult(host, task, dict(unreachable=True))
self._rslt_q.put(task_result, block=False)
self._host.vars = dict()
self._host.groups = []
task_result = TaskResult(self._host, self._task, dict(failed=True, exception=traceback.format_exc(), stdout=''))
self._rslt_q.put(task_result, block=False)
except:
break
except Exception as e:
if isinstance(e, (IOError, EOFError, KeyboardInterrupt)) and not isinstance(e, TemplateNotFound):
break
else:
try:
if task:
task_result = TaskResult(host, task, dict(failed=True, exception=traceback.format_exc(), stdout=''))
self._rslt_q.put(task_result, block=False)
except:
debug("WORKER EXCEPTION: %s" % e)
debug("WORKER EXCEPTION: %s" % traceback.format_exc())
break
debug("WORKER EXCEPTION: %s" % e)
debug("WORKER EXCEPTION: %s" % traceback.format_exc())
debug("WORKER PROCESS EXITING")

View file

@ -35,7 +35,7 @@ from ansible.template import Templar
from ansible.utils.encrypt import key_for_hostname
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unicode import to_unicode
from ansible.vars.unsafe_proxy import UnsafeProxy
from ansible.vars.unsafe_proxy import UnsafeProxy, wrap_var
try:
from __main__ import display
@ -67,6 +67,7 @@ class TaskExecutor:
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
def run(self):
'''
@ -145,7 +146,7 @@ class TaskExecutor:
except AttributeError:
pass
except Exception as e:
display.debug("error closing connection: %s" % to_unicode(e))
display.debug(u"error closing connection: %s" % to_unicode(e))
def _get_loop_items(self):
'''
@ -182,7 +183,7 @@ class TaskExecutor:
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop_args, templar=templar,
loader=self._loader, fail_on_undefined=True, convert_bare=True)
except AnsibleUndefinedVariable as e:
if 'has no attribute' in str(e):
if u'has no attribute' in to_unicode(e):
loop_terms = []
display.deprecated("Skipping task due to undefined attribute, in the future this will be a fatal error.")
else:
@ -230,7 +231,7 @@ class TaskExecutor:
tmp_task = self._task.copy()
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=str(e)))
results.append(dict(failed=True, msg=to_unicode(e)))
continue
# now we swap the internal task and play context with their copies,
@ -244,6 +245,7 @@ class TaskExecutor:
# now update the result with the item info, and append the result
# to the list of results
res['item'] = item
#TODO: send item results to callback here, instead of all at the end
results.append(res)
return results
@ -360,8 +362,9 @@ class TaskExecutor:
self._task.args = variable_params
# get the connection and the handler for this execution
self._connection = self._get_connection(variables=variables, templar=templar)
self._connection.set_host_overrides(host=self._host)
if not self._connection or not getattr(self._connection, 'connected', False):
self._connection = self._get_connection(variables=variables, templar=templar)
self._connection.set_host_overrides(host=self._host)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
@ -384,7 +387,6 @@ class TaskExecutor:
# make a copy of the job vars here, in case we need to update them
# with the registered variable value later on when testing conditions
#vars_copy = variables.copy()
vars_copy = variables.copy()
display.debug("starting attempt loop")
@ -398,9 +400,14 @@ class TaskExecutor:
try:
result = self._handler.run(task_vars=variables)
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=str(e))
return dict(unreachable=True, msg=to_unicode(e))
display.debug("handler run complete")
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
vars_copy[self._task.register] = wrap_var(result.copy())
if self._task.async > 0:
# the async_wrapper module returns dumped JSON via its stdout
# response, so we parse it here and replace the result
@ -409,7 +416,7 @@ class TaskExecutor:
return result
result = json.loads(result.get('stdout'))
except (TypeError, ValueError) as e:
return dict(failed=True, msg="The async task did not return valid JSON: %s" % str(e))
return dict(failed=True, msg=u"The async task did not return valid JSON: %s" % to_unicode(e))
if self._task.poll > 0:
result = self._poll_async_result(result=result, templar=templar)
@ -430,11 +437,6 @@ class TaskExecutor:
return failed_when_result
return False
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
vars_copy[self._task.register] = result
if 'ansible_facts' in result:
vars_copy.update(result['ansible_facts'])
@ -451,7 +453,7 @@ class TaskExecutor:
if attempt < retries - 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
cond.when = [ self._task.until ]
if cond.evaluate_conditional(templar, vars_copy):
break
@ -464,7 +466,7 @@ class TaskExecutor:
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result
variables[self._task.register] = wrap_var(result)
if 'ansible_facts' in result:
variables.update(result['ansible_facts'])
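
Wrapping the registered value matters because a module result can contain text that looks like a Jinja2 template; wrap_var marks it unsafe so later templating passes leave it alone. An illustrative snippet:

from ansible.vars.unsafe_proxy import wrap_var

result = {'stdout': '{{ looks_like_a_template }}'}
vars_copy = {}
# stored wrapped, so the braces survive subsequent templating untouched
vars_copy['myvar'] = wrap_var(result.copy())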

View file

@ -34,6 +34,7 @@ from ansible.playbook.play_context import PlayContext
from ansible.plugins import callback_loader, strategy_loader, module_loader
from ansible.template import Templar
from ansible.vars.hostvars import HostVars
from ansible.plugins.callback import CallbackBase
try:
from __main__ import display
@ -56,7 +57,7 @@ class TaskQueueManager:
which dispatches the Play's tasks to hosts.
'''
def __init__(self, inventory, variable_manager, loader, options, passwords, stdout_callback=None):
def __init__(self, inventory, variable_manager, loader, options, passwords, stdout_callback=None, run_additional_callbacks=True, run_tree=False):
self._inventory = inventory
self._variable_manager = variable_manager
@ -65,6 +66,8 @@ class TaskQueueManager:
self._stats = AggregateStats()
self.passwords = passwords
self._stdout_callback = stdout_callback
self._run_additional_callbacks = run_additional_callbacks
self._run_tree = run_tree
self._callbacks_loaded = False
self._callback_plugins = []
@ -96,14 +99,10 @@ class TaskQueueManager:
def _initialize_processes(self, num):
self._workers = []
for i in xrange(num):
for i in range(num):
main_q = multiprocessing.Queue()
rslt_q = multiprocessing.Queue()
prc = WorkerProcess(self, main_q, rslt_q, self._hostvars_manager, self._loader)
prc.start()
self._workers.append((prc, main_q, rslt_q))
self._workers.append([None, main_q, rslt_q])
self._result_prc = ResultProcess(self._final_q, self._workers)
self._result_prc.start()
@ -144,8 +143,14 @@ class TaskQueueManager:
if self._stdout_callback is None:
self._stdout_callback = C.DEFAULT_STDOUT_CALLBACK
if self._stdout_callback not in callback_loader:
raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
if isinstance(self._stdout_callback, CallbackBase):
self._callback_plugins.append(self._stdout_callback)
stdout_callback_loaded = True
elif isinstance(self._stdout_callback, basestring):
if self._stdout_callback not in callback_loader:
raise AnsibleError("Invalid callback for stdout specified: %s" % self._stdout_callback)
else:
raise AnsibleError("callback must be an instance of CallbackBase or the name of a callback plugin")
for callback_plugin in callback_loader.all(class_only=True):
if hasattr(callback_plugin, 'CALLBACK_VERSION') and callback_plugin.CALLBACK_VERSION >= 2.0:
@ -159,7 +164,9 @@ class TaskQueueManager:
if callback_name != self._stdout_callback or stdout_callback_loaded:
continue
stdout_callback_loaded = True
elif callback_needs_whitelist and (C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST):
elif callback_name == 'tree' and self._run_tree:
pass
elif not self._run_additional_callbacks or (callback_needs_whitelist and (C.DEFAULT_CALLBACK_WHITELIST is None or callback_name not in C.DEFAULT_CALLBACK_WHITELIST)):
continue
self._callback_plugins.append(callback_plugin())
@ -184,31 +191,12 @@ class TaskQueueManager:
new_play = play.copy()
new_play.post_validate(templar)
class HostVarsManager(SyncManager):
pass
hostvars = HostVars(
play=new_play,
self.hostvars = HostVars(
inventory=self._inventory,
variable_manager=self._variable_manager,
loader=self._loader,
)
HostVarsManager.register(
'hostvars',
callable=lambda: hostvars,
# FIXME: this is the list of exposed methods to the DictProxy object, plus our
# special ones (set_variable_manager/set_inventory). There's probably a better way
# to do this with a proper BaseProxy/DictProxy derivative
exposed=(
'set_variable_manager', 'set_inventory', '__contains__', '__delitem__',
'__getitem__', '__len__', '__setitem__', 'clear', 'copy', 'get', 'has_key',
'items', 'keys', 'pop', 'popitem', 'setdefault', 'update', 'values'
),
)
self._hostvars_manager = HostVarsManager()
self._hostvars_manager.start()
# Fork # of forks, # of hosts or serial, whichever is lowest
contenders = [self._options.forks, play.serial, len(self._inventory.get_hosts(new_play.hosts))]
contenders = [ v for v in contenders if v is not None and v > 0 ]
@ -248,7 +236,6 @@ class TaskQueueManager:
# and run the play using the strategy and cleanup on way out
play_return = strategy.run(iterator, play_context)
self._cleanup_processes()
self._hostvars_manager.shutdown()
return play_return
def cleanup(self):
@ -264,7 +251,8 @@ class TaskQueueManager:
for (worker_prc, main_q, rslt_q) in self._workers:
rslt_q.close()
main_q.close()
worker_prc.terminate()
if worker_prc and worker_prc.is_alive():
worker_prc.terminate()
def clear_failed_hosts(self):
self._failed_hosts = dict()
@ -300,7 +288,20 @@ class TaskQueueManager:
for method in methods:
if method is not None:
try:
method(*args, **kwargs)
# temporary hack, required due to a change in the callback API, so
# we don't break backwards compatibility with callbacks which were
# designed to use the original API
# FIXME: target for removal and revert to the original code here
# after a year (2017-01-14)
if method_name == 'v2_playbook_on_start':
import inspect
(f_args, f_varargs, f_keywords, f_defaults) = inspect.getargspec(method)
if 'playbook' in f_args:
method(*args, **kwargs)
else:
method()
else:
method(*args, **kwargs)
except Exception as e:
try:
v1_method = method.replace('v2_','')
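
The widened constructor means stdout_callback may now be a CallbackBase instance rather than only a plugin name string. A hedged usage sketch; the surrounding objects (inventory, variable_manager, loader, options) are assumed to be constructed elsewhere, and QuietDemo is illustrative:

from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase

class QuietDemo(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'stdout'
    CALLBACK_NAME = 'quiet_demo'

tqm = TaskQueueManager(inventory=inventory, variable_manager=variable_manager,
                       loader=loader, options=options, passwords={},
                       stdout_callback=QuietDemo())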

View file

@ -52,6 +52,8 @@ class Galaxy(object):
#TODO: move to getter for lazy loading
self.default_readme = self._str_from_data_file('readme')
self.default_meta = self._str_from_data_file('metadata_template.j2')
self.default_test = self._str_from_data_file('test_playbook.j2')
self.default_travis = self._str_from_data_file('travis.j2')
def add_role(self, role):
self.roles[role.name] = role

View file

@ -25,11 +25,15 @@ from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import json
import urllib
from urllib2 import quote as urlquote, HTTPError
from urlparse import urlparse
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.module_utils.urls import open_url
from ansible.galaxy.token import GalaxyToken
try:
from __main__ import display
@ -43,45 +47,111 @@ class GalaxyAPI(object):
SUPPORTED_VERSIONS = ['v1']
def __init__(self, galaxy, api_server):
def __init__(self, galaxy):
self.galaxy = galaxy
self.token = GalaxyToken()
self._api_server = C.GALAXY_SERVER
self._validate_certs = not C.GALAXY_IGNORE_CERTS
try:
urlparse(api_server, scheme='https')
except:
raise AnsibleError("Invalid server API url passed: %s" % api_server)
# set validate_certs
if galaxy.options.ignore_certs:
self._validate_certs = False
display.vvv('Validate TLS certificates: %s' % self._validate_certs)
server_version = self.get_server_api_version('%s/api/' % (api_server))
if not server_version:
raise AnsibleError("Could not retrieve server API version: %s" % api_server)
# set the API server
if galaxy.options.api_server != C.GALAXY_SERVER:
self._api_server = galaxy.options.api_server
display.vvv("Connecting to galaxy_server: %s" % self._api_server)
if server_version in self.SUPPORTED_VERSIONS:
self.baseurl = '%s/api/%s' % (api_server, server_version)
self.version = server_version # for future use
display.vvvvv("Base API: %s" % self.baseurl)
else:
server_version = self.get_server_api_version()
if server_version not in self.SUPPORTED_VERSIONS:
raise AnsibleError("Unsupported Galaxy server API version: %s" % server_version)
def get_server_api_version(self, api_server):
self.baseurl = '%s/api/%s' % (self._api_server, server_version)
self.version = server_version # for future use
display.vvv("Base API: %s" % self.baseurl)
def __auth_header(self):
token = self.token.get()
if token is None:
raise AnsibleError("No access token. You must first use login to authenticate and obtain an access token.")
return {'Authorization': 'Token ' + token}
def __call_galaxy(self, url, args=None, headers=None, method=None):
if args and not headers:
headers = self.__auth_header()
try:
display.vvv(url)
resp = open_url(url, data=args, validate_certs=self._validate_certs, headers=headers, method=method)
data = json.load(resp)
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['detail'])
return data
@property
def api_server(self):
return self._api_server
@property
def validate_certs(self):
return self._validate_certs
def get_server_api_version(self):
"""
Fetches the Galaxy API current version to ensure
the API server is up and reachable.
"""
#TODO: fix galaxy server which returns current_version path (/api/v1) vs actual version (v1)
# also should set baseurl using supported_versions which has path
return 'v1'
try:
data = json.load(open_url(api_server, validate_certs=self.galaxy.options.validate_certs))
return data.get("current_version", 'v1')
except Exception:
# TODO: report error
return None
url = '%s/api/' % self._api_server
data = json.load(open_url(url, validate_certs=self._validate_certs))
return data['current_version']
except Exception as e:
raise AnsibleError("The API server (%s) is not responding, please try again later." % url)
def authenticate(self, github_token):
"""
Retrieve an authentication token
"""
url = '%s/tokens/' % self.baseurl
args = urllib.urlencode({"github_token": github_token})
resp = open_url(url, data=args, validate_certs=self._validate_certs, method="POST")
data = json.load(resp)
return data
def create_import_task(self, github_user, github_repo, reference=None):
"""
Post an import request
"""
url = '%s/imports/' % self.baseurl
args = urllib.urlencode({
"github_user": github_user,
"github_repo": github_repo,
"github_reference": reference if reference else ""
})
data = self.__call_galaxy(url, args=args)
if data.get('results', None):
return data['results']
return data
def get_import_task(self, task_id=None, github_user=None, github_repo=None):
"""
Check the status of an import task.
"""
url = '%s/imports/' % self.baseurl
if task_id is not None:
url = "%s?id=%d" % (url, task_id)
elif github_user is not None and github_repo is not None:
url = "%s?github_user=%s&github_repo=%s" % (url, github_user, github_repo)
else:
raise AnsibleError("Expected task_id or github_user and github_repo")
data = self.__call_galaxy(url)
return data['results']
def lookup_role_by_name(self, role_name, notify=True):
"""
Find a role by name
Find a role by name.
"""
role_name = urlquote(role_name)
@ -92,18 +162,12 @@ class GalaxyAPI(object):
if notify:
display.display("- downloading role '%s', owned by %s" % (role_name, user_name))
except:
raise AnsibleError("- invalid role name (%s). Specify role as format: username.rolename" % role_name)
raise AnsibleError("Invalid role name (%s). Specify role as format: username.rolename" % role_name)
url = '%s/roles/?owner__username=%s&name=%s' % (self.baseurl, user_name, role_name)
display.vvvv("- %s" % (url))
try:
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
if len(data["results"]) != 0:
return data["results"][0]
except:
# TODO: report on connection/availability errors
pass
data = self.__call_galaxy(url)
if len(data["results"]) != 0:
return data["results"][0]
return None
def fetch_role_related(self, related, role_id):
@ -114,13 +178,12 @@ class GalaxyAPI(object):
try:
url = '%s/roles/%d/%s/?page_size=50' % (self.baseurl, int(role_id), related)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
data = self.__call_galaxy(url)
results = data['results']
done = (data.get('next', None) is None)
while not done:
url = '%s%s' % (self.baseurl, data['next'])
display.display(url)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
data = self.__call_galaxy(url)
results += data['results']
done = (data.get('next', None) is None)
return results
@ -131,10 +194,9 @@ class GalaxyAPI(object):
"""
Fetch the list of items specified.
"""
try:
url = '%s/%s/?page_size' % (self.baseurl, what)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
data = self.__call_galaxy(url)
if "results" in data:
results = data['results']
else:
@ -144,41 +206,64 @@ class GalaxyAPI(object):
done = (data.get('next', None) is None)
while not done:
url = '%s%s' % (self.baseurl, data['next'])
display.display(url)
data = json.load(open_url(url, validate_certs=self.galaxy.options.validate_certs))
data = self.__call_galaxy(url)
results += data['results']
done = (data.get('next', None) is None)
return results
except Exception as error:
raise AnsibleError("Failed to download the %s list: %s" % (what, str(error)))
def search_roles(self, search, platforms=None, tags=None):
def search_roles(self, search, **kwargs):
search_url = self.baseurl + '/roles/?page=1'
search_url = self.baseurl + '/search/roles/?'
if search:
search_url += '&search=' + urlquote(search)
search_url += '&autocomplete=' + urlquote(search)
if tags is None:
tags = []
elif isinstance(tags, basestring):
tags = kwargs.get('tags',None)
platforms = kwargs.get('platforms', None)
page_size = kwargs.get('page_size', None)
author = kwargs.get('author', None)
if tags and isinstance(tags, basestring):
tags = tags.split(',')
for tag in tags:
search_url += '&chain__tags__name=' + urlquote(tag)
if platforms is None:
platforms = []
elif isinstance(platforms, basestring):
search_url += '&tags_autocomplete=' + '+'.join(tags)
if platforms and isinstance(platforms, basestring):
platforms = platforms.split(',')
search_url += '&platforms_autocomplete=' + '+'.join(platforms)
for plat in platforms:
search_url += '&chain__platforms__name=' + urlquote(plat)
display.debug("Executing query: %s" % search_url)
try:
data = json.load(open_url(search_url, validate_certs=self.galaxy.options.validate_certs))
except HTTPError as e:
raise AnsibleError("Unsuccessful request to server: %s" % str(e))
if page_size:
search_url += '&page_size=%s' % page_size
if author:
search_url += '&username_autocomplete=%s' % author
data = self.__call_galaxy(search_url)
return data
def add_secret(self, source, github_user, github_repo, secret):
url = "%s/notification_secrets/" % self.baseurl
args = urllib.urlencode({
"source": source,
"github_user": github_user,
"github_repo": github_repo,
"secret": secret
})
data = self.__call_galaxy(url, args=args)
return data
def list_secrets(self):
url = "%s/notification_secrets" % self.baseurl
data = self.__call_galaxy(url, headers=self.__auth_header())
return data
def remove_secret(self, secret_id):
url = "%s/notification_secrets/%s/" % (self.baseurl, secret_id)
data = self.__call_galaxy(url, headers=self.__auth_header(), method='DELETE')
return data
def delete_role(self, github_user, github_repo):
url = "%s/removerole/?github_user=%s&github_repo=%s" % (self.baseurl,github_user,github_repo)
data = self.__call_galaxy(url, headers=self.__auth_header(), method='DELETE')
return data
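
Taken together, the new methods cover the role import workflow end to end. A hedged usage sketch, assuming a configured Galaxy object, a GitHub token obtained out of band, and that the authenticate response carries the Galaxy token under 'token' (the user/repo names are illustrative):

api = GalaxyAPI(galaxy)
data = api.authenticate(github_token)   # exchange the GitHub token for a Galaxy token
GalaxyToken().set(data['token'])        # persist it so __auth_header can find it
task = api.create_import_task('octocat', 'ansible-role-demo')
results = api.get_import_task(github_user='octocat', github_repo='ansible-role-demo')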

View file

@ -2,9 +2,11 @@ galaxy_info:
author: {{ author }}
description: {{description}}
company: {{ company }}
# If the issue tracker for your role is not on github, uncomment the
# next line and provide a value
# issue_tracker_url: {{ issue_tracker_url }}
# Some suggested licenses:
# - BSD (default)
# - MIT
@ -13,7 +15,17 @@ galaxy_info:
# - Apache
# - CC-BY
license: {{ license }}
min_ansible_version: {{ min_ansible_version }}
# Optionally specify the branch Galaxy will use when accessing the GitHub
# repo for this role. During role install, if no tags are available,
# Galaxy will use this branch. During import Galaxy will access files on
# this branch. If travis integration is configured, only notifications for this
# branch will be accepted. Otherwise, in all cases, the repo's default branch
# (usually master) will be used.
#github_branch:
#
# Below are all platforms currently available. Just uncomment
# the ones that apply to your role. If you don't see your
@ -28,6 +40,7 @@ galaxy_info:
# - {{ version }}
{%- endfor %}
{%- endfor %}
galaxy_tags: []
# List tags for your role here, one per line. A tag is
# a keyword that describes and categorizes the role.
@ -36,6 +49,7 @@ galaxy_info:
#
# NOTE: A tag is limited to a single word comprised of
# alphanumeric characters. Maximum 20 tags per role.
dependencies: []
# List your role dependencies here, one per line.
# Be sure to remove the '[]' above if you add dependencies

View file

@ -0,0 +1,5 @@
---
- hosts: localhost
remote_user: root
roles:
- {{ role_name }}

View file

@ -0,0 +1,29 @@
---
language: python
python: "2.7"
# Use the new container infrastructure
sudo: false
# Install ansible
addons:
apt:
packages:
- python-pip
install:
# Install ansible
- pip install ansible
# Check ansible version
- ansible --version
# Create ansible.cfg with correct roles_path
- printf '[defaults]\nroles_path=../' >ansible.cfg
script:
# Basic role syntax check
- ansible-playbook tests/test.yml -i tests/inventory --syntax-check
notifications:
webhooks: https://galaxy.ansible.com/api/v1/notifications/

lib/ansible/galaxy/login.py (new file, 113 lines)
View file

@ -0,0 +1,113 @@
#!/usr/bin/env python
########################################################################
#
# (C) 2015, Chris Houseknecht <chouse@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import getpass
import json
import urllib
from urllib2 import quote as urlquote, HTTPError
from urlparse import urlparse
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils.urls import open_url
from ansible.utils.color import stringc
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
class GalaxyLogin(object):
''' Class to handle authenticating a user with the Galaxy API prior to performing CUD operations '''
GITHUB_AUTH = 'https://api.github.com/authorizations'
def __init__(self, galaxy, github_token=None):
self.galaxy = galaxy
self.github_username = None
self.github_password = None
if github_token is None:
self.get_credentials()
def get_credentials(self):
display.display(u'\n\n' + "We need your " + stringc("Github login",'bright cyan') +
" to identify you.", screen_only=True)
display.display("This information will " + stringc("not be sent to Galaxy",'bright cyan') +
", only to " + stringc("api.github.com.","yellow"), screen_only=True)
display.display("The password will not be displayed." + u'\n\n', screen_only=True)
display.display("Use " + stringc("--github-token",'yellow') +
" if you do not want to enter your password." + u'\n\n', screen_only=True)
try:
self.github_username = raw_input("Github Username: ")
except:
pass
try:
self.github_password = getpass.getpass("Password for %s: " % self.github_username)
except:
pass
if not self.github_username or not self.github_password:
raise AnsibleError("Invalid Github credentials. Username and password are required.")
def remove_github_token(self):
'''
If for some reason an ansible-galaxy token was left from a prior login, remove it. We cannot
retrieve the token after creation, so we are forced to create a new one.
'''
try:
tokens = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True,))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
for token in tokens:
if token['note'] == 'ansible-galaxy login':
display.vvvvv('removing token: %s' % token['token_last_eight'])
try:
open_url('https://api.github.com/authorizations/%d' % token['id'], url_username=self.github_username,
url_password=self.github_password, method='DELETE', force_basic_auth=True,)
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
def create_github_token(self):
'''
Create a personal authorization token with a note of 'ansible-galaxy login'
'''
self.remove_github_token()
args = json.dumps({"scopes":["public_repo"], "note":"ansible-galaxy login"})
try:
data = json.load(open_url(self.GITHUB_AUTH, url_username=self.github_username,
url_password=self.github_password, force_basic_auth=True, data=args))
except HTTPError as e:
res = json.load(e)
raise AnsibleError(res['message'])
return data['token']

View file

@ -46,7 +46,7 @@ class GalaxyRole(object):
SUPPORTED_SCMS = set(['git', 'hg'])
META_MAIN = os.path.join('meta', 'main.yml')
META_INSTALL = os.path.join('meta', '.galaxy_install_info')
ROLE_DIRS = ('defaults','files','handlers','meta','tasks','templates','vars')
ROLE_DIRS = ('defaults','files','handlers','meta','tasks','templates','vars','tests')
def __init__(self, galaxy, name, src=None, version=None, scm=None, path=None):
@ -198,10 +198,10 @@ class GalaxyRole(object):
role_data = self.src
tmp_file = self.fetch(role_data)
else:
api = GalaxyAPI(self.galaxy, self.options.api_server)
api = GalaxyAPI(self.galaxy)
role_data = api.lookup_role_by_name(self.src)
if not role_data:
raise AnsibleError("- sorry, %s was not found on %s." % (self.src, self.options.api_server))
raise AnsibleError("- sorry, %s was not found on %s." % (self.src, api.api_server))
role_versions = api.fetch_role_related('versions', role_data['id'])
if not self.version:
@ -213,8 +213,10 @@ class GalaxyRole(object):
loose_versions = [LooseVersion(a.get('name',None)) for a in role_versions]
loose_versions.sort()
self.version = str(loose_versions[-1])
elif role_data.get('github_branch', None):
self.version = role_data['github_branch']
else:
self.version = 'master'
self.version = 'master'
elif self.version != 'master':
if role_versions and self.version not in [a.get('name', None) for a in role_versions]:
raise AnsibleError("- the specified version (%s) of %s was not found in the list of available versions (%s)." % (self.version, self.name, role_versions))

View file

@ -0,0 +1,67 @@
#!/usr/bin/env python
########################################################################
#
# (C) 2015, Chris Houseknecht <chouse@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
########################################################################
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import yaml
from stat import *
try:
from __main__ import display
except ImportError:
from ansible.utils.display import Display
display = Display()
class GalaxyToken(object):
''' Class for storing and retrieving the token in ~/.ansible_galaxy '''
def __init__(self):
self.file = os.path.expanduser("~") + '/.ansible_galaxy'
self.config = yaml.safe_load(self.__open_config_for_read())
if not self.config:
self.config = {}
def __open_config_for_read(self):
if os.path.isfile(self.file):
display.vvv('Opened %s' % self.file)
return open(self.file, 'r')
# token file not found; create it and chmod u+rw
f = open(self.file,'w')
f.close()
os.chmod(self.file,S_IRUSR|S_IWUSR) # owner has +rw
display.vvv('Created %s' % self.file)
return open(self.file, 'r')
def set(self, token):
self.config['token'] = token
self.save()
def get(self):
return self.config.get('token', None)
def save(self):
with open(self.file,'w') as f:
yaml.safe_dump(self.config,f,default_flow_style=False)
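
A hedged usage sketch of the token store (the token value is illustrative): the file is created with owner-only permissions on first access, and get() supplies the value used for the 'Token ...' Authorization header:

from ansible.galaxy.token import GalaxyToken

token = GalaxyToken()
token.set('0123456789abcdef')   # persisted as YAML in ~/.ansible_galaxy
headers = {'Authorization': 'Token ' + token.get()}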

View file

@ -109,7 +109,12 @@ class Inventory(object):
pass
elif isinstance(host_list, list):
for h in host_list:
(host, port) = parse_address(h, allow_ranges=False)
try:
(host, port) = parse_address(h, allow_ranges=False)
except AnsibleError as e:
display.vvv("Unable to parse address from hostname, leaving unchanged: %s" % to_unicode(e))
host = h
port = None
all.add_host(Host(host, port))
elif self._loader.path_exists(host_list):
#TODO: switch this to a plugin loader and a 'condition' per plugin on which it should be tried, restoring 'inventory plugins'
@ -178,25 +183,26 @@ class Inventory(object):
if self._restriction:
pattern_hash += u":%s" % to_unicode(self._restriction)
if pattern_hash in HOSTS_PATTERNS_CACHE:
return HOSTS_PATTERNS_CACHE[pattern_hash][:]
if pattern_hash not in HOSTS_PATTERNS_CACHE:
patterns = Inventory.split_host_pattern(pattern)
hosts = self._evaluate_patterns(patterns)
patterns = Inventory.split_host_pattern(pattern)
hosts = self._evaluate_patterns(patterns)
# mainly useful for hostvars[host] access
if not ignore_limits_and_restrictions:
# exclude hosts not in a subset, if defined
if self._subset:
subset = self._evaluate_patterns(self._subset)
hosts = [ h for h in hosts if h in subset ]
# mainly useful for hostvars[host] access
if not ignore_limits_and_restrictions:
# exclude hosts not in a subset, if defined
if self._subset:
subset = self._evaluate_patterns(self._subset)
hosts = [ h for h in hosts if h in subset ]
# exclude hosts mentioned in any restriction (ex: failed hosts)
if self._restriction is not None:
hosts = [ h for h in hosts if h in self._restriction ]
# exclude hosts mentioned in any restriction (ex: failed hosts)
if self._restriction is not None:
hosts = [ h for h in hosts if h in self._restriction ]
HOSTS_PATTERNS_CACHE[pattern_hash] = hosts[:]
return hosts
seen = set()
HOSTS_PATTERNS_CACHE[pattern_hash] = [x for x in hosts if x not in seen and not seen.add(x)]
return HOSTS_PATTERNS_CACHE[pattern_hash][:]
@classmethod
def split_host_pattern(cls, pattern):
@ -227,15 +233,13 @@ class Inventory(object):
# If it doesn't, it could still be a single pattern. This accounts for
# non-separator uses of colons: IPv6 addresses and [x:y] host ranges.
else:
(base, port) = parse_address(pattern, allow_ranges=True)
if base:
try:
(base, port) = parse_address(pattern, allow_ranges=True)
patterns = [pattern]
# The only other case we accept is a ':'-separated list of patterns.
# This mishandles IPv6 addresses, and is retained only for backwards
# compatibility.
else:
except:
# The only other case we accept is a ':'-separated list of patterns.
# This mishandles IPv6 addresses, and is retained only for backwards
# compatibility.
patterns = re.findall(
r'''(?: # We want to match something comprising:
[^\s:\[\]] # (anything other than whitespace or ':[]'
@ -388,7 +392,7 @@ class Inventory(object):
end = -1
subscript = (int(start), int(end))
if sep == '-':
display.deprecated("Use [x:y] inclusive subscripts instead of [x-y]", version=2.0, removed=True)
display.warning("Use [x:y] inclusive subscripts instead of [x-y] which has been removed")
return (pattern, subscript)
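
The cache population relies on a common order-preserving dedupe idiom: set.add() returns None, so `not seen.add(x)` is always true the first time x is seen, recording x as a side effect. Isolated for clarity (host names are illustrative):

seen = set()
hosts = ['web1', 'db1', 'web1', 'web2']
unique = [x for x in hosts if x not in seen and not seen.add(x)]
# unique == ['web1', 'db1', 'web2']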

View file

@ -192,6 +192,8 @@ class InventoryDirectory(object):
if group.name not in self.groups:
# it's brand new, add him!
self.groups[group.name] = group
# the Group class does not (yet) implement __eq__/__ne__,
# so unlike Host we do a regular comparison here
if self.groups[group.name] != group:
# different object, merge
self._merge_groups(self.groups[group.name], group)
@ -200,6 +202,9 @@ class InventoryDirectory(object):
if host.name not in self.hosts:
# Papa's got a brand new host
self.hosts[host.name] = host
# because the __eq__/__ne__ methods in Host() compare the
# name fields rather than references, we use id() here to
# do the object comparison for merges
if self.hosts[host.name] != host:
# different object, merge
self._merge_hosts(self.hosts[host.name], host)

View file

@ -19,6 +19,8 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import uuid
from ansible.inventory.group import Group
from ansible.utils.vars import combine_vars
@ -38,7 +40,7 @@ class Host:
def __eq__(self, other):
if not isinstance(other, Host):
return False
return self.name == other.name
return self._uuid == other._uuid
def __ne__(self, other):
return not self.__eq__(other)
@ -55,6 +57,7 @@ class Host:
name=self.name,
vars=self.vars.copy(),
address=self.address,
uuid=self._uuid,
gathered_facts=self._gathered_facts,
groups=groups,
)
@ -65,6 +68,7 @@ class Host:
self.name = data.get('name')
self.vars = data.get('vars', dict())
self.address = data.get('address', '')
self._uuid = data.get('uuid', uuid.uuid4())
groups = data.get('groups', [])
for group_data in groups:
@ -84,6 +88,7 @@ class Host:
self.set_variable('ansible_port', int(port))
self._gathered_facts = False
self._uuid = uuid.uuid4()
def __repr__(self):
return self.get_name()
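
Comparing on a per-instance uuid (which serialize/deserialize now preserve) means two Host objects are equal only when they refer to the same inventory entry, not merely because they share a name. An illustrative check:

from ansible.inventory.host import Host

a = Host('web01')
b = Host('web01')
assert a == a
assert a != b   # distinct uuids; the old name-based __eq__ called these equal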

View file

@ -23,7 +23,7 @@ import ast
import re
from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.inventory.host import Host
from ansible.inventory.group import Group
from ansible.inventory.expand_hosts import detect_range
@ -264,9 +264,12 @@ class InventoryParser(object):
# Can the given hostpattern be parsed as a host with an optional port
# specification?
(pattern, port) = parse_address(hostpattern, allow_ranges=True)
if not pattern:
self._raise_error("Can't parse '%s' as host[:port]" % hostpattern)
try:
(pattern, port) = parse_address(hostpattern, allow_ranges=True)
except:
# not a recognizable host pattern
pattern = hostpattern
port = None
# Once we have separated the pattern, we expand it into list of one or
# more hostnames, depending on whether it contains any [x:y] ranges.

View file

@ -31,6 +31,7 @@ from ansible.errors import AnsibleError
from ansible.inventory.host import Host
from ansible.inventory.group import Group
from ansible.module_utils.basic import json_dict_bytes_to_unicode
from ansible.utils.unicode import to_str
class InventoryScript:
@ -62,7 +63,6 @@ class InventoryScript:
self.host_vars_from_top = None
self._parse(stderr)
def _parse(self, err):
all_hosts = {}
@ -72,11 +72,11 @@ class InventoryScript:
self.raw = self._loader.load(self.data)
except Exception as e:
sys.stderr.write(err + "\n")
raise AnsibleError("failed to parse executable inventory script results from {0}: {1}".format(self.filename, str(e)))
raise AnsibleError("failed to parse executable inventory script results from {0}: {1}".format(to_str(self.filename), to_str(e)))
if not isinstance(self.raw, Mapping):
sys.stderr.write(err + "\n")
raise AnsibleError("failed to parse executable inventory script results from {0}: data needs to be formatted as a json dict".format(self.filename))
raise AnsibleError("failed to parse executable inventory script results from {0}: data needs to be formatted as a json dict".format(to_str(self.filename)))
self.raw = json_dict_bytes_to_unicode(self.raw)
@ -112,7 +112,7 @@ class InventoryScript:
"data for the host list:\n %s" % (group_name, data))
for hostname in data['hosts']:
if not hostname in all_hosts:
if hostname not in all_hosts:
all_hosts[hostname] = Host(hostname)
host = all_hosts[hostname]
group.add_host(host)
@ -148,7 +148,6 @@ class InventoryScript:
got = self.host_vars_from_top.get(host.name, {})
return got
cmd = [self.filename, "--host", host.name]
try:
sp = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
@ -161,4 +160,3 @@ class InventoryScript:
return json_dict_bytes_to_unicode(self._loader.load(out))
except ValueError:
raise AnsibleError("could not parse post variable response: %s, %s" % (cmd, out))

View file

@ -213,7 +213,7 @@ except ImportError:
elif isinstance(node, ast.List):
return list(map(_convert, node.nodes))
elif isinstance(node, ast.Dict):
return dict((_convert(k), _convert(v)) for k, v in node.items)
return dict((_convert(k), _convert(v)) for k, v in node.items())
elif isinstance(node, ast.Name):
if node.name in _safe_names:
return _safe_names[node.name]
@ -369,7 +369,12 @@ def return_values(obj):
sensitive values pre-jsonification."""
if isinstance(obj, basestring):
if obj:
yield obj
if isinstance(obj, bytes):
yield obj
else:
# Unicode objects should all convert to utf-8
# (still must deal with surrogateescape on python3)
yield obj.encode('utf-8')
return
elif isinstance(obj, Sequence):
for element in obj:
@ -391,10 +396,22 @@ def remove_values(value, no_log_strings):
""" Remove strings in no_log_strings from value. If value is a container
type, then remove a lot more"""
if isinstance(value, basestring):
if value in no_log_strings:
if isinstance(value, unicode):
# This should work everywhere on python2. Need to check
# surrogateescape on python3
bytes_value = value.encode('utf-8')
value_is_unicode = True
else:
bytes_value = value
value_is_unicode = False
if bytes_value in no_log_strings:
return 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'
for omit_me in no_log_strings:
value = value.replace(omit_me, '*' * 8)
bytes_value = bytes_value.replace(omit_me, '*' * 8)
if value_is_unicode:
value = unicode(bytes_value, 'utf-8', errors='replace')
else:
value = bytes_value
elif isinstance(value, Sequence):
return [remove_values(elem, no_log_strings) for elem in value]
elif isinstance(value, Mapping):
@ -499,6 +516,7 @@ class AnsibleModule(object):
self._debug = False
self.aliases = {}
self._legal_inputs = ['_ansible_check_mode', '_ansible_no_log', '_ansible_debug']
if add_file_common_args:
for k, v in FILE_COMMON_ARGUMENTS.items():
@ -507,6 +525,15 @@ class AnsibleModule(object):
self.params = self._load_params()
# append to legal_inputs and then possibly check against them
try:
self.aliases = self._handle_aliases()
except Exception:
e = get_exception()
# use exceptions here because it's not safe to call fail_json until no_log is processed
print('{"failed": true, "msg": "Module alias error: %s"}' % str(e))
sys.exit(1)
# Save parameter values that should never be logged
self.no_log_values = set()
# Use the argspec to determine which args are no_log
@ -521,10 +548,6 @@ class AnsibleModule(object):
# reset to LANG=C if it's an invalid/unavailable locale
self._check_locale()
self._legal_inputs = ['_ansible_check_mode', '_ansible_no_log', '_ansible_debug']
# append to legal_inputs and then possibly check against them
self.aliases = self._handle_aliases()
self._check_arguments(check_invalid_arguments)
@ -1047,6 +1070,7 @@ class AnsibleModule(object):
self.fail_json(msg="An unknown error was encountered while attempting to validate the locale: %s" % e)
def _handle_aliases(self):
# this uses exceptions as it happens before we can safely call fail_json
aliases_results = {} #alias:canon
for (k,v) in self.argument_spec.items():
self._legal_inputs.append(k)
@ -1055,11 +1079,11 @@ class AnsibleModule(object):
required = v.get('required', False)
if default is not None and required:
# not alias specific but this is a good place to check this
self.fail_json(msg="internal error: required and default are mutually exclusive for %s" % k)
raise Exception("internal error: required and default are mutually exclusive for %s" % k)
if aliases is None:
continue
if type(aliases) != list:
self.fail_json(msg='internal error: aliases must be a list')
raise Exception('internal error: aliases must be a list')
for alias in aliases:
self._legal_inputs.append(alias)
aliases_results[alias] = k
@ -1257,7 +1281,7 @@ class AnsibleModule(object):
if isinstance(value, bool):
return value
if isinstance(value, basestring):
if isinstance(value, basestring) or isinstance(value, int):
return self.boolean(value)
raise TypeError('%s cannot be converted to a bool' % type(value))
@ -1414,7 +1438,6 @@ class AnsibleModule(object):
self.log(msg, log_args=log_args)
def _set_cwd(self):
try:
cwd = os.getcwd()
@ -1507,6 +1530,8 @@ class AnsibleModule(object):
self.add_path_info(kwargs)
if not 'changed' in kwargs:
kwargs['changed'] = False
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
kwargs = remove_values(kwargs, self.no_log_values)
self.do_cleanup_files()
print(self.jsonify(kwargs))
@ -1517,6 +1542,8 @@ class AnsibleModule(object):
self.add_path_info(kwargs)
assert 'msg' in kwargs, "implementation error -- msg to explain the error is required"
kwargs['failed'] = True
if 'invocation' not in kwargs:
kwargs['invocation'] = {'module_args': self.params}
kwargs = remove_values(kwargs, self.no_log_values)
self.do_cleanup_files()
print(self.jsonify(kwargs))
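
After this change the no_log matching happens on utf-8 bytes, so unicode and byte strings are censored consistently: a value that is itself in no_log_strings is replaced wholesale, while mere substrings are starred out. Illustrative calls (Python 2, like the surrounding code):

from ansible.module_utils.basic import remove_values

no_log = set([b'hunter2'])
remove_values(u'pass is hunter2', no_log)        # -> u'pass is ********'
remove_values({'password': 'hunter2'}, no_log)   # -> {'password': 'VALUE_SPECIFIED_IN_NO_LOG_PARAMETER'}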

View file

@ -1,155 +0,0 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
"""
This module adds shared support for Arista EOS devices using eAPI over
HTTP/S transport. It is built on module_utils/urls.py which is required
for proper operation.
In order to use this module, include it as part of a custom
module as shown below.
** Note: The order of the import statements does matter. **
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
from ansible.module_utils.eapi import *
The eapi module provides the following common argument spec:
* host (str) - [Required] The IPv4 address or FQDN of the network device
* port (str) - Overrides the default port to use for the HTTP/S
connection. The default values are 80 for HTTP and
443 for HTTPS
* url_username (str) - [Required] The username to use to authenticate
the HTTP/S connection. Aliases: username
* url_password (str) - [Required] The password to use to authenticate
the HTTP/S connection. Aliases: password
* use_ssl (bool) - Specifies whether or not to use an encrypted (HTTPS)
connection or not. The default value is False.
* enable_mode (bool) - Specifies whether or not to enter `enable` mode
prior to executing the command list. The default value is True
* enable_password (str) - The password for entering `enable` mode
on the switch if configured.
In order to communicate with Arista EOS devices, the eAPI feature
must be enabled and configured on the device.
"""
def eapi_argument_spec(spec=None):
"""Creates an argument spec for working with eAPI
"""
arg_spec = url_argument_spec()
arg_spec.update(dict(
host=dict(required=True),
port=dict(),
url_username=dict(required=True, aliases=['username']),
url_password=dict(required=True, aliases=['password']),
use_ssl=dict(default=True, type='bool'),
enable_mode=dict(default=True, type='bool'),
enable_password=dict()
))
if spec:
arg_spec.update(spec)
return arg_spec
def eapi_url(module):
"""Construct a valid Arist eAPI URL
"""
if module.params['use_ssl']:
proto = 'https'
else:
proto = 'http'
host = module.params['host']
url = '{}://{}'.format(proto, host)
if module.params['port']:
url = '{}:{}'.format(url, module.params['port'])
return '{}/command-api'.format(url)
def to_list(arg):
"""Convert the argument to a list object
"""
if isinstance(arg, (list, tuple)):
return list(arg)
elif arg is not None:
return [arg]
else:
return []
def eapi_body(commands, encoding, reqid=None):
"""Create a valid eAPI JSON-RPC request message
"""
params = dict(version=1, cmds=to_list(commands), format=encoding)
return dict(jsonrpc='2.0', id=reqid, method='runCmds', params=params)
def eapi_enable_mode(module):
"""Build commands for entering `enable` mode on the switch
"""
if module.params['enable_mode']:
passwd = module.params['enable_password']
if passwd:
return dict(cmd='enable', input=passwd)
else:
return 'enable'
def eapi_command(module, commands, encoding='json'):
"""Send an ordered list of commands to the device over eAPI
"""
commands = to_list(commands)
url = eapi_url(module)
enable = eapi_enable_mode(module)
if enable:
commands.insert(0, enable)
data = eapi_body(commands, encoding)
data = module.jsonify(data)
headers = {'Content-Type': 'application/json-rpc'}
response, headers = fetch_url(module, url, data=data, headers=headers,
method='POST')
if headers['status'] != 200:
module.fail_json(**headers)
response = module.from_json(response.read())
if 'error' in response:
err = response['error']
module.fail_json(msg='json-rpc error', **err)
if enable:
response['result'].pop(0)
return response['result'], headers
def eapi_configure(module, commands):
"""Send configuration commands to the device over eAPI
"""
commands.insert(0, 'configure')
response, headers = eapi_command(module, commands)
response.pop(0)
return response, headers

View file

@ -0,0 +1,215 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
NET_PASSWD_RE = re.compile(r"[\r\n]?password: $", re.I)
NET_COMMON_ARGS = dict(
host=dict(required=True),
port=dict(type='int'),
username=dict(required=True),
password=dict(no_log=True),
authorize=dict(default=False, type='bool'),
auth_pass=dict(no_log=True),
transport=dict(choices=['cli', 'eapi']),
use_ssl=dict(default=True, type='bool')
)
def to_list(val):
if isinstance(val, (list, tuple)):
return list(val)
elif val is not None:
return [val]
else:
return list()
class Eapi(object):
def __init__(self, module):
self.module = module
# sets the module_utils/urls.py req parameters
self.module.params['url_username'] = module.params['username']
self.module.params['url_password'] = module.params['password']
self.url = None
self.enable = None
def _get_body(self, commands, encoding, reqid=None):
"""Create a valid eAPI JSON-RPC request message
"""
params = dict(version=1, cmds=commands, format=encoding)
return dict(jsonrpc='2.0', id=reqid, method='runCmds', params=params)
def connect(self):
host = self.module.params['host']
port = self.module.params['port']
if self.module.params['use_ssl']:
proto = 'https'
if not port:
port = 443
else:
proto = 'http'
if not port:
port = 80
self.url = '%s://%s:%s/command-api' % (proto, host, port)
def authorize(self):
if self.module.params['auth_pass']:
passwd = self.module.params['auth_pass']
self.enable = dict(cmd='enable', input=passwd)
else:
self.enable = 'enable'
def send(self, commands, encoding='json'):
"""Send commands to the device.
"""
clist = to_list(commands)
if self.enable is not None:
clist.insert(0, self.enable)
data = self._get_body(clist, encoding)
data = self.module.jsonify(data)
headers = {'Content-Type': 'application/json-rpc'}
response, headers = fetch_url(self.module, self.url, data=data,
headers=headers, method='POST')
if headers['status'] != 200:
self.module.fail_json(**headers)
response = self.module.from_json(response.read())
if 'error' in response:
err = response['error']
self.module.fail_json(msg='json-rpc error', **err)
if self.enable:
response['result'].pop(0)
return response['result']
class Cli(object):
def __init__(self, module):
self.module = module
self.shell = None
def connect(self, **kwargs):
host = self.module.params['host']
port = self.module.params['port'] or 22
username = self.module.params['username']
password = self.module.params['password']
self.shell = Shell()
self.shell.open(host, port=port, username=username, password=password)
def authorize(self):
passwd = self.module.params['auth_pass']
self.send(Command('enable', prompt=NET_PASSWD_RE, response=passwd))
def send(self, commands, encoding='text'):
return self.shell.send(commands)
class EosModule(AnsibleModule):
def __init__(self, *args, **kwargs):
super(EosModule, self).__init__(*args, **kwargs)
self.connection = None
self._config = None
@property
def config(self):
if not self._config:
self._config = self.get_config()
return self._config
def connect(self):
if self.params['transport'] == 'eapi':
self.connection = Eapi(self)
else:
self.connection = Cli(self)
try:
self.connection.connect()
self.execute('terminal length 0')
if self.params['authorize']:
self.connection.authorize()
except Exception, exc:
self.fail_json(msg=exc.message)
def configure(self, commands):
commands = to_list(commands)
commands.insert(0, 'configure terminal')
responses = self.execute(commands)
responses.pop(0)
return responses
def execute(self, commands, **kwargs):
try:
return self.connection.send(commands, **kwargs)
except Exception, exc:
self.fail_json(msg=exc.message, commands=commands)
def disconnect(self):
self.connection.close()
def parse_config(self, cfg):
return parse(cfg, indent=3)
def get_config(self):
cmd = 'show running-config'
if self.params.get('include_defaults'):
cmd += ' all'
if self.params['transport'] == 'cli':
return self.execute(cmd)[0]
else:
resp = self.execute(cmd, encoding='text')
return resp[0]
def get_module(**kwargs):
"""Return instance of EosModule
"""
argument_spec = NET_COMMON_ARGS.copy()
if kwargs.get('argument_spec'):
argument_spec.update(kwargs['argument_spec'])
kwargs['argument_spec'] = argument_spec
kwargs['check_invalid_arguments'] = False
module = EosModule(**kwargs)
# HAS_PARAMIKO is set by module_utils/shell.py
if module.params['transport'] == 'cli' and not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required but does not appear to be installed')
# copy in values from local action.
params = json_dict_unicode_to_bytes(json.loads(MODULE_COMPLEX_ARGS))
for key, value in params.iteritems():
module.params[key] = value
module.connect()
return module
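For reference, a custom module would normally enter through get_module() rather than instantiating EosModule directly. A sketch with a made-up `commands` argument; the star imports mirror what eos modules of this era declare:

def main():
    spec = dict(commands=dict(type='list', required=True))
    # get_module() merges NET_COMMON_ARGS into the spec, connects over
    # cli or eapi, and returns a ready-to-use EosModule instance
    module = get_module(argument_spec=spec)

    responses = module.execute(module.params['commands'])
    module.exit_json(changed=False, stdout=responses)

from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
from ansible.module_utils.shell import *
from ansible.module_utils.eos import *

if __name__ == '__main__':
    main()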


@ -51,19 +51,35 @@ def f5_argument_spec():
def f5_parse_arguments(module):
if not bigsuds_found:
module.fail_json(msg="the python bigsuds module is required")
if not module.params['validate_certs']:
disable_ssl_cert_validation()
if module.params['validate_certs']:
import ssl
if not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='bigsuds does not support verifying certificates with python < 2.7.9. Either update python or set validate_certs=False on the task')
return (module.params['server'],module.params['user'],module.params['password'],module.params['state'],module.params['partition'],module.params['validate_certs'])
def bigip_api(bigip, user, password):
api = bigsuds.BIGIP(hostname=bigip, username=user, password=password)
return api
def bigip_api(bigip, user, password, validate_certs):
try:
# bigsuds >= 1.0.3
api = bigsuds.BIGIP(hostname=bigip, username=user, password=password, verify=validate_certs)
except TypeError:
# bigsuds < 1.0.3, no verify param
if validate_certs:
# Note: verified we have SSLContext when we parsed params
api = bigsuds.BIGIP(hostname=bigip, username=user, password=password)
else:
import ssl
if hasattr(ssl, 'SSLContext'):
# Really, you should never do this. It disables certificate
# verification *globally*. But since older bigip libraries
# don't give us a way to toggle verification we need to
# disable it at the global level.
# From https://www.python.org/dev/peps/pep-0476/#id29
ssl._create_default_https_context = ssl._create_unverified_context
api = bigsuds.BIGIP(hostname=bigip, username=user, password=password)
def disable_ssl_cert_validation():
# You probably only want to do this for testing and never in production.
# From https://www.python.org/dev/peps/pep-0476/#id29
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
return api
# Fully Qualified name (with the partition)
def fq_name(partition,name):


@ -493,9 +493,10 @@ class Facts(object):
if self.facts['distribution'].lower() == 'coreos':
data = get_file_content('/etc/coreos/update.conf')
release = re.search("^GROUP=(.*)", data)
if release:
self.facts['distribution_release'] = release.group(1).strip('"')
if data:
release = re.search("^GROUP=(.*)", data)
if release:
self.facts['distribution_release'] = release.group(1).strip('"')
else:
self.facts['distribution'] = name
machine_id = get_file_content("/var/lib/dbus/machine-id") or get_file_content("/etc/machine-id")
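The added `if data:` guard exists because get_file_content() returns None when /etc/coreos/update.conf is absent (a CoreOS node that does not autoupdate), and re.search() cannot take None. A standalone illustration:

import re

data = None  # what get_file_content() yields for a missing file
try:
    re.search("^GROUP=(.*)", data)
except TypeError as exc:
    # the unguarded code failed here on non-autoupdating nodes
    print("unguarded parse fails: %s" % exc)

if data:
    # the guarded version simply skips the parse
    release = re.search("^GROUP=(.*)", data)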
@ -524,7 +525,10 @@ class Facts(object):
keytypes = ('dsa', 'rsa', 'ecdsa', 'ed25519')
if self.facts['system'] == 'Darwin':
keydir = '/etc'
if self.facts['distribution'] == 'MacOSX' and LooseVersion(self.facts['distribution_version']) >= LooseVersion('10.11'):
keydir = '/etc/ssh'
else:
keydir = '/etc'
else:
keydir = '/etc/ssh'
@ -552,8 +556,8 @@ class Facts(object):
if proc_1 is None:
rc, proc_1, err = module.run_command("ps -p 1 -o comm|tail -n 1", use_unsafe_shell=True)
if proc_1 in ['init', '/sbin/init']:
# many systems return init, so this cannot be trusted
if proc_1 in ['init', '/sbin/init', 'bash']:
# many systems return init, so this cannot be trusted, bash is from docker
proc_1 = None
# if not init/None it should be an identifiable or custom init, so we are done!


@ -0,0 +1,133 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
NET_PASSWD_RE = re.compile(r"[\r\n]?password: $", re.I)
NET_COMMON_ARGS = dict(
host=dict(required=True),
port=dict(default=22, type='int'),
username=dict(required=True),
password=dict(no_log=True),
authorize=dict(default=False, type='bool'),
auth_pass=dict(no_log=True),
)
def to_list(val):
if isinstance(val, (list, tuple)):
return list(val)
elif val is not None:
return [val]
else:
return list()
class Cli(object):
def __init__(self, module):
self.module = module
self.shell = None
def connect(self, **kwargs):
host = self.module.params['host']
port = self.module.params['port'] or 22
username = self.module.params['username']
password = self.module.params['password']
self.shell = Shell()
self.shell.open(host, port=port, username=username, password=password)
def authorize(self):
passwd = self.module.params['auth_pass']
self.send(Command('enable', prompt=NET_PASSWD_RE, response=passwd))
def send(self, commands):
return self.shell.send(commands)
class IosModule(AnsibleModule):
def __init__(self, *args, **kwargs):
super(IosModule, self).__init__(*args, **kwargs)
self.connection = None
self._config = None
@property
def config(self):
if not self._config:
self._config = self.get_config()
return self._config
def connect(self):
try:
self.connection = Cli(self)
self.connection.connect()
self.execute('terminal length 0')
if self.params['authorize']:
self.connection.authorize()
except Exception, exc:
self.fail_json(msg=exc.message)
def configure(self, commands):
commands = to_list(commands)
commands.insert(0, 'configure terminal')
responses = self.execute(commands)
responses.pop(0)
return responses
def execute(self, commands, **kwargs):
return self.connection.send(commands)
def disconnect(self):
self.connection.close()
def parse_config(self, cfg):
return parse(cfg, indent=1)
def get_config(self):
cmd = 'show running-config'
if self.params.get('include_defaults'):
cmd += ' all'
return self.execute(cmd)[0]
def get_module(**kwargs):
"""Return instance of IosModule
"""
argument_spec = NET_COMMON_ARGS.copy()
if kwargs.get('argument_spec'):
argument_spec.update(kwargs['argument_spec'])
kwargs['argument_spec'] = argument_spec
kwargs['check_invalid_arguments'] = False
module = IosModule(**kwargs)
# HAS_PARAMIKO is set by module_utils/shell.py
if not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required but does not appear to be installed')
# copy in values from local action.
params = json_dict_unicode_to_bytes(json.loads(MODULE_COMPLEX_ARGS))
for key, value in params.iteritems():
module.params[key] = value
module.connect()
return module


@ -0,0 +1,121 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
NET_PASSWD_RE = re.compile(r"[\r\n]?password: $", re.I)
NET_COMMON_ARGS = dict(
host=dict(required=True),
port=dict(default=22, type='int'),
username=dict(required=True),
password=dict(no_log=True)
)
def to_list(val):
if isinstance(val, (list, tuple)):
return list(val)
elif val is not None:
return [val]
else:
return list()
class Cli(object):
def __init__(self, module):
self.module = module
self.shell = None
def connect(self, **kwargs):
host = self.module.params['host']
port = self.module.params['port'] or 22
username = self.module.params['username']
password = self.module.params['password']
self.shell = Shell()
self.shell.open(host, port=port, username=username, password=password)
def send(self, commands):
return self.shell.send(commands)
class IosxrModule(AnsibleModule):
def __init__(self, *args, **kwargs):
super(IosxrModule, self).__init__(*args, **kwargs)
self.connection = None
self._config = None
@property
def config(self):
if not self._config:
self._config = self.get_config()
return self._config
def connect(self):
try:
self.connection = Cli(self)
self.connection.connect()
self.execute('terminal length 0')
except Exception, exc:
self.fail_json(msg=exc.message)
def configure(self, commands):
commands = to_list(commands)
commands.insert(0, 'configure terminal')
commands.append('commit')
responses = self.execute(commands)
responses.pop(0)
responses.pop()
return responses
def execute(self, commands, **kwargs):
return self.connection.send(commands)
def disconnect(self):
self.connection.close()
def parse_config(self, cfg):
return parse(cfg, indent=1)
def get_config(self):
return self.execute('show running-config')[0]
def get_module(**kwargs):
"""Return instance of IosxrModule
"""
argument_spec = NET_COMMON_ARGS.copy()
if kwargs.get('argument_spec'):
argument_spec.update(kwargs['argument_spec'])
kwargs['argument_spec'] = argument_spec
kwargs['check_invalid_arguments'] = False
module = IosxrModule(**kwargs)
if not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required but does not appear to be installed')
# copy in values from local action.
params = json_dict_unicode_to_bytes(json.loads(MODULE_COMPLEX_ARGS))
for key, value in params.iteritems():
module.params[key] = value
module.connect()
return module


@ -28,7 +28,11 @@
import os
import hmac
import urlparse
try:
import urlparse
except ImportError:
import urllib.parse as urlparse
try:
from hashlib import sha1
@ -74,12 +78,12 @@ def get_fqdn(repo_url):
if "@" in repo_url and "://" not in repo_url:
# most likely a user@host:path or user@host/path type URL
repo_url = repo_url.split("@", 1)[1]
if ":" in repo_url:
repo_url = repo_url.split(":")[0]
result = repo_url
if repo_url.startswith('['):
result = repo_url.split(']', 1)[0] + ']'
elif ":" in repo_url:
result = repo_url.split(":")[0]
elif "/" in repo_url:
repo_url = repo_url.split("/")[0]
result = repo_url
result = repo_url.split("/")[0]
elif "://" in repo_url:
# this should be something we can parse with urlparse
parts = urlparse.urlparse(repo_url)
@ -87,11 +91,13 @@ def get_fqdn(repo_url):
# ensure we actually have a parts[1] before continuing.
if parts[1] != '':
result = parts[1]
if ":" in result:
result = result.split(":")[0]
if "@" in result:
result = result.split("@", 1)[1]
if result[0].startswith('['):
result = result.split(']', 1)[0] + ']'
elif ":" in result:
result = result.split(":")[0]
return result
def check_hostkey(module, fqdn):
@ -169,7 +175,7 @@ def add_host_key(module, fqdn, key_type="rsa", create_dir=False):
if not os.path.exists(user_ssh_dir):
if create_dir:
try:
os.makedirs(user_ssh_dir, 0700)
os.makedirs(user_ssh_dir, int('700', 8))
except:
module.fail_json(msg="failed to create host key directory: %s" % user_ssh_dir)
else:
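The mode-argument change above is a Python 3 compatibility fix: the literal 0700 is a SyntaxError on Python 3, while int('700', 8) parses under both interpreters and names the same value. A quick check:

# all three spell mode rwx------ for the owner
assert int('700', 8) == 0o700 == 448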


@ -0,0 +1,66 @@
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# Copyright (c), Jonathan Mainguy <jon@soh.re>, 2015
# Most of this was originally added by Sven Schliesing @muffl0n in the mysql_user.py module
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
def mysql_connect(module, login_user=None, login_password=None, config_file='', ssl_cert=None, ssl_key=None, ssl_ca=None, db=None, cursor_class=None):
config = {
'host': module.params['login_host'],
'ssl': {
}
}
if module.params['login_unix_socket']:
config['unix_socket'] = module.params['login_unix_socket']
else:
config['port'] = module.params['login_port']
if os.path.exists(config_file):
config['read_default_file'] = config_file
# If login_user or login_password are given, they should override the
# config file
if login_user is not None:
config['user'] = login_user
if login_password is not None:
config['passwd'] = login_password
if ssl_cert is not None:
config['ssl']['cert'] = ssl_cert
if ssl_key is not None:
config['ssl']['key'] = ssl_key
if ssl_ca is not None:
config['ssl']['ca'] = ssl_ca
if db is not None:
config['db'] = db
db_connection = MySQLdb.connect(**config)
if cursor_class is not None:
return db_connection.cursor(cursorclass=cursor_class)
else:
return db_connection.cursor()
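A hypothetical call site for mysql_connect(); the parameter values and the surrounding `module` are illustrative only:

import MySQLdb.cursors

# config-file credentials are used unless login_user/login_password
# are supplied, which override the file
cursor = mysql_connect(module,
                       login_user=module.params['login_user'],
                       login_password=module.params['login_password'],
                       config_file='/root/.my.cnf',
                       db='mysql',
                       cursor_class=MySQLdb.cursors.DictCursor)
cursor.execute("SELECT VERSION()")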


@ -0,0 +1,85 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
import re
import collections
class ConfigLine(object):
def __init__(self, text):
self.text = text
self.children = list()
self.parents = list()
self.raw = None
def __str__(self):
return self.raw
def __eq__(self, other):
if self.text == other.text:
return self.parents == other.parents
def __ne__(self, other):
return not self.__eq__(other)
def parse(lines, indent):
toplevel = re.compile(r'\S')
childline = re.compile(r'^\s*(.+)$')
repl = r'([{|}|;])'
ancestors = list()
config = list()
for line in str(lines).split('\n'):
text = str(re.sub(repl, '', line)).strip()
cfg = ConfigLine(text)
cfg.raw = line
if not text or text[0] in ['!', '#']:
continue
# handle top level commands
if toplevel.match(line):
ancestors = [cfg]
# handle sub level commands
else:
match = childline.match(line)
line_indent = match.start(1)
level = int(line_indent / indent)
parent_level = level - 1
cfg.parents = ancestors[:level]
if level > len(ancestors):
config.append(cfg)
continue
for i in range(level, len(ancestors)):
ancestors.pop()
ancestors.append(cfg)
ancestors[parent_level].children.append(cfg)
config.append(cfg)
return config
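A small illustration of parse(); the sample configuration is made up, and indent=3 matches EOS-style three-space children:

sample = """
interface Ethernet1
   description uplink
   no shutdown
"""

objects = parse(sample, indent=3)
# objects[0].text                        -> 'interface Ethernet1'
# [c.text for c in objects[0].children]  -> ['description uplink', 'no shutdown']
# objects[1].parents[0] is objects[0]    -> True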


@ -1,130 +0,0 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
"""
This module adds support for Cisco NXAPI to Ansible shared
module_utils. It builds on module_utils/urls.py to provide
NXAPI support over HTTP/S which is required for proper operation.
In order to use this module, include it as part of a custom
module as shown below.
** Note: The order of the import statements does matter. **
from ansible.module_utils.basic import *
from ansible.module_utils.urls import *
from ansible.module_utils.nxapi import *
The nxapi module provides the following common argument spec:
* host (str) - [Required] The IPv4 address or FQDN of the network device
* port (str) - Overrides the default port to use for the HTTP/S
connection. The default values are 80 for HTTP and
443 for HTTPS
* url_username (str) - [Required] The username to use to authenticate
the HTTP/S connection. Aliases: username
* url_password (str) - [Required] The password to use to authenticate
the HTTP/S connection. Aliases: password
* use_ssl (bool) - Specifies whether to use an encrypted (HTTPS)
connection. The default value is False.
* command_type (str) - The type of command to send to the remote
device. Valid values are `cli_show`, `cli_show_ascii`, `cli_conf`
and `bash`. The default value is `cli_show_ascii`
In order to communicate with Cisco NXOS devices, the NXAPI feature
must be enabled and configured on the device.
"""
NXAPI_COMMAND_TYPES = ['cli_show', 'cli_show_ascii', 'cli_conf', 'bash']
def nxapi_argument_spec(spec=None):
"""Creates an argument spec for working with NXAPI
"""
arg_spec = url_argument_spec()
arg_spec.update(dict(
host=dict(required=True),
port=dict(),
url_username=dict(required=True, aliases=['username']),
url_password=dict(required=True, aliases=['password']),
use_ssl=dict(default=False, type='bool'),
command_type=dict(default='cli_show_ascii', choices=NXAPI_COMMAND_TYPES)
))
if spec:
arg_spec.update(spec)
return arg_spec
def nxapi_url(module):
"""Constructs a valid NXAPI url
"""
if module.params['use_ssl']:
proto = 'https'
else:
proto = 'http'
host = module.params['host']
url = '{}://{}'.format(proto, host)
port = module.params['port']
if module.params['port']:
url = '{}:{}'.format(url, module.params['port'])
url = '{}/ins'.format(url)
return url
def nxapi_body(commands, command_type, **kwargs):
"""Encodes a NXAPI JSON request message
"""
if isinstance(commands, (list, set, tuple)):
commands = ' ;'.join(commands)
msg = {
'version': kwargs.get('version') or '1.2',
'type': command_type,
'chunk': kwargs.get('chunk') or '0',
'sid': kwargs.get('sid'),
'input': commands,
'output_format': 'json'
}
return dict(ins_api=msg)
def nxapi_command(module, commands, command_type=None, **kwargs):
"""Sends the list of commands to the device over NXAPI
"""
url = nxapi_url(module)
command_type = command_type or module.params['command_type']
data = nxapi_body(commands, command_type)
data = module.jsonify(data)
headers = {'Content-Type': 'text/json'}
response, headers = fetch_url(module, url, data=data, headers=headers,
method='POST')
status = kwargs.get('status') or 200
if headers['status'] != status:
module.fail_json(**headers)
response = module.from_json(response.read())
return response, headers


@ -0,0 +1,216 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
NET_PASSWD_RE = re.compile(r"[\r\n]?password: $", re.I)
NET_COMMON_ARGS = dict(
host=dict(required=True),
port=dict(type='int'),
username=dict(required=True),
password=dict(no_log=True),
transport=dict(choices=['cli', 'nxapi']),
use_ssl=dict(default=False, type='bool')
)
NXAPI_COMMAND_TYPES = ['cli_show', 'cli_show_ascii', 'cli_conf', 'bash']
NXAPI_ENCODINGS = ['json', 'xml']
def to_list(val):
if isinstance(val, (list, tuple)):
return list(val)
elif val is not None:
return [val]
else:
return list()
class Nxapi(object):
def __init__(self, module):
self.module = module
# sets the module_utils/urls.py req parameters
self.module.params['url_username'] = module.params['username']
self.module.params['url_password'] = module.params['password']
self.url = None
self.enable = None
def _get_body(self, commands, command_type, encoding, version='1.2', chunk='0', sid=None):
"""Encodes a NXAPI JSON request message
"""
if isinstance(commands, (list, set, tuple)):
commands = ' ;'.join(commands)
if encoding not in NXAPI_ENCODINGS:
self.module.fail_json("Invalid encoding. Received %s. Expected one of %s" %
(encoding, ','.join(NXAPI_ENCODINGS)))
msg = {
'version': version,
'type': command_type,
'chunk': chunk,
'sid': sid,
'input': commands,
'output_format': encoding
}
return dict(ins_api=msg)
def connect(self):
host = self.module.params['host']
port = self.module.params['port']
if self.module.params['use_ssl']:
proto = 'https'
if not port:
port = 443
else:
proto = 'http'
if not port:
port = 80
self.url = '%s://%s:%s/ins' % (proto, host, port)
def send(self, commands, command_type='cli_show_ascii', encoding='json'):
"""Send commands to the device.
"""
clist = to_list(commands)
if command_type not in NXAPI_COMMAND_TYPES:
self.module.fail_json(msg="Invalid command_type. Received %s. Expected one of %s." %
(command_type, ','.join(NXAPI_COMMAND_TYPES)))
data = self._get_body(clist, command_type, encoding)
data = self.module.jsonify(data)
headers = {'Content-Type': 'application/json'}
response, headers = fetch_url(self.module, self.url, data=data, headers=headers,
method='POST')
if headers['status'] != 200:
self.module.fail_json(**headers)
response = self.module.from_json(response.read())
if 'error' in response:
err = response['error']
self.module.fail_json(msg='json-rpc error: %s' % str(err))
return response
class Cli(object):
def __init__(self, module):
self.module = module
self.shell = None
def connect(self, **kwargs):
host = self.module.params['host']
port = self.module.params['port'] or 22
username = self.module.params['username']
password = self.module.params['password']
self.shell = Shell()
self.shell.open(host, port=port, username=username, password=password)
def send(self, commands, encoding='text'):
return self.shell.send(commands)
class NxosModule(AnsibleModule):
def __init__(self, *args, **kwargs):
super(NxosModule, self).__init__(*args, **kwargs)
self.connection = None
self._config = None
@property
def config(self):
if not self._config:
self._config = self.get_config()
return self._config
def connect(self):
if self.params['transport'] == 'nxapi':
self.connection = Nxapi(self)
else:
self.connection = Cli(self)
try:
self.connection.connect()
self.execute('terminal length 0')
except Exception, exc:
self.fail_json(msg=exc.message)
def configure(self, commands):
commands = to_list(commands)
if self.params['transport'] == 'cli':
commands.insert(0, 'configure terminal')
responses = self.execute(commands)
responses.pop(0)
else:
responses = self.execute(commands, command_type='cli_conf')
return responses
def execute(self, commands, **kwargs):
try:
return self.connection.send(commands, **kwargs)
except Exception, exc:
self.fail_json(msg=exc.message)
def disconnect(self):
self.connection.close()
def parse_config(self, cfg):
return parse(cfg, indent=2)
def get_config(self):
cmd = 'show running-config'
if self.params.get('include_defaults'):
cmd += ' all'
if self.params['transport'] == 'cli':
return self.execute(cmd)[0]
else:
resp = self.execute(cmd)
if not resp.get('ins_api').get('outputs').get('output').get('body'):
self.fail_json(msg="Unrecognized response: %s" % str(resp))
return resp['ins_api']['outputs']['output']['body']
def get_module(**kwargs):
"""Return instance of EosModule
"""
argument_spec = NET_COMMON_ARGS.copy()
if kwargs.get('argument_spec'):
argument_spec.update(kwargs['argument_spec'])
kwargs['argument_spec'] = argument_spec
kwargs['check_invalid_arguments'] = False
module = NxosModule(**kwargs)
# HAS_PARAMIKO is set by module_utils/shell.py
if module.params['transport'] == 'cli' and not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required but does not appear to be installed')
# copy in values from local action.
params = json_dict_unicode_to_bytes(json.loads(MODULE_COMPLEX_ARGS))
for key, value in params.iteritems():
module.params[key] = value
module.connect()
return module
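For reference, this is the JSON envelope Nxapi._get_body() produces for a single command, using the defaults shown in the implementation above:

# body for Nxapi.send(['show version']) before jsonify()
body = {
    'ins_api': {
        'version': '1.2',
        'type': 'cli_show_ascii',
        'chunk': '0',
        'sid': None,
        'input': 'show version',
        'output_format': 'json'
    }
}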


@ -0,0 +1,246 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
import time
import json
try:
from runconfig import runconfig
from opsrest.settings import settings
from opsrest.manager import OvsdbConnectionManager
from opslib import restparser
HAS_OPS = True
except ImportError:
HAS_OPS = False
NET_PASSWD_RE = re.compile(r"[\r\n]?password: $", re.I)
NET_COMMON_ARGS = dict(
host=dict(),
port=dict(type='int'),
username=dict(),
password=dict(no_log=True),
transport=dict(default='ssh', choices=['ssh', 'cli', 'rest']),
use_ssl=dict(default=True, type='bool'),  # read by Rest.connect() below; the default here is an assumption
)
def to_list(val):
if isinstance(val, (list, tuple)):
return list(val)
elif val is not None:
return [val]
else:
return list()
def get_idl():
manager = OvsdbConnectionManager(settings.get('ovs_remote'),
settings.get('ovs_schema'))
manager.start()
idl = manager.idl
init_seq_no = 0
while (init_seq_no == idl.change_seqno):
idl.run()
time.sleep(1)
return idl
def get_schema():
return restparser.parseSchema(settings.get('ext_schema'))
def get_runconfig():
idl = get_idl()
schema = get_schema()
return runconfig.RunConfigUtil(idl, schema)
class Response(object):
def __init__(self, resp, hdrs):
self.body = resp.read()
self.headers = hdrs
@property
def json(self):
try:
return json.loads(self.body)
except ValueError:
return None
class Rest(object):
def __init__(self, module):
self.module = module
self.baseurl = None
def connect(self):
host = self.module.params['host']
port = self.module.params['port']
if self.module.params['use_ssl']:
proto = 'https'
if not port:
port = 443
else:
proto = 'http'
if not port:
port = 80
self.baseurl = '%s://%s:%s/rest/v1' % (proto, host, port)
def _url_builder(self, path):
if path[0] == '/':
path = path[1:]
return '%s/%s' % (self.baseurl, path)
def send(self, method, path, data=None, headers=None):
url = self._url_builder(path)
data = self.module.jsonify(data)
if headers is None:
headers = dict()
headers.update({'Content-Type': 'application/json'})
resp, hdrs = fetch_url(self.module, url, data=data, headers=headers,
method=method)
return Response(resp, hdrs)
def get(self, path, data=None, headers=None):
return self.send('GET', path, data, headers)
def put(self, path, data=None, headers=None):
return self.send('PUT', path, data, headers)
def post(self, path, data=None, headers=None):
return self.send('POST', path, data, headers)
def delete(self, path, data=None, headers=None):
return self.send('DELETE', path, data, headers)
class Cli(object):
def __init__(self, module):
self.module = module
self.shell = None
def connect(self, **kwargs):
host = self.module.params['host']
port = self.module.params['port'] or 22
username = self.module.params['username']
password = self.module.params['password']
self.shell = Shell()
self.shell.open(host, port=port, username=username, password=password)
def send(self, commands, encoding='text'):
return self.shell.send(commands)
class OpsModule(AnsibleModule):
def __init__(self, *args, **kwargs):
super(OpsModule, self).__init__(*args, **kwargs)
self.connection = None
self._config = None
self._runconfig = None
@property
def config(self):
if not self._config:
self._config = self.get_config()
return self._config
def connect(self):
if self.params['transport'] == 'rest':
self.connection = Rest(self)
elif self.params['transport'] == 'cli':
self.connection = Cli(self)
try:
self.connection.connect()
except Exception, exc:
self.fail_json(msg=exc.message)
def configure(self, config):
if self.params['transport'] == 'cli':
commands = to_list(config)
commands.insert(0, 'configure terminal')
responses = self.execute(commands)
responses.pop(0)
return responses
elif self.params['transport'] == 'rest':
path = '/system/full-configuration'
return self.connection.put(path, data=config)
else:
if not self._runconfig:
self._runconfig = get_runconfig()
self._runconfig.write_config_to_db(config)
def execute(self, commands, **kwargs):
try:
return self.connection.send(commands, **kwargs)
except Exception, exc:
self.fail_json(msg=exc.message, commands=commands)
def disconnect(self):
self.connection.close()
def parse_config(self, cfg):
return parse(cfg, indent=4)
def get_config(self):
if self.params['transport'] == 'cli':
return self.execute('show running-config')[0]
elif self.params['transport'] == 'rest':
resp = self.connection.get('/system/full-configuration')
return resp.json
else:
if not self._runconfig:
self._runconfig = get_runconfig()
return self._runconfig.get_running_config()
def get_module(**kwargs):
"""Return instance of OpsModule
"""
argument_spec = NET_COMMON_ARGS.copy()
if kwargs.get('argument_spec'):
argument_spec.update(kwargs['argument_spec'])
kwargs['argument_spec'] = argument_spec
kwargs['check_invalid_arguments'] = False
module = OpsModule(**kwargs)
if not HAS_OPS and module.params['transport'] == 'ssh':
module.fail_json(msg='could not import ops library')
# HAS_PARAMIKO is set by module_utils/shell.py
if module.params['transport'] == 'cli' and not HAS_PARAMIKO:
module.fail_json(msg='paramiko is required but does not appear to be installed')
# copy in values from local action.
params = json_dict_unicode_to_bytes(json.loads(MODULE_COMPLEX_ARGS))
for key, value in params.iteritems():
module.params[key] = value
if module.params['transport'] in ['cli', 'rest']:
module.connect()
return module
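A sketch of the REST transport in isolation; /system is a real path in the OPS REST API, but treat the whole flow as illustrative rather than a tested call:

module = get_module()        # transport: rest assumed in the task args
rest = module.connection     # a Rest instance after connect()

resp = rest.get('/system')   # GET http(s)://host:port/rest/v1/system
if resp.json is not None:
    module.exit_json(changed=False, result=resp.json)
else:
    module.fail_json(msg='unable to parse response body')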


@ -0,0 +1,193 @@
#
# (c) 2015 Peter Sprygada, <psprygada@ansible.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
import re
import socket
from StringIO import StringIO
try:
import paramiko
HAS_PARAMIKO = True
except ImportError:
HAS_PARAMIKO = False
ANSI_RE = re.compile(r'(\x1b\[\?1h\x1b=)')
CLI_PROMPTS_RE = [
re.compile(r'[\r\n]?[a-zA-Z]{1}[a-zA-Z0-9-]*[>|#](?:\s*)$'),
re.compile(r'[\r\n]?[a-zA-Z]{1}[a-zA-Z0-9-]*\(.+\)#(?:\s*)$')
]
CLI_ERRORS_RE = [
re.compile(r"% ?Error"),
re.compile(r"^% \w+", re.M),
re.compile(r"% ?Bad secret"),
re.compile(r"invalid input", re.I),
re.compile(r"(?:incomplete|ambiguous) command", re.I),
re.compile(r"connection timed out", re.I),
re.compile(r"[^\r\n]+ not found", re.I),
re.compile(r"'[^']' +returned error code: ?\d+"),
]
def to_list(val):
if isinstance(val, (list, tuple)):
return list(val)
elif val is not None:
return [val]
else:
return list()
class ShellError(Exception):
def __init__(self, msg, command=None):
super(ShellError, self).__init__(msg)
self.message = msg
self.command = command
class Command(object):
def __init__(self, command, prompt=None, response=None):
self.command = command
self.prompt = prompt
self.response = response
def __str__(self):
return self.command
class Shell(object):
def __init__(self):
self.ssh = None
self.shell = None
self.prompts = list()
self.prompts.extend(CLI_PROMPTS_RE)
self.errors = list()
self.errors.extend(CLI_ERRORS_RE)
def open(self, host, port=22, username=None, password=None,
timeout=10, key_filename=None):
self.ssh = paramiko.SSHClient()
self.ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
use_keys = password is None
self.ssh.connect(host, port=port, username=username, password=password,
timeout=timeout, allow_agent=use_keys, look_for_keys=use_keys,
key_filename=key_filename)
self.shell = self.ssh.invoke_shell()
self.shell.settimeout(10)
self.receive()
def strip(self, data):
return ANSI_RE.sub('', data)
def receive(self, cmd=None):
recv = StringIO()
while True:
data = self.shell.recv(200)
recv.write(data)
recv.seek(recv.tell() - 200)
window = self.strip(recv.read())
if isinstance(cmd, Command):
self.handle_input(window, prompt=cmd.prompt,
response=cmd.response)
try:
if self.read(window):
resp = self.strip(recv.getvalue())
return self.sanitize(cmd, resp)
except ShellError, exc:
exc.command = cmd
raise
def send(self, commands):
responses = list()
try:
for command in to_list(commands):
cmd = '%s\r' % str(command)
self.shell.sendall(cmd)
responses.append(self.receive(command))
except socket.timeout, exc:
raise ShellError("timeout trying to send command", cmd)
return responses
def close(self):
self.shell.close()
def handle_input(self, resp, prompt, response):
if not prompt or not response:
return
prompt = to_list(prompt)
response = to_list(response)
for pr, ans in zip(prompt, response):
match = pr.search(resp)
if match:
cmd = '%s\r' % ans
self.shell.sendall(cmd)
def sanitize(self, cmd, resp):
cleaned = []
for line in resp.splitlines():
if line.startswith(str(cmd)) or self.read(line):
continue
cleaned.append(line)
return "\n".join(cleaned)
def read(self, response):
for regex in self.errors:
if regex.search(response):
raise ShellError('%s' % response)
for regex in self.prompts:
if regex.search(response):
return True
def get_cli_connection(module):
host = module.params['host']
port = module.params['port']
if not port:
port = 22
username = module.params['username']
password = module.params['password']
try:
cli = Cli()
cli.open(host, port=port, username=username, password=password)
except paramiko.ssh_exception.AuthenticationException, exc:
module.fail_json(msg=exc.message)
except socket.error, exc:
host = '%s:%s' % (host, port)
module.fail_json(msg=exc.strerror, errno=exc.errno, host=host)
except socket.timeout:
module.fail_json(msg='socket timed out')
return cli
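Driving Shell directly, outside of a module; the address and credentials are placeholders, and paramiko must be installed for open() to succeed:

import re

shell = Shell()
shell.open('192.0.2.1', port=22, username='admin', password='secret')

# plain strings are fine; a Command object lets the shell answer an
# interactive prompt such as an enable password
passwd_re = re.compile(r"[\r\n]?password: $", re.I)
shell.send(['terminal length 0', 'show version'])
shell.send(Command('enable', prompt=passwd_re, response='secret'))
shell.close()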

View file

@ -310,36 +310,45 @@ class NoSSLError(SSLValidationError):
"""Needed to connect to an HTTPS url but no ssl library available to verify the certificate"""
pass
# Some environments (Google Compute Engine's CoreOS deploys) do not compile
# against openssl and thus do not have any HTTPS support.
CustomHTTPSConnection = CustomHTTPSHandler = None
if hasattr(httplib, 'HTTPSConnection') and hasattr(urllib2, 'HTTPSHandler'):
class CustomHTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
if HAS_SSLCONTEXT:
self.context = create_default_context()
if self.cert_file:
self.context.load_cert_chain(self.cert_file, self.key_file)
class CustomHTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
if HAS_SSLCONTEXT:
self.context = create_default_context()
if self.cert_file:
self.context.load_cert_chain(self.cert_file, self.key_file)
def connect(self):
"Connect to a host on a given (SSL) port."
def connect(self):
"Connect to a host on a given (SSL) port."
if hasattr(self, 'source_address'):
sock = socket.create_connection((self.host, self.port), self.timeout, self.source_address)
else:
sock = socket.create_connection((self.host, self.port), self.timeout)
if hasattr(self, 'source_address'):
sock = socket.create_connection((self.host, self.port), self.timeout, self.source_address)
else:
sock = socket.create_connection((self.host, self.port), self.timeout)
if self._tunnel_host:
self.sock = sock
self._tunnel()
if HAS_SSLCONTEXT:
self.sock = self.context.wrap_socket(sock, server_hostname=self.host)
else:
self.sock = ssl.wrap_socket(sock, keyfile=self.key_file, certfile=self.cert_file, ssl_version=PROTOCOL)
server_hostname = self.host
# Note: self._tunnel_host is not available on py < 2.6 but this code
# isn't used on py < 2.6 (lack of create_connection)
if self._tunnel_host:
self.sock = sock
self._tunnel()
server_hostname = self._tunnel_host
class CustomHTTPSHandler(urllib2.HTTPSHandler):
if HAS_SSLCONTEXT:
self.sock = self.context.wrap_socket(sock, server_hostname=server_hostname)
else:
self.sock = ssl.wrap_socket(sock, keyfile=self.key_file, certfile=self.cert_file, ssl_version=PROTOCOL)
def https_open(self, req):
return self.do_open(CustomHTTPSConnection, req)
class CustomHTTPSHandler(urllib2.HTTPSHandler):
https_request = urllib2.AbstractHTTPHandler.do_request_
def https_open(self, req):
return self.do_open(CustomHTTPSConnection, req)
https_request = urllib2.AbstractHTTPHandler.do_request_
def generic_urlparse(parts):
'''
@ -373,7 +382,10 @@ def generic_urlparse(parts):
# get the username, password, etc.
try:
netloc_re = re.compile(r'^((?:\w)+(?::(?:\w)+)?@)?([A-Za-z0-9.-]+)(:\d+)?$')
(auth, hostname, port) = netloc_re.match(parts[1])
match = netloc_re.match(parts[1])
auth = match.group(1)
hostname = match.group(2)
port = match.group(3)
if port:
# the capture group for the port will include the ':',
# so remove it and convert the port to an integer
@ -383,6 +395,8 @@ def generic_urlparse(parts):
# and then split it up based on the first ':' found
auth = auth[:-1]
username, password = auth.split(':', 1)
else:
username = password = None
generic_parts['username'] = username
generic_parts['password'] = password
generic_parts['hostname'] = hostname
@ -390,7 +404,7 @@ def generic_urlparse(parts):
except:
generic_parts['username'] = None
generic_parts['password'] = None
generic_parts['hostname'] = None
generic_parts['hostname'] = parts[1]
generic_parts['port'] = None
return generic_parts
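The rewritten netloc handling above works because of how the capture groups fall out; a short demonstration of the same regex:

import re

netloc_re = re.compile(r'^((?:\w)+(?::(?:\w)+)?@)?([A-Za-z0-9.-]+)(:\d+)?$')

m = netloc_re.match('user:pass@example.com:8080')
# m.group(1) -> 'user:pass@'  (auth, '@' still attached)
# m.group(2) -> 'example.com' (hostname)
# m.group(3) -> ':8080'       (port, ':' still attached)

m = netloc_re.match('example.com')
# m.group(1) -> None: this is the case the new else branch covers by
# setting username = password = None instead of leaving them undefined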
@ -532,7 +546,8 @@ class SSLValidationHandler(urllib2.BaseHandler):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if https_proxy:
proxy_parts = generic_urlparse(urlparse.urlparse(https_proxy))
s.connect((proxy_parts.get('hostname'), proxy_parts.get('port')))
port = proxy_parts.get('port') or 443
s.connect((proxy_parts.get('hostname'), port))
if proxy_parts.get('scheme') == 'http':
s.sendall(self.CONNECT_COMMAND % (self.hostname, self.port))
if proxy_parts.get('username'):
@ -542,7 +557,7 @@ class SSLValidationHandler(urllib2.BaseHandler):
connect_result = s.recv(4096)
self.validate_proxy_response(connect_result)
if context:
ssl_s = context.wrap_socket(s, server_hostname=proxy_parts.get('hostname'))
ssl_s = context.wrap_socket(s, server_hostname=self.hostname)
else:
ssl_s = ssl.wrap_socket(s, ca_certs=tmp_ca_cert_path, cert_reqs=ssl.CERT_REQUIRED, ssl_version=PROTOCOL)
match_hostname(ssl_s.getpeercert(), self.hostname)
@ -661,8 +676,9 @@ def open_url(url, data=None, headers=None, method=None, use_proxy=True,
handlers.append(proxyhandler)
# pre-2.6 versions of python cannot use the custom https
# handler, since the socket class is lacking this method
if hasattr(socket, 'create_connection'):
# handler, since the socket class is lacking create_connection.
# Some python builds lack HTTPS support.
if hasattr(socket, 'create_connection') and CustomHTTPSHandler:
handlers.append(CustomHTTPSHandler)
opener = urllib2.build_opener(*handlers)

@ -1 +1 @@
Subproject commit 88e0bfd75df9de563f9991b3dab7aebfbf8a9bf3
Subproject commit ce6619bf5db87f94001625c991d02960109dee2d

@ -1 +1 @@
Subproject commit 7da1f8d4ca3ab8b00e0b3a056d8ba03a4d2bf3a4
Subproject commit 29af26884ea11639f38c145b348afccdb6923285


@ -1,344 +0,0 @@
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#############################################
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
import sys
from ansible import constants as C
from ansible.inventory.group import Group
from .host import Host
from ansible.plugins.inventory.aggregate import InventoryAggregateParser
from ansible import errors
class Inventory:
'''
Create hosts and groups from inventory
Retrieve the hosts and groups that ansible knows about from this class.
Retrieve raw variables (non-expanded) from the Group and Host classes
returned from here.
'''
def __init__(self, inventory_list=C.DEFAULT_HOST_LIST):
'''
:kwarg inventory_list: A list of inventory sources. This may be file
names which will be parsed as ini-like files, executable scripts
which return inventory data as json, directories of both of the above,
or hostnames. Files and directories are
:kwarg vault_password: Password to use if any of the inventory sources
are in an ansible vault
'''
self._restricted_to = None
self._filter_pattern = None
parser = InventoryAggregateParser(inventory_list)
parser.parse()
self._basedir = parser.basedir
self._hosts = parser.hosts
self._groups = parser.groups
def get_hosts(self):
'''
Return the list of hosts, after filtering based on any set pattern
and restricting the results based on the set host restrictions.
'''
if self._filter_pattern:
hosts = self._filter_hosts()
else:
hosts = self._hosts[:]
if self._restricted_to is not None:
# this will preserve the order of hosts after intersecting them
res_set = set(hosts).intersection(self._restricted_to)
return [h for h in hosts if h in res_set]
else:
return hosts[:]
def get_groups(self):
'''
Retrieve the Group objects known to the Inventory
'''
return self._groups[:]
def get_host(self, hostname):
'''
Retrieve the Host object for a hostname
'''
for host in self._hosts:
if host.name == hostname:
return host
return None
def get_group(self, groupname):
'''
Retrieve the Group object for a groupname
'''
for group in self._groups:
if group.name == groupname:
return group
return None
def add_group(self, group):
'''
Add a new group to the inventory
'''
if group not in self._groups:
self._groups.append(group)
def set_filter_pattern(self, pattern='all'):
'''
Sets a pattern upon which hosts/groups will be filtered.
This pattern can contain logical groupings such as unions,
intersections and negations using special syntax.
'''
self._filter_pattern = pattern
def set_host_restriction(self, restriction):
'''
Restrict operations to hosts in the given list
'''
assert isinstance(restriction, list)
self._restricted_to = restriction[:]
def remove_host_restriction(self):
'''
Remove the restriction on hosts, if any.
'''
self._restricted_to = None
def _filter_hosts(self):
"""
Limits inventory results to a subset of inventory that matches a given
list of patterns, such as to select a subset of a hosts selection that also
belongs to a certain geographic group or numeric slice.
Corresponds to --limit parameter to ansible-playbook
:arg patterns: The pattern to limit with. If this is None it
clears the subset. Multiple patterns may be specified as a comma,
semicolon, or colon separated string.
"""
hosts = []
pattern_regular = []
pattern_intersection = []
pattern_exclude = []
patterns = self._pattern.replace(";",":").split(":")
for p in patterns:
if p.startswith("!"):
pattern_exclude.append(p)
elif p.startswith("&"):
pattern_intersection.append(p)
elif p:
pattern_regular.append(p)
# if no regular pattern was given, hence only exclude and/or intersection
# make that magically work
if pattern_regular == []:
pattern_regular = ['all']
# when applying the host selectors, run those without the "&" or "!"
# first, then the &s, then the !s.
patterns = pattern_regular + pattern_intersection + pattern_exclude
for p in patterns:
intersect = False
negate = False
if p.startswith('&'):
intersect = True
elif p.startswith('!'):
p = p[1:]
negate = True
target = self._resolve_pattern(p)
if isinstance(target, Host):
if negate and target in hosts:
# remove it
hosts.remove(target)
elif target not in hosts:
# for both union and intersections, we just append it
hosts.append(target)
else:
if intersect:
hosts = [ h for h in hosts if h not in target ]
elif negate:
hosts = [ h for h in hosts if h in target ]
else:
to_append = [ h for h in target if h.name not in [ y.name for y in hosts ] ]
hosts.extend(to_append)
return hosts
def _resolve_pattern(self, pattern):
target = self.get_host(pattern)
if target:
return target
else:
(name, enumeration_details) = self._enumeration_info(pattern)
hpat = self._hosts_in_unenumerated_pattern(name)
result = self._apply_ranges(pattern, hpat)
return result
def _enumeration_info(self, pattern):
"""
returns (pattern, limits) taking a regular pattern and finding out
which parts of it correspond to start/stop offsets. limits is
a tuple of (start, stop) or None
"""
# Do not parse regexes for enumeration info
if pattern.startswith('~'):
return (pattern, None)
# The regex used to match on the range, which can be [x] or [x-y].
pattern_re = re.compile("^(.*)\[([-]?[0-9]+)(?:(?:-)([0-9]+))?\](.*)$")
m = pattern_re.match(pattern)
if m:
(target, first, last, rest) = m.groups()
first = int(first)
if last:
if first < 0:
raise errors.AnsibleError("invalid range: negative indices cannot be used as the first item in a range")
last = int(last)
else:
last = first
return (target, (first, last))
else:
return (pattern, None)
def _apply_ranges(self, pat, hosts):
"""
given a pattern like foo, that matches hosts, return all of hosts
given a pattern like foo[0:5], where foo matches hosts, return the first 6 hosts
"""
# If there are no hosts to select from, just return the
# empty set. This prevents trying to do selections on an empty set.
# issue#6258
if not hosts:
return hosts
(loose_pattern, limits) = self._enumeration_info(pat)
if not limits:
return hosts
(left, right) = limits
if left == '':
left = 0
if right == '':
right = 0
left=int(left)
right=int(right)
try:
if left != right:
return hosts[left:right]
else:
return [ hosts[left] ]
except IndexError:
raise errors.AnsibleError("no hosts matching the pattern '%s' were found" % pat)
def _hosts_in_unenumerated_pattern(self, pattern):
""" Get all host names matching the pattern """
results = []
hosts = []
hostnames = set()
# ignore any negative checks here, this is handled elsewhere
pattern = pattern.replace("!","").replace("&", "")
def __append_host_to_results(host):
if host not in results and host.name not in hostnames:
hostnames.add(host.name)
results.append(host)
groups = self.get_groups()
for group in groups:
if pattern == 'all':
for host in group.get_hosts():
__append_host_to_results(host)
else:
if self._match(group.name, pattern):
for host in group.get_hosts():
__append_host_to_results(host)
else:
matching_hosts = self._match_list(group.get_hosts(), 'name', pattern)
for host in matching_hosts:
__append_host_to_results(host)
if pattern in ["localhost", "127.0.0.1"] and len(results) == 0:
new_host = self._create_implicit_localhost(pattern)
results.append(new_host)
return results
def _create_implicit_localhost(self, pattern):
new_host = Host(pattern)
new_host._connection = 'local'
new_host.set_variable("ansible_python_interpreter", sys.executable)
ungrouped = self.get_group("ungrouped")
if ungrouped is None:
self.add_group(Group('ungrouped'))
ungrouped = self.get_group('ungrouped')
self.get_group('all').add_child_group(ungrouped)
ungrouped.add_host(new_host)
return new_host
def is_file(self):
'''
Did inventory come from a file?
:returns: True if the inventory is file based, False otherwise
'''
pass
def src(self):
'''
What's the complete path to the inventory file?
:returns: Complete path to the inventory file. None if inventory is
not file-based
'''
pass
def basedir(self):
'''
The directory from which the inventory was read.
'''
return self._basedir


@ -1,51 +0,0 @@
# (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class Host:
def __init__(self, name):
self._name = name
self._connection = None
self._ipv4_address = ''
self._ipv6_address = ''
self._port = 22
self._vars = dict()
def __repr__(self):
return self.get_name()
def get_name(self):
return self._name
def get_groups(self):
return []
def set_variable(self, name, value):
''' sets a variable for this host '''
self._vars[name] = value
def get_vars(self):
''' returns all variables for this host '''
all_vars = self._vars.copy()
all_vars.update(dict(inventory_hostname=self._name))
return all_vars


@ -21,7 +21,7 @@ __metaclass__ = type
from ansible.compat.six import iteritems, string_types
from ansible.errors import AnsibleParserError
from ansible.errors import AnsibleParserError, AnsibleError
from ansible.plugins import module_loader
from ansible.parsing.splitter import parse_kv, split_args
from ansible.template import Templar
@ -137,7 +137,16 @@ class ModuleArgsParser:
# than those which may be parsed/normalized next
final_args = dict()
if additional_args:
final_args.update(additional_args)
if isinstance(additional_args, string_types):
templar = Templar(loader=None)
if templar._contains_vars(additional_args):
final_args['_variable_params'] = additional_args
else:
raise AnsibleParserError("Complex args containing variables cannot use bare variables, and must use the full variable style ('{{var_name}}')")
elif isinstance(additional_args, dict):
final_args.update(additional_args)
else:
raise AnsibleParserError('Complex args must be a dictionary or variable string ("{{var}}").')
# how we normalize depends if we figured out what the module name is
# yet. If we have already figured it out, it's an 'old style' invocation.
@ -155,6 +164,13 @@ class ModuleArgsParser:
tmp_args = parse_kv(tmp_args)
args.update(tmp_args)
# only internal variables can start with an underscore, so
# we don't allow users to set them directy in arguments
if args and action not in ('command', 'shell', 'script', 'raw'):
for arg in args:
if arg.startswith('_ansible_'):
raise AnsibleError("invalid parameter specified for action '%s': '%s'" % (action, arg))
# finally, update the args we're going to return with the ones
# which were normalized above
if args:
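
The first hunk tightens how complex args are accepted: a dict passes through, a fully-templated string is deferred for later resolution, and anything else is rejected. A rough standalone sketch of that rule (normalize_complex_args and contains_vars are hypothetical names, not Ansible's API):

    def normalize_complex_args(additional_args, contains_vars):
        final_args = {}
        if isinstance(additional_args, str):
            if contains_vars(additional_args):
                # defer resolution: remember the raw "{{ var }}" reference
                final_args['_variable_params'] = additional_args
            else:
                raise ValueError("bare variables are not allowed; use '{{ var_name }}'")
        elif isinstance(additional_args, dict):
            final_args.update(additional_args)
        else:
            raise TypeError('complex args must be a dict or a "{{ var }}" string')
        return final_args

    print(normalize_complex_args({'state': 'present'}, lambda s: '{{' in s))
    print(normalize_complex_args('{{ my_args }}', lambda s: '{{' in s))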

View file

@@ -65,8 +65,8 @@ def parse_kv(args, check_raw=False):
         raise

     raw_params = []
-    for x in vargs:
-        x = _decode_escapes(x)
+    for orig_x in vargs:
+        x = _decode_escapes(orig_x)
         if "=" in x:
             pos = 0
             try:

@@ -83,19 +83,14 @@ def parse_kv(args, check_raw=False):
             k = x[:pos]
             v = x[pos + 1:]

-            # only internal variables can start with an underscore, so
-            # we don't allow users to set them directly in arguments
-            if k.startswith('_'):
-                raise AnsibleError("invalid parameter specified: '%s'" % k)
-
             # FIXME: make the retrieval of this list of shell/command
             # options a function, so the list is centralized
             if check_raw and k not in ('creates', 'removes', 'chdir', 'executable', 'warn'):
-                raw_params.append(x)
+                raw_params.append(orig_x)
             else:
                 options[k.strip()] = unquote(v.strip())
         else:
-            raw_params.append(x)
+            raw_params.append(orig_x)

     # recombine the free-form params, if any were found, and assign
     # them to a special option for use later by the shell/command module
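
The point of the orig_x rename: raw (free-form) parameters should keep the token exactly as the user wrote it, because the shell/command modules re-parse that string later, and appending the escape-decoded copy would unescape it twice. A self-contained illustration (unicode_escape stands in for _decode_escapes; the option handling is simplified):

    vargs = ['echo', 'hello\\nworld', 'chdir=/tmp']
    options, raw_params = {}, []
    for orig_x in vargs:
        x = orig_x.encode().decode('unicode_escape')  # stand-in for _decode_escapes
        if '=' in x:
            k, v = x.split('=', 1)
            if k in ('creates', 'removes', 'chdir', 'executable', 'warn'):
                options[k.strip()] = v.strip()
            else:
                raw_params.append(orig_x)  # keep the original, undecoded token
        else:
            raw_params.append(orig_x)
    print(' '.join(raw_params))  # echo hello\nworld  (escape preserved)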

View file

@@ -20,6 +20,7 @@ from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type

 import re
+from ansible.errors import AnsibleParserError, AnsibleError

 # Components that match a numeric or alphanumeric begin:end or begin:end:step
 # range expression inside square brackets.

@@ -162,6 +163,7 @@ patterns = {
         $
         '''.format(label=label), re.X|re.I|re.UNICODE
     ),
 }

 def parse_address(address, allow_ranges=False):

@@ -183,8 +185,8 @@ def parse_address(address, allow_ranges=False):
     # First, we extract the port number if one is specified.
     port = None
-    for type in ['bracketed_hostport', 'hostport']:
-        m = patterns[type].match(address)
+    for matching in ['bracketed_hostport', 'hostport']:
+        m = patterns[matching].match(address)
         if m:
             (address, port) = m.groups()
             port = int(port)

@@ -194,22 +196,20 @@ def parse_address(address, allow_ranges=False):
     # numeric ranges, or a hostname with alphanumeric ranges.
     host = None
-    for type in ['ipv4', 'ipv6', 'hostname']:
-        m = patterns[type].match(address)
+    for matching in ['ipv4', 'ipv6', 'hostname']:
+        m = patterns[matching].match(address)
         if m:
             host = address
             continue

     # If it isn't any of the above, we don't understand it.
     if not host:
-        return (None, None)
+        raise AnsibleError("Not a valid network hostname: %s" % address)

-    # If we get to this point, we know that any included ranges are valid. If
-    # the caller is prepared to handle them, all is well. Otherwise we treat
-    # it as a parse failure.
+    # If we get to this point, we know that any included ranges are valid.
+    # If the caller is prepared to handle them, all is well.
+    # Otherwise we treat it as a parse failure.
     if not allow_ranges and '[' in host:
-        return (None, None)
+        raise AnsibleParserError("Detected range in host but was asked to ignore ranges")

     return (host, port)
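
With this change the function signals failure by raising rather than returning (None, None), so callers can tell an unparseable address from a disallowed range. A hedged usage sketch, assuming the module path as it exists in this tree:

    from ansible.parsing.utils.addresses import parse_address
    from ansible.errors import AnsibleError, AnsibleParserError

    address = 'db[01:04].example.com:2222'
    try:
        host, port = parse_address(address)
    except AnsibleParserError:
        # a range was present but the caller did not allow ranges
        host, port = parse_address(address, allow_ranges=True)
    except AnsibleError:
        raise  # not a parseable host address at all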

View file

@@ -22,7 +22,7 @@ __metaclass__ = type
 import yaml

 from ansible.compat.six import PY3
-from ansible.parsing.yaml.objects import AnsibleUnicode
+from ansible.parsing.yaml.objects import AnsibleUnicode, AnsibleSequence, AnsibleMapping
 from ansible.vars.hostvars import HostVars

 class AnsibleDumper(yaml.SafeDumper):

@@ -50,3 +50,13 @@ AnsibleDumper.add_representer(
     represent_hostvars,
 )
+
+AnsibleDumper.add_representer(
+    AnsibleSequence,
+    yaml.representer.SafeRepresenter.represent_list,
+)
+
+AnsibleDumper.add_representer(
+    AnsibleMapping,
+    yaml.representer.SafeRepresenter.represent_dict,
+)
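
These representers are needed because SafeDumper refuses to serialize unregistered subclasses, and AnsibleSequence/AnsibleMapping subclass list and dict. A minimal reproduction with a stand-in subclass:

    import yaml

    class MySequence(list):  # stands in for AnsibleSequence
        pass

    try:
        yaml.dump(MySequence([1, 2]), Dumper=yaml.SafeDumper)
    except yaml.representer.RepresenterError:
        print('SafeDumper rejects unregistered list subclasses')

    yaml.SafeDumper.add_representer(
        MySequence, yaml.representer.SafeRepresenter.represent_list)
    print(yaml.dump(MySequence([1, 2]), Dumper=yaml.SafeDumper))  # "- 1\n- 2"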

View file

@@ -44,6 +44,7 @@ class Playbook:
         self._entries = []
         self._basedir = os.getcwd()
         self._loader = loader
+        self._file_name = None

     @staticmethod
     def load(file_name, variable_manager=None, loader=None):

@@ -61,6 +62,8 @@ class Playbook:
             # set the loaders basedir
             self._loader.set_basedir(self._basedir)

+            self._file_name = file_name
+
             # dynamically load any plugins from the playbook directory
             for name, obj in get_all_plugin_loaders():
                 if obj.subdir:

View file

@@ -19,6 +19,7 @@
 from __future__ import (absolute_import, division, print_function)
 __metaclass__ = type

+from copy import deepcopy

 class Attribute:

@@ -32,6 +33,11 @@ class Attribute:
         self.priority = priority
         self.always_post_validate = always_post_validate

+        if default is not None and self.isa in ('list', 'dict', 'set'):
+            self.default = deepcopy(default)
+        else:
+            self.default = default
+
     def __eq__(self, other):
         return other.priority == self.priority
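
The deepcopy guard addresses Python's shared-mutable-default pitfall: without it, every object built from an attribute with a list/dict/set default would share one instance, so state mutated in one play could leak into the next. The same pitfall in miniature (Attr is a toy, not the real class):

    from copy import deepcopy

    SHARED_DEFAULT = []

    class Attr:
        def __init__(self, default=None, isa='string'):
            self.isa = isa
            if default is not None and isa in ('list', 'dict', 'set'):
                self.default = deepcopy(default)  # private copy per attribute
            else:
                self.default = default

    a = Attr(default=SHARED_DEFAULT, isa='list')
    a.default.append('oops')
    print(SHARED_DEFAULT)  # [] -- the module-level default stays clean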

View file

@@ -27,7 +27,7 @@ import uuid
 from functools import partial
 from inspect import getmembers

-from ansible.compat.six import iteritems, string_types, text_type
+from ansible.compat.six import iteritems, string_types

 from jinja2.exceptions import UndefinedError

@@ -36,6 +36,7 @@ from ansible.parsing.dataloader import DataLoader
 from ansible.playbook.attribute import Attribute, FieldAttribute
 from ansible.utils.boolean import boolean
 from ansible.utils.vars import combine_vars, isidentifier
+from ansible.utils.unicode import to_unicode

 BASE_ATTRIBUTES = {}

@@ -48,7 +49,7 @@ class Base:
     _remote_user = FieldAttribute(isa='string')

     # variables
-    _vars = FieldAttribute(isa='dict', default=dict(), priority=100)
+    _vars = FieldAttribute(isa='dict', priority=100)

     # flags and misc. settings
     _environment = FieldAttribute(isa='list')

@@ -76,6 +77,10 @@ class Base:
         # and initialize the base attributes
         self._initialize_base_attributes()

+        # and init vars, avoid using defaults in field declaration as it lives across plays
+        self.vars = dict()
+
     # The following three functions are used to programmatically define data
     # descriptors (aka properties) for the Attributes of all of the playbook
     # objects (tasks, blocks, plays, etc).

@@ -310,7 +315,7 @@ class Base:
             # and make sure the attribute is of the type it should be
             if value is not None:
                 if attribute.isa == 'string':
-                    value = text_type(value)
+                    value = to_unicode(value)
                 elif attribute.isa == 'int':
                     value = int(value)
                 elif attribute.isa == 'float':
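
Swapping text_type for to_unicode matters mostly on Python 2, where text_type(bytes) implicitly decodes as ASCII and fails on non-ASCII input. A rough sketch of what a to_unicode-style helper does differently (simplified, not the real implementation):

    def to_unicode_sketch(value, encoding='utf-8'):
        # decode bytes explicitly instead of relying on the implicit
        # ASCII decode that text_type()/unicode() would perform on Python 2
        if isinstance(value, bytes):
            return value.decode(encoding)
        return u'%s' % value

    print(to_unicode_sketch(b'caf\xc3\xa9'))  # café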

View file

@@ -90,16 +90,18 @@ class Become:
                 display.deprecated("Instead of su/su_user, use become/become_user and set become_method to 'su' (default is sudo)")

-        # if we are becoming someone else, but some fields are unset,
-        # make sure they're initialized to the default config values
-        if ds.get('become', False):
-            if ds.get('become_method', None) is None:
-                ds['become_method'] = C.DEFAULT_BECOME_METHOD
-            if ds.get('become_user', None) is None:
-                ds['become_user'] = C.DEFAULT_BECOME_USER
-
         return ds

+    def set_become_defaults(self, become, become_method, become_user):
+        ''' if we are becoming someone else, but some fields are unset,
+            make sure they're initialized to the default config values '''
+        if become:
+            if become_method is None:
+                become_method = C.DEFAULT_BECOME_METHOD
+            if become_user is None:
+                become_user = C.DEFAULT_BECOME_USER
+
     def _get_attr_become(self):
         '''
         Override for the 'become' getattr fetcher, used from Base.

View file

@@ -34,6 +34,7 @@ class Block(Base, Become, Conditional, Taggable):
     _rescue = FieldAttribute(isa='list', default=[])
     _always = FieldAttribute(isa='list', default=[])
     _delegate_to = FieldAttribute(isa='list')
+    _delegate_facts = FieldAttribute(isa='bool', default=False)

     # for future consideration? this would be functionally
     # similar to the 'else' clause for exceptions

View file

@@ -22,7 +22,7 @@ __metaclass__ = type
 from jinja2.exceptions import UndefinedError

 from ansible.compat.six import text_type
-from ansible.errors import AnsibleError
+from ansible.errors import AnsibleError, AnsibleUndefinedVariable
 from ansible.playbook.attribute import FieldAttribute
 from ansible.template import Templar

@@ -89,16 +89,22 @@ class Conditional:
         # make sure the templar is using the variables specified to this method
         templar.set_available_variables(variables=all_vars)

-        conditional = templar.template(conditional)
-        if not isinstance(conditional, basestring) or conditional == "":
-            return conditional
+        try:
+            conditional = templar.template(conditional)
+            if not isinstance(conditional, text_type) or conditional == "":
+                return conditional

-        # a Jinja2 evaluation that results in something Python can eval!
-        presented = "{%% if %s %%} True {%% else %%} False {%% endif %%}" % conditional
-        conditional = templar.template(presented, fail_on_undefined=False)
-        val = conditional.strip()
-        if val == presented:
+            # a Jinja2 evaluation that results in something Python can eval!
+            presented = "{%% if %s %%} True {%% else %%} False {%% endif %%}" % conditional
+            conditional = templar.template(presented)
+            val = conditional.strip()
+            if val == "True":
+                return True
+            elif val == "False":
+                return False
+            else:
+                raise AnsibleError("unable to evaluate conditional: %s" % original)
+        except (AnsibleUndefinedVariable, UndefinedError) as e:
             # the templating failed, meaning most likely a
             # variable was undefined. If we happened to be
             # looking for an undefined variable, return True,

@@ -108,11 +114,5 @@ class Conditional:
             elif "is defined" in original:
                 return False
             else:
-                raise AnsibleError("error while evaluating conditional: %s (%s)" % (original, presented))
-        elif val == "True":
-            return True
-        elif val == "False":
-            return False
-        else:
-            raise AnsibleError("unable to evaluate conditional: %s" % original)
+                raise AnsibleError("error while evaluating conditional (%s): %s" % (original, e))

View file

@@ -49,9 +49,15 @@ class IncludedFile:
         return "%s (%s): %s" % (self._filename, self._args, self._hosts)

     @staticmethod
-    def process_include_results(results, tqm, iterator, loader, variable_manager):
+    def process_include_results(results, tqm, iterator, inventory, loader, variable_manager):
         included_files = []

+        def get_original_host(host):
+            if host.name in inventory._hosts_cache:
+                return inventory._hosts_cache[host.name]
+            else:
+                return inventory.get_host(host.name)
+
         for res in results:
             if res._task.action == 'include':

@@ -67,9 +73,10 @@ class IncludedFile:
                 if 'skipped' in include_result and include_result['skipped'] or 'failed' in include_result:
                     continue

-                original_task = iterator.get_original_task(res._host, res._task)
+                original_host = get_original_host(res._host)
+                original_task = iterator.get_original_task(original_host, res._task)

-                task_vars = variable_manager.get_vars(loader=loader, play=iterator._play, host=res._host, task=original_task)
+                task_vars = variable_manager.get_vars(loader=loader, play=iterator._play, host=original_host, task=original_task)
                 templar = Templar(loader=loader, variables=task_vars)

                 include_variables = include_result.get('include_variables', dict())

@@ -81,14 +88,19 @@ class IncludedFile:
                 # handle relative includes by walking up the list of parent include
                 # tasks and checking the relative result to see if it exists
                 parent_include = original_task._task_include
+                cumulative_path = None
                 while parent_include is not None:
                     parent_include_dir = templar.template(os.path.dirname(parent_include.args.get('_raw_params')))
+                    if cumulative_path is None:
+                        cumulative_path = parent_include_dir
+                    elif not os.path.isabs(cumulative_path):
+                        cumulative_path = os.path.join(parent_include_dir, cumulative_path)
                     include_target = templar.template(include_result['include'])
                     if original_task._role:
-                        new_basedir = os.path.join(original_task._role._role_path, 'tasks', parent_include_dir)
+                        new_basedir = os.path.join(original_task._role._role_path, 'tasks', cumulative_path)
                         include_file = loader.path_dwim_relative(new_basedir, 'tasks', include_target)
                     else:
-                        include_file = loader.path_dwim_relative(loader.get_basedir(), parent_include_dir, include_target)
+                        include_file = loader.path_dwim_relative(loader.get_basedir(), cumulative_path, include_target)

                     if os.path.exists(include_file):
                         break

@@ -111,6 +123,6 @@ class IncludedFile:
                 except ValueError:
                     included_files.append(inc_file)

-                inc_file.add_host(res._host)
+                inc_file.add_host(original_host)

         return included_files
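
The new inventory parameter exists so results, which may carry deserialized copies of a host object, can be mapped back to the single canonical inventory object before included files are grouped by host. The idea in isolation (Inventory here is a toy stand-in, not the real class):

    from types import SimpleNamespace

    class Inventory:  # toy stand-in
        def __init__(self, hosts):
            self._hosts_cache = {h.name: h for h in hosts}
        def get_host(self, name):
            return self._hosts_cache.get(name)

    def get_original_host(inventory, host):
        # prefer the fast cache; fall back to a full lookup
        if host.name in inventory._hosts_cache:
            return inventory._hosts_cache[host.name]
        return inventory.get_host(host.name)

    hosts = [SimpleNamespace(name='web1'), SimpleNamespace(name='web2')]
    inv = Inventory(hosts)
    worker_copy = SimpleNamespace(name='web1')  # deserialized copy from a worker
    assert get_original_host(inv, worker_copy) is hosts[0]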

View file

@@ -64,7 +64,7 @@ class Play(Base, Taggable, Become):
     # Connection
     _gather_facts = FieldAttribute(isa='bool', default=None, always_post_validate=True)
-    _hosts = FieldAttribute(isa='list', default=[], required=True, listof=string_types, always_post_validate=True)
+    _hosts = FieldAttribute(isa='list', required=True, listof=string_types, always_post_validate=True)
     _name = FieldAttribute(isa='string', default='', always_post_validate=True)

     # Variable Attributes

View file

@@ -125,6 +125,18 @@ TASK_ATTRIBUTE_OVERRIDES = (
     'remote_user',
 )

+RESET_VARS = (
+    'ansible_connection',
+    'ansible_ssh_host',
+    'ansible_ssh_pass',
+    'ansible_ssh_port',
+    'ansible_ssh_user',
+    'ansible_ssh_private_key_file',
+    'ansible_ssh_pipelining',
+    'ansible_user',
+    'ansible_host',
+    'ansible_port',
+)

 class PlayContext(Base):

@@ -316,6 +328,13 @@ class PlayContext(Base):
             # the host name in the delegated variable dictionary here
             delegated_host_name = templar.template(task.delegate_to)
             delegated_vars = variables.get('ansible_delegated_vars', dict()).get(delegated_host_name, dict())
+
+            delegated_transport = C.DEFAULT_TRANSPORT
+            for transport_var in MAGIC_VARIABLE_MAPPING.get('connection'):
+                if transport_var in delegated_vars:
+                    delegated_transport = delegated_vars[transport_var]
+                    break
+
             # make sure this delegated_to host has something set for its remote
             # address, otherwise we default to connecting to it by name. This
             # may happen when users put an IP entry into their inventory, or if

@@ -326,6 +345,24 @@ class PlayContext(Base):
             else:
                 display.debug("no remote address found for delegated host %s\nusing its name, so success depends on DNS resolution" % delegated_host_name)
                 delegated_vars['ansible_host'] = delegated_host_name
+
+            # reset the port back to the default if none was specified, to prevent
+            # the delegated host from inheriting the original host's setting
+            for port_var in MAGIC_VARIABLE_MAPPING.get('port'):
+                if port_var in delegated_vars:
+                    break
+            else:
+                if delegated_transport == 'winrm':
+                    delegated_vars['ansible_port'] = 5986
+                else:
+                    delegated_vars['ansible_port'] = C.DEFAULT_REMOTE_PORT
+
+            # and likewise for the remote user
+            for user_var in MAGIC_VARIABLE_MAPPING.get('remote_user'):
+                if user_var in delegated_vars:
+                    break
+            else:
+                delegated_vars['ansible_user'] = task.remote_user or self.remote_user
         else:
             delegated_vars = dict()
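
Both loops above rely on Python's for/else: the else branch runs only if the loop finished without break, i.e. none of the magic variables was set for the delegated host, so a default gets filled in. The idiom in isolation (variable names are illustrative):

    PORT_VARS = ('ansible_port', 'ansible_ssh_port')  # illustrative subset

    delegated_vars = {'ansible_host': '10.0.0.5'}
    delegated_transport = 'winrm'

    for port_var in PORT_VARS:
        if port_var in delegated_vars:
            break  # an explicit port wins; the else clause is skipped
    else:
        # no port variable was set: fall back to the transport's default
        delegated_vars['ansible_port'] = 5986 if delegated_transport == 'winrm' else 22

    print(delegated_vars['ansible_port'])  # 5986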
@@ -367,6 +404,13 @@ class PlayContext(Base):
         if new_info.no_log is None:
             new_info.no_log = C.DEFAULT_NO_LOG

+        # set become defaults if not previously set
+        task.set_become_defaults(new_info.become, new_info.become_method, new_info.become_user)
+
+        # have always_run override check mode
+        if task.always_run:
+            new_info.check_mode = False
+
         return new_info

     def make_become_cmd(self, cmd, executable=None):

@@ -453,7 +497,7 @@ class PlayContext(Base):
             if self.become_user:
                 flags += ' -u %s ' % self.become_user

-            becomecmd = '%s %s echo %s && %s %s env ANSIBLE=true %s' % (exe, flags, success_key, exe, flags, cmd)
+            becomecmd = '%s %s %s -c %s' % (exe, flags, executable, success_cmd)

         else:
             raise AnsibleError("Privilege escalation method not found: %s" % self.become_method)

@@ -473,7 +517,8 @@ class PlayContext(Base):
         # TODO: should we be setting the more generic values here rather than
         # the more specific _ssh_ ones?
-        for special_var in ['ansible_connection', 'ansible_ssh_host', 'ansible_ssh_pass', 'ansible_ssh_port', 'ansible_ssh_user', 'ansible_ssh_private_key_file', 'ansible_ssh_pipelining']:
+        for special_var in RESET_VARS:
+
             if special_var not in variables:
                 for prop, varnames in MAGIC_VARIABLE_MAPPING.items():
                     if special_var in varnames:

View file

@@ -55,9 +55,9 @@ class PlaybookInclude(Base, Conditional, Taggable):
         # playbook objects
         new_obj = super(PlaybookInclude, self).load_data(ds, variable_manager, loader)

-        all_vars = dict()
+        all_vars = self.vars.copy()
         if variable_manager:
-            all_vars = variable_manager.get_vars(loader=loader)
+            all_vars.update(variable_manager.get_vars(loader=loader))

         templar = Templar(loader=loader, variables=all_vars)
         if not new_obj.evaluate_conditional(templar=templar, all_vars=all_vars):

@@ -66,7 +66,7 @@ class PlaybookInclude(Base, Conditional, Taggable):
         # then we use the object to load a Playbook
         pb = Playbook(loader=loader)

-        file_name = new_obj.include
+        file_name = templar.template(new_obj.include)
         if not os.path.isabs(file_name):
             file_name = os.path.join(basedir, file_name)
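
Templating new_obj.include before the path check means an included playbook's name may itself contain variables. Roughly what that enables, shown with plain Jinja2 rather than Ansible's Templar:

    from jinja2 import Template

    all_vars = {'env': 'staging'}
    file_name = Template('{{ env }}_plays.yml').render(**all_vars)
    print(file_name)  # staging_plays.yml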

Some files were not shown because too many files have changed in this diff.