fixed typos found by RETF rules in RST files

rules are available at https://en.wikipedia.org/wiki/Wikipedia:AutoWikiBrowser/Typos
This commit is contained in:
Christian Berendt 2014-05-03 17:59:50 +02:00
parent b753625dbf
commit 58ff9cd7c8
12 changed files with 15 additions and 15 deletions

View file

@@ -330,7 +330,7 @@ and guidelines:
* Include a minimum of dependencies if possible. If there are dependencies, document them at the top of the module file, and have the module raise JSON error messages when the import fails.
-* Modules must be self contained in one file to be auto-transferred by ansible.
+* Modules must be self-contained in one file to be auto-transferred by ansible.
* If packaging modules in an RPM, they only need to be installed on the control machine and should be dropped into /usr/share/ansible. This is entirely optional and up to you.
@@ -338,7 +338,7 @@ and guidelines:
* In the event of failure, a key of 'failed' should be included, along with a string explanation in 'msg'. Modules that raise tracebacks (stacktraces) are generally considered 'poor' modules, though Ansible can deal with these returns and will automatically convert anything unparseable into a failed result. If you are using the AnsibleModule common Python code, the 'failed' element will be included for you automatically when you call 'fail_json'.
-* Return codes from modules are not actually not signficant, but continue on with 0=success and non-zero=failure for reasons of future proofing.
+* Return codes from modules are not actually not significant, but continue on with 0=success and non-zero=failure for reasons of future proofing.
* As results from many hosts will be aggregated at once, modules should return only relevant output. Returning the entire contents of a log file is generally bad form.
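As a hedged illustration of the 'failed'/'msg' convention above, a minimal module built on the AnsibleModule helper might look like the sketch below (the 'name' option and messages are made up, and the import style varies across Ansible versions)::

    #!/usr/bin/python
    # Minimal sketch: fail_json adds 'failed' to the JSON result for you.
    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(argument_spec=dict(name=dict(required=True)))
        name = module.params['name']
        if name == 'bad':
            # returns {"failed": true, "msg": "..."} and exits
            module.fail_json(msg="cannot configure a host named 'bad'")
        module.exit_json(changed=False, msg="configured %s" % name)

    if __name__ == '__main__':
        main()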

View file

@@ -96,7 +96,7 @@ Development
+++++++++++
More information will come later, though see the source of any of the existing callbacks and you should be able to get started quickly.
-They should be reasonably self explanatory.
+They should be reasonably self-explanatory.
.. _distributing_plugins:

View file

@@ -81,7 +81,7 @@ What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content
-self contained, and works well with things like git submodules for sharing content with others.
+self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.

View file

@@ -410,7 +410,7 @@ YAML
++++
Ansible does not want to force people to write programming language code to automate infrastructure, so Ansible uses YAML to define playbook configuration languages and also variable files. YAML is nice because it has a minimum of syntax and is very clean and easy for people to skim. It is a good data format for configuration files and humans, but also machine readable. Ansible's usage of YAML stemmed from Michael's first use of it inside of Cobbler around 2006. YAML is fairly popular in the dynamic language community and the format has libraries available
-for serialization in many different languages (Python, Perl, Ruby, etc.).
+for serialization in many languages (Python, Perl, Ruby, etc.).
.. seealso::

View file

@@ -580,12 +580,12 @@ and less information has to be shared with remote hosts.
Orchestration in the Rackspace Cloud
++++++++++++++++++++++++++++++++++++
-Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other pice of software in an environment. Complex deployments might have previously required manaul manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additioanl nodes contingent on the current number of running nodes, or the configuration of a clustered applicaiton dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:
+Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other pice of software in an environment. Complex deployments might have previously required manaul manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additioanl nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:
* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
-* Servers and load balancers that have DNS receords created and destroyed on creation and decomissioning, respectively
+* Servers and load balancers that have DNS receords created and destroyed on creation and decommissioning, respectively

View file

@@ -13,7 +13,7 @@ provisioner for these virtual machines, and the two tools work together well.
This guide will describe how to use Vagrant and Ansible together.
-If you're not familar with Vagrant, you should visit `the documentation
+If you're not familiar with Vagrant, you should visit `the documentation
<http://docs.vagrantup.com/v2/>`_.
This guide assumes that you already have Ansible installed and working.

View file

@@ -123,7 +123,7 @@ File Transfer
Here's another use case for the `/usr/bin/ansible` command line. Ansible can SCP lots of files to multiple machines in parallel.
-To transfer a file directly to many different servers::
+To transfer a file directly to many servers::
$ ansible atlanta -m copy -a "src=/etc/hosts dest=/tmp/hosts"

View file

@@ -22,7 +22,7 @@ Prior to 1.5 the order was::
* .ansible.cfg (in the home directory)
* /etc/ansible/ansible.cfg
-Ansible will process the above list and use the first file found. Settings in files are not merged together.
+Ansible will process the above list and use the first file found. Settings in files are not merged.
.. _getting_the_latest_configuration:
@@ -228,7 +228,7 @@ hash_behaviour
Ansible by default will override variables in specific precedence orders, as described in :doc:`playbooks_variables`. When a variable
of higher precedence wins, it will replace the other value.
-Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged together. This setting is called 'merge'. This is not the default behavior and it does not affect variables whose values are scalars (integers, strings) or
+Some users prefer that variables that are hashes (aka 'dictionaries' in Python terms) are merged. This setting is called 'merge'. This is not the default behavior and it does not affect variables whose values are scalars (integers, strings) or
arrays. We generally recommend not using this setting unless you think you have an absolute need for it, and playbooks in the
official examples repos do not use this setting::
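As a hedged sketch, enabling the merge behaviour is a one-line change in ansible.cfg (the standard [defaults] section is assumed)::

    [defaults]
    # dictionaries from lower-precedence variable sources are combined into
    # the higher-precedence one instead of being replaced wholesale
    hash_behaviour = merge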

View file

@@ -109,7 +109,7 @@ Host Key Checking
Ansible 1.2.1 and later have host key checking enabled by default.
-If a host is reinstalled and has a different key in 'known_hosts', this will result in a error message until corrected. If a host is not initially in 'known_hosts' this will result in prompting for confirmation of the key, which results in a interactive experience if using Ansible, from say, cron. You might not want this.
+If a host is reinstalled and has a different key in 'known_hosts', this will result in an error message until corrected. If a host is not initially in 'known_hosts' this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this.
If you wish to disable this behavior and understand the implications, you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg::
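A hedged sketch of that edit, assuming the setting sits in the [defaults] section::

    [defaults]
    # disables SSH host key verification; only appropriate if the
    # security implications described above are acceptable
    host_key_checking = False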

View file

@@ -125,7 +125,7 @@ Local Playbooks
It may be useful to use a playbook locally, rather than by connecting over SSH. This can be useful
for assuring the configuration of a system by putting a playbook on a crontab. This may also be used
-to run a playbook inside a OS installer, such as an Anaconda kickstart.
+to run a playbook inside an OS installer, such as an Anaconda kickstart.
To run an entire playbook locally, just set the "hosts:" line to "hosts:127.0.0.1" and then run the playbook like so::
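As a hedged sketch (the playbook name is a placeholder), such a local run usually looks like::

    $ ansible-playbook playbook.yml --connection=local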

View file

@@ -233,7 +233,7 @@ Sometimes you would want to retry a task until a certain condition is met. Here
retries: 5
delay: 10
-The above example run the shell module recursively till the module's result has "all systems go" in it's stdout or the task has
+The above example run the shell module recursively till the module's result has "all systems go" in its stdout or the task has
been retried for 5 times with a delay of 10 seconds. The default value for "retries" is 3 and "delay" is 5.
The task returns the results returned by the last task run. The results of individual retries can be viewed by -vv option.
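A hedged sketch of a complete task along those lines, with an illustrative command and register name::

    - shell: /usr/local/bin/check_cluster_status
      register: result
      until: result.stdout.find("all systems go") != -1
      retries: 5
      delay: 10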

View file

@@ -344,7 +344,7 @@ in taking high-quality modules into ansible core for inclusion, so this shouldn'
An good example for this is if you worked at a company called AcmeWidgets, and wrote an internal module that helped configure your internal software, and you wanted other
people in your organization to easily use this module -- but you didn't want to tell everyone how to configure their Ansible library path.
-Alongside the 'tasks' and 'handlers' structure of of a role, add a directory named 'library'. In this 'library' directory, then include the module directly inside of it.
+Alongside the 'tasks' and 'handlers' structure of a role, add a directory named 'library'. In this 'library' directory, then include the module directly inside of it.
Assuming you had this::
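A hedged sketch of such a role layout, with made-up role and module names::

    roles/
        acme_widgets/
            tasks/
                main.yml
            handlers/
                main.yml
            library/
                configure_widget

Modules dropped into the role's 'library' directory become usable from that role's tasks without any changes to the Ansible library path.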