Moving away from Puppet: SaltStack or Ansible?

A really well-detailed article from: http://ryandlane.com/blog/2014/08/04/moving-away-from-puppet-saltstack-or-ansible/

Over the past month at Lyft we’ve been working on porting our
infrastructure code away from Puppet. We had some difficulty coming to
agreement on whether we wanted to use SaltStack (Salt) or Ansible. We
were already using Salt for AWS orchestration, but we were divided on
whether Salt or Ansible would be better for configuration management. We
decided to settle it the thorough way by implementing the port in both
Salt and Ansible, comparing them over multiple criteria.
First, let me start by explaining why we decided to port away from
Puppet: we had a complex Puppet code base with around 10,000 lines of
actual Puppet code. This code was originally spaghetti-code oriented and
over the past year or so had been converted to a new pattern that used
Hiera and Puppet modules split up into services and components. It’s
roughly the role pattern, for those familiar with Puppet. The code base
was a mixture of these two patterns, and our DevOps team was composed
almost entirely of recently hired members who were not very familiar
with Puppet and were unfamiliar with the code base. It was large,
unwieldy and complex, especially for our core application. Our DevOps
team was getting accustomed to the Puppet infrastructure; however, Lyft
is strongly rooted in the concept of ‘If you build it you run it’. The
DevOps team felt that the Puppet infrastructure was too difficult to
pick up quickly and would be impossible to introduce to our developers
as the tool they’d use to manage their own services.
Before I delve into the comparison, here are the requirements we had for the new infrastructure:

  1. No masters. For Ansible this meant using ansible-playbook locally,
    and for Salt this meant using salt-call locally (see the sketch after
    this list). Using a master for configuration management adds an
    unnecessary point of failure and sacrifices performance.
  2. Code should be as simple as possible. Configuration management
    abstractions generally lead to complicated, convoluted and difficult to
    understand code.
  3. No optimizations that would make the code read in an illogical order.
  4. Code must be split into two parts: base and service-specific, where
    each would reside in separate repositories. We want the base section of
    the code to cover configuration and services that would be deployed for
    every service (monitoring, alerting, logging, users, etc.) and we want
    the service-specific code to reside in the application repositories.
  5. The code must work for multiple environments (development, staging, production).
  6. The code should read and run in sequential order.
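
To make requirement 1 concrete, a masterless run boils down to something
like the following (the inventory and playbook file names are illustrative):

ansible-playbook -i inventory -c local playbook.yml    # Ansible: apply the playbook locally
salt-call --local state.highstate                      # Salt: apply states locally, no master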

Here’s how we compared:

  1. Simplicity/Ease of Use
  2. Maturity
  3. Performance
  4. Community

Simplicity/Ease of Use

Ansible:
A couple of team members had a strong preference for Ansible as they
felt it was easier to use than Salt, so I started by implementing the
port in Ansible, then implemented it again in Salt.
As I started, Ansible was indeed simple. The documentation was clearly
structured which made learning the syntax and general workflow
relatively simple. The documentation is oriented to running Ansible from
a controller and not locally, which made the initial work slightly more
difficult to pick up, but it wasn’t a major stumbling block. The
biggest issue was needing to have an inventory file with ‘localhost’
defined and needing to use -c local on the command line. Additionally,
Ansible’s playbook structure is very simple. There are tasks, handlers,
variables and facts. Tasks do the work in order and can notify handlers
to do actions at the end of the run. The variables can be used via Jinja
in the playbooks or in templates. Facts are gathered from the system
and can be used like variables.
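A minimal sketch of those pieces, assuming a made-up ntp example: a task
installs a package, a template uses a variable and notifies a handler, a
gathered fact is referenced via Jinja, and the handler restarts the
service at the end of the run.

- hosts: localhost
  connection: local
  vars:
    ntp_server: 0.pool.ntp.org
  tasks:
    - name: Ensure ntp is installed
      apt: name=ntp state=present

    - name: Ensure ntp is configured
      # ntp.conf.j2 would reference {{ ntp_server }}
      template: src=ntp.conf.j2 dest=/etc/ntp.conf
      notify: Restart ntp

    - name: Show a gathered fact
      debug: msg="This box runs {{ ansible_distribution }}"
  handlers:
    - name: Restart ntp
      service: name=ntp state=restarted
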
Developing the playbook was straightforward. Ansible always runs in
order and exits immediately when an error occurs. This made development
relatively easy and consistent. For the most part this also meant that
when I destroyed my vagrant instance and recreated it, the playbook ran
consistently.
That said, as I was developing I noticed that my ordering was
occasionally problematic and I needed to move things around. As I
finished porting sections of the code I’d occasionally destroy and re-up
my vagrant instance, re-run the playbook, and notice errors in the
execution. Overall, though, ordered execution was far more reliable than
Puppet’s unordered execution.
My initial playbook was a single file. As I went to split base and
service apart I noticed some complexity creeping in. Ansible includes
tasks and handlers separately, and the included files use a different
format from the top-level playbook, which was confusing at first. My
playbook was now: playbook.yml,
base.yml, base-handlers.yml, service.yml, and service-handlers.yml. For
variables I had: user.yml and common.yml. As I was developing I
generally needed to keep the handlers open so that I could easily
reference them for the tasks.
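A minimal sketch of how the top-level playbook tied these files together
(using the file names above and Ansible’s include directive):

- hosts: localhost
  connection: local
  vars_files:
    - common.yml
    - user.yml
  tasks:
    - include: base.yml
    - include: service.yml
  handlers:
    - include: base-handlers.yml
    - include: service-handlers.yml
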
The use of Jinja in Ansible is well executed. Here’s an example of adding users from a dictionary of users:

- name: Ensure groups exist
  group: name={{ item.key }} gid={{ item.value.id }}
  with_dict: users

- name: Ensure users exist
  user: name={{ item.key }} uid={{ item.value.id }} group={{ item.key }} groups=vboxsf,syslog comment="{{ item.value.full_name }}" shell=/bin/bash
  with_dict: users
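
For context, the users dictionary these tasks loop over lives in a
variables file shaped roughly like this (the names and ids are made up):

users:
  alice:
    id: 1001
    full_name: Alice Example
  bob:
    id: 1002
    full_name: Bob Example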

For playbooks Ansible uses Jinja for variables, but not for logic.
Looping and conditionals are built into the DSL. with/when/etc. control
how individual tasks are handled. This is important to note because that
means you can only loop over individual tasks. A downside of Ansible
doing logic via the DSL is that I found myself constantly needing to
look at the documentation for looping and conditionals. Ansible has a
pretty powerful feature since it controls its logic itself, though:
variable registration. Tasks can register data into variables for use in
later tasks. Here’s an example:

- name: Check test pecl module
  shell: "pecl list | grep test | awk '{ print $2 }'"
  register: pecl_test_result
  ignore_errors: True
  changed_when: False

- name: Ensure test pecl module is installed
  command: pecl install -f test-1.1.1
  when: pecl_test_result.stdout != '1.1.1'

This is one of Ansible’s most powerful tools, but unfortunately
Ansible also relies on this for pretty basic functionality. Notice in
the above what’s happening. The first task checks the status of a shell
command then registers it to a variable so that it can be used in the
next task. I was displeased to see it took this much effort to do very
basic functionality. This should be a feature of the DSL. Puppet, for
instance, has a much more elegant syntax for this:

exec { 'Ensure redis pecl module is installed':
  command => 'pecl install -f redis-2.2.4',
  unless  => 'pecl list | grep redis | awk \'{ print $2 }\'';
}

I was initially very excited about this feature, thinking I’d use it
often in interesting ways, but as it turned out I only used the feature
for cases where I needed to shell out in the above pattern because a
module didn’t exist for what I needed to do.
Some of the module functionality was broken up into a number of
different modules, which made it difficult to figure out how to do some
basic tasks. For instance, basic file operations are split between the
file, copy, fetch, get_url, lineinfile, replace, stat and template
modules. This was annoying when referencing documentation, where I
needed to jump between modules until I found the right one. The
shell/command module split is much more annoying, as command will only
run basic commands and won’t warn you when it silently drops shell
constructs like pipes and redirects. A few times I wrote a task using
the command module, then later changed the command being run. The new
command actually required the shell module, but I didn’t realize it and
spent quite a while trying to figure out what was wrong with the
execution.
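A contrived example of the trap: both tasks look nearly identical, but
the first passes the pipe characters to ps as literal arguments instead
of building a pipeline, while the second hands the whole string to a
shell.

# Broken: the command module does not interpret |, >, && and friends
- name: Count apache processes
  command: ps aux | grep apache2 | wc -l
  register: apache_count
  changed_when: False

# Works: the shell module runs the string through /bin/sh
- name: Count apache processes
  shell: ps aux | grep apache2 | wc -l
  register: apache_count
  changed_when: False
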
I found the input, output, DSL and configuration formats of Ansible perplexing. Here are some examples:

  • Ansible and inventory configuration: INI format
  • Custom facts in facts.d: INI format
  • Variables: YAML format
  • Playbooks: YAML format, with key=value format inline
  • Booleans: yes/no format in some places and True/False format in other places
  • Output for introspection of facts: JSON format
  • Output for playbook runs: no idea what format

Output for playbook runs was terse, which was generally nice. Each
playbook task output a single line, except for looping, which printed
the task line, then each sub-action. Loop actions over dictionaries
printed the dict item with the task, which was a little unexpected and
cluttered the output. There is little to no control over the output.
Introspection for Ansible was lacking. To see the value of variables
in the format actually presented inside of the language it’s necessary
to use the debug task inside of a playbook, which means you need to edit
a file and do a playbook run to see the values. Getting the facts
available was more straightforward: ‘ansible -m setup hostname’. Note
that hostname must be provided here, which is a little awkward when
you’re only ever going to run locally. Debug mode was helpful, but
getting in-depth information about what Ansible was actually doing
inside of tasks was impossible without diving into the code, since every
task copies a python script to /tmp and executes it, hiding any real
information.
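For example, checking what a registered variable actually contains means
adding a throwaway task like this and re-running the playbook:

- name: Show the contents of a registered variable
  debug: var=pecl_test_result
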
When I finished writing the playbooks, I had the following line/word/character count:

 15     48     472   service-handlers.yml
 463    1635   17185 service.yml
 27     70     555   base-handlers.yml
 353    1161   11986 base.yml
 15     55     432   playbook.yml
 873    2969   30630 total

There were 194 tasks in total.
Salt:
Salt is initially difficult. The organization of the documentation is
poor and the text of the documentation is dense, making it difficult
for newbies. Salt assumes you’re running in master/minion mode and uses
absolute paths for its states, modules, etc.. Unless you’re using the
default locations, which are poorly documented for masterless mode, it’s
necessary to create a configuration file. The documentation for
configuring the minion is dense and there are no guides for normal
configuration modes. States and pillars both require a ‘top.sls’ file
which defines what will be included per-host (or whatever host matching
scheme you’re using); this is somewhat confusing at first.
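For reference, a minimal masterless setup ended up looking roughly like
this (the /srv paths are the conventional defaults; anything else needs
to be spelled out in the minion config):

# /etc/salt/minion -- masterless: read states and pillars from local directories
file_client: local
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar

# /srv/salt/top.sls -- which states apply to which hosts
base:
  '*':
    - base
    - service
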
Past the initial setup, Salt was straightforward. Salt’s state system
has states, pillars and grains. States are the YAML DSL used for
configuration management, pillars are user defined variables and grains
are variables gathered from the system. All parts of the system except
for the configuration file are templated through Jinja.
Developing Salt’s states was straightforward. Salt’s default mode of
operation is to execute states in order, but it also has a requisite
system, like Puppet’s, which can change the order of the execution.
Triggering events (like restarting a service) is documented using the
watch or watch_in requisite, which means that following the default
documentation will generally result in out-of-order execution. Salt also
provides the listen/listen_in global state arguments which execute at
the end of a state run and do not modify ordering. By default Salt does
not immediately halt execution when a state fails, but runs all states
and returns the results with a list of failures and successes. It’s
possible to modify this behavior via the configuration. Though Salt
didn’t exit on errors, I found that I had errors after destroying my
vagrant instance then rebuilding it at a similar rate to Ansible. That
said, I did eventually set the configuration to hard fail since our team
felt it would lead to more consistent runs.
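A small sketch of the listen pattern we ended up using (the apache
config file is a made-up example): everything reads and runs in written
order, and the restart fires once at the end of the run, only if the
config file actually changed.

Ensure apache config is deployed:
  file.managed:
    - name: /etc/apache2/apache2.conf
    - source: salt://apache/apache2.conf

Ensure apache is running:
  service.running:
    - name: apache2
    - enable: True
    - listen:
      - file: Ensure apache config is deployed
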
My initial state definition was in a single file. Splitting this
apart into base and service states was very straightforward. I split the
files apart and included base from service. Salt makes no distinction
between states and commands being notified (handlers in Ansible);
there’s just states, so base and service each had their associated
notification states in their respective files. At this point I had:
top.sls, base.sls and service.sls for states. For pillars I had top.sls,
users.sls and common.sls.
The use of Jinja in Salt is well executed. Here’s an example of adding users from a dictionary of users:

{% for name, user in pillar['users'].items() %}
Ensure user {{ name }} exist:
  user.present:
    - name: {{ name }}
    - uid: {{ user.id }}
    - gid_from_name: True
    - shell: /bin/bash
    - groups:
      - vboxsf
      - syslog
    - fullname: {{ user.full_name }}
{% endfor %}

Salt uses Jinja for both state logic and templates. It’s important to
note that Salt uses Jinja for state logic because it means that the
Jinja is executed before the state. A negative of this is that you can’t
do something like this:

Ensure myelb exists:
  boto_elb.present:
    - name: myelb
    - availability_zones:
      - us-east-1a
    - listeners:
      - elb_port: 80
        instance_port: 80
        elb_protocol: HTTP
      - elb_port: 443
        instance_port: 80
        elb_protocol: HTTPS
        instance_protocol: HTTP
        certificate: 'arn:aws:iam::879879:server-certificate/mycert'
      - health_check:
          target: 'TCP:8210'
    - profile: myprofile

{% set elb = salt['boto_elb.get_elb_config']('myelb', profile='myprofile') %}

{% if elb %}
Ensure myrecord.example.com cname points at ELB:
  boto_route53.present:
    - name: myrecord.example.com.
    - zone: example.com.
    - type: CNAME
    - value: {{ elb.dns_name }}
{% endif %}

That’s not possible because the Jinja running ’set elb’ is going to
run before ‘Ensure myelb exists’, since the Jinja is always rendered
before the states are executed.
On the other hand, since Jinja is executed first, it means you can wrap multiple states in a single loop:

{% for module, version in {
       'test': ('1.1.1', 'stable'),
       'hello': ('1.2.1', 'stable'),
       'world': ('2.2.2', 'beta')
   }.items() %}
Ensure {{ module }} pecl module is installed:
  pecl.installed:
    - name: {{ module }}
    - version: {{ version[0] }}
    - preferred_state: {{ version[1] }}

Ensure {{ module }} pecl module is configured:
  file.managed:
    - name: /etc/php5/mods-available/{{ module }}.ini
    - contents: "extension={{ module }}.so"
    - listen_in:
      - cmd: Restart apache

Ensure {{ module }} pecl module is enabled for cli:
  file.symlink:
    - name: /etc/php5/cli/conf.d/{{ module }}.ini
    - target: /etc/php5/mods-available/{{ module }}.ini

Ensure {{ module }} pecl module is enabled for apache:
  file.symlink:
    - name: /etc/php5/apache2/conf.d/{{ module }}.ini
    - target: /etc/php5/mods-available/{{ module }}.ini
    - listen_in:
      - cmd: Restart apache
{% endfor %}

Of course something similar to Ansible’s register functionality isn’t
available either. This turned out to be fine, though, since Salt has a
very feature rich DSL. Here’s an example of a case where it was
necessary to shell out:

# We need to ensure the current link points to src.git initially
# but we only want to do so if there’s not a link there already,
# since it will point to the current deployed version later.
Ensure link from current to src.git exists if needed:
  file.symlink:
    - name: /srv/service/current
    - target: /srv/service/src.git
    - unless: test -L /srv/service/current

Additionally, as a developer who wanted to switch to either Salt or
Ansible because it was Python, it was very refreshing to use Jinja for
logic in the states rather than something built into the DSL, since I
didn’t need to look at the DSL specific documentation for looping or
conditionals.
Salt is very consistent when it comes to input, output and
configuration. Everything is YAML by default. Salt will happily give you
output in a number of different formats, including ones you create
yourself via outputter modules. The default output of state runs shows
the status of all states, but can be configured in multiple ways. I
ended up using the following configuration:

# Show terse output for successful states and full output for failures.
state_output: mixed
# Only show changes
state_verbose: False

State runs that don’t change anything show nothing. State runs that
change things will show the changes as single lines, but failures show
full output so that it’s possible to see stacktraces.
Introspection for Salt was excellent. Both grains and pillars were
accessible from the CLI in a consistent manner (salt-call grains.items;
salt-call pillar.items). Salt’s info log level shows in-depth
information of what is occurring per module. Using the debug log level
even shows how the code is being loaded, the order it’s being loaded in,
the OrderedDict that’s generated for the state run, the OrderedDict
that’s used for the pillars, the OrderedDict that’s used for the grains,
etc.. I found it was very easy to trace down bugs in Salt to report
issues and even quickly fix some of the bugs myself.
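The introspection amounts to a couple of salt-call invocations (--local
keeps everything masterless):

salt-call --local grains.items               # variables gathered from the system
salt-call --local pillar.items               # user defined variables
salt-call --local -l debug state.highstate   # trace loading, ordering and execution
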
When I finished writing the states, I had the following line/word/character count:

527    1629   14553 api.sls
6      18     109   top.sls
576    1604   13986 base/init.sls
1109   3251   28648 total

There were 151 salt states in total.
Notice that though there are 236 more lines of Salt, there are fewer
characters in total. This is because Ansible has a short format which
makes its lines longer but uses fewer lines overall. This makes it
difficult to compare directly by lines of code; the number of
states/tasks is a better metric to go by anyway.

Maturity

Both Salt and Ansible are currently more than mature enough to
replace Puppet. At no point was I unable to continue because a necessary
feature was missing from either.
That said, Salt’s execution and state module support is more mature
than Ansible’s, overall. An example is how to add users. It’s common to
add a user with a group of the same name. Doing this in Ansible requires
two tasks:

- name: Ensure groups exist
  group: name={{ item.key }} gid={{ item.value.id }}
  with_dict: users

- name: Ensure users exist
  user: name={{ item.key }} uid={{ item.value.id }} group={{ item.key }} groups=vboxsf,syslog comment="{{ item.value.full_name }}" shell=/bin/bash
  with_dict: users

Doing the same in Salt requires one:

{% for name, user in pillar['users'].items() %}
Ensure user {{ name }} exist:
  user.present:
    - name: {{ name }}
    - uid: {{ user.id }}
    - gid_from_name: True
    - shell: /bin/bash
    - groups:
      - vboxsf
      - syslog
    - fullname: {{ user.full_name }}
{% endfor %}

Additionally, Salt’s user module supports shadow attributes, where Ansible’s does not.
Another example is installing a debian package from a url. Doing this in Ansible is two tasks:

- name: Download mypackage debian package
  get_url: url=https://s3.amazonaws.com/mybucket/mypackage/mypackage_0.1.0-1_amd64.deb dest=/tmp/mypackage_0.1.0-1_amd64.deb

- name: Ensure mypackage is installed
  apt: deb=/tmp/mypackage_0.1.0-1_amd64.deb

Doing the same in Salt requires one:

Ensure mypackage is installed:
  pkg.installed:
    - sources:
      - mypackage: https://s3.amazonaws.com/mybucket/mypackage/mypackage_0.1.0-1_amd64.deb

Another example is fetching files from S3. Salt has native support
for this where files are referenced in many modules, while in Ansible
you must use the s3 module to download a file to a temporary location on
the filesystem, then use one of the file modules to manage it.
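A sketch of the difference, using a made-up bucket and file. In Salt,
file.managed can take an s3:// source directly (given s3.key and
s3.keyid in the minion config):

Ensure myapp config is deployed:
  file.managed:
    - name: /etc/myapp/config.conf
    - source: s3://mybucket/myapp/config.conf

In Ansible, the s3 module downloads the object and a second task manages
the resulting file (which works here because everything runs locally):

- name: Download myapp config from S3
  s3: bucket=mybucket object=/myapp/config.conf dest=/tmp/config.conf mode=get

- name: Ensure myapp config is deployed
  copy: src=/tmp/config.conf dest=/etc/myapp/config.conf
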
Salt has state modules for the following things that Ansible did not have:

  • pecl
  • mail aliases
  • ssh known hosts

Ansible had a few broken modules:

  • copy: when content is used, it writes POSIX non-compliant files by
    default. I opened an issue for this and it was marked as won’t fix.
    More on this in the Community section.
  • apache2_module: always reports changes for some modules. I opened an
    issue and it was marked as a duplicate. A fix is in a pull request,
    still open as of this writing with no response since June 24, 2014.
  • supervisorctl: doesn’t handle a race condition properly where a
    service starts after it checks its status. A fix is in a pull
    request, still open as of this writing with no response since June
    29, 2014. An earlier pull request from Aug 30, 2013 fixed it
    unsuccessfully; that issue is still marked as closed, though there
    are reports of it still being broken.

Salt had broken modules as well, both of which were broken in the same way as the Ansible equivalents, which was amusing:

  • apache_module: always reports changes for some modules. Fixed in upcoming release.
  • supervisorctl: doesn’t handle a race condition properly where a
    service starts after it checks its status. Fixed in upcoming release.

Past basic module support, Salt is far more feature rich:

  • Salt can output in a number of different formats, including custom ones (via outputters)
  • Salt can output to other locations like mysql, redis, mongo, or custom locations (via returners)
  • Salt can load its pillars from a number of locations, including custom ones (via external pillars)
  • If running an agent, Salt can fire local events that can be reacted
    upon (via reactors); if using a master it’s also possible to react to
    events from minions.

Performance

Salt was faster than Ansible for state/playbook runs. For no-change
runs Salt was considerably faster. Here’s some performance data for
each, for full runs and no-change runs. Note that these runs were
relatively consistent across large numbers of system builds in both
vagrant and AWS and the full run times were mostly related to
package/pip/npm/etc installations:
Salt:

  • Full run: 12m 30s
  • No change run: 15s

Ansible:

  • Full run: 16m
  • No change run: 2m

I was very surprised at how slow Ansible was when making no changes.
Nearly all of this time was related to user accounts, groups, and ssh
key management. In fact, I opened an issue for it.
Ansible takes on average 0.5 seconds per user, and this extends to other
modules that use loops over large dictionaries. As the number of managed
users grows, our no-change (and full-change) runs will grow with it. If
we double our managed users we’ll be looking at 3-4 minute no-change
runs.
I mentioned in the Simplicity/Ease of Use section that I had started
this project by developing with Ansible and then re-implementing in
Salt, but as time progressed I started implementing in Salt while
Ansible was running. By the time I got half-way through implementing in
Ansible I had already finished implementing everything in Salt.

Community

There’s a number of ways to rate a community. For Open Source projects I generally consider a few things:

  1. Participation

In terms of development participation Salt has 4 times the number of
merged pull requests (471 for Salt and 112 for Ansible) in a one month
period at the time of this writing. It also has three times the number
of total commits. Salt is also much more diverse from the perspective of
community contribution. Ansible is almost solely written by mpdehaan.
Nearly all of the top 10 Salt contributors have more commits than the #2
committer for Ansible. That said, Ansible has more stars and forks on
GitHub, which may imply a larger user community.
Both Salt and Ansible have a very high level of participation. They
are generally always in the running with each other for the most active
GitHub project, so in either case you should feel assured the community
is strong.

  2. Friendliness

Ansible has a somewhat bad reputation here. I’ve heard anecdotal
stories of people being kicked out of the Ansible community. While
originally researching Ansible I had found some examples of rude
behavior toward well-meaning contributors. I did get a “pull
request welcome” response on a legitimate bug, which is an anti-pattern
in the open source world. That said, the IRC channel was incredibly
friendly and all of the mailing list posts I read during this project
were friendly as well.
Salt has an excellent reputation here. They thank users for bug
reports and code. They are very receptive and open to feature requests.
They respond quickly on the lists, email, twitter and IRC in a very
friendly manner. The only complaint that I have here is that they are
sometimes less rigorous than they should be when it comes to accepting
code (I’d like to see more code review).

  3. Responsiveness

I opened 4 issues while working on the Ansible port. 3 were closed
won’t fix and 1 was marked as a duplicate. Ansible’s issue reporting
process is somewhat laborious. All issues must use a template, which
requires a few clicks to get to and copy/paste. If you don’t use the
template they won’t help you (and will auto-close the issue after a few
days).
Of the issues marked won’t fix:

  1. user/group module slow:
    Not considered a bug that Ansible can do much about. The issue was
    closed with basically no discussion, and I was invited to start a
    discussion on the mailing list about it. (For comparison: Salt checks
    all users, groups and ssh keys in roughly 1 second.)
  2. Global ignore_errors: Feature request. Ansible was disinterested in the feature and the issue was closed without discussion.
  3. Content argument of copy module doesn’t add end of file character:
    The issue was closed won’t fix without discussion. When I linked to the
    POSIX spec showing why it was a bug the issue wasn’t reopened and I was
    told I could submit a patch. At this point I stopped submitting further
    bug reports.

Salt was incredibly responsive when it comes to issues. I opened 19
issues while working on the port. 3 of these issues weren’t actually
bugs and I closed them of my own accord after discussion in the issues.
4 were documentation issues. Let’s take a look at the rest of the
issues:

  1. pecl state missing argument: I submitted an issue with a pull request. It was merged and closed the same day.
  2. Stacktrace when fetching directories using the S3 module: I submitted an issue with a pull request. It was merged the same day and the issue was closed the next.
  3. grains_dir is not a valid configuration option:
    I submitted an issue with no pull request. I was thanked for the report
    and the issue was marked as Approved the same day. The bug was fixed
    and merged in 4 days later.
  4. Apache state should have enmod and dismod capability: I submitted an issue with a pull request. It was merged and closed the same day.
  5. The hold argument is broken for pkg.installed: I submitted an issue without a pull request. I got a response the same day. The bug was fixed and merged the next day.
  6. Sequential operation relatively impossible currently:
    I submitted an issue without a pull request. I then went into IRC and
    had a long discussion with the developers about how this could be fixed.
    The issue was with the use of watch/watch_in requisites and how it
    modifies the order of state runs. I proposed a new set of requisites
    that would work like Ansible’s handlers. The issue was marked Approved
    after the IRC conversation. Later that night the founder (Thomas Hatch)
    wrote and merged the fix and let me know about it via Twitter. The bug was closed the following day.
  7. Stacktrace with listen/listen_in when key is not valid: This bug was a followup to the listen/listen_in feature. It was fixed/merged and closed the same day.
  8. Stacktrace using new listen/listen_in feature:
    This bug was an additional followup to the listen/listen_in feature and
    was reported at the same time as the previous one. It was fixed/merged
    and closed the same day.
  9. pkgrepo should only run refresh_db once:
    This is a feature request to save me 30 seconds on occasional state
    runs. It’s still open at the time of this writing, but was marked as
    Approved and the discussion has a recommended solution.
  10. refresh=True shouldn’t run when package specifies version and it matches.
    This is a feature request to save me 30 seconds on occasional state
    runs. It was fixed and merged 24 days later, but the bug still shows
    open (it’s likely waiting for me to verify).
  11. Add an enforce option to the ssh_auth state: This is a feature request. It’s still open at the time of this writing, but it was approved the same day.
  12. Allow minion config options to be modified from salt-call:
    This is a feature request. It’s still open at the time of this writing,
    but it was approved the same day and a possible solution was listed in
    the discussion.

All of these bugs, except for the listen/listen_in feature, could have
easily been worked around, but I felt confident that if I submitted an
issue the bug would get fixed, or I’d be given a reasonable workaround.
When I submitted issues I was usually thanked for the submission and got
confirmation on whether or not my issue was approved to be fixed. When I
submitted code I was always thanked and my code was almost always merged
the same day. Most of the issues I submitted were fixed within 24 hours,
even a relatively major change like the listen/listen_in feature.

  4. Documentation

For new users Ansible’s documentation is much better. The
organization of the docs and the brevity of the documentation make it
very easy to get started. Salt’s documentation is poorly organized and
is very dense, making it difficult to get started.
While implementing the port, I found the density of Salt’s docs to be
immensely helpful and the brevity of Ansible’s docs to be infuriating. I
spent much longer periods of time trying to figure out the subtleties of
Ansible’s modules since they were relatively undocumented. Not a single
Ansible module has its variable registration dictionary documented,
which required me to write a debug task and run the playbook every time
I needed to register a variable, which was annoyingly often.
Salt’s docs are unnecessarily broken up, though. There’s multiple
sections on states. There’s multiple sections on global state arguments.
There’s multiple sections on pillars. The list goes on. Many of these
docs are overlapping, which makes searching for the right doc difficult.
The split between execution modules and state modules (which I rather
enjoy when doing Salt development) makes searching for modules more
difficult when writing states.
I’m a harsh critic of documentation though, so for both Salt and
Ansible, you should take this with a grain of salt (ha ha) and take a
look at the docs yourself.

Conclusion

At this point both Salt and Ansible are viable and excellent options
for replacing Puppet. As you may have guessed by now, I’m more in favor
of Salt. I feel the language is more mature, it’s much faster and the
community is friendlier and more responsive. If I couldn’t use Salt for a
project, Ansible would be my second choice. Both Salt and Ansible are
easier, faster, and more reliable than Puppet or Chef.
As you may have noticed earlier in this post, we had 10,000 lines of
puppet code and reduced that to roughly 1,000 in both Salt and Ansible.
That alone should speak highly of both.
After implementing the port in both Salt and Ansible, the Lyft DevOps team all agreed to go with Salt.
