SEI Insights

DevOps Blog

Technical Guidelines and Practical Advice for DevOps

DevOps Technologies: Fabric or Ansible


The workflow of deploying code is almost as old as code itself. There are many use cases associated with the deployment process, including evaluating resource requirements, designing a production system, provisioning and configuring production servers, and pushing code. In this blog post, I focus on a use case for configuring a remote server with the packages and software necessary to execute your code.

This use case is supported by many different and competing technologies, such as Chef, Puppet, Fabric, Ansible, Salt, and Foreman, just a few of the names you are likely to have heard on the path to automation in DevOps. All of these technologies have free offerings, leave you with scripts to commit to your repository, and get the job done. This post explores Fabric and Ansible in more depth. To learn more about other infrastructure-as-code solutions, check out Joe Yankel's blog post on Docker or my post on Vagrant.

One difference between Fabric and Ansible is that while Fabric will get you results in minutes, Ansible requires a bit more effort to understand. Ansible is generally much more powerful, since it provides much deeper and more complex semantics for modeling multi-tier infrastructure, such as environments with arrays of web and database hosts. From an operator's perspective, Fabric has a more literal and basic API and uses Python for authoring, while Ansible consumes YAML and provides a richness in its behavior (which I discuss later in this post). We'll walk through examples of both.

Both Fabric and Ansible employ secure shell (SSH) to do their job in most cases. While Fabric leverages execution of simple command-line statements on target machines over SSH, Ansible pushes modules to remote machines and then executes those modules remotely, similar to Chef. Both tools wrap these commands with semantics for basic tasks such as copying files, restarting servers, and installing packages. The biggest difference between them lies in the features and complexity presented to the operator.

Here is a Fabric script that installs Apache on a remote server:

  from fabric.api import sudo, env
  env.hosts = ['']

  def install_apache():
      sudo('apt-get install apache2')

This script is executed with:

$ fab install_apache

One obvious note here is that we are writing in Python, which gives the operator all the features of the language. In this Fabric example, we are creating a task, install_apache, that literally spells out the command we want to execute on the remote host. Fabric reads the host list from the env object we set.

In contrast, here is an Ansible script that does the same thing Fabric did above, using a "playbook" and a "role":



  # roles/web/tasks/main.yml
  ---
  - name: install Apache
    apt: name=apache2 state=present

  # deploy.yml
  ---
  - name: install Apache
    hosts: all   # the hosts value is not shown in the original; "all" targets every inventory host
    roles:
      - web

This script is executed with:

$ ansible-playbook deploy.yml

The playbook, and point of entry, is deploy.yml. This script declares plays, where each play states to what hosts each role should be applied. Each play starts with a name parameter and goes on to declare its targeted hosts and the roles to use. The roles themselves are defined by a structure of subfolders containing more YAML that defines what modules to execute, with what parameters, for that role. In this example, we define a web role containing the apt module.

There is a subtle distinction about roles: hosts do not have roles. Instead, hosts are decorated with roles according to the playbook. Also, a playbook can have multiple plays, multiple roles can be applied to a host, roles can have multiple task files, and tasks can have multiple modules. Moreover, we can define groups for hosts and even put those groups into higher level groups.
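On disk, the playbook and role above follow Ansible's conventional project layout (the inventory file name "hosts" is the common convention; it can be anything):

  deploy.yml            # the playbook: plays mapping hosts to roles
  hosts                 # the inventory: host names and groups
  roles/
      web/
          tasks/
              main.yml  # the modules to execute for the web role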

Here is a more complete Ansible example:




  # site.yml
  ---
  - name: configure a webserver
    hosts: webservers
    roles:
      - web

  - name: configure a database server
    hosts: dbservers
    roles:
      - db

  # roles/web/tasks/main.yml
  ---
  - name: install apache
    apt: name=apache2 state=present

  # roles/db/tasks/main.yml
  ---
  - name: install mysql
    apt: name=mysql-server state=present

All the elements in this example are executed with:

$ ansible-playbook site.yml

First, notice that we have added more hosts and grouped them in the hosts file. Second, we have added a second play to the playbook.
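For reference, the hosts file (the inventory) for this example might look like the following; the host names here are placeholders, and the group names match the plays above:

  # hosts (inventory file); host names are placeholders
  [webservers]
  web1.example.com
  web2.example.com

  [dbservers]
  db1.example.com

  # groups can themselves be grouped
  [production:children]
  webservers
  dbservers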

One nice feature that we don't see by looking at the playbook or the roles is that Ansible will gather information for all of the hosts at runtime and only apply the changes necessary to obtain the desired state. In other words, if it ain't broke, don't fix it. Also, note that this is a stripped-down example of Ansible, and does not exemplify its many other features, such as defining and iterating over lists in a module call, using metadata from the hosts such as IP addresses and OS versions dynamically at runtime, and chaining roles together as dependencies. I highly recommend watching the Ansible quick-start video.
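Running the same playbook twice makes this visible in the play recap: on the second run, nothing is reported as changed because apache2 is already present. The output below is abbreviated and illustrative; the exact formatting varies by Ansible version:

  $ ansible-playbook site.yml   # first run
  ...
  web1 : ok=2 changed=1 unreachable=0 failed=0

  $ ansible-playbook site.yml   # second run: desired state already reached
  ...
  web1 : ok=2 changed=0 unreachable=0 failed=0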

Now, back to Fabric. Here is roughly the same result using Fabric's tooling:

from fabric.api import env, roles, sudo, execute

env.roledefs['webservers'] = ['', '']
env.roledefs['dbservers'] = ['']

@roles('webservers')
def install_apache():
    sudo('apt-get install apache2')

@roles('dbservers')
def install_mysql():
    sudo('apt-get install mysql-server')

def deploy():
    execute(install_apache)
    execute(install_mysql)
Note that we are confined to a single file, although the raw size of our configuration in bytes is roughly the same as in Ansible. On a more technical level, Fabric's semantics are much "thinner" than Ansible's. For example, when we target a host with a role in Ansible, we are effectively asking it to check the host for a multitude of data points and evaluate its state before running any commands. Fabric is more of a what-you-see-is-what-you-get implementation, as demonstrated by its API: "run", "put", "reboot", and "cd" are common operations. A consequence of this simplicity is a lack of the rich features that we see in Ansible, such as its ability to pull in host information dynamically and use that information during execution.
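To illustrate that literal style, here is a hypothetical Fabric task combining those operations. The file paths are placeholders, and this is a sketch rather than a complete deployment script; it assumes a reachable remote host in env.hosts:

  from fabric.api import cd, put, run, sudo

  def deploy_site():
      # copy a local file to the remote host
      put('apache.conf', '/tmp/apache.conf')
      # move it into place with elevated privileges
      sudo('mv /tmp/apache.conf /etc/apache2/sites-enabled/apache.conf')
      # run a follow-up command from a particular remote directory
      with cd('/etc/apache2'):
          run('ls sites-enabled')
      sudo('service apache2 restart')

Every step is an explicit command we spell out ourselves; Fabric does no state-checking before or after.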

Here is a simple example of using Ansible's dynamic host information:


  # roles/web/tasks/main.yml
  ---
  - name: install apache
    apt: name=apache2 state=present

  - name: deploy apache configuration
    template: src=apache.conf.j2 dest=/etc/apache2/sites-enabled/apache.conf

  # roles/web/templates/apache.conf.j2 (excerpt)
  <VirtualHost {{ ansible_default_ipv4.address }}:80>

Here we see a new module being used: "template". By convention, Ansible will look in the role's "templates" folder for the file supplied to the "src" attribute and deploy it to the location supplied to the "dest" attribute. But the magic here is that prior to application of this role, Ansible gathers a list of what are actually called "facts" from the host and provides that data to us in the scope of our YAML. In this example, it means we can supply our Apache configuration file with the IP address of whatever host to which the role is applied. Getting this kind of behavior with Fabric is work left to the operator.
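To make the mechanics concrete, the substitution the template module performs can be imitated in a few lines of plain Python. This is a simplification for illustration only: Ansible actually uses the Jinja2 template engine and gathers a much richer set of facts.

```python
import re

# a gathered "fact", as Ansible would provide it for the current host
facts = {"ansible_default_ipv4.address": "192.168.1.10"}

# the template line, as in apache.conf.j2
template = "<VirtualHost {{ ansible_default_ipv4.address }}:80>"

# replace each {{ name }} placeholder with the corresponding fact
rendered = re.sub(r"\{\{\s*(.+?)\s*\}\}", lambda m: facts[m.group(1)], template)

print(rendered)  # <VirtualHost 192.168.1.10:80>
```

The same template deployed to a different host would render with that host's address, with no change to the role itself.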

One last topic is how these tools handle authentication.

Ansible's answer to this is in the playbook:


- hosts: webservers
  remote_user: alice
  sudo: yes # optional
  sudo_user: bob # optional

With Fabric, we simply set the environment variable:

from fabric.api import env
env.user = 'alice'

Both Fabric and Ansible can also authenticate with your SSH public key, removing the need to enter passwords.

This blog post provided a light introduction to two fairly powerful solutions to the infrastructure-as-code problem. By this point, you may have already decided which direction you want to go, but it's more likely that you have more questions than you started with. There are many features of both Fabric and Ansible that are best left to their respective official documentation, but hopefully this post helped you get started.

Every two weeks, the SEI will publish a new blog post offering guidelines and practical advice for organizations seeking to adopt DevOps in practice. We welcome your feedback on this series, as well as suggestions for future content. Please leave feedback in the comments section below.

Additional Resources

To listen to the podcast, DevOps--Transform Development and Operations for Fast, Secure Deployments, featuring Gene Kim and Julia Allen, please visit

To read all of the blog posts in our series thus far, please visit

About the Author