I recently needed to drop a large file onto a target Linux server where direct SCP was prohibited. The server is managed entirely through Ansible playbooks in Azure DevOps, with its credentials stored in CyberArk. Ok, no problem…maybe?

Running my pipeline, I was presented with an error stating there was insufficient disk space in the /tmp/.ansible/tmp directory on the agent machine. This directory is where Ansible temporarily stores files during execution, such as modules, scripts, and temporary data. It's a tmp directory, so it's understandable that it struggled with a file this size, but now I was in a bit of a bind.

The following steps describe how I overcame the problem. It's not the most elegant solution, but it got the job done!

All code referenced in this post can be found in this repo.

Split the File

First I archived my file:

tar -cvf largefile.tar /path/to/files/

Then I used the split command to break the .tar file into smaller, more manageable chunks. The general syntax is:

split -b <size> <source-file> <output-prefix>
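
Filled in with the values I used (1 GB chunks, written with the prefix largefile_part_), that becomes:

split -b 1G largefile.tar largefile_part_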

This command split largefile.tar into 1 GB chunks, resulting in files named largefile_part_aa, largefile_part_ab, etc.

CyberArk

As mentioned, the credentials Ansible needs to access the server are stored in CyberArk, so in my main pipeline I needed to initiate a connection to CyberArk using my service connection:

- task: # your CyberArk password retrieval task
  name: PWV # task name, referenced in the variable below
  inputs:
    svcConnection: # your CyberArk service connection
    accounts: # your CyberArk account name
  env:
    PWV_password: $(PWV.cyberark_account_name_separated_by_underscores)

Note that I needed to configure the password as an environment variable so that it could later be passed to the hosts.ini file that Ansible uses for authentication.
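
For illustration, here's one way the inventory can pick that variable up; the host and user below are hypothetical, and the env lookup is just one common pattern, not necessarily the only way to wire it:

[targets]
target-server.example.com ansible_user=deploy ansible_password="{{ lookup('env', 'PWV_password') }}"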

Become

In my playbook I required a privilege escalation step. In Ansible, you can gain root privileges by using the become directive. This allows a non-root user to run commands with elevated privileges.

You can add the become: true directive to any of the following (see the sketch after this list):

  • The entire playbook.
  • A specific task.
  • A specific play.
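
As a minimal sketch (the hosts, names, and commands here are just placeholders), become can be set at the play level and overridden for a specific task:

- name: become example
  hosts: all
  become: true # every task in this play runs with elevated privileges

  tasks:

    - name: runs as root
      command: whoami

    - name: runs as the connecting user
      become: false # per-task override
      command: whoami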

If your servers use sudoexec, you will have to create a role for it. That's a whole other thing that I won't go into here. Nevertheless, I've included a role in the repo so you have an idea of how the tree structure might look if you did need it.
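
For reference, the tree for a role like the repo's detect-become-method typically looks something like this (this is the standard Ansible role layout, not the repo's exact contents):

roles/
└── detect-become-method/
    ├── defaults/
    │   └── main.yml
    └── tasks/
        └── main.yml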

Transfer

I then transferred each smaller chunk over. You can do this pipeline run by pipeline run, as I did, or engineer a single run that does all the transfers in one go. Up to you. Here is a basic example of what the file transfer playbook looked like:

---
- name: transfer file to target server
  hosts: all
  roles:
    - detect-become-method

  tasks:

    - name: Copy file from agent to target server
      become: true
      copy:
        src: # agent directory where file to be transferred is located.
        dest: # directory on target server
        mode: '0755'

    - name: list the target directory's contents
      command: ls
      args:
        chdir: # directory on target server (same as dest above)
      register: ls_output

    - name: display raw output of the ls command
      debug:
        var: ls_output

...
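
In the pipeline, the playbook was then kicked off with an ansible-playbook step along these lines (the inventory and playbook file names here are placeholders):

- script: ansible-playbook -i hosts.ini transfer.yml
  displayName: transfer file chunk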

Reassembly

After all the chunks had been transferred to the target system, I reassembled them using the cat command, which concatenates the parts back into the original .tar file:

cat largefile_part_* > largefile.tar

In my case I could SSH into the server, so I was able to run this command myself. Of course, I could also have added these steps to my Ansible playbook (along with any further steps, like running installation binaries).
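
If you'd rather keep it all in the playbook, a task along these lines could handle the reassembly (the chdir directory is a placeholder, as in the earlier playbook):

    - name: reassemble the chunks on the target server
      become: true
      shell: cat largefile_part_* > largefile.tar
      args:
        chdir: # directory on target server where the chunks were copied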

It’s a good idea to verify that the reassembled file is identical to the original. You can compare checksums or extract the .tar file to ensure it was reassembled correctly.
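
For example, a quick check on both sides:

sha256sum largefile.tar # compare this hash on the source and the target
tar -tvf largefile.tar # or list the archive contents without extracting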

And that’s it!


⚡️ Enjoyed this post? Buy me a coffee… in sats! Just tip me via my Lightning address: [email protected]