DESCRIPTION

An LWRP that can be used to fetch files from S3.

I created this LWRP to solve the chicken-and-egg problem of fetching files from S3 on the first Chef run on a newly provisioned machine. Ruby libraries that are installed on that first run are not available to Chef during the run, so I couldn't use a library like Fog to get what I needed from S3.

This LWRP has no dependencies beyond the Ruby standard library, so it can be used on the first run of Chef.

REQUIREMENTS

An Amazon Web Services account and something in S3 to fetch.

Multi-part S3 uploads do not put the MD5 of the content in the ETag header. If x-amz-meta-digest is provided in the user-defined metadata of the S3 object, it is processed as if it were a Digest header (RFC 3230).

The MD5 of the local file is checked against the MD5 from x-amz-meta-digest if that metadata is present; otherwise it is checked against the ETag. If there is no match, or if the local file is absent, the file will be downloaded.

By default, a catalog file in Chef's cache path is kept for all downloaded files, tracking each file's ETag and MD5 at the time of download. If either of these doesn't match, the file will be downloaded again. To disable this behavior, set node['s3_file']['use_catalog'] to false.
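For example, the attribute can be set in the two usual places (a sketch; the attribute name comes from the text above):

```ruby
# In a wrapper cookbook's attributes file -- disable the download catalog
# so only the ETag / x-amz-meta-digest comparison decides re-downloads.
default['s3_file']['use_catalog'] = false

# Or from within a recipe:
node.default['s3_file']['use_catalog'] = false
```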

If credentials are not provided, s3_file will attempt to use the first instance profile associated with the instance. See documentation at http://docs.aws.amazon.com/IAM/latest/UserGuide/instance-profiles.html for more on instance profiles.

USAGE

s3_file acts like other file resources. The only supported action is :create, which is the default.

The attribute parameters are shown in the example below:

s3_file "/tmp/somefile" do
  remote_path "/my/s3/key"
  bucket "my-s3-bucket"
  aws_access_key_id "mykeyid"
  aws_secret_access_key "mykey"
  s3_url "https://s3.amazonaws.com/bucket"
  owner "me"
  group "mygroup"
  mode "0644"
  decryption_key "my SHA256 digest key"
  decrypted_file_checksum "SHA256 hex digest of decrypted file"
  action :create
end

MD5 and Multi-Part Upload

s3_file compares the MD5 hash of the local file, if present, with the ETag header of the S3 object. If they do not match, the remote object is downloaded and notifications are fired.

In most cases, the ETag of an S3 object is identical to its MD5 hash. However, if the file was uploaded via multi-part upload, the ETag is instead derived from the MD5 hashes of the individual parts (with a part-count suffix). In these cases, the MD5 of the local file and the ETag of the remote object will never match.

To work around this issue, set an X-Amz-Meta-Digest header on your S3 object with its value set to md5=MD5 of the entire object. s3_file will then use that value in place of the ETag, and will skip the download when the MD5 of the local file matches the value in the X-Amz-Meta-Digest header.

USING ENCRYPTED S3 FILES

s3_file can decrypt files that have been encrypted with an AES-256-CBC cipher. To use the decryption part of the resource, you must provide a decryption_key, which can be generated by following the instructions below. You can also include an optional decrypted_file_checksum, which allows Chef to determine whether it needs to re-download the encrypted file. Note that this checksum differs from the one in S3: the file being compared has already been decrypted, so a SHA256 checksum is used instead of MD5. Instructions for generating the decrypted_file_checksum are below as well.

To use s3_file with encrypted files:

  1. Create a new key using bin/s3_crypto -g > my_new_key.
  2. Create a SHA256 hex digest checksum of your source file by calling bin/s3_crypto -c -i my_source_file [ -o my_checksum_file ].
  3. Encrypt your file using the new key by calling bin/s3_crypto -e -k my_new_key -i my_source_file [ -o my_destination_file ].
  4. You can test decryption of your file using bin/s3_crypto -d -k my_new_key -i my_encoded_file [ -o my_decoded_destination ].
  5. Upload your encrypted file to S3 as normal.
  6. In the s3_file resource call, provide the string within my_new_key as the decryption_key of the resource.
  7. In the s3_file resource call, provide the string within my_checksum_file as the decrypted_file_checksum of the resource.
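For illustration, the scheme used here is standard AES-256-CBC, which can be reproduced with Ruby's OpenSSL bindings (a sketch of the concept only; the real bin/s3_crypto handles key files, IV placement, and streaming itself, and may differ in its on-disk format):

```ruby
require 'openssl'

# Encrypt with AES-256-CBC, prepending the random IV to the ciphertext
# so the decryptor can recover it. The key must be 32 bytes (256 bits).
def aes256_encrypt(key, plaintext)
  cipher = OpenSSL::Cipher.new('AES-256-CBC').encrypt
  cipher.key = key
  iv = cipher.random_iv
  iv + cipher.update(plaintext) + cipher.final
end

def aes256_decrypt(key, data)
  cipher = OpenSSL::Cipher.new('AES-256-CBC').decrypt
  cipher.key = key
  cipher.iv = data[0, 16]                  # IV was prepended by the encryptor
  cipher.update(data[16..-1]) + cipher.final
end

key = OpenSSL::Random.random_bytes(32)     # a fresh 256-bit key
```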

Note that when you make the s3_file call, it is best to set decryption_key from a node attribute and provide it via an encrypted data bag, or to pull the key from the environment. It is not wise to check your decryption key into your recipe.

To create your cipher key, run bin/s3_crypto -g > my_new_key and a new 256-bit key (64 hexadecimal characters) will be generated for you. Paste that key into a file for later use. DO NOT include a trailing newline in the file, or encryption and decryption will fail.


You can use the bin/s3_crypto utility to encrypt files before uploading them to S3, and to decrypt files to verify that the encryption is working.

ChefSpec matcher

s3_file comes with a matcher to use in ChefSpec.

This spec checks the code from the USAGE example above:

it 'downloads some file from s3' do
  expect(chef_run).to create_s3_file('/tmp/somefile')
    .with(bucket: "my-s3-bucket", remote_path: "/my/s3/key")
end

Testing

This cookbook has Test Kitchen integration tests. To run them, create a .s3.yml file with the following S3 details:

file: file
bucket: bucket
region: xx-xxxx-x
access_key: XXXXXXXXXXXXXXXXXXXX
secret_key: XXXXXXXXXXXXXXXXXXXX

If you're using ChefDK, run chef exec kitchen test; otherwise, run kitchen test.