Django and Oscar

Preparing AWS S3 / CloudFront To Be a Django Webapp Datastore

Published 2021-03-12. Last modified 2021-03-13.
Time to read: 2 minutes.

This page is part of the django collection.

This article shows how to prepare an AWS account for storing a Django webapp’s static assets in an AWS S3 bucket, and how to associate a new CloudFront distribution with that bucket. The goal is to serve a Django webapp’s assets from the edge nodes of the CloudFront CDN.

A follow-on article discusses Django’s AWS S3 storage provider.

AWS Setup: IAM, S3, CloudFront

I generally use the AWS CLI as much as possible, instead of the web console. See the AWS CLI documentation for more information.

Configure AWS CLI User

Here is how to configure AWS CLI with a pre-existing user and a default region. You might use your root AWS access key and secret now, just to get things rolling. Later in this article, we will create a more restricted AWS IAM account, with just the permissions necessary to do its job. The next article will show how the Django webapp’s manage.py script will synchronize assets with AWS, using the more restrictive keys.

Shell
(aw) $ aws configure
AWS Access Key ID [None]: root_access_key_here
AWS Secret Access Key [None]: root_secret_key_here
Default region name [None]: us-east-1
Default output format [None]: json
😁
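
Before going further, it is worth confirming that the CLI picked up those credentials. The following simply reports the account and identity that the CLI is acting as:

Shell
(aw) $ aws sts get-caller-identity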

AWS S3 Bucket Setup

I adapted a script I wrote a few years ago for another project into the script shown next. It creates an S3 bucket with a CloudFront distribution. Its dependencies are jq and awscli.

Shell
(aw) $ yes | sudo apt install awscli jq
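
If you want to confirm that both tools installed and are on the PATH:

Shell
(aw) $ aws --version
(aw) $ jq --version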

This is the makeAwsBucketAndDistribution script:

makeAwsBucketAndDistribution
#!/bin/bash

# Creates a web-enabled AWS S3 bucket and a CloudFront distribution.
#
# Author: Mike Slinn mslinn@mslinn.com
#
# SPDX-License-Identifier: Apache-2.0

set -e

if [ -z "$1" ]; then
  echo "Usage: $0 assets.mydomain.com"
  exit 1
fi

BUCKET_NAME="$1"
LOG=".makeAwsBucketAndDistribution.log"

# read returns non-zero when it hits the end of the here-document,
# so || true keeps set -e from aborting the script
read -r -d '' NEW_DIST_JSON <<EOF || true
{
  "CallerReference": "$BUCKET_NAME",
  "Aliases": {
    "Quantity": 0
  },
  "DefaultRootObject": "index.html",
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "$BUCKET_NAME",
        "DomainName": "$BUCKET_NAME.s3.amazonaws.com",
        "S3OriginConfig": {
          "OriginAccessIdentity": ""
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "$BUCKET_NAME",
    "ForwardedValues": {
      "QueryString": true,
      "Cookies": {
        "Forward": "none"
      }
    },
    "TrustedSigners": {
      "Enabled": false,
      "Quantity": 0
    },
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 3600
  },
  "CacheBehaviors": {
    "Quantity": 0
  },
  "Comment": "",
  "Logging": {
    "Enabled": false,
    "IncludeCookies": true,
    "Bucket": "",
    "Prefix": ""
  },
  "PriceClass": "PriceClass_All",
  "Enabled": true
}
EOF

if [ "$( aws s3api head-bucket --bucket $BUCKET_NAME 2> >(grep -i 'Not Found') )" ]; then
  echo "Making AWS S3 bucket $BUCKET_NAME"
  aws s3 mb s3://$BUCKET_NAME | tee "$LOG"
else
  echo "Bucket $BUCKET_NAME already exists" | tee "$LOG"
fi

echo "Setting the ACL for $BUCKET_NAME to allow public-read"
aws s3api put-bucket-acl --bucket "$BUCKET_NAME" --acl public-read | tee -a "$LOG"

#echo "Enabling the $BUCKET_NAME bucket's ability to serve web assets"
#aws s3 website s3://$BUCKET_NAME --index-document index.html --error-document 404.html >> "$LOG"

echo "Creating AWS CloudFront distribution for S3 bucket $BUCKET_NAME"
NEW_DIST_RESULT_JSON="$( aws cloudfront create-distribution --distribution-config "$NEW_DIST_JSON" )"
echo "$NEW_DIST_RESULT_JSON" >> "$LOG"
DISTRIBUTION_ID="$( jq -r '.Distribution.Id' <<< "$NEW_DIST_RESULT_JSON" )"
echo "Created new AWS CloudFront distribution for S3 bucket $BUCKET_NAME with ID $DISTRIBUTION_ID"

cat << 'EOF'
To view the log, type:
  less $LOG

Type the following if you want to delete the bucket:
  aws s3 rb s3://$BUCKET_NAME --force

You will then need to also delete the CloudFront distribution for the bucket.
First disable the distribution, and wait until disabling completes, before deleting it:

DIST_CONFIG="$( aws cloudfront get-distribution-config --id $DISTRIBUTION_ID )"

ETAG="$( jq '. | .ETag' <<< "$DIST_CONFIG" )"

UPDATED_DIST_CONFIG="$( jq '.DefaultRootObject = null | .PriceClass = "PriceClass_All" | .Enabled = false' <<< "$DIST_CONFIG" )"

aws cloudfront update-distribution \
    --id $DISTRIBUTION_ID \
    --distribution-config "$UPDATED_DIST_CONFIG"

aws cloudfront wait distribution-deployed --id $DISTRIBUTION_ID

aws cloudfront delete-distribution --id $DISTRIBUTION_ID --if-match $ETAG
EOF

Here is how to use makeAwsBucketAndDistribution to create a new AWS S3 bucket called assets.ancientwarmth.com in your AWS CLI user’s default region.

Shell
(aw) $ chmod a+x makeAwsBucketAndDistribution
(aw) $ export BUCKET_NAME=assets.ancientwarmth.com
(aw) $ ./makeAwsBucketAndDistribution "$BUCKET_NAME"
😁
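
CloudFront distributions take several minutes to deploy. If you want the shell to block until the new distribution is fully deployed, wait on it, substituting the distribution ID that the script printed:

Shell
(aw) $ aws cloudfront wait distribution-deployed --id $DISTRIBUTION_ID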

You might want to associate a custom SSL certificate with the CloudFront distribution. If you do that, be sure to use a wildcard certificate (for example, *.ancientwarmth.com).
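
AWS Certificate Manager (ACM) can issue such a certificate; note that CloudFront requires the certificate to live in the us-east-1 region. Something like the following requests a wildcard certificate with DNS validation (the domain shown is just an example):

Shell
(aw) $ aws acm request-certificate \
  --domain-name "*.ancientwarmth.com" \
  --validation-method DNS \
  --region us-east-1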

AWS User and Group Setup

Create an AWS IAM user called awWebProxy ...
(aw) $ aws iam create-user --user-name awWebProxy
{
  "User": {
      "Path": "/",
      "UserName": "awWebProxy",
      "UserId": "AIDAQOTPVZIYNUAQX6MSL",
      "Arn": "arn:aws:iam::031372724784:user/awWebProxy",
      "CreateDate": "2021-03-14T16:45:40Z"
  }
} 
... belonging to an IAM group called AncientWarmthProg ...
(aw) $ aws iam create-group --group-name AncientWarmthProg
{
  "Group": {
      "Path": "/",
      "GroupName": "AncientWarmthProg",
      "GroupId": "AGPAQOTPVZIYDB7ABCDEF",
      "Arn": "arn:aws:iam::031372724784:group/AncientWarmthProg",
      "CreateDate": "2021-03-14T16:46:01Z"
  }
} 
(aw) $ aws iam add-user-to-group --user-name awWebProxy --group-name AncientWarmthProg
... with just enough privilege to manage S3 content (put-group-policy requires a policy name; awWebProxyS3Access below is an arbitrary choice) ...
(aw) $ aws iam put-group-policy --group-name AncientWarmthProg \
  --policy-name awWebProxyS3Access \
  --policy-document "{
    \"Version\": \"2012-10-17\",
    \"Statement\": [
        {
            \"Effect\": \"Allow\",
            \"Action\": \"s3:*\",
            \"Resource\": [
                \"arn:aws:s3:::$BUCKET_NAME\",
                \"arn:aws:s3:::$BUCKET_NAME/*\"
            ]
        }
    ]
  }"
Create the new user AccessKeyId and SecretAccessKey, capturing the result so the values can be appended to ~/.aws/credentials next:
(aw) $ NEW_KEY_JSON="$( aws iam create-access-key \
  --user-name awWebProxy
)"
(aw) $ jq . <<< "$NEW_KEY_JSON"
{
  "AccessKey": {
      "UserName": "awWebProxy",
      "AccessKeyId": "JI1OKdRTrXMV2rD+tQ5yfiI/SE+i9ABCDEFABCDE",
      "Status": "Active",
      "SecretAccessKey": "a7c05EWzc4xLA2FcEy1qnRgSxczCOABCDEFABCDE",
      "CreateDate": "2021-03-14T16:48:23Z"
  }
} 
Append the new AccessKeyId and SecretAccessKey
(aw) $ cat >> ~/.aws/credentials << EOF
[$BUCKET_NAME]
aws_access_key_id = $( jq -r .AccessKey.AccessKeyId <<< "$NEW_KEY_JSON" )
aws_secret_access_key = $( jq -r .AccessKey.SecretAccessKey <<< "$NEW_KEY_JSON" )
EOF

Now you should have at least 2 user profiles set up in ~/.aws/credentials:

~/.aws/credentials
[default]
aws_access_key_id = AKIAIDA74HT5ABCDEFABCDE
aws_secret_access_key = JI1OKdRTrXMV2rD+tQ5yfiI/SE+i9ABCDEFABCDE

[assets.ancientwarmth.com]
aws_access_key_id = AKIAQOTPVABCDEFABCDE
aws_secret_access_key = a7c05EWzc4xLA2FcEy1qnRgSxczCOABCDEFABCDE
😁

I now have two credential profiles: default (often used for the AWS root account) and the new profile called assets.ancientwarmth.com, which holds the awWebProxy user’s keys.
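
A quick way to verify that the new profile works is to ask AWS which identity it resolves to; it should report the awWebProxy user:

Shell
$ aws sts get-caller-identity --profile assets.ancientwarmth.com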

Switching User Profiles

Setting the AWS_PROFILE environment variable causes the AWS CLI to look for a section in ~/.aws/credentials with a matching name. Each such section holds the credentials for one IAM user; at a minimum it contains aws_access_key_id and aws_secret_access_key.

Here is an example of how to issue a command on behalf of the assets.ancientwarmth.com user profile. This example merely shows the S3 buckets visible to the assets.ancientwarmth.com user.

Shell
$ AWS_PROFILE=assets.ancientwarmth.com aws s3 ls
2021-02-17 18:54:24 www.test1418860461349.com
2021-02-17 19:29:41 www.test1418862578230.com
2021-02-17 19:55:42 www.test1418864139816.com
2021-02-17 19:56:17 www.test1418864174464.com
... Many more ...
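
Equivalently, you can pass the profile explicitly with the --profile option instead of setting the AWS_PROFILE environment variable:

Shell
$ aws s3 ls --profile assets.ancientwarmth.com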

Here is a quick Bash script I wrote to delete those empty buckets left over from testing. Note that I export AWS_PROFILE so I do not have to set it every time I call an aws command.

delete_empty_buckets
export AWS_PROFILE=assets.ancientwarmth.com
# aws s3 rb only deletes a bucket when it is empty, so buckets holding content are skipped
for F in $( aws s3 ls | cut -d" " -f3 ); do
  aws s3 rb "s3://$F"
done

If you always want to use the same AWS credentials, set AWS_PROFILE in .bashrc:

~/.bashrc snippet
export AWS_PROFILE=assets.ancientwarmth.com