Site Implementation: Part Two
Last time I walked outside, money still wasn't growing on trees. I also checked my driver's license, and it didn't say "Elon Musk". So I would rather not maintain a whole network on AWS, with a public-facing EC2 server behind a load balancer and a backend database isolated in a private subnet. I mean, if I had the money, I would even throw in an elastic load balancer and schedule it to scale out at the 10th minute of every hour and scale back in at the 50th, just for giggles.
While studying for my AWS Certified Developer certification, I learned that an Amazon S3 bucket can be run as a static website. The cost to the user breaks down to storage, requests, and data transfer.
From a development standpoint, I really only need to be concerned with PUT/COPY/POST requests, which scale with how often changes are made and the number of files. Overall storage costs are not a concern because a site like this would only be around 50 MB if most images are hosted offsite.
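For reference, here is a minimal sketch of turning a bucket into a static website with boto3. The bucket name and error page are placeholders, and the same settings can be applied through the console or the AWS CLI:

```python
# Minimal sketch: enable static website hosting on an existing S3 bucket.
# "my-blog-bucket" and "404.html" are placeholders for your own setup.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="my-blog-bucket",
    WebsiteConfiguration={
        # Served when a visitor requests the root or a directory path
        "IndexDocument": {"Suffix": "index.html"},
        # Served on errors such as a missing page
        "ErrorDocument": {"Key": "404.html"},
    },
)
```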
Now this brings us back to Ghost. It runs on Node.js, a server-side JavaScript runtime, which means that in its pure form it cannot be served as a static website.
Then... I found HTTrack (https://www.httrack.com/). HTTrack is a free tool that lets you download a copy of a website so it can be viewed offline. This means all I need to do is run the site locally, point HTTrack at it, and then upload the resulting files to the S3 bucket.
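A rough sketch of that mirror-and-upload step in Python is below. It assumes Ghost is serving on its default port (2368), that the httrack binary is on the PATH, and that the bucket name is a placeholder. Setting each file's Content-Type matters because S3 otherwise serves everything as binary/octet-stream:

```python
# Sketch: mirror the locally running Ghost site with HTTrack, then push
# the static files to S3. Assumes httrack is installed, Ghost is on its
# default port 2368, and "my-blog-bucket" already exists.
import mimetypes
import subprocess
from pathlib import Path

import boto3

MIRROR_DIR = Path("./static-site")

# Crawl the local site into MIRROR_DIR (-O sets HTTrack's output path).
subprocess.run(
    ["httrack", "http://localhost:2368/", "-O", str(MIRROR_DIR)],
    check=True,
)

# HTTrack nests the mirror under a folder named after the host
# (assumed here to be "localhost_2368").
site_root = MIRROR_DIR / "localhost_2368"

s3 = boto3.client("s3")

for path in site_root.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(site_root))
        # Set Content-Type explicitly so browsers render the pages
        # instead of downloading them.
        content_type = mimetypes.guess_type(str(path))[0] or "binary/octet-stream"
        s3.upload_file(
            str(path), "my-blog-bucket", key,
            ExtraArgs={"ContentType": content_type},
        )
```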
Some New Problems:
- An S3 bucket lives in a single region, so it is not optimized for delivering content to visitors across the US/Europe.
- There would be many requests against S3 every time someone loads the website for the first time.
- There are now extra steps when deploying the website. (This will be addressed in Part 3.)
Solution:
CloudFront does some amazing things here. CloudFront is AWS's CDN service, which specializes in caching your files at edge locations across the AWS network, so requests to view the site do not have to travel all the way to the S3 bucket but only to the nearest edge. CloudFront also only pulls the files from the S3 bucket based on a user-configured TTL. This means one fetch of the files from S3 can serve all visitors for the next few hours.
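To make the TTL behavior concrete, here is a hedged boto3 sketch of creating a distribution in front of the S3 website endpoint. The domain name and TTL values are placeholders, and the same configuration can be done through the console. Note that S3 website endpoints speak plain HTTP, so CloudFront has to treat the bucket as a custom origin:

```python
# Sketch: put a CloudFront distribution in front of the S3 website
# endpoint. The origin domain and TTLs are placeholders for your setup.
import time

import boto3

cloudfront = boto3.client("cloudfront")

# S3 *website* endpoints only serve HTTP, so use a custom origin config.
origin_domain = "my-blog-bucket.s3-website-us-east-1.amazonaws.com"  # placeholder

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Static blog served from S3",
        "Enabled": True,
        "DefaultRootObject": "index.html",
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-website-origin",
                    "DomainName": origin_domain,
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "http-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-website-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            # Edge locations re-fetch from S3 only after DefaultTTL
            # seconds, so most visits never touch the bucket at all.
            "MinTTL": 0,
            "DefaultTTL": 60 * 60 * 4,  # cache for four hours
            "MaxTTL": 86400,
        },
    }
)
print(response["Distribution"]["DomainName"])
```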
What's next?
A proper CI/CD implementation, which you can read about in Part 3.