CDNs provide many security, availability and performance improvements over serving content straight from your servers. They improve website load times by serving static content from a location closer to the end user, and they reduce the load on your servers because less traffic passes through to them; this benefit is most apparent during a big influx of traffic. Many CDNs also let you set up basic web application firewall rules to block malicious or unwanted traffic before it hits your infrastructure, which gives you more confidence in dealing with any attacks.
Why does a CDN improve load times? The simplest answer is that your users retrieve the data from the nearest datacenter; these are normally referred to as edge locations. Once a user has connected to CloudFront, any requests to retrieve data not yet distributed travel over the internal AWS network, providing a much faster and more reliable connection to the origin of your data. The map below shows CloudFront's points of presence across the globe.
Image from https://aws.amazon.com/cloudfront/features/
A cool thing you can do is run Lambda functions at the edge (Lambda@Edge), which lets you modify in-flight requests as they pass through CloudFront. You can read more about it here.
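To make that concrete, here is a minimal sketch of a Lambda@Edge viewer-request handler written in Python. It adds a custom header to the in-flight request before CloudFront forwards it on; the header name `X-Edge-Processed` is an illustrative choice, not anything CloudFront requires.

```python
def handler(event, context):
    """Lambda@Edge entry point: receives the CloudFront event,
    mutates the in-flight request, and returns it."""
    # The request sits inside the first record of the CloudFront event.
    request = event["Records"][0]["cf"]["request"]
    # Header keys in the CloudFront event are lowercase; each value is a
    # list of {"key": ..., "value": ...} dicts.
    request["headers"]["x-edge-processed"] = [
        {"key": "X-Edge-Processed", "value": "true"}
    ]
    return request
```

Returning the (modified) request tells CloudFront to continue processing it; returning a response object instead would short-circuit the request entirely at the edge.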
Apart from reducing the physical distance between users and the data, CDNs can improve speeds by compressing data at the edge; CloudFront currently only supports Gzip. You can also enable HTTP/2, which lets you initiate a single connection over which you can send multiple requests and receive multiple responses.
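To illustrate why edge compression matters, here is a small sketch in plain Python (not CloudFront itself) showing how well a typical repetitive HTML payload shrinks under Gzip:

```python
import gzip

def compress_response(body: bytes) -> bytes:
    """Gzip-compress a response body, much like CloudFront does at the
    edge when the viewer sends Accept-Encoding: gzip."""
    return gzip.compress(body)

# Markup, CSS and JS are highly repetitive, so they compress very well.
body = b"<div class='row'>hello</div>" * 1000
compressed = compress_response(body)
print(len(body), len(compressed))  # the compressed payload is a small fraction of the original
```

Fewer bytes on the wire means fewer round trips for the congestion window to open up, which is why compression helps most on the first page load.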
Doing as much as possible at the edge vastly improves performance. The most noticeable differences are the speed of TLS handshakes and the time to first byte; it also offloads some computational work from your servers, as CloudFront does the heavy lifting.
Setting the TTL to 0 is the magic that allows you to serve dynamic content through CloudFront. What actually happens is that CloudFront still caches the content from the origin and serves it from the edge location, but it also makes a conditional request to the origin to check whether the cached content has changed.
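The revalidation step can be sketched from the origin's side: CloudFront forwards a conditional request carrying the cached ETag in `If-None-Match`, and the origin answers 304 Not Modified when nothing has changed, so only headers cross the wire instead of the full body. This is a simplified sketch of that handshake, not CloudFront's actual implementation:

```python
import hashlib

def serve(content, if_none_match=None):
    """Origin-side handler: return (status, etag, body) for a request
    that may carry a cached ETag in If-None-Match."""
    # Derive a weak validator from the current content.
    etag = '"%s"' % hashlib.md5(content).hexdigest()
    if if_none_match == etag:
        # The edge cache is still fresh: send headers only, no body.
        return 304, etag, b""
    # Content changed (or no cached copy): send the full body and a new ETag.
    return 200, etag, content
```

On the first request the origin returns a 200 with the body and an ETag; on every revalidation where the content is unchanged it returns a cheap 304, which is why TTL 0 still saves origin bandwidth.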
You can create multiple cache behaviors for your various origins under a single distribution, which lets you control how your static and dynamic content are served.
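CloudFront picks a behavior by evaluating path patterns in order and taking the first match, falling back to the default behavior. The sketch below mimics that selection with `fnmatch` as an approximation of CloudFront's wildcard matching; the behaviors, TTLs and origin names are illustrative assumptions, not a real distribution config.

```python
import fnmatch

# Ordered list of cache behaviors: first matching path pattern wins.
BEHAVIORS = [
    {"path_pattern": "/static/*", "ttl": 86400, "origin": "s3-assets"},  # long-lived static files
    {"path_pattern": "/api/*",    "ttl": 0,     "origin": "alb-app"},    # always revalidated
]
# The default behavior catches everything that matched no pattern above.
DEFAULT = {"path_pattern": "*", "ttl": 0, "origin": "alb-app"}

def match_behavior(uri):
    """Return the first behavior whose path pattern matches the URI."""
    for behavior in BEHAVIORS:
        if fnmatch.fnmatch(uri, behavior["path_pattern"]):
            return behavior
    return DEFAULT
```

In a real distribution you would give the static behavior a long TTL and point it at an S3 origin, while dynamic paths keep a TTL of 0 and route to the load balancer.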
Here is a screenshot from Grafana showing the performance benefits of using CloudFront for serving dynamic content.
Using a combination of AWS CloudFront and ECS Service auto scaling has allowed us to reliably deal with both big spikes and gradual increases in traffic.
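For the scaling half of that setup, here is a sketch of the parameters you might pass to Application Auto Scaling's `put_scaling_policy` (via boto3) to keep an ECS service's average CPU around a target. The cluster name, service name, target value and cooldowns are all illustrative assumptions.

```python
# Target-tracking policy parameters for application-autoscaling's
# put_scaling_policy; "my-cluster" and "my-service" are hypothetical.
scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/my-cluster/my-service",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # Add or remove tasks to keep average CPU near 60%.
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,   # react quickly to spikes
        "ScaleInCooldown": 120,   # scale in more cautiously
    },
}
```

Target tracking pairs well with CloudFront here: the edge absorbs cacheable traffic while the policy grows the Fargate task count only when genuinely dynamic load rises.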
This architecture diagram uses Route 53, CloudFront, an Application Load Balancer and ECS containers running on Fargate as a solution for serving dynamic content. It is best practice to split a VPC into public and private subnets across multiple availability zones; this improves security by not exposing your servers directly to the internet while making your networks more highly available. If an availability zone were to go down, your traffic should be routed to the remaining healthy zones.