About how to encrypt objects in S3 buckets:
which of the following is fake?
You can encrypt objects in S3 buckets using:
* A) Server-Side Encryption (SSE). SSE with S3-Managed Keys (SSE-S3) is enabled by default.
* B) SSE with KMS keys stored in AWS KMS (SSE-KMS). Leverages the AWS Key Management Service to manage encryption keys.
* C) SSE with Customer-Provided Keys (SSE-C), when you want to manage your own encryption keys.
* D) SSE with a Certificate Authority (CA) cert, when you want to use a CA to manage encryption keys.
* E) Client-side encryption: encrypt everything client-side and then upload to S3.
D is fake. There is no true version of that one.
About SSE-S3. Which, if any, is false, and what is the true version?
possibly important for the exam
B and C are false. True versions are:
* B) encryption type is AES-256
* C) must set header "x-amz-server-side-encryption": "AES256"
About SSE-KMS. Which, if any, is false (or missing critical information) and what is the true version?
possibly important for the exam
E is missing info. Correct version:
About SSE-KMS Limitations. Which, if any, is false and what is the true version?
possibly important for the exam
All are true. About F: I think he said this is something the exam may test you on.
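For reference, here's a sketch (bucket name, account ID, and key ID are placeholders) of the kind of JSON you could pass to `aws s3api put-bucket-encryption --bucket my-bucket --server-side-encryption-configuration file://sse.json` to make SSE-KMS the default for a bucket. `BucketKeyEnabled` reduces the number of KMS API calls, which helps with KMS request quotas (a commonly cited SSE-KMS limitation):

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-2:111122223333:key/EXAMPLE-KEY-ID"
      },
      "BucketKeyEnabled": true
    }
  ]
}
```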
About SSE-C. Which, if any, is false (or missing critical information) and what is the true version?
possibly important for the exam
About client-side encryption. Which, if any, is false (or missing critical information) and what is the true version?
possibly important for the exam
D is false. The client must decrypt the data themselves when retrieving it from S3.
T/F Encryption in transit (SSL/TLS)
* encryption in flight is also called SSL/TLS
* S3 exposes both HTTP and HTTPS endpoints; HTTPS is recommended and is mandatory for SSE-C, though most people use HTTPS by default now
* how to force encryption in transit: a bucket policy using the aws:SecureTransport condition key
All T
Is this a correct example of how to force encryption in transit (using HTTPS) for all objects in your S3 bucket?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
True. It would have been false if
"aws:SecureTransport": "false"
had been set to "true" — that would deny HTTPS requests instead. (aws:SecureTransport is "false" for plain HTTP requests, which are the ones we want to deny.)
T/F
creating your own KMS key costs you some money every month
T
Default Encryption
* A) SSE-S3 is applied automatically to new objects stored in S3 (unless you say otherwise)
* B) you can force encryption using a bucket policy and refuse any API call to put an S3 object without encryption headers (SSE-KMS or SSE-C)
* C) default encryption settings are evaluated before bucket policies
C is false. Bucket policies are evaluated before default encryption settings. (Though I don't know if it's implying a priority; possibly.)
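A sketch of the kind of bucket policy B describes (bucket name is a placeholder): deny any PutObject call whose `x-amz-server-side-encryption` header is missing or isn't `aws:kms`. (AWS's documented examples sometimes split this into two statements, using the Null operator to catch the missing-header case explicitly.)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
```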
Subsection: S3 Default Encryption
T/F CORS
* A) cross origin resource sharing
* B) origin = scheme (protocol) + host (domain) + port
* C) ex: in https://www.example.com the implied port is 443 for HTTPS. The domain is www.example.com and the protocol is HTTPS. And altogether, that makes the origin.
* D) http://example.com/app1 and https://example.com/app2 have the same origin
* E) http://example.com/app1 and http://other.example.com/app2 have different origins (note the different domains)
* F) If two origins are different, requests won't be fulfilled unless the other origin allows the requests using CORS headers (ex: Access-Control-Allow-Origin)
D is false. Correct version:
http://example.com/app1 and http://example.com/app2 have the same origin
In the question, the second address used HTTPS, so the schemes (and therefore the origins) differed.
Subsection: CORS
True
popular exam question
True
T/F, assuming everything is set up correctly (static website hosting enabled, block-public-access off, good bucket policy that allows everyone to GET the objects in a bucket), the following CORS configuration should allow your content from site 2 to be read and used by your site 1 (assuming site 1 is named whatever is in the AllowedOrigins list)
[
  {
    "AllowedHeaders": [
      "Authorization"
    ],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "https://2023-283-tuesday-s3.s3.us-east-2.amazonaws.com"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
True
T/F (and provide corrected versions, if appropriate). If you turn on MFA for an S3 bucket, then MFA is required to
* A) permanently delete an object
* B) suspend versioning on the bucket
* C) enable versioning
* D) list deleted versions
C and D are false. Even if you have MFA enabled for an S3 bucket, you still won't need to use MFA to do those things.
T/F (and provide corrected versions, if appropriate). If you turn on MFA for an S3 bucket, then MFA is required to
Both are false! Here are the correct versions:
Would this work if I was root? What if I was non-root?
aws s3api put-bucket-versioning --bucket somethingsomething1 --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::some-real-value 864127" --profile some-real-cli-profile
It would work if you were root, provided you had also set up MFA for your root account; it would not work otherwise. It is, at the time of writing, the only known way of enabling MFADelete for an S3 bucket.
T/F
If you have MFA Delete enabled for a bucket (don't forget that versioning needs to be on before setting up MFA Delete), then you can't actually permanently delete something using the UI. You have to use something else (AWS CLI, AWS SDK, or the S3 REST API), or remove the MFA Delete requirement.
True
What happened when you tried to enable MFADelete from a non-root account, assuming you set everything up correctly?
Jack all. Got a permissions issue. It really does have to be a root account: root MFA, a root access key, and a CLI profile made with the root access key (so it has root permissions).
if any are false, what is/are the true version(s)?
S3 Access logs
* A) may want to log all access for audit purposes
* B) any request made to S3 will be logged into another S3 bucket
* C) data can be analyzed
* D) target logging bucket can be in any AWS region
D) is false.
True version: the target logging bucket must be in the same AWS region as the bucket you want the logs for.
log format: https://docs.aws.amazon.com/AmazonS3/latest/dev/LogFormat.html
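A sketch of a logging configuration (bucket names are placeholders) that could be passed to `aws s3api put-bucket-logging --bucket my-bucket --bucket-logging-status file://logging.json`; the target bucket must be in the same region (per D above) and must allow the S3 logging service to write to it:

```json
{
  "LoggingEnabled": {
    "TargetBucket": "my-logging-bucket",
    "TargetPrefix": "logs/my-bucket/"
  }
}
```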
T/F
A is false. If you make the logging bucket the same as the monitored bucket, you will create a logging loop and your bucket will grow exponentially.
S3 Access Logs
T/F (one of these is more like a caveat than a true false, which one do you think it is)
Well, C is true, but I suspect it's missing POST. Steph doesn't mention it on the slides, but later he does mention that a user can use a presigned URL to upload a file, and he doesn't indicate that the upload has to be merely editing an existing record. So it seems like the upload could create a new record, which would make it a POST, not a PUT (at least by some definitions of POST; perhaps all of them, idk).
S3 Pre-signed URLS
T/F these are good examples of use cases for S3 presigned urls
* A) allow only logged in users to download a premium video from your S3 bucket
* B) allow an ever-changing list of users to download files by generating URLs dynamically
* C) allow temporarily a user to upload a file to a precise location in your S3 bucket (that does seem post-y)
T
S3 Pre-Signed URL
T/F
True
S3 Access Points