
AWS Certified Solutions Architect Study Guide

October 31st, 2017

Just passed my recertification exam today with 92%. 60 questions, 80 minutes.

I finished in 50 minutes and flagged only 11 questions.

This is my 3rd time taking the exam: I first got certified in 2014, then failed to recertify with 64% five months ago, and retried today.

My summary guide for the certification


My Study Guide here:

AWS Certified Solutions Architect Study Guide

(old post version here)


1) EC2 – Elastic Compute Cloud

EC2 supports 2 types of block devices: EBS and instance storage (ephemeral)
Persistent storage (EBS) “Elastic Block Store”: separate from the EC2 instance, attached over the network (like a NAS device). You can detach it, encrypt it, and back it up with snapshots
Ephemeral storage (local) EC2 instance store: physically attached to the host. If the instance is stopped or terminated, data is lost (no STOP option is available); on a reboot (intentional or unintentional) the data remains!
EBS: you pay for what you use: provisioned storage, I/O requests, and snapshot storage.
EBS standard volumes are charged per GB-month of provisioned storage and per million I/O requests.
EBS Provisioned IOPS volumes are charged per GB-month of provisioned storage and per Provisioned IOPS-month.
With EBS volumes, you are charged for provisioned (allocated) storage, whether or not you actually use it.
Storage types: for temporary storage, the best options are local instance volumes, SQS, or ElastiCache (Memcached or Redis)
Device name /dev/sda1 (within Amazon EC2) is reserved for the root device
Per the AWS SLA, if the volume is attached to an EBS-optimized instance, Provisioned IOPS
volumes are designed to deliver within 10% of the provisioned IOPS performance 99.9% of the time in a given year.
General Purpose (SSD) and Provisioned IOPS (SSD) volumes have a throughput limit of 128 MB/s per volume.
There is a relationship between the maximum performance of your EBS volumes, the amount of I/O you
are doing to them, and the amount of time it takes for each transaction to complete.
There is a 5 to 50 percent reduction in IOPS when you first access each block of data on a newly created
or restored EBS volume
Frequent snapshots of EBS volumes provide a higher level of data durability, but they may slightly degrade IOPS.
An EBS volume can range in size from 10 GB to 1 TB, and the user can provision up to 4,000 IOPS per volume. The ratio of IOPS provisioned
to the volume size requested should be a maximum of 30:1; for example, a volume with 3,000 IOPS must be at least 100 GB.
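The sizing rule above can be sketched as a small helper. This is just an illustration of the figures quoted in this note (30:1 ratio, 10 GB–1 TB, 4,000 IOPS max); current AWS limits are different:

```python
def min_volume_size_gb(provisioned_iops, max_ratio=30):
    """Smallest EBS volume size (GB) allowed for the requested IOPS,
    given the 30:1 IOPS-to-size ratio quoted above."""
    # e.g. 3000 IOPS / 30 = 100 GB minimum
    return -(-provisioned_iops // max_ratio)  # ceiling division

def valid_piops_volume(size_gb, provisioned_iops,
                       min_gb=10, max_gb=1024, max_iops=4000, max_ratio=30):
    """Check a (size, IOPS) pair against the limits quoted in this guide."""
    return (min_gb <= size_gb <= max_gb
            and provisioned_iops <= max_iops
            and provisioned_iops <= size_gb * max_ratio)
```

For example, `valid_piops_volume(100, 3000)` passes, while a 50 GB volume with 3,000 IOPS violates the ratio.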
Choose an EBS-optimized instance that provides more dedicated EBS throughput than your application needs; otherwise, the Amazon EBS to Amazon EC2
connection will become a performance bottleneck.
EBS snapshots are only available through the Amazon EC2 API, not the Amazon S3 API.
EBS is particularly helpful for database-style applications that frequently encounter many random reads and writes across the data set
An EBS snapshot can be exposed to specific AWS accounts only: choose Private and enter the ID of the AWS account (without hyphens). A snapshot may slightly degrade the performance of your application while the snapshot is in progress.

EC2 uses SSH keys, passwords, and security groups to control who has access to the OS. There is no method in IAM to allow or deny access to the OS.
EC2 you cannot detach a primary network interface
Operational process performed by AWS for data security? Decommissioning of storage devices using industry-standard practices
EC2-VPC: The instance retains its associated Elastic IP addresses. You’re charged for any Elastic IP addresses associated with a stopped instance.
A block device is a storage device that moves data in sequences of bytes or bits (blocks).
These devices support random access and generally use buffered I/O. Examples include hard disks, CD-ROM drives, and flash drives.

Highly durable storage: S3 offers 11 9’s durability per object per year, as does Glacier. EBS is only 99.5%–99.9%.
Snapshots – with EBS, creating an AMI (snapshot) will by default reboot the instance
EBS snapshots are charged per GB-month of storage actually used (consumed)
To encrypt a snapshot, use the copy option
Public snapshots of encrypted volumes are not supported, but you can share an encrypted snapshot with specific accounts.

EC2 stop /start will cause the underlying host for the instance to change
EC2 uses the EC2Config service to set a new random admin password, and encrypts it using the EC2 key pair’s public key.
The user can get Windows instance password by using console or cli.
EC2 instance type families: Storage iX, dX; Memory xX, rX; GPU pX, gX; FPGA fX; Compute (CPU) cX; General Purpose mX, tX
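The family letters above fit in a small lookup table. A sketch for quick recall (only the families listed in this note, not the full current lineup):

```python
# Map of EC2 instance family prefixes to their category,
# per the list above (not exhaustive of current generations).
FAMILY_CATEGORY = {
    "i": "storage", "d": "storage",
    "x": "memory", "r": "memory",
    "p": "gpu", "g": "gpu",
    "f": "fpga",
    "c": "compute",
    "m": "general", "t": "general",
}

def instance_category(instance_type):
    """Return the category for an instance type like 'r4.large',
    based on its family prefix letter."""
    return FAMILY_CATEGORY.get(instance_type[0], "unknown")
```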


2) S3 – Simple Storage Service

S3 does not provide read-after-write consistency for overwrites; it is recommended to use random keys (file names) for better performance.
IA – Infrequently Accessed data: has a 30-day minimum retention period and a 128 KB minimum object size.
S3 provides read-after-write consistency for PUTs of new objects in your S3 bucket,
and offers eventual consistency for overwrite PUTs and DELETEs in all regions.
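The random-key tip above was commonly applied by prepending a short hash prefix to each key, so lexicographically close names spread across S3's index partitions. A sketch (the 4-character prefix length is an arbitrary choice):

```python
import hashlib

def prefixed_key(key):
    """Prepend a short, deterministic hash prefix to an S3 key so that
    sequential names (e.g. dated log files) don't hit one partition."""
    prefix = hashlib.md5(key.encode("utf-8")).hexdigest()[:4]
    return f"{prefix}/{key}"
```

The prefix is deterministic, so the full key can always be recomputed from the original name.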

Cross-region replication (CRR) replicates every object uploaded to your source bucket to a destination bucket in a different AWS region
S3 event notifications can be sent in response to actions taken on objects uploaded or stored in Amazon S3, to: SNS, SQS, or delivered directly to AWS Lambda
S3 Transfer Acceleration maximizes transfer speeds over long distances, carrying traffic over a highly optimized network bridge between the AWS edge location nearest your clients and your buckets
S3 Transfer Acceleration is a HIPAA-eligible service

Encrypt data using server-side encryption: SSE-S3 (AES-256, AWS-provided keys, no cost); SSE-C (customer-provided keys that you supply);
SSE-KMS (encrypt your data at rest using keys that you manage in the AWS Key Management Service)
S3 provides additional security with Multi-Factor Authentication (MFA) Delete
Amazon S3 bucket policies can be used to add or deny permissions across some or all of the objects within a single bucket.
With query string authentication, you can share Amazon S3 objects through URLs that are valid only for a period of time that you define.
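The legacy (Signature Version 2) form of query string authentication can be sketched with the standard library; this is an illustration only, the bucket, key, and credentials are placeholders, and modern SDKs generate SigV4 presigned URLs instead:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def presign_url_v2(bucket, key, expires_epoch, access_key, secret_key):
    """Build a legacy SigV2-style pre-signed S3 GET URL valid until
    `expires_epoch` (Unix time). Sketch only -- real SDKs use SigV4."""
    # StringToSign for a GET with no Content-MD5 / Content-Type headers
    string_to_sign = f"GET\n\n\n{expires_epoch}\n/{bucket}/{key}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()
    ).decode()
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?AWSAccessKeyId={access_key}"
            f"&Expires={expires_epoch}"
            f"&Signature={quote(sig, safe='')}")
```

Anyone holding the URL can GET the object until `Expires` passes, without needing AWS credentials of their own.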
Permissions: allow access from another AWS account by providing its account ID or e-mail address.
S3 bucket names can contain alphanumeric chars
Use the x-amz-meta- prefix to set metadata when uploading to S3 via the REST API
S3 supports ACL and bucket policy
Multipart upload is required for objects larger than 5 GB; max object size is 5 TB, max number of parts is 10,000, and min part size is 5 MB (except the last part)
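The multipart limits above (5 MB minimum part, 10,000 parts, 5 TB object) imply a minimum part size for very large uploads, which is easy to compute:

```python
MIN_PART = 5 * 1024**2    # 5 MB minimum part size
MAX_PARTS = 10_000        # maximum number of parts per upload
MAX_OBJECT = 5 * 1024**4  # 5 TB maximum object size

def choose_part_size(object_size):
    """Smallest allowed part size (bytes) that fits `object_size`
    in at most MAX_PARTS parts."""
    if object_size > MAX_OBJECT:
        raise ValueError("object exceeds the 5 TB S3 limit")
    by_count = -(-object_size // MAX_PARTS)  # ceiling division
    return max(MIN_PART, by_count)
```

For a 100 MB object the 5 MB floor applies; at the full 5 TB the part-count limit dominates and forces parts of roughly 525 MB.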
S3 upload via console can specify standard or RRS storage class
S3 bucket names are min 3 chars and max 63, and labels can be separated by single dots
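A rough validator for the naming rules above (3–63 chars, lowercase alphanumeric labels joined by single dots; this sketch also allows hyphens inside labels, which are legal but not mentioned in the note):

```python
import re

# A label: lowercase letters/digits, hyphens allowed in the middle.
_LABEL = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"
# Labels joined by single dots (no empty labels, so ".." is rejected).
_BUCKET_RE = re.compile(rf"^{_LABEL}(?:\.{_LABEL})*$")

def valid_bucket_name(name):
    """Approximate check of the S3 bucket naming rules quoted above."""
    return 3 <= len(name) <= 63 and bool(_BUCKET_RE.match(name))
```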
S3 provides protection of data at REST Permission (Policy) Encryption (Client and Server Side) Bucket Versioning and MFA based delete.
S3: to find out when an object will expire, use the GET or HEAD API commands
S3 lifecycle rules could not be applied to a version-enabled bucket (as of 10/2017 it is possible to enable lifecycle even with versioning)
Amazon Import/Export is ideal for uploading large data sets to S3
S3 doesn’t automatically give a user who creates a bucket or object permission to perform other actions on that bucket or object. Therefore,
in your IAM, you must explicitly give users permission to use the S3 resources they create.
S3 provides four different access control mechanisms: (IAM) policies, Access Control Lists (ACLs), bucket policies,and query string authentication.
S3 supports client-side or server-side encryption to encrypt all data at rest. Server-side encryption can either use the S3-supplied AES-256 encryption key, or the user
can send his own encryption key along with each API call. Encryption with a user-supplied key (SSE-C) does not work with the AWS console;
S3 does not store the keys, and the user has to send the key with each request. SSE-C works when the user has enabled versioning.
S3 Access Control List is the document that defines who can access a particular bucket or object in Amazon S3
S3 RRS – Reduced Redundancy Storage provides 99.99% durability; you must design automation around replacing lost objects.
Amazon Glacier automatically encrypts the data using AES-256 and stores it durably in an immutable form.
It is designed to provide average annual durability of 99.999999999% for an archive, storing each archive in
multiple facilities and on multiple devices.
CloudFront can have the following origins: S3, EC2, or a custom-origin HTTP server

3) VPC

VPC: if a virtual private gateway is attached to a VPC and the user deletes the VPC from the console, it will first detach the gateway
automatically and only then delete the VPC
VPC: the highest number for a rule in a network ACL is 32766
VPC Network ACLs are stateless; responses to allowed inbound traffic are subject to the rules for outbound traffic (and vice versa).
VPC peering – allows access from one VPC to another (even across different AWS accounts); peering is non-transitive (1-to-1)
Transit VPC supports several important use cases:
Private Networking – You can build a private network that spans two or more AWS Regions.
Shared Connectivity – Multiple VPCs can share connections to data centers, partner networks, and other clouds.
Cross-Account AWS Usage – The VPCs and the AWS resources within them can reside in multiple AWS accounts.


4) RDS – Relational Database Service

RDS maintenance order (3 steps): 1) Perform maintenance on the standby, 2) Promote the standby to primary, 3) Perform maintenance on the original primary, which becomes the new standby.
RDS storage: maximum channel bandwidth depends on the DB instance class.
RDS Storage You can’t decrease storage allocated for a DB instance.
RDS storage: while Provisioned IOPS can work with I/O sizes up to 256 KB, most databases do not typically use such large I/O. An I/O request smaller than 32 KB is handled as one I/O.
RDS have 3 types of security groups: DB security groups, VPC security groups, and EC2 security groups.
RDS SG: you do not need to specify a destination port number; the port number defined for the DB instance is used as the destination port number for all rules
defined for the DB security group.
RDS read replicas require a transactional storage engine; replication is only supported for InnoDB.
DynamoDB: you provision throughput capacity reserved for reads and writes.
Sharding embodies the “share-nothing” architecture and essentially just involves breaking a larger database up into smaller databases. Common ways to split a database are:
splitting tables that are not joined in the same query onto different hosts, or duplicating a table across multiple hosts and then splitting which rows go where.
RDS allows up to 5 read replicas, and allows second-tier read replicas (replicas of replicas).
RDS: you can only access binary logs if using MySQL 5.6+
DynamoDB – if a query result is too large, use ExclusiveStartKey to get the rest of the results
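The ExclusiveStartKey note above describes the standard DynamoDB pagination loop: each response carries a LastEvaluatedKey that becomes the next request's ExclusiveStartKey. A sketch of the client-side logic, where `query_page` is a hypothetical stand-in for a real Query call (with boto3 it would be `table.query(..., ExclusiveStartKey=...)`):

```python
def fetch_all(query_page):
    """Drain a paginated query. `query_page(start_key)` must return a
    dict shaped like a DynamoDB Query response: {'Items': [...],
    'LastEvaluatedKey': ...}, with the key absent on the final page."""
    items, start_key = [], None
    while True:
        page = query_page(start_key)
        items.extend(page["Items"])
        # LastEvaluatedKey feeds the next ExclusiveStartKey; absent => done
        start_key = page.get("LastEvaluatedKey")
        if start_key is None:
            return items
```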

5)Route 53

LBR – Latency Based Routing automatically routes requests to the closest instance/region
A multi-site solution allows a standby DR site (good for failover, fault tolerance, DR)
Route 53 supports domain name registration
Route 53: the name of each resource record set in a hosted zone must end with the name of the hosted zone (sub-domains)
A Route 53 account is limited to a max of 500 hosted zones and 10,000 records per hosted zone; you can ask for more.
Route 53 currently offers a private DNS service
Use the Route 53 weighted routing policy when you have multiple resources that perform the same function, to route traffic to those resources in
proportions that you specify (for example, one quarter to one server and three quarters to the other).
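The proportions above resolve as each record's weight divided by the sum of all weights; a one-liner shows the 1:3 split (the record names are illustrative):

```python
def traffic_share(weights):
    """Fraction of traffic each record receives under weighted routing:
    its weight divided by the sum of all weights."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

For example, weights of 1 and 3 yield the one-quarter / three-quarters split from the note.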

6) ELB – Elastic Load Balancing

Classic LB – routes traffic based on application- and network-level information
Application LB – routes traffic catering for multiple apps on the same EC2 instance
Setup: define LB > security group > health check > add EC2 instances
Internal or external – select resources in multiple zones



AWS Import/Export supports: import to Amazon S3,
export from Amazon S3, import to Amazon EBS, and import to Amazon Glacier
REST services use the verbs GET, PUT, and POST, with an action as the command to invoke.
Query requests are HTTP or HTTPS requests that use the HTTP verb GET or POST and a query parameter named Action.
A query request consists of the following: endpoint, action, parameters (separated by an ampersand “&”), version, and authorization parameters
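Assembling such a query request is just joining the parameters with `&` onto the endpoint; a sketch (the endpoint, action, and parameter values are illustrative, and the authorization parameters are omitted):

```python
from urllib.parse import urlencode

def build_query_request(endpoint, action, params, version):
    """Assemble a Query-API style GET URL: Action, Version and the
    call parameters joined with '&' (authorization params omitted)."""
    query = {"Action": action, "Version": version, **params}
    # Sort for a deterministic parameter order
    return f"https://{endpoint}/?{urlencode(sorted(query.items()))}"
```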
Data transfer – You are not charged for the data transfer incurred in replicating data between your primary and standby.
Each AWS Direct Connect location enables connectivity to all Availability Zones within the geographically nearest AWS region

Set a CloudWatch alarm on the instance to perform the termination action once the instance is idle.
CloudWatch provides the functionality to graph the generated metric data, and the console provides the option to save the URL and share it,
or bookmark it so that it can be used in the future by typing the same URL. The Copy URL functionality is
available in the console when the user selects any metric to view.
CloudWatch: a user can disable an alarm using the DisableAlarmActions API or, from the command line, mon-disable-alarm-actions.

If a subnet’s traffic is routed to an internet gateway, the subnet is known as a public subnet.

Amazon Redshift achieves efficient storage and optimum query performance through a combination of
massively parallel processing, columnar data storage, and very efficient, targeted data compression encoding


ELB default health check is HTTP on port 80
ELB supports the following versions of the SSL protocol: TLS 1.2, TLS 1.1, TLS 1.0, SSL 3.0, SSL 2.0 (TLS 1.3 was still not in production as of July 2017)

Amazon EMR uses Apache Hadoop as its distributed data processing engine.
Docker is the only container platform supported by EC2 Container Service
IAM gives users fine-grained control over your Amazon S3 buckets or objects by selectively granting certain permissions on individual objects.

ARN Amazon Resource Names, uniquely identify resources
Glacier data is stored in vaults
RDS supports: MySQL, Oracle, MSSQL, PostgreSQL, Aurora
RedShift uses port 5439 by default
AWS FPS – Flexible Payment Service can help with e-commerce (shopping cart)
Easy DKIM is a feature of AWS SES that signs every message sent with a 1024-bit DKIM key
RedShift is a data warehouse that integrates with BI tools for US$1,000/TB/year; supports SSL + VPC


DKIM – DomainKeys Identified Mail allows senders to sign their email messages and ISPs to verify that a message is legitimate
Glacier does not provide version management.
RAID-0 gives the best performance
RAID-1 gives fault tolerance
RAID-5/6 are not recommended
AWS SWF – Simple Workflow Service

AWS SES – Simple Email Service outbound email only

An SNS topic can be created, which generates an ARN
VPN gateways use Dead Peer Detection
ElastiCache is an in-memory caching service; it uses Memcached and Redis

S3DistCp cannot sync an on-premises DB to S3 (Q204 – the listed answer is wrong)
S3DistCp does not support Amazon S3 bucket names that contain an underscore
If S3DistCp fails, it does not clean the temporary HDFS directory, so you must purge the temporary files manually
S3DistCp copies large amounts of data from S3 into HDFS, where it can be processed by subsequent steps in your Amazon EMR cluster.
S3DistCp can copy data between Amazon S3 buckets or from HDFS to Amazon S3.
S3DistCp is more scalable and efficient for parallel copying large numbers of objects across buckets and across AWS accounts.


Tagging: the string value can be from 1 to 128 Unicode characters in length and cannot be prefixed with "aws:" or "rds:".
Tagging: the string may contain only the set of Unicode letters, digits, white-space, '_', '.', '/', '=', '+', '-' (Java regex: "^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-]*)$").
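The Java regex above translates roughly to Python as follows; a small checker for the value rules (1–128 chars, allowed character set, no aws:/rds: prefix). This is an approximation: Python's `\w`/`\s` classes are close to, but not identical with, the Java `\p{L}\p{Z}\p{N}_` set:

```python
import re

# Approximate Python equivalent of the Java regex quoted above.
_TAG_RE = re.compile(r"^[\w\s.:/=+\-]*$")

def valid_tag_value(value):
    """Check a tag value against the rules listed in this note."""
    return (1 <= len(value) <= 128
            and not value.lower().startswith(("aws:", "rds:"))
            and bool(_TAG_RE.match(value)))
```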

If you need to add a large number of users, consider using temporary security credentials. For more information about temporary security credentials, go to Temporary Security Credentials.


Don’t waste money on VCE software or Whizlabs exams.

I do recommend: CloudAcademy and A Cloud Guru; read the white papers, read the FAQs, and watch the YouTube re:Invent presentations.

Study real exam questions for free here:



