Your success in the Amazon AWS-Certified-DevOps-Engineer-Professional exam is our sole target, and we develop all our AWS-Certified-DevOps-Engineer-Professional braindumps to help you attain it. Not only is our AWS-Certified-DevOps-Engineer-Professional study material the best you can find, it is also the most detailed and the most up to date. AWS-Certified-DevOps-Engineer-Professional Practice Exams for Amazon AWS-Certified-DevOps-Engineer-Professional are written to the highest standards of technical accuracy.

Q9. What method should I use to author automation if I want a script to wait for a CloudFormation stack to finish?

A. Event subscription using SQS.

B. Event subscription using SNS.

C. Poll using <code>ListStacks</code> / <code>list-stacks</code>.

D. Poll using <code>GetStackStatus</code> / <code>get-stack-status</code>. 

Answer: C

Explanation:

Event-driven systems are good for IFTTT-style logic, but only polling will make a script wait for completion. ListStacks / list-stacks is a real API method; GetStackStatus / get-stack-status is not.

Reference: http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-stacks.html
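The polling approach can be sketched as follows. This is a minimal sketch, not a definitive implementation: `wait_for_stack` and the stubbed status sequence are illustrative, and a real script would obtain `StackStatus` from boto3 (`list_stacks` / `describe_stacks`) or the AWS CLI instead of a stub.

```python
import time

# Terminal stack statuses, per the CloudFormation documentation.
SUCCESS = {"CREATE_COMPLETE", "UPDATE_COMPLETE"}
FAILURE = {"CREATE_FAILED", "ROLLBACK_COMPLETE", "ROLLBACK_FAILED", "DELETE_FAILED"}

def wait_for_stack(get_status, interval=0.0, max_polls=120):
    """Poll get_status() until the stack reaches a terminal state.

    get_status is an injected callable returning the current StackStatus
    string; in practice it would wrap a list-stacks/describe-stacks call.
    """
    for _ in range(max_polls):
        status = get_status()
        if status in SUCCESS:
            return status
        if status in FAILURE:
            raise RuntimeError("stack failed: " + status)
        time.sleep(interval)
    raise TimeoutError("stack did not reach a terminal state in time")

# A stubbed status sequence stands in for repeated list-stacks calls.
statuses = iter(["CREATE_IN_PROGRESS", "CREATE_IN_PROGRESS", "CREATE_COMPLETE"])
print(wait_for_stack(lambda: next(statuses)))  # CREATE_COMPLETE
```

Note that the modern AWS CLI also ships built-in waiters (e.g. `aws cloudformation wait stack-create-complete`), which implement exactly this polling loop for you.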


Q10. For AWS Auto Scaling, what is the first transition state a new instance enters after leaving steady state when scaling out due to increased load?

A. EnteringStandby

B. Pending

C. Terminating:Wait

D. Detaching 

Answer: B

Explanation:

When a scale-out event occurs, the Auto Scaling group launches the required number of EC2 instances using its assigned launch configuration. These instances start in the Pending state. If you add a lifecycle hook to your Auto Scaling group, you can perform a custom action here. For more information, see Lifecycle Hooks.

Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html
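The documented scale-out transitions can be captured in a small helper; `scale_out_states` is an illustrative name, but the state strings themselves (Pending, Pending:Wait, Pending:Proceed, InService) are the real lifecycle states from the Auto Scaling group lifecycle documentation.

```python
def scale_out_states(lifecycle_hook=False):
    """Ordered lifecycle states a new instance passes through on scale-out.

    Without a hook an instance goes Pending -> InService; with a launch
    lifecycle hook it pauses in Pending:Wait for your custom action,
    then moves through Pending:Proceed before entering service.
    """
    if lifecycle_hook:
        return ["Pending", "Pending:Wait", "Pending:Proceed", "InService"]
    return ["Pending", "InService"]

# Either way, the FIRST transition state after steady state is Pending.
print(scale_out_states(lifecycle_hook=True)[0])  # Pending
```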


Q11. What is the scope of an EBS volume?

A. VPC

B. Region

C. Placement Group

D. Availability Zone 

Answer: D

Explanation:

An Amazon EBS volume is tied to its Availability Zone and can be attached only to instances in the same Availability Zone.

Reference:       http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.htmI


Q12. Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the whole VPC and stack at once, and each VPC uses two EIPs. On your new AWS account, your attempt to create a Development environment failed after you successfully created Staging and Production environments in the same region. What happened?

A. You didn't choose the Development version of the AMI you are using.

B. You didn't set the Development flag to true when deploying EC2 instances.

C. You hit the soft limit of 5 EIPs per region and requested a 6th.

D. You hit the soft limit of 2 VPCs per region and requested a 3rd. 

Answer: C

Explanation:

There is a soft limit of 5 EIPs per region for VPC on new accounts. The third environment could not allocate the 6th EIP.

Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_vpc


Q13. Your CTO thinks your AWS account was hacked. What is the only way to know for certain if there was unauthorized access and what they did, assuming your hackers are very sophisticated AWS engineers and doing everything they can to cover their tracks?

A. Use CloudTrail Log File Integrity Validation.

B. Use AWS Config SNS Subscriptions and process events in real time.

C. Use CloudTrail backed up to AWS S3 and Glacier.

D. Use AWS Config Timeline forensics. 

Answer: A

Explanation:

You must use CloudTrail Log File Validation (default or custom implementation), as any other tracking method is subject to forgery in the event of a full account compromise by sufficiently sophisticated hackers. Validated log files are invaluable in security and forensic investigations. For example, a validated log file enables you to assert positively that the log file itself has not changed, or that particular user credentials performed specific API activity. The CloudTrail log file integrity validation process also lets you know if a log file has been deleted or changed, or assert positively that no log files were delivered to your account during a given period of time.

Reference:

http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html
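Why does a hash chain make tampering detectable? The following is a deliberately simplified toy, not CloudTrail's actual scheme (which uses hourly digest files signed with an AWS-managed key): each record's digest folds in the previous digest, so altering or deleting any earlier record breaks every digest after it.

```python
import hashlib
import json

def digest(record, prev_hex):
    """Hash a log record together with the previous digest (simplified chain)."""
    payload = json.dumps(record, sort_keys=True) + prev_hex
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Produce the chained digest for each record in order."""
    chain, prev = [], ""
    for r in records:
        prev = digest(r, prev)
        chain.append(prev)
    return chain

def validate(records, chain):
    """Recompute the chain and compare; any tampering breaks the match."""
    prev = ""
    for r, expected in zip(records, chain):
        prev = digest(r, prev)
        if prev != expected:
            return False
    return True

logs = [{"event": "CreateUser"}, {"event": "DeleteTrail"}]
chain = build_chain(logs)
print(validate(logs, chain))   # True
tampered = [{"event": "CreateUser"}, {"event": "Innocuous"}]
print(validate(tampered, chain))  # False
```

In practice you would run the real verification with the CLI, e.g. `aws cloudtrail validate-logs --trail-arn ... --start-time ...`, rather than implement it yourself.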


Q14. You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use?

A. SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes.

B. Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration.

C. Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch.

D. Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.

Answer: B

Explanation:

The bootstrapping process can be slower if you have a complex application or multiple applications to install. Managing a fleet of applications with several build tools and dependencies can be a challenging task during rollouts. Furthermore, your deployment service should be designed to do faster rollouts to take advantage of Auto Scaling; baking the code into an AMI removes per-instance bootstrapping, so new instances enter service as quickly as possible.

Reference:        https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf


Q15. You want to pass queue messages that are 1GB each. How should you achieve this?

A. Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis Stream in SQS.

B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.

C. Use SQS's support for message partitioning and multi-part uploads on Amazon S3.

D. Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.

Answer: B

Explanation:

You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and retrieving messages with a message size of up to 2 GB. To manage Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java.

Reference:

http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/s3-messages.html
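The Extended Client Library is a Java library, but the pattern it implements (the "claim check": store the large body in S3, pass only a small pointer through SQS) is language-agnostic. The sketch below is illustrative, with `s3_put`/`s3_get`/`sqs_send` as injected stand-ins for real S3 and SQS clients; the pointer format is an assumption, not the library's actual wire format.

```python
import json
import uuid

def send_large_message(s3_put, sqs_send, payload):
    """Store the payload in S3 and send only a small pointer through SQS.

    s3_put(key, data) and sqs_send(body) are injected stand-ins for the
    real boto3 S3/SQS calls.
    """
    key = str(uuid.uuid4())
    s3_put(key, payload)
    sqs_send(json.dumps({"s3_key": key, "size": len(payload)}))
    return key

def receive_large_message(s3_get, body):
    """Resolve an SQS pointer message back to the full S3-stored payload."""
    pointer = json.loads(body)
    return s3_get(pointer["s3_key"])

# A dict and a list stand in for the S3 bucket and SQS queue.
bucket, queue = {}, []
send_large_message(bucket.__setitem__, queue.append, b"x" * 1024)
print(receive_large_message(bucket.get, queue[0]) == b"x" * 1024)  # True
```

The design point is that the SQS message stays tiny regardless of payload size, which is how the library supports bodies up to 2 GB despite SQS's own message-size limit.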


Q16. What is the maximum supported single-volume throughput on EBS?

A. 320MiB/s

B. 160MiB/s

C. 40MiB/s

D. 640MiB/s 

Answer: A

Explanation:

The ceiling throughput for PIOPS on EBS is 320MiB/s.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html