Free AWS-Certified-DevOps-Engineer-Professional practice exam materials and labs for Amazon certification candidates. Real success guaranteed with updated AWS-Certified-DevOps-Engineer-Professional PDF dumps and VCE materials. 100% pass the AWS Certified DevOps Engineer Professional exam today!

Q33. You run accounting software in the AWS cloud. This software needs to be online continuously during the day every day of the week, and has a very static requirement for compute resources. You also have other, unrelated batch jobs that need to run once per day at any time of your choosing. How should you minimize cost?

A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

B. Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

C. Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

D. Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.

Answer: A

Explanation:

Because the instance will always be online during the day in a predictable manner, and the batch jobs can run at any time, we should run the batch jobs while the accounting software is off. By alternating these workloads on the same instance we achieve Heavy Utilization, so we should purchase the reservation at that level, as it represents the lowest cost. There is no such thing as a "Full" utilization purchase level on EC2.

Reference:       https://d0.awsstatic.com/whitepapers/Cost_Optimization_with_AWS.pdf


Q34. Your company wants to understand where cost is coming from in the company's production AWS account. There are a number of applications and services running at any given time. Without expending too much initial development time, how best can you give the business a good understanding of which applications cost the most per month to operate?

A. Create an automation script which periodically creates AWS Support tickets requesting detailed intra-month information about your bill.

B. Use custom CloudWatch Metrics in your system, and put a metric data point whenever cost is incurred.

C. Use AWS Cost Allocation Tagging for all resources which support it. Use the Cost Explorer to analyze costs throughout the month.

D. Use the AWS Price API and constantly running resource inventory scripts to calculate total price based on multiplication of consumed resources over time.

Answer: C

Explanation:

Cost Allocation Tagging is a built-in feature of AWS, and when coupled with the Cost Explorer, provides a simple and robust way to track expenses.

You can also use tags to filter views in Cost Explorer. Note that before you can filter views by tags in Cost Explorer, you must have applied tags to your resources and activated them, as described in the following sections. For more information about Cost Explorer, see Analyzing Your Costs with Cost Explorer.

Reference:       http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
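The tag-then-analyze workflow can be illustrated with a small sketch that groups per-resource costs by a cost-allocation tag. The resource names, tag key, and dollar amounts below are invented for illustration; in practice the line items come from the AWS Cost and Usage Report after tags are activated.

```python
from collections import defaultdict

# Hypothetical monthly line items; real data would come from the
# Cost and Usage Report once cost-allocation tags are activated.
line_items = [
    {"resource": "i-0a1", "tags": {"app": "billing"}, "cost": 120.0},
    {"resource": "i-0b2", "tags": {"app": "search"},  "cost": 340.5},
    {"resource": "db-1",  "tags": {"app": "billing"}, "cost": 80.25},
    {"resource": "i-0c3", "tags": {},                 "cost": 15.0},  # untagged
]

def cost_by_tag(items, tag_key):
    """Aggregate cost per tag value; untagged items fall under 'untagged'."""
    totals = defaultdict(float)
    for item in items:
        totals[item["tags"].get(tag_key, "untagged")] += item["cost"]
    return dict(totals)

print(cost_by_tag(line_items, "app"))
```

This is essentially the grouping Cost Explorer performs for you when you filter a view by an activated cost-allocation tag.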


Q35. Your application consists of 10% writes and 90% reads. You currently service all requests through a Route53 Alias Record directed towards an AWS ELB, which sits in front of an EC2 Auto Scaling Group. Your system is getting very expensive when there are large traffic spikes during certain news events, during which many more people request to read similar data all at the same time. What is the simplest and cheapest way to reduce costs and scale with spikes like this?

A. Create an S3 bucket and asynchronously replicate common request responses into S3 objects. When a request comes in for a precomputed response, redirect to AWS S3.

B. Create another ELB and Auto Scaling Group layer mounted on top of the other system, adding a tier to the system. Serve most read requests out of the top layer.

C. Create a CloudFront Distribution and direct Route53 to the Distribution. Use the ELB as an Origin and specify Cache Behaviours to proxy and cache requests which can be served stale.

D. Create a Memcached cluster in AWS ElastiCache. Create cache logic to serve requests which can be served stale from the in-memory cache for increased performance.

Answer: C

Explanation:

CloudFront is ideal for scenarios in which entire requests can be served out of a cache and usage patterns involve heavy reads and spikiness in demand.

A cache behavior is the set of rules you configure for a given URL pattern based on file extensions, file names, or any portion of a URL path on your website (e.g., *.jpg). You can configure multiple cache behaviors for your web distribution. Amazon CloudFront will match incoming viewer requests with your list of URL patterns, and if there is a match, the service will honor the cache behavior you configure for that URL pattern. Each cache behavior can include the following Amazon CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, cookies, and trusted signers for private content.

Reference:     https://aws.amazon.com/cloudfront/dynamic-content/
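The pattern matching described above can be sketched roughly as follows. The patterns and TTL values are invented for illustration; real CloudFront evaluates cache behaviors in the precedence order you configure, falling back to the default `*` behavior.

```python
from fnmatch import fnmatch

# Hypothetical cache behaviors, checked in order; the last entry plays the
# role of CloudFront's default behavior, which matches every request.
behaviors = [
    {"pattern": "/images/*.jpg", "min_ttl": 86400},  # cache static images long
    {"pattern": "/api/*",        "min_ttl": 0},      # proxy dynamic requests
    {"pattern": "*",             "min_ttl": 3600},   # default behavior
]

def match_behavior(path):
    """Return the first cache behavior whose URL pattern matches the path."""
    for behavior in behaviors:
        if fnmatch(path, behavior["pattern"]):
            return behavior
    raise ValueError("no behavior matched")  # unreachable with a '*' default

print(match_behavior("/images/logo.jpg")["min_ttl"])
print(match_behavior("/api/users")["min_ttl"])
```

Because a distribution always has a default behavior, every request resolves to some cache policy, which is what lets CloudFront absorb read spikes for the cacheable 90% of traffic.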


Q36. What is the scope of an EC2 security group?

A. Availability Zone

B. Placement Group

C. Region

D. VPC

Answer: C

Explanation:

A security group is tied to a region and can be assigned only to instances in the same region. You can't enable an instance to communicate with an instance outside its region using security group rules. Traffic from an instance in another region is seen as WAN bandwidth.

Reference:       http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html


Q37. When thinking of AWS Elastic Beanstalk, the 'Swap Environment URLs' feature most directly aids in what?

A. Immutable Rolling Deployments

B. Mutable Rolling Deployments

C. Canary Deployments

D. Blue-Green Deployments 

Answer: D

Explanation:

Simply upload the new version of your application and let your deployment service (AWS Elastic Beanstalk, AWS CloudFormation, or AWS OpsWorks) deploy a new version (green). To cut over to the new version, you simply replace the ELB URLs in your DNS records. Elastic Beanstalk has a Swap Environment URLs feature to facilitate a simpler cutover process.

Reference:        https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
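The blue-green cutover performed by Swap Environment URLs amounts to an atomic exchange of the CNAMEs attached to two environments; a minimal sketch, with invented environment names and hostnames, looks like this:

```python
def swap_environment_urls(envs, blue, green):
    """Swap the CNAME records of two environments, modeling what Elastic
    Beanstalk's 'Swap Environment URLs' does: DNS cuts over to the new
    version without redeploying either environment."""
    envs[blue], envs[green] = envs[green], envs[blue]
    return envs

# Hypothetical environments: 'blue' serves production traffic today.
environments = {
    "app-blue":  "myapp-prod.elasticbeanstalk.example",
    "app-green": "myapp-staging.elasticbeanstalk.example",
}
swap_environment_urls(environments, "app-blue", "app-green")
print(environments["app-blue"])  # green's URL now points at blue's slot
```

Because the swap is a single operation, rollback is just swapping again, which is what makes this pattern attractive for blue-green deployments.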


Q38. Which of these is not a CloudFormation Helper Script?

A. cfn-signal

B. cfn-hup

C. cfn-request

D. cfn-get-metadata 

Answer: C

Explanation:

This is the complete list of CloudFormation Helper Scripts: cfn-init, cfn-signal, cfn-get-metadata, cfn-hup.

Reference:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html


Q39. You need to process long-running jobs once and only once. How might you do this?

A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.

B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.

C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.

D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process. 

Answer: C

Explanation:

The visibility timeout defines how long after a successful receive request SQS waits before allowing the message to be seen by other components; proper configuration prevents duplicate processing.

Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
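The visibility-timeout mechanism can be modeled with a toy in-memory queue (this is not the real SQS API, just a sketch of the semantics): a received message stays in the queue but is hidden from other consumers until the timeout expires or the worker deletes it.

```python
import time

class VisibilityQueue:
    """Toy in-memory model of SQS visibility timeout (not the real API)."""
    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = []  # each entry is [body, invisible_until]

    def send(self, body):
        self.messages.append([body, 0.0])

    def receive(self, now=None):
        """Return the first visible message and hide it for the timeout."""
        now = time.monotonic() if now is None else now
        for msg in self.messages:
            if msg[1] <= now:
                msg[1] = now + self.visibility_timeout
                return msg[0]
        return None  # nothing visible right now

    def delete(self, body):
        """A worker deletes the message once the job finishes successfully."""
        self.messages = [m for m in self.messages if m[0] != body]

q = VisibilityQueue(visibility_timeout=300)  # longer than the job runtime
q.send("long-job")
first = q.receive(now=0)    # worker A takes the job
second = q.receive(now=10)  # worker B sees nothing: the message is hidden
print(first, second)        # prints: long-job None
```

If the worker crashes and never deletes the message, it becomes visible again after the timeout, so setting the timeout longer than the job's runtime is what prevents duplicate processing of a healthy in-flight job.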


Q40. You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport mechanism for API call events?

A. AWS SQS

B. AWS Lambda

C. AWS Kinesis

D. AWS SNS

Answer: C

Explanation:

AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems.

A typical Amazon Kinesis Streams application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards, used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS services. For information about Streams features and pricing, see Amazon Kinesis Streams.

Reference:      http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html
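The buffer-and-transport role Kinesis plays here can be sketched as an ordered, replayable log of API-call events. The class below is a toy single-shard log, not the real Kinesis API; the event payloads are invented for illustration.

```python
import json

class ToyShard:
    """Toy single-shard event log mimicking how a Kinesis stream buffers
    records in order for downstream consumers (not the real Kinesis API)."""
    def __init__(self):
        self.records = []

    def put_record(self, data):
        seq = len(self.records)          # monotonically increasing sequence
        self.records.append((seq, data))
        return seq

    def get_records(self, after_seq=-1):
        """A consumer reads everything past its checkpoint, in order."""
        return [r for r in self.records if r[0] > after_seq]

shard = ToyShard()
# The source system replicates API calls by publishing each one as an event.
shard.put_record(json.dumps({"api": "CreateUser", "args": {"name": "ada"}}))
shard.put_record(json.dumps({"api": "DeleteUser", "args": {"name": "bob"}}))

# The replica system consumes from its last checkpoint and replays the calls.
for seq, event in shard.get_records(after_seq=-1):
    call = json.loads(event)
    print(seq, call["api"])
```

Because each consumer tracks its own checkpoint, the same buffered events can be replayed by multiple downstream systems in order, which is what makes a stream a better fit here than a queue that deletes messages on consumption.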