Pass4sure DOP-C01 questions are updated and all DOP-C01 answers are verified by experts. Once you have fully prepared with our DOP-C01 exam prep kits you will be ready for the real DOP-C01 exam without a problem. We offer a leading Amazon Web Services DOP-C01 study guide. PASSED DOP-C01 first attempt! Here's what I did.

Online Amazon Web Services DOP-C01 free dumps demo below:

NEW QUESTION 1
You have an application hosted in AWS. You want to ensure that a DevOps engineer is notified when certain thresholds are reached. Choose 3 answers from the options given below.

  • A. Use CloudWatch Logs agent to send log data from the app to CloudWatch Logs from Amazon EC2 instances
  • B. Pipe data from EC2 to the application logs using AWS Data Pipeline and CloudWatch
  • C. Once a CloudWatch alarm is triggered, use SNS to notify the Senior DevOps Engineer.
  • D. Set the threshold your application can tolerate in a CloudWatch Logs group and link a CloudWatch alarm on that threshold.

Answer: ACD

Explanation:
You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as "NullReferenceException") or count the number of occurrences of a literal term at a particular position in log data (such as "404" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify. For more information on CloudWatch Logs please refer to the below link:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html
Amazon CloudWatch uses Amazon SNS to send email. First, create and subscribe to an SNS topic.
When you create a CloudWatch alarm, you can add this SNS topic to send an email notification when the alarm changes state.
For more information on Cloudwatch and SNS please refer to the below link: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/US_SetupSNS.html
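As a rough illustration of what such a metric filter does, the sketch below counts "404" status codes at a fixed position in invented Apache-style log lines and compares the count against a threshold (the comparison is the part a CloudWatch alarm would handle before publishing to SNS):

```python
# Toy model of a CloudWatch Logs metric filter: count "404" status codes in
# Apache common-log lines, then compare the count against a threshold.
# All log lines here are made up.
def count_status(log_lines, status="404"):
    # In an Apache common-log line, the status code is the 9th field.
    count = 0
    for line in log_lines:
        fields = line.split()
        if len(fields) > 8 and fields[8] == status:
            count += 1
    return count

logs = [
    '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326',
    '127.0.0.1 - - [10/Oct/2023:13:55:40 +0000] "GET /missing.html HTTP/1.1" 404 209',
    '127.0.0.1 - - [10/Oct/2023:13:55:41 +0000] "GET /also-gone HTTP/1.1" 404 209',
]

THRESHOLD = 1
errors = count_status(logs)        # 2 matching lines
breached = errors > THRESHOLD      # a CloudWatch alarm would fire here and notify via SNS
```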

NEW QUESTION 2
Which of the following is incorrect when it comes to using the instances in an OpsWorks stack?

  • A. In a stack you can use a mix of both Windows and Linux operating systems
  • B. You can start and stop instances manually in a stack
  • C. You can use custom AMIs as long as they are based on one of the AWS OpsWorks Stacks-supported AMIs
  • D. You can use time-based automatic scaling with any stack

Answer: A

Explanation:
The AWS documentation mentions the following about OpsWorks stacks:
• A stack's instances can run either Linux or Windows.
A stack can have different Linux versions or distributions on different instances, but you cannot mix Linux and Windows instances.
• You can use custom AMIs (Amazon Machine Images), but they must be based on one of the AWS OpsWorks Stacks-supported AMIs.
• You can start and stop instances manually or have AWS OpsWorks Stacks automatically scale the number of instances. You can use time-based automatic scaling with any stack; Linux stacks also can use load-based scaling.
• In addition to using AWS OpsWorks Stacks to create Amazon EC2 instances, you can also register instances with a Linux stack that were created outside of AWS OpsWorks Stacks.
For more information on OpsWorks stacks, please visit the below link: http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os.html

NEW QUESTION 3
You have a code repository that uses Amazon S3 as a data store. During a recent audit of your security controls, some concerns were raised about maintaining the integrity of the data in the Amazon S3 bucket. Another concern was raised around securely deploying code from Amazon S3 to applications running on Amazon EC2 in a virtual private cloud. What are some measures that you can implement to mitigate these concerns? Choose two answers from the options given below.

  • A. Add an Amazon S3 bucket policy with a condition statement to allow access only from Amazon EC2 instances with RFC 1918 IP addresses and enable bucket versioning.
  • B. Add an Amazon S3 bucket policy with a condition statement that requires multi-factor authentication in order to delete objects and enable bucket versioning.
  • C. Use a configuration management service to deploy AWS Identity and Access Management user credentials to the Amazon EC2 instances, and use these credentials to securely access the Amazon S3 bucket when deploying code.
  • D. Create an AWS Identity and Access Management role with authorization to access the Amazon S3 bucket, and launch all of your application's Amazon EC2 instances with this role.
  • E. Use AWS Data Pipeline to lifecycle the data in your Amazon S3 bucket to Amazon Glacier on a weekly basis.
  • F. Use AWS Data Pipeline with multi-factor authentication to securely deploy code from the Amazon S3 bucket to your Amazon EC2 instances.

Answer: BD

Explanation:
You can add another layer of protection by enabling MFA Delete on a versioned bucket. Once you do
so, you must provide your AWS account's access keys and a
valid code from the account's MFA device in order to permanently delete an object version or suspend or reactivate versioning on the bucket.
For more information on MFA please refer to the below link: https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Instead of creating and distributing your AWS credentials, you can delegate permission to make API requests using IAM roles. For more information on roles for EC2 please refer to the below link: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
Option A is invalid because this will not address either the integrity or security concern completely. Option C is invalid because user credentials should never be used on EC2 instances to access AWS resources.
Options E and F are invalid because AWS Data Pipeline is unnecessary overhead when you already have built-in controls to manage security for S3.
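As an illustration of the MFA condition described in option B, the sketch below builds a hypothetical bucket policy (the bucket name is invented) that denies object deletes unless the request is MFA-authenticated. Note that MFA Delete itself is enabled through the bucket's versioning configuration; this policy is a complementary, illustrative control, not the official mechanism:

```python
import json

# Illustrative S3 bucket policy: deny object deletion unless the request
# carries MFA. Bucket name is hypothetical.
bucket = "my-code-repo-bucket"  # placeholder name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteWithoutMFA",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": f"arn:aws:s3:::{bucket}/*",
        # BoolIfExists treats requests with no MFA context as "false".
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
policy_json = json.dumps(policy, indent=2)  # the document you would attach to the bucket
```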

NEW QUESTION 4
When deploying applications to Elastic Beanstalk, which of the following statements is false with regards to application deployment?

  • A. The application can be bundled in a zip file
  • B. Can include parent directories
  • C. Should not exceed 512 MB in size
  • D. Can be a war file which can be deployed to the application server

Answer: B

Explanation:
The AWS Documentation mentions
When you use the AWS Elastic Beanstalk console to deploy a new application or an application version, you'll need to upload a source bundle. Your source bundle must meet the following requirements:
• Consist of a single ZIP file or WAR file (you can include multiple WAR files inside your ZIP file)
• Not exceed 512 MB
• Not include a parent folder or top-level directory (subdirectories are fine)
For more information on deploying applications to Elastic Beanstalk please see the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/applications-sourcebundle.html
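The three source-bundle rules can be checked mechanically. The following is a minimal sketch (not an official tool) that validates an in-memory ZIP against them:

```python
import io
import zipfile

# Sketch: validate an Elastic Beanstalk source bundle against the three
# rules above (single ZIP/WAR, at most 512 MB, no parent folder).
MAX_BYTES = 512 * 1024 * 1024

def validate_bundle(data):
    problems = []
    if len(data) > MAX_BYTES:
        problems.append("bundle exceeds 512 MB")
    if not zipfile.is_zipfile(io.BytesIO(data)):
        problems.append("bundle is not a ZIP/WAR archive")
        return problems
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        top = {name.split("/", 1)[0] for name in zf.namelist()}
        # A single top-level directory usually means the app was zipped with
        # its parent folder, which Elastic Beanstalk rejects.
        if len(top) == 1 and all(n.startswith(next(iter(top)) + "/") for n in zf.namelist()):
            problems.append("bundle wraps everything in a parent folder")
    return problems

# A good bundle: files live at the archive root.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("app.py", "print('hello')")
    zf.writestr("requirements.txt", "flask\n")
ok = validate_bundle(buf.getvalue())

# A bad bundle: everything wrapped in a parent folder.
buf2 = io.BytesIO()
with zipfile.ZipFile(buf2, "w") as zf:
    zf.writestr("myapp/app.py", "print('hello')")
bad = validate_bundle(buf2.getvalue())
```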

NEW QUESTION 5
You are creating an application which stores extremely sensitive financial information. All information in the system must be encrypted at rest and in transit. Which of these is a violation of this policy?

  • A. ELB SSL termination.
  • B. ELB Using Proxy Protocol v1.
  • C. CloudFront Viewer Protocol Policy set to HTTPS redirection.
  • D. Telling S3 to use AES256 on the server-side.

Answer: A

Explanation:
If you use SSL termination, your servers will always receive unencrypted connections and will never know whether users used a more secure channel or not. If you are using Elastic Beanstalk to configure the ELB, you can use the below article to ensure end-to-end encryption. http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html

NEW QUESTION 6
A company is running three production web server reserved EC2 instances with EBS-backed root volumes. These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems? Choose the correct answer from the options given below

  • A. Consider using on-demand instances instead of reserved EC2 instances
  • B. Consider not using a Multi-AZ RDS deployment for the development database
  • C. Consider using spot instances instead of reserved EC2 instances
  • D. Consider removing the Elastic Load Balancer

Answer: B

Explanation:
Multi-AZ databases are better suited for production environments than for development environments, so you can reduce costs by not using Multi-AZ for the development database.
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable.
In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention
For more information on Multi-AZ RDS, please refer to the below link: https://aws.amazon.com/rds/details/multi-az/

NEW QUESTION 7
Which of the following CloudFormation helper scripts can help install packages on EC2 resources?

  • A. cfn-init
  • B. cfn-signal
  • C. cfn-get-metadata
  • D. cfn-hup

Answer: A

Explanation:
The AWS Documentation mentions
Currently, AWS CloudFormation provides the following helpers:
cfn-init: Used to retrieve and interpret the resource metadata, install packages, create files, and start services.
cfn-signal: A simple wrapper to signal an AWS CloudFormation CreationPolicy or WaitCondition, enabling you to synchronize other resources in the stack with the application being ready.
cfn-get-metadata: A wrapper script making it easy to retrieve either all metadata defined for a resource or the path to a specific key or subtree of the resource metadata.
cfn-hup: A daemon to check for updates to metadata and execute custom hooks when changes are detected.
For more information on helper scripts, please visit the below URL: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-helper-scripts-reference.html
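As a hedged illustration of how cfn-init is typically wired up (the resource name, AMI ID, and package below are placeholders): the instance's metadata lists the packages, and UserData invokes cfn-init to apply them.

```yaml
# Illustrative CloudFormation fragment; values are examples only.
WebServer:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        packages:
          yum:
            httpd: []        # cfn-init installs this via yum
  Properties:
    ImageId: ami-12345678    # placeholder AMI
    InstanceType: t2.micro
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} \
          --resource WebServer --region ${AWS::Region}
```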

NEW QUESTION 8
Your public website uses a load balancer and an Auto Scaling group in a virtual private cloud. Your chief security officer has asked you to set up a monitoring system that quickly detects and alerts your team when a large sudden traffic increase occurs. How should you set this up?

  • A. Set up an Amazon CloudWatch alarm for the Elastic Load Balancing NetworkIn metric and then use Amazon SNS to alert your team.
  • B. Use an Amazon EMR job that runs every thirty minutes to analyze the Elastic Load Balancing access logs in a batch manner to detect a sharp increase in traffic, and then use the Amazon Simple Email Service to alert your team.
  • C. Use an Amazon EMR job that runs every thirty minutes to analyze the CloudWatch logs from your application's Amazon EC2 instances in a batch manner to detect a sharp increase in traffic, and then use an Amazon SNS SMS notification to alert your team.
  • D. Set up an Amazon CloudWatch alarm for the Amazon EC2 NetworkIn metric for the Auto Scaling group and then use Amazon SNS to alert your team.
  • E. Set up a cron job to actively monitor the AWS CloudTrail logs for increased traffic and use Amazon SNS to alert your team.

Answer: D

Explanation:
The below snapshot from the AWS documentation gives details on the NetworkIn metric.
[Exhibit: AWS documentation snapshot of the NetworkIn metric]
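The alarm in option D boils down to threshold evaluation over consecutive metric datapoints. A toy sketch of that logic, with invented NetworkIn values:

```python
# Rough model of CloudWatch alarm evaluation: the alarm fires only when the
# last `evaluation_periods` datapoints all breach the threshold.
# All numbers below are invented for illustration.
def alarm_state(datapoints, threshold, evaluation_periods):
    """Return 'ALARM' if the most recent `evaluation_periods` datapoints all breach."""
    recent = datapoints[-evaluation_periods:]
    if len(recent) == evaluation_periods and all(d > threshold for d in recent):
        return "ALARM"
    return "OK"

network_in = [1_200_000, 1_150_000, 9_800_000, 11_400_000]  # bytes per minute
state = alarm_state(network_in, threshold=5_000_000, evaluation_periods=2)
# On transition to ALARM, CloudWatch would publish to the subscribed SNS topic.
```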

NEW QUESTION 9
When creating an Elastic Beanstalk environment using the wizard, what are the 3 configuration options presented to you?

  • A. Choosing the type of environment - Web or Worker environment
  • B. Choosing the platform type - Node.js, IIS, etc.
  • C. Choosing the type of notification - SNS or SQS
  • D. Choosing whether you want a highly available environment or not

Answer: ABD

Explanation:
The below screens are what are presented to you when creating an Elastic Beanstalk environment
[Exhibit: Elastic Beanstalk environment creation wizard screens]
The high availability preset includes a load balancer; the low cost preset does not. For more information on the configuration settings, please refer to the below link: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-create-wizard.html

NEW QUESTION 10
You need to implement Blue/Green deployment for several multi-tier web applications. Each of them has its individual infrastructure:
Amazon Elastic Compute Cloud (EC2) front-end servers, Amazon ElastiCache clusters, Amazon Simple Queue Service (SQS) queues, and Amazon Relational Database Service (RDS) instances.
Which combination of services would give you the ability to control traffic between different deployed versions of your application?

  • A. Create one AWS Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application. New versions would be deployed using Elastic Beanstalk environments and the Swap URLs feature.
  • B. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application. New versions would be deployed using AWS CloudFormation templates to create new Elastic Beanstalk environments, and traffic would be balanced between them using weighted round robin (WRR) records in Amazon Route 53.
  • C. Using AWS CloudFormation templates, create one Elastic Beanstalk application and all AWS resources (in the same template) for each web application. New versions would be deployed by updating a parameter on the CloudFormation template and passing it to the cfn-hup helper daemon, and traffic would be balanced between them using weighted round robin (WRR) records in Amazon Route 53.
  • D. Create one Elastic Beanstalk application and all AWS resources (using configuration files inside the application source bundle) for each web application. New versions would be deployed by updating the Elastic Beanstalk application version for the current Elastic Beanstalk environment.

Answer: B

Explanation:
This is an example of a Blue/Green deployment.
[Exhibit: Blue/Green deployment diagram]
With Amazon Route 53, you can define a percentage of traffic to go to the green environment and gradually update the weights until the green environment carries the full production traffic. A weighted distribution provides the ability to perform canary analysis, where a small percentage of production traffic is introduced to a new environment. You can test the new code and monitor for errors, limiting the blast radius if any issues are encountered. It also allows the green environment to scale out to support the full production load if you're using Elastic Load Balancing.
When it's time to promote the green environment/stack into production, update DNS records to point to the green environment/stack's load balancer. You can also do this DNS flip gradually by using the Amazon Route 53 weighted routing policy. For more information on Blue/Green deployments, please refer to the link:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
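The weighted round robin behavior described above can be modeled in a few lines. The weights and seed below are purely illustrative:

```python
import random

# Toy model of Route 53 weighted round robin for a blue/green cutover:
# each record's share of traffic is weight / sum(weights).
def pick_environment(weights, rng):
    names, values = zip(*weights.items())
    return rng.choices(names, weights=values, k=1)[0]

rng = random.Random(42)  # seeded so the simulation is reproducible

# Start of canary: 90% blue, 10% green.
weights = {"blue": 90, "green": 10}
sample = [pick_environment(weights, rng) for _ in range(1000)]
green_share = sample.count("green") / len(sample)  # roughly 0.10

# Full cutover: weight 0 sends no traffic to blue.
weights = {"blue": 0, "green": 100}
all_green = all(pick_environment(weights, rng) == "green" for _ in range(100))
```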

NEW QUESTION 11
There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts
stopped working in the region with the outage. What might be the issue?

  • A. The AWS Console is down, so your CLI commands do not work.
  • B. S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes.
  • C. AWS turns off the DeployCode API call when there are major outages, to protect from system floods.
  • D. None of the other answers make sense. If EC2 is not affected, it must be some other issue.

Answer: B

Explanation:
EBS snapshots are stored in S3, so if you have any scripts that deploy EC2 instances, the EBS volumes need to be constructed from snapshots stored in S3.
You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume. For more information on EBS snapshots, please visit the below URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

NEW QUESTION 12
Your company releases new features with high frequency while demanding high application availability. As part of the application's A/B testing, logs from each updated Amazon EC2 instance of the application need to be analyzed in near real-time, to ensure that the application is working
flawlessly after each deployment. If the logs show any anomalous behavior, then the application version of the instance is changed to a more stable one. Which of the following methods should you use for shipping and analyzing the logs in a highly available manner?

  • A. Ship the logs to Amazon S3 for durability and use Amazon EMR to analyze the logs in a batch manner each hour.
  • B. Ship the logs to Amazon CloudWatch Logs and use Amazon EMR to analyze the logs in a batch manner each hour.
  • C. Ship the logs to an Amazon Kinesis stream and have the consumers analyze the logs in a live manner.
  • D. Ship the logs to a large Amazon EC2 instance and analyze the logs in a live manner.

Answer: C

Explanation:
You can use Kinesis Streams for rapid and continuous data intake and aggregation. The type of data used includes IT infrastructure log data, application logs, social media, market data feeds, and web clickstream data. Because the response time for the data intake and processing is in real time, the processing is typically lightweight.
The following are typical scenarios for using Kinesis Streams:
• Accelerated log and data feed intake and processing - You can have producers push data directly into a stream. For example, push system and application logs and they'll be available for processing in seconds. This prevents the log data from being lost if the front end or application server fails. Kinesis Streams provides accelerated data feed intake because you don't batch the data on the servers before you submit it for intake.
• Real-time metrics and reporting - You can use data collected into Kinesis Streams for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.
For more information on Amazon Kinesis please refer to the below link:
• http://docs.aws.amazon.com/streams/latest/dev/introduction.html
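A Kinesis consumer doing this kind of live analysis is essentially maintaining a sliding window over the stream. A minimal sketch of that idea (thresholds and records are invented):

```python
from collections import deque

# Sketch of a stream consumer's anomaly check: keep a sliding window of the
# most recent log records and flag the source as anomalous when its error
# rate in the window crosses a threshold.
class ErrorRateMonitor:
    def __init__(self, window_size=100, max_error_rate=0.05):
        self.window = deque(maxlen=window_size)
        self.max_error_rate = max_error_rate

    def observe(self, record):
        """record is a dict like {'level': 'ERROR'}; returns True if anomalous."""
        self.window.append(record["level"] == "ERROR")
        rate = sum(self.window) / len(self.window)
        return rate > self.max_error_rate

monitor = ErrorRateMonitor(window_size=10, max_error_rate=0.2)
results = [monitor.observe({"level": lvl})
           for lvl in ["INFO"] * 7 + ["ERROR"] * 3]
anomalous = results[-1]  # 3 errors in the last 10 records: 0.3 > 0.2
```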

NEW QUESTION 13
You have carried out a deployment using Elastic Beanstalk with the All at once method, but the application is unavailable. What could be the reason for this?

  • A. You need to configure ELB along with Elastic Beanstalk
  • B. You need to configure Route53 along with Elastic Beanstalk
  • C. There will always be a few seconds of downtime before the application is available
  • D. The cooldown period is not properly configured for Elastic Beanstalk

Answer: C

Explanation:
The AWS Documentation mentions
Because Elastic Beanstalk uses a drop-in upgrade process, there might be a few seconds of downtime. Use rolling deployments to minimize the effect of deployments on your production environments.
For more information on troubleshooting Elastic Beanstalk, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/troubleshooting-deployments.html
• https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html
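To avoid the brief outage described above, the deployment policy can be switched from the default to rolling via an .ebextensions option_settings file. This fragment is illustrative; the batch size is an example value:

```yaml
# Illustrative .ebextensions snippet switching from "all at once" to
# rolling deployments so only a fraction of instances update at a time.
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Rolling
    BatchSizeType: Percentage
    BatchSize: 30
```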

NEW QUESTION 14
You have an application running in us-west-2 that requires 6 EC2 instances running at all times. With 3 AZs available in that region, which of the following deployments provides 100% fault tolerance if any single AZ in us-west-2 becomes unavailable? Choose 2 answers from the options below.

  • A. us-west-2a with 2 instances, us-west-2b with 2 instances, us-west-2c with 2 instances
  • B. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 0 instances
  • C. us-west-2a with 4 instances, us-west-2b with 2 instances, us-west-2c with 2 instances
  • D. us-west-2a with 6 instances, us-west-2b with 6 instances, us-west-2c with 0 instances
  • E. us-west-2a with 3 instances, us-west-2b with 3 instances, us-west-2c with 3 instances

Answer: DE

Explanation:
Since we need 6 instances running at all times even if one AZ fails, only options D and E fulfil this requirement. The AWS documentation mentions the following on Availability Zones:
When you launch an instance, you can select an Availability Zone or let us choose one for you. If you distribute your instances across multiple Availability Zones and one instance fails, you can design your application so that an instance in another Availability Zone can handle requests.
For more information on Regions and AZ's please visit the URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
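The elimination logic for this question can be checked directly: a distribution qualifies only if removing any single AZ still leaves at least 6 running instances.

```python
# Check each answer option: 100% fault tolerant here means that after
# removing any single AZ's instances, at least 6 remain.
REQUIRED = 6

def fault_tolerant(distribution):
    total = sum(distribution)
    return all(total - az >= REQUIRED for az in distribution)

options = {
    "A": [2, 2, 2],
    "B": [3, 3, 0],
    "C": [4, 2, 2],
    "D": [6, 6, 0],
    "E": [3, 3, 3],
}
survivors = sorted(k for k, d in options.items() if fault_tolerant(d))  # → ["D", "E"]
```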

NEW QUESTION 15
Which of the following cache engines does OpsWorks have built-in support for?

  • A. Redis
  • B. Memcached
  • C. Both Redis and Memcached
  • D. There is no built-in support as of yet for any cache engine

Answer: B

Explanation:
The AWS Documentation mentions
AWS OpsWorks Stacks provides built-in support for Memcached. However, if Redis better suits your requirements, you can customize your stack so that your application servers use ElastiCache for Redis. Although it works with Redis clusters, AWS clearly specifies that AWS OpsWorks Stacks provides built-in support only for Memcached.
Amazon ElastiCache is an AWS service that makes it easy to provide caching support for your application server, using either the Memcached or Redis caching engines. ElastiCache can be used to improve the performance of application servers running on AWS OpsWorks stacks.
For more information on OpsWorks and cache engines please refer to the below link:
• http://docs.aws.amazon.com/opsworks/latest/userguide/other-services-redis.html

NEW QUESTION 16
When an Auto Scaling group is running in Amazon Elastic Compute Cloud (EC2), your application rapidly scales up and down in response to load within a 10-minute window; however, after the load peaks, you begin to see problems in your configuration management system where previously terminated Amazon EC2 resources are still showing as active. What would be a reliable and efficient way to handle the cleanup of Amazon EC2 resources within your configuration management system? Choose two answers from the options given below

  • A. Write a script that is run by a daily cron job on an Amazon EC2 instance and that executes API Describe calls of the EC2 Auto Scaling group and removes terminated instances from the configuration management system.
  • B. Configure an Amazon Simple Queue Service (SQS) queue for Auto Scaling actions that has a script that listens for new messages and removes terminated instances from the configuration management system.
  • C. Use your existing configuration management system to control the launching and bootstrapping of instances to reduce the number of moving parts in the automation.
  • D. Write a small script that is run during Amazon EC2 instance shutdown to de-register the resource from the configuration management system.

Answer: AD

Explanation:
There is a rich set of CLI commands available for EC2 instances. The CLI reference is located at the following link:
• http://docs.aws.amazon.com/cli/latest/reference/ec2/
You can then use the describe-instances command to describe the EC2 instances.
If you specify one or more instance IDs, Amazon EC2 returns information for those instances. If you do not specify instance IDs, Amazon EC2 returns information for all relevant instances. If you specify an instance ID that is not valid, an error is returned. If you specify an instance that you do not own, it is not included in the returned results.
• http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html
You can use the describe-instances output to determine which instances need to be removed from the configuration management system.
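The reconciliation in option A reduces to a set difference between what the configuration management system has on record and what describe-instances reports as running. A sketch with fabricated instance IDs:

```python
# Sketch of the cleanup in option A: any instance the configuration
# management system (CMS) lists that EC2 no longer reports as running
# is stale and should be removed. All IDs are fabricated.
def stale_entries(cms_instances, ec2_running):
    return sorted(set(cms_instances) - set(ec2_running))

cms_instances = ["i-0aaa", "i-0bbb", "i-0ccc"]   # what the CMS has on record
ec2_running = ["i-0aaa", "i-0ccc"]               # what describe-instances returns
to_remove = stale_entries(cms_instances, ec2_running)  # → ["i-0bbb"]
```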

NEW QUESTION 17
You have deployed a CloudFormation template which is used to spin up resources in your account. Which of the following statuses in CloudFormation represents a failure?

  • A. UPDATE_COMPLETE_CLEANUP_IN_PROGRESS
  • B. DELETE_COMPLETE
  • C. ROLLBACK_IN_PROGRESS
  • D. UPDATE_IN_PROGRESS

Answer: C

Explanation:
AWS CloudFormation provisions and configures resources by making calls to the AWS services that are described in your template.
After all the resources have been created, AWS CloudFormation reports that your stack has been created. You can then start using the resources in your stack. If stack creation fails, AWS CloudFormation rolls back your changes by deleting the resources that it created.
The below snapshot from CloudFormation shows what happens when there is an error in stack creation.
[Exhibit: CloudFormation stack events showing a rollback after a failed creation]
For more information on how CloudFormation works, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-whatis-howdoesitwork.html
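A simple way to reason about the options: any stack status containing ROLLBACK or FAILED is on a failure path. A sketch over the four statuses listed (a subset of the real status values):

```python
# Classify CloudFormation stack statuses: ROLLBACK or FAILED in the name
# indicates a failure path; the list below covers only this question's options.
def indicates_failure(status):
    return "ROLLBACK" in status or "FAILED" in status

statuses = [
    "UPDATE_COMPLETE_CLEANUP_IN_PROGRESS",
    "DELETE_COMPLETE",
    "ROLLBACK_IN_PROGRESS",
    "UPDATE_IN_PROGRESS",
]
failures = [s for s in statuses if indicates_failure(s)]  # → ["ROLLBACK_IN_PROGRESS"]
```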

NEW QUESTION 18
Your company uses an application hosted in AWS which consists of EC2 instances. The logs of the EC2 instances need to be processed and analyzed in real time, since this is a requirement from the IT Security department. Which of the following can be used to process the logs in real time?

  • A. Use CloudWatch Logs to process and analyze the logs in real time
  • B. Use Amazon Glacier to store the logs and then use Amazon Kinesis to process and analyze the logs in real time
  • C. Use Amazon S3 to store the logs and then use Amazon Kinesis to process and analyze the logs in real time
  • D. Use another EC2 instance with a larger instance type to process the logs

Answer: C

Explanation:
The AWS documentation mentions the below.
Real-time metrics and reporting
You can use data collected into Kinesis Streams for simple data analysis and reporting in real time. For example, your data-processing application can work on metrics and reporting for system and application logs as the data is streaming in, rather than wait to receive batches of data.
Real-time data analytics
This combines the power of parallel processing with the value of real-time data. For example, process website clickstreams in real time, and then analyze site usability engagement using multiple different Kinesis Streams applications running in parallel.
Amazon Glacier is meant for Archival purposes and should not be used for storing the logs for real time processing.
For more information on Amazon Kinesis, please refer to the below link:
• http://docs.aws.amazon.com/streams/latest/dev/introduction.html

NEW QUESTION 19
You've been tasked with improving the current deployment process by making it easier to deploy and reducing the time it takes. You have been tasked with creating a continuous integration (CI) pipeline that can build AMIs. Which of the below is the best manner to get this done? Assume that at most your development team will be deploying builds 5 times a week.

  • A. Use a dedicated EC2 instance with an EBS volume. Download and configure the code and then create an AMI out of that.
  • B. Use OpsWorks to launch an EBS-backed instance, then use a recipe to bootstrap the instance, and then have the CI system use the CreateImage API call to make an AMI from it.
  • C. Upload the code and dependencies to Amazon S3, launch an instance, download the package from Amazon S3, then create the AMI with the CreateSnapshot API call.
  • D. Have the CI system launch a new instance, then bootstrap the code and dependencies on that instance, and create an AMI using the CreateImage API call.

Answer: D

Explanation:
Since the number of builds is just a few per week, open source systems such as Jenkins can be used as CI-based systems.
Jenkins can be used as an extensible automation server; it can act as a simple CI server or be turned into the continuous delivery hub for any project.
For more information on the Jenkins CI tool please refer to the below link:
• https://jenkins.io/
Options A and C are partially correct, but since you have just 5 deployments per week, having dedicated instances that incur costs is not required. Option B is partially correct, but again, having a separate system such as OpsWorks for such a low number of deployments is not required.

NEW QUESTION 20
You need to investigate one of the instances which is part of your Auto Scaling group. How would you implement this?

  • A. Suspend the AZRebalance process so that Auto Scaling will not terminate the instance
  • B. Put the instance in a standby state
  • C. Put the instance in an InService state
  • D. Suspend the AddToLoadBalancer process

Answer: B

Explanation:
The AWS Documentation mentions
Auto Scaling enables you to put an instance that is in the InService state into the Standby state, update or troubleshoot the instance, and then return the instance to service. Instances that are on standby are still part of the Auto Scaling group, but they do not actively handle application traffic.
For more information on the standby state please refer to the below link:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/as-enter-exit-standby.html

NEW QUESTION 21
Your company has an application hosted in AWS which makes use of DynamoDB. There is a requirement from the IT security department to ensure that all source IP addresses which make calls to the DynamoDB tables are recorded. Which of the following services can be used to ensure this requirement is fulfilled?

  • A. AWS CodeCommit
  • B. AWS CodePipeline
  • C. AWS CloudTrail
  • D. AWS CloudWatch

Answer: C

Explanation:
The AWS Documentation mentions the following
DynamoDB is integrated with CloudTrail, a service that captures low-level API requests made by or on behalf of DynamoDB in your AWS account and delivers the log files to an Amazon S3 bucket that you specify. CloudTrail captures calls made from the DynamoDB console or from the DynamoDB low-level API. Using the information collected by CloudTrail, you can determine what request was made to DynamoDB, the source IP address from which the request was made, who made the request, when it was made, and so on.
For more information on DynamoDB and CloudTrail, please refer to the below link:
• http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/logging-using-cloudtrail.html

NEW QUESTION 22
You have deployed an Elastic Beanstalk application in a new environment and want to save the current state of your environment in a document. You want to be able to restore your environment to the current state later or possibly create a new environment. You also want to make sure you have a restore point. How can you achieve this?

  • A. Use CloudFormation templates
  • B. Configuration Management Templates
  • C. Saved Configurations
  • D. Saved Templates

Answer: C

Explanation:
You can save your environment's configuration as an object in Amazon S3 that can be applied to other environments during environment creation, or applied to a running environment. Saved configurations are YAML formatted templates that define an environment's platform configuration, tier, configuration option settings, and tags.
For more information on Saved Configurations please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environment-configuration-savedconfig.html
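A saved configuration might look roughly like the following YAML fragment. The solution stack, option values, and description here are hypothetical; the section names (OptionSettings, EnvironmentTier, and so on) follow the saved-configuration format described in the link above.

```yaml
# Hypothetical Elastic Beanstalk saved configuration (a restore point for an
# environment). All concrete values below are illustrative.
EnvironmentConfigurationMetadata:
  Description: Restore point for the my-app production environment
SolutionStack: 64bit Amazon Linux running Python
OptionSettings:
  aws:autoscaling:asg:
    MinSize: '2'
    MaxSize: '4'
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
EnvironmentTier:
  Name: WebServer
  Type: Standard
```

Applying this document to a new or running environment restores the captured platform configuration, tier, and option settings.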

NEW QUESTION 23
You have an asynchronous processing application using an Auto Scaling Group and an SQS Queue. The Auto Scaling Group scales according to the depth of the job queue. The completion velocity of the jobs has gone down, the Auto Scaling Group size has maxed out, but the inbound job velocity did not increase. What is a possible issue?

  • A. Some of the new jobs coming in are malformed and unprocessable.
  • B. The routing tables changed and none of the workers can process events anymore.
  • C. Someone changed the IAM Role Policy on the instances in the worker group and broke permissions to access the queue.
  • D. The scaling metric is not functioning correctly.

Answer: A

Explanation:
This question is best answered by eliminating each option:
Option B is invalid because a routing table change would affect all worker processes, and no jobs at all would be completed.
Option C is invalid because if the IAM Role were broken, no jobs would be completed.
Option D is invalid because scaling is clearly happening; it's just that the jobs are not getting completed. For more information on scaling on demand, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/as-scale-based-on-demand.html
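The failure mode in option A can be sketched as a worker loop: malformed messages fail to parse, so they are never deleted and throughput stalls unless they are moved aside. The message bodies and dead-letter list below are plain-Python stand-ins for SQS messages and a dead-letter queue.

```python
import json

# Sketch of why malformed jobs stall completion velocity. Message bodies are
# invented strings standing in for SQS message bodies; dead_letter stands in
# for an SQS dead-letter queue.

incoming = [
    '{"job_id": 1, "action": "resize"}',
    'not-json-at-all',                     # a malformed, unprocessable job
    '{"job_id": 2, "action": "resize"}',
]

completed, dead_letter = [], []
for body in incoming:
    try:
        job = json.loads(body)
        completed.append(job["job_id"])    # normally: do the work, then delete
    except (json.JSONDecodeError, KeyError):
        dead_letter.append(body)           # move aside instead of retrying forever

print(completed)     # [1, 2]
print(dead_letter)   # ['not-json-at-all']
```

In a real deployment, configuring an SQS dead-letter queue with a redrive policy achieves this automatically after a set number of failed receives.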

NEW QUESTION 24
Your company uses AWS to host its resources. They have the following requirements:
1) Record all API calls and transitions
2) Help in understanding what resources are there in the account
3) Facility to allow auditing credentials and logins
Which services would fulfill the above requirements?

  • A. AWS Config, CloudTrail, IAM Credential Reports
  • B. CloudTrail, IAM Credential Reports, AWS Config
  • C. CloudTrail, AWS Config, IAM Credential Reports
  • D. AWS Config, IAM Credential Reports, CloudTrail

Answer: C

Explanation:
You can use AWS CloudTrail to get a history of AWS API calls and related events for your account. This history includes calls made with the AWS Management Console, AWS Command Line Interface, AWS SDKs, and other AWS services. For more information on CloudTrail, please visit the below URL:
• http://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting. For more information on the config service, please visit the below URL:
• https://aws.amazon.com/config/
You can generate and download a credential report that lists all users in your account and the status of their various credentials, including passwords, access keys, and MFA devices. You can get a credential report from the AWS Management Console, the AWS SDKs and Command Line Tools, or the IAM API. For more information on Credential Reports, please visit the below URL:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html
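The credential report is delivered as CSV, so auditing it is straightforward. Below is a trimmed, made-up sample using a few of the report's real columns (`user`, `password_enabled`, `mfa_active`), with a simple check for console users who have not enabled MFA.

```python
import csv
import io

# A trimmed, invented sample of an IAM credential report. The column names
# match real report columns; the user rows are illustrative.
report_csv = """user,password_enabled,mfa_active
alice,true,true
bob,true,false
deploy-bot,false,false
"""

# Audit: flag console users (password enabled) who have no MFA device.
rows = csv.DictReader(io.StringIO(report_csv))
no_mfa = [r["user"] for r in rows
          if r["password_enabled"] == "true" and r["mfa_active"] == "false"]
print(no_mfa)   # ['bob']
```

This kind of check covers requirement 3: auditing the state of credentials and logins across the account.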

NEW QUESTION 25
Which of the following features of the Elastic Beanstalk service will allow you to perform a Blue/Green Deployment?

  • A. Rebuild Environment
  • B. Swap Environment
  • C. Swap URLs
  • D. Environment Configuration

Answer: C

Explanation:
With the Swap Environment URLs feature, you can keep a second version of your environment staged and ready. When you are ready to cut over, you simply swap the environments' CNAMEs so that the production URL points to the new environment.
For more information on the Swap URLs feature, please refer to the below link:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
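The effect of the swap can be sketched as exchanging two CNAME records: after the swap, the production URL resolves to the new (green) stack. The environment names and CNAMEs below are invented for illustration; this does not call AWS.

```python
# Sketch of a Blue/Green cutover via CNAME swap. Names and CNAMEs are
# illustrative stand-ins for two Elastic Beanstalk environments.

cnames = {
    "my-app-blue":  "my-app.us-east-1.elasticbeanstalk.com",    # live traffic
    "my-app-green": "my-app-v2.us-east-1.elasticbeanstalk.com", # staged version
}

def swap_cnames(envs, a, b):
    # Exchange the CNAMEs of the two environments, shifting traffic.
    envs[a], envs[b] = envs[b], envs[a]

swap_cnames(cnames, "my-app-blue", "my-app-green")
print(cnames["my-app-green"])   # my-app.us-east-1.elasticbeanstalk.com
```

Because only DNS records are exchanged, cutover is fast and rolling back is just another swap.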

NEW QUESTION 26
......

P.S. Easily pass DOP-C01 Exam with 116 Q&As Downloadfreepdf.net Dumps & pdf Version, Welcome to Download the Newest Downloadfreepdf.net DOP-C01 Dumps: https://www.downloadfreepdf.net/DOP-C01-pdf-download.html (116 New Questions)