We provide high-value Amazon Web Services SAP-C01 exam questions, ideal for passing the SAP-C01 test and earning the Amazon Web Services AWS Certified Solutions Architect - Professional certification. The SAP-C01 Questions & Answers cover all the knowledge points of the real SAP-C01 exam. Crack your Amazon Web Services SAP-C01 exam with the latest dumps, guaranteed!

Check SAP-C01 free dumps before getting the full version:

NEW QUESTION 1
A company has an existing on-premises three-tier web application. The Linux web servers serve content from a centralized file share on a NAS server because the content is refreshed several times a day from various sources. The existing infrastructure is not optimized and the company would like to move to AWS in order to gain the ability to scale resources up and down in response to load. On-premises and AWS resources are connected using AWS Direct Connect.
How can the company migrate the web infrastructure to AWS without delaying the content refresh process?

  • A. Create a cluster of web server Amazon EC2 instances behind a Classic Load Balancer on AWS. Share an Amazon EBS volume among all instances for the content. Schedule a periodic synchronization of this volume and the NAS server.
  • B. Create an on-premises file gateway using AWS Storage Gateway to replace the NAS server and replicate content to AWS. On the AWS side, mount the same Storage Gateway bucket to each web server Amazon EC2 instance to serve the content.
  • C. Expose an Amazon EFS share to on-premises users to serve as the NAS server. Mount the same EFS share to the web server Amazon EC2 instances to serve the content.
  • D. Create web server Amazon EC2 instances on AWS in an Auto Scaling group. Configure a nightly process where the web server instances are updated from the NAS server.

Answer: C

Explanation:
File Gateway is limited by the performance of its gateway instance, whether on EC2 or on-premises, and its cache fills up quickly if not properly sized. For a large number of EC2 instances, EFS scales better. The bottom line is that File Gateway targets legacy applications, and you have to add the cost of large gateway instances before comparing it to the same quantity of EFS storage. https://www.reddit.com/r/aws/comments/82pyop/storage_gateway_vs_efs/
https://docs.aws.amazon.com/efs/latest/ug/efs-onpremises.html
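
For orientation, a minimal boto3 sketch of provisioning the EFS share that option C relies on. The subnet and security group IDs below are placeholders; on-premises clients would mount the share over Direct Connect as described in the EFS documentation linked above.

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Create the file system that replaces the NAS share.
    fs = efs.create_file_system(
        CreationToken="web-content",
        PerformanceMode="generalPurpose",
    )

    # One mount target per subnet lets every web server instance mount the share.
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId="subnet-0abc1234",        # hypothetical subnet ID
        SecurityGroups=["sg-0abc1234"],    # hypothetical SG allowing NFS (TCP 2049)
    )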

NEW QUESTION 2
A company has a web application that securely uploads pictures and videos to an Amazon S3 bucket. The company requires that only authenticated users are allowed to post content. The application generates a presigned URL that is used to upload objects through a browser interface. Most users are reporting slow upload times for objects larger than 100 MB.
What can a Solutions Architect do to improve the performance of these uploads while ensuring only authenticated users are allowed to post content?

  • A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
  • B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
  • C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser interface upload the objects to the URL using the S3 multipart upload API.
  • D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.

Answer: A
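
For context, a short boto3 sketch of the presigned-URL mechanism the question describes; the use_accelerate_endpoint flag shows how the S3 Transfer Acceleration endpoint in option C would be wired in. Bucket and key names are illustrative.

    import boto3
    from botocore.config import Config

    # Acceleration must be enabled on the bucket before the accelerate endpoint works.
    boto3.client("s3").put_bucket_accelerate_configuration(
        Bucket="example-media-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # Generate a presigned PUT URL that routes through the accelerate endpoint.
    s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": "example-media-bucket", "Key": "uploads/video.mp4"},
        ExpiresIn=3600,  # seconds the URL stays valid
    )
    print(url)  # the authenticated browser client PUTs the object to this URL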

NEW QUESTION 3
A retail company processes point-of-sale data on application servers in its data center and writes outputs to an Amazon DynamoDB table. The data center is connected to the company’s VPC with an AWS Direct Connect (DX) connection, and the application servers require a consistent network connection at speeds greater than 2 Gbps.
The company decides that the DynamoDB table needs to be highly available and fault tolerant. Company policy states that the data should be available across two Regions.
What changes should the company make to meet these requirements?

  • A. Establish a second DX connection for redundancy. Use DynamoDB global tables to replicate data to a second Region. Modify the application to fail over to the second Region.
  • B. Use an AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Modify the application to replicate data to both Regions.
  • C. Establish a second DX connection for redundancy. Create an identical DynamoDB table in a second Region. Enable DynamoDB auto scaling to manage throughput capacity. Modify the application to write to the second Region.
  • D. Use an AWS managed VPN as a backup to DX. Create an identical DynamoDB table in a second Region. Enable DynamoDB Streams to capture changes to the table. Use AWS Lambda to replicate changes to the second Region.

Answer: A
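
As a sketch of the global tables setup in answer A, using the original 2017 CreateGlobalTable API. The table name is illustrative, and both replica tables must already exist with identical schemas and DynamoDB Streams enabled.

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Joins the two existing regional tables into one global table so that
    # writes replicate automatically between Regions.
    ddb.create_global_table(
        GlobalTableName="pos-transactions",
        ReplicationGroup=[
            {"RegionName": "us-east-1"},
            {"RegionName": "us-west-2"},
        ],
    )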

NEW QUESTION 4
A company has several teams, and each team has its own Amazon RDS database, totaling 100 TB. The company is building a data query platform for Business Intelligence Analysts to generate a weekly business report. The new system must run ad-hoc SQL queries.
What is the MOST cost-effective solution?

  • A. Create a new Amazon Redshift cluster. Create an AWS Glue ETL job to copy data from the RDS databases to the Amazon Redshift cluster. Use Amazon Redshift to run the queries.
  • B. Create an Amazon EMR cluster with enough core nodes. Run an Apache Spark job to copy data from the RDS databases to a Hadoop Distributed File System (HDFS). Use a local Apache Hive metastore to maintain the table definitions. Use Spark SQL to run the queries.
  • C. Use an AWS Glue ETL job to copy all the RDS databases to a single Amazon Aurora PostgreSQL database. Run SQL queries on the Aurora PostgreSQL database.
  • D. Use an AWS Glue crawler to crawl all the databases and create tables in the AWS Glue Data Catalog. Use an AWS Glue ETL job to load data from the RDS databases to Amazon S3, and use Amazon Athena to run the queries.

Answer: C

NEW QUESTION 5
A company is implementing a multi-account strategy; however, the Management team has expressed concerns that services like DNS may become overly complex. The company needs a solution that allows private DNS to be shared among virtual private clouds (VPCs) in different accounts. The company will have approximately 50 accounts in total.
What solution would create the LEAST complex DNS architecture and ensure that each VPC can resolve all AWS resources?

  • A. Create a shared services VPC in a central account, and create a VPC peering connection from the shared services VPC to each of the VPCs in the other accounts. Within Amazon Route 53, create a privately hosted zone in the shared services VPC and resource record sets for the domain and subdomains. Programmatically associate other VPCs with the hosted zone.
  • B. Create a VPC peering connection among the VPCs in all accounts. Set the VPC attributes enableDnsHostnames and enableDnsSupport to “true” for each VPC. Create an Amazon Route 53 private zone for each VPC. Create resource record sets for the domain and subdomains. Programmatically associate the hosted zones in each VPC with the other VPCs.
  • C. Create a shared services VPC in a central account. Create a VPC peering connection from the VPCs in other accounts to the shared services VPC. Create an Amazon Route 53 privately hosted zone in the shared services VPC with resource record sets for the domain and subdomains. Allow UDP and TCP port 53 over the VPC peering connections.
  • D. Set the VPC attributes enableDnsHostnames and enableDnsSupport to “false” in every VPC. Create an AWS Direct Connect connection with a private virtual interface. Allow UDP and TCP port 53 over the virtual interface. Use the on-premises DNS servers to resolve the IP addresses in each VPC on AWS.

Answer: A

Explanation:
https://aws.amazon.com/blogs/networking-and-content-delivery/centralized-dns-management-of-hybrid-cloud-w
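
The "programmatically associate" step in answer A maps to a two-call Route 53 flow across accounts; a minimal sketch, with hypothetical zone and VPC IDs:

    import boto3

    r53 = boto3.client("route53")
    zone_id = "Z123EXAMPLE"                                    # hypothetical zone ID
    vpc = {"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234"}  # hypothetical VPC

    # Run in the shared services account that owns the private hosted zone:
    r53.create_vpc_association_authorization(HostedZoneId=zone_id, VPC=vpc)

    # Then run in the account that owns the VPC being associated:
    r53.associate_vpc_with_hosted_zone(HostedZoneId=zone_id, VPC=vpc)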

NEW QUESTION 6
A company had a tight deadline to migrate its on-premises environment to AWS. It moved over Microsoft SQL Servers and Microsoft Windows Servers using the virtual machine import/export service and rebuilt other applications natively in the cloud. The teams created databases both on Amazon EC2 and on Amazon RDS. Each team in the company was responsible for migrating its applications, and the teams created individual accounts to isolate resources. The company did not have much time to consider costs, but now it would like suggestions on reducing its AWS spend.
Which steps should a Solutions Architect take to reduce costs?

  • A. Enable AWS Business Support and review AWS Trusted Advisor’s cost checks. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Save AWS Simple Monthly Calculator reports in Amazon S3 for trend analysis. Create a master account under Organizations and have teams join for consolidated billing.
  • B. Enable Cost Explorer and AWS Business Support. Reserve Amazon EC2 and Amazon RDS DB instances. Use Amazon CloudWatch and AWS Trusted Advisor for monitoring and to receive cost-savings suggestions. Create a master account under Organizations and have teams join for consolidated billing.
  • C. Create an AWS Lambda function that changes the instance size based on Amazon CloudWatch alarms. Reserve instances based on AWS Simple Monthly Calculator suggestions. Have an AWS Well-Architected framework review and apply recommendations. Create a master account under Organizations and have teams join for consolidated billing.
  • D. Create a budget and monitor for costs exceeding the budget. Create Amazon EC2 Auto Scaling groups for applications that experience fluctuating demand. Create an AWS Lambda function that changes instance sizes based on Amazon CloudWatch alarms. Have each team upload their bill to an Amazon S3 bucket for analysis of team spending. Use Spot Instances on nightly batch processing jobs.

Answer: B

Explanation:
AWS Import/Export supports importing and exporting data into and out of Amazon S3 buckets. For significant data sets, AWS Import/Export is often faster than Internet transfer and more cost-effective than upgrading your connectivity.

NEW QUESTION 7
A three-tier web application runs on Amazon EC2 instances. Cron daemons are used to trigger scripts that collect the web server, application, and database logs and send them to a centralized location every hour. Occasionally, scaling events or unplanned outages have caused the instances to stop before the latest logs were collected, and the log files were lost.
Which of the following options is the MOST reliable way of collecting and preserving the log files?

  • A. Update the cron jobs to run every 5 minutes instead of every hour to reduce the possibility of log messages being lost in an outage.
  • B. Use Amazon CloudWatch Events to trigger AWS Systems Manager Run Command to invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.
  • C. Use the Amazon CloudWatch Logs agent to stream log messages directly to CloudWatch Logs. Configure the agent with a batch count of 1 to reduce the possibility of log messages being lost in an outage.
  • D. Use Amazon CloudWatch Events to trigger AWS Lambda to SSH into each running instance and invoke the log collection scripts more frequently to reduce the possibility of log messages being lost in an outage.

Answer: C

Explanation:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
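
To illustrate what the agent automates, a minimal boto3 sketch of pushing a single log line to CloudWatch Logs directly. Group and stream names are illustrative; the agent additionally manages sequence tokens, batching, and retries for you.

    import time
    import boto3

    logs = boto3.client("logs", region_name="us-east-1")

    logs.create_log_group(logGroupName="/webapp/access")
    logs.create_log_stream(logGroupName="/webapp/access", logStreamName="i-0abc1234")
    logs.put_log_events(
        logGroupName="/webapp/access",
        logStreamName="i-0abc1234",
        logEvents=[{
            "timestamp": int(time.time() * 1000),  # milliseconds since epoch
            "message": "GET /index.html 200",
        }],
    )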

NEW QUESTION 8
An online e-commerce business is running a workload on AWS. The application architecture includes a web tier, an application tier for business logic, and a database tier for user and transactional data management. The database server has a 100 GB memory requirement. The business requires cost-efficient disaster recovery for the application with an RTO of 5 minutes and an RPO of 1 hour. The business also has a regulatory requirement for out-of-region disaster recovery with a minimum distance of 250 miles between the primary and alternate sites.
Which of the following options can the Solutions Architect design to create a comprehensive solution for this customer that meets the disaster recovery requirements?

  • A. Back up the application and database data frequently and copy them to Amazon S3. Replicate the backups using S3 cross-region replication, and use AWS CloudFormation to instantiate infrastructure for disaster recovery and restore data from Amazon S3.
  • B. Employ a pilot light environment in which the primary database is configured with mirroring to build a standby database on m4.large in the alternate region. Use AWS CloudFormation to instantiate the web servers, application servers, and load balancers in case of a disaster to bring the application up in the alternate region. Vertically resize the database to meet the full production demands, and use Amazon Route 53 to switch traffic to the alternate region.
  • C. Use a scaled-down version of the fully functional production environment in the alternate region that includes one instance of the web server, one instance of the application server, and a replicated instance of the database server in standby mode. Place the web and the application tiers in an Auto Scaling group behind a load balancer, which can automatically scale when the load arrives at the application. Use Amazon Route 53 to switch traffic to the alternate region.
  • D. Employ a multi-region solution with fully functional web, application, and database tiers in both regions with equivalent capacity. Activate the primary database in one region only and the standby database in the other region. Use Amazon Route 53 to automatically switch traffic from one region to another using health check routing policies.

Answer: C

NEW QUESTION 9
A large global company wants to migrate a stateless mission-critical application to AWS. The application is based on IBM WebSphere (application and integration middleware), IBM MQ (messaging middleware), and IBM DB2 (database software) on a z/OS operating system.
How should the Solutions Architect migrate the application to AWS?

  • A. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon EC2-based MQ. Re-platform the z/OS-based DB2 to Amazon RDS DB2.
  • B. Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to Amazon MQ. Re-platform the z/OS-based DB2 to Amazon EC2-based DB2.
  • C. Orchestrate and deploy the application by using AWS Elastic Beanstalk. Re-platform the IBM MQ to Amazon SQS. Re-platform the z/OS-based DB2 to Amazon RDS DB2.
  • D. Use the AWS Server Migration Service to migrate the IBM WebSphere and IBM DB2 to an Amazon EC2-based solution. Re-platform the IBM MQ to Amazon MQ.

Answer: B

Explanation:
https://aws.amazon.com/blogs/database/aws-database-migration-service-and-aws-schema-conversion-tool-now-
https://aws.amazon.com/quickstart/architecture/ibm-mq/

NEW QUESTION 10
A company is running a .NET three-tier web application on AWS. The team currently uses XL storage-optimized instances to store and serve the website’s image and video files on local instance storage. The company has encountered issues with data loss from replication and instance failures. The Solutions Architect has been asked to redesign this application to improve its reliability while keeping costs low.
Which solution will meet these requirements?

  • A. Set up a new Amazon EFS share, move all image and video files to this share, and then attach this new drive as a mount point to all existing servers. Create an Elastic Load Balancer with Auto Scaling general purpose instances. Enable Amazon CloudFront to the Elastic Load Balancer. Enable Cost Explorer and use AWS Trusted Advisor checks to continue monitoring the environment for future savings.
  • B. Implement Auto Scaling with general purpose instance types and an Elastic Load Balancer. Enable an Amazon CloudFront distribution to Amazon S3 and move images and video files to Amazon S3. Reserve general purpose instances to meet base performance requirements. Use Cost Explorer and AWS Trusted Advisor checks to continue monitoring the environment for future savings.
  • C. Move the entire website to Amazon S3 using the S3 website hosting feature. Remove all the web servers and have Amazon S3 communicate directly with the application servers in Amazon VPC.
  • D. Use AWS Elastic Beanstalk to deploy the .NET application. Move all images and video files to Amazon EFS. Create an Amazon CloudFront distribution that points to the EFS share. Reserve the m4.4xl instances needed to meet base performance requirements.

Answer: B

NEW QUESTION 11
A financial company is using a high-performance compute cluster running on Amazon EC2 instances to perform market simulations. A DNS record must be created in an Amazon Route 53 private hosted zone when instances start, and the DNS record must be removed after instances are terminated.
Currently the company uses a combination of Amazon CloudWatch Events and AWS Lambda to create the DNS record. The solution worked well in testing with small clusters, but in production with clusters containing thousands of instances, the company sees the following error in the Lambda logs: HTTP 400 error (Bad request).
The response header also includes a status code element with a value of "Throttling" and a status message element with a value of "Rate exceeded".
Which combination of steps should the Solutions Architect take to resolve these issues? (Select THREE.)

  • A. Configure an Amazon SQS FIFO queue and configure a CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
  • B. Configure an Amazon Kinesis data stream and configure a CloudWatch Events rule to use this stream as a target. Remove the Lambda target from the CloudWatch Events rule.
  • C. Update the CloudWatch Events rule to trigger on Amazon EC2 "Instance Launch Successful" and "Instance Terminate Successful" events for the Auto Scaling group used by the cluster.
  • D. Configure a Lambda function to retrieve messages from an Amazon SQS queue. Modify the Lambda function to retrieve a maximum of 10 messages, then batch the messages by Amazon Route 53 API call type and submit. Delete the messages from the SQS queue after successful API calls.
  • E. Configure an Amazon SQS standard queue and configure the existing CloudWatch Events rule to use this queue as a target. Remove the Lambda target from the CloudWatch Events rule.
  • F. Configure a Lambda function to read data from the Amazon Kinesis data stream and configure the batch window to 5 minutes. Modify the function to make a single API call to Amazon Route 53 with all records read from the Kinesis data stream.

Answer: CDE
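
A minimal sketch of the queue-draining Lambda from options D and E. The message shape placed on the queue by the CloudWatch Events rule is an assumption, and the hosted zone ID is a placeholder; with an SQS event source mapping, successfully processed messages are deleted automatically.

    import json
    import boto3

    route53 = boto3.client("route53")
    HOSTED_ZONE_ID = "Z123EXAMPLE"  # hypothetical private hosted zone

    def handler(event, context):
        # Assumed message shape: {"action": "UPSERT" | "DELETE", "name": ..., "ip": ...}
        changes = []
        for record in event["Records"]:  # up to 10 SQS messages per invocation
            msg = json.loads(record["body"])
            changes.append({
                "Action": msg["action"],
                "ResourceRecordSet": {
                    "Name": msg["name"],
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": msg["ip"]}],
                },
            })
        if changes:
            # One ChangeBatch turns the whole batch into a single Route 53 API call,
            # which is what keeps the request rate under the throttling limit.
            route53.change_resource_record_sets(
                HostedZoneId=HOSTED_ZONE_ID,
                ChangeBatch={"Changes": changes},
            )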

NEW QUESTION 12
A large company experienced a drastic increase in its monthly AWS spend after Developers accidentally launched Amazon EC2 instances in unexpected regions. The company has established practices around least privilege for Developers and controls access to on-premises resources using Active Directory groups. The company now wants to control costs by restricting the level of access that Developers have to the AWS Management Console without impacting their productivity. The company would also like to allow Developers to launch Amazon EC2 instances in only one region, without limiting access to other services in any region.
How can this company achieve these new security requirements while minimizing the administrative burden on the Operations team?

  • A. Set up SAML-based authentication tied to an IAM role that has an AdministrativeAccess managed policy attached to it. Attach a customer managed policy that denies access to Amazon EC2 in each region except for the one required.
  • B. Create an IAM user for each Developer and add them to the developer IAM group that has the PowerUserAccess managed policy attached to it. Attach a customer managed policy that allows the Developers access to Amazon EC2 only in the required region.
  • C. Set up SAML-based authentication tied to an IAM role that has a PowerUserAccess managed policy and a customer managed policy that denies the Developers access to all AWS services except AWS Service Catalog. Within AWS Service Catalog, create a product containing only the EC2 resources in the approved region.
  • D. Set up SAML-based authentication tied to an IAM role that has the PowerUserAccess managed policy attached to it. Attach a customer managed policy that denies access to Amazon EC2 in each region except for the one required.

Answer: D

Explanation:
The tricks here are: SAML for AD federation and authentication, and PowerUserAccess vs. AdministrativeAccess. PowerUser has fewer privileges, which is what Developers require; Admin has more rights. The description of PowerUserAccess given by AWS is "Provides full access to AWS services and resources, but does not allow management of Users and groups."
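
A sketch of the customer managed deny policy from answer D, using the aws:RequestedRegion global condition key. The policy name and approved region are illustrative.

    import json
    import boto3

    iam = boto3.client("iam")

    # Deny EC2 everywhere except the approved region; all other services stay
    # governed by the PowerUserAccess managed policy on the federated role.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {"aws:RequestedRegion": "us-east-1"}},
        }],
    }

    iam.create_policy(
        PolicyName="DenyEC2OutsideApprovedRegion",
        PolicyDocument=json.dumps(policy_document),
    )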

NEW QUESTION 13
An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company’s CIO has asked a Solutions Architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with minimal delays.
Which of the following is the MOST reliable approach to meet the requirements?

  • A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
  • B. Receive the orders in an Amazon SQS queue and trigger an AWS Lambda function to process them.
  • C. Receive the orders using AWS Step Functions and trigger an Amazon ECS container to process them.
  • D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.

Answer: B
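
A minimal sketch of the order-processing Lambda in answer B, assuming an SQS event source mapping and a hypothetical "orders" DynamoDB table:

    import json
    from decimal import Decimal

    import boto3

    table = boto3.resource("dynamodb").Table("orders")  # hypothetical table name

    def handler(event, context):
        # Lambda scales its SQS pollers with queue depth, so campaign spikes are
        # absorbed without pre-provisioned servers.
        for record in event["Records"]:
            # parse_float=Decimal because DynamoDB rejects Python floats.
            order = json.loads(record["body"], parse_float=Decimal)
            table.put_item(Item=order)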

NEW QUESTION 14
A media storage application uploads user photos to Amazon S3 for processing. End users are reporting that some uploaded photos are not being processed properly. The Application Developers trace the logs and find that AWS Lambda is experiencing execution issues when thousands of users are on the system simultaneously. Issues are caused by:
  • Limits around concurrent executions.
  • The performance of Amazon DynamoDB when saving data.
Which actions can be taken to increase the performance and reliability of the application? (Choose two.)

  • A. Evaluate and adjust the read capacity units (RCUs) for the DynamoDB tables.
  • B. Evaluate and adjust the write capacity units (WCUs) for the DynamoDB tables.
  • C. Add an Amazon ElastiCache layer to increase the performance of Lambda functions.
  • D. Configure a dead letter queue that will reprocess failed or timed-out Lambda functions.
  • E. Use S3 Transfer Acceleration to provide lower-latency access to end users.

Answer: BD

Explanation:
B:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.h
D: https://aws.amazon.com/blogs/compute/robust-serverless-application-design-with-aws-lambda-dlq/
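
Two short boto3 sketches corresponding to answers B and D. Table, function, and queue names are illustrative, and the capacity values are placeholders to be sized against observed load.

    import boto3

    # Answer B: raise the table's write capacity (read capacity must be restated
    # alongside write capacity in the same call).
    boto3.client("dynamodb").update_table(
        TableName="photo-metadata",
        ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 500},
    )

    # Answer D: route failed asynchronous invocations to an SQS dead letter queue
    # so they can be reprocessed instead of being lost.
    boto3.client("lambda").update_function_configuration(
        FunctionName="process-photo",
        DeadLetterConfig={"TargetArn": "arn:aws:sqs:us-east-1:123456789012:photo-dlq"},
    )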

NEW QUESTION 15
A Solutions Architect must design a highly available, stateless REST service. The service will require multiple persistent storage layers for service object meta information and the delivery of content. Each request must be authenticated and securely processed. There is a requirement to keep costs as low as possible.
How can these requirements be met?

  • A. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an Amazon ECS service that is fronted by an Application Load Balancer (ALB). Use a custom authenticator to control access to the API. Store request meta information in Amazon DynamoDB with Auto Scaling and static content in a secured S3 bucket. Make secure signed requests for Amazon S3 objects and proxy the data through the REST service interface.
  • B. Use AWS Fargate to host a container that runs a self-contained REST service. Set up an ECS service that is fronted by a cross-zone ALB. Use an Amazon Cognito user pool to control access to the API. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.
  • C. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon Cognito user pool to control access to the API. Configure the methods to use AWS Lambda proxy integrations, and process each resource with a unique AWS Lambda function. Store request meta information in DynamoDB with Auto Scaling and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.
  • D. Set up Amazon API Gateway and create the required API resources and methods. Use an Amazon API Gateway custom authorizer to control access to the API. Configure the methods to use AWS Lambda custom integrations, and process each resource with a unique Lambda function. Store request meta information in an Amazon ElastiCache Multi-AZ cluster and static content in a secured S3 bucket. Generate presigned URLs when returning references to content stored in Amazon S3.

Answer: C

NEW QUESTION 16
A company deployed a three-tier web application in two regions: us-east-1 and eu-west-1. The application must be active in both regions at the same time. The database tier of the application uses a single Amazon RDS Aurora database globally, with a master in us-east-1 and a read replica in eu-west-1. Both regions are connected by a VPN.
The company wants to ensure that the application remains available even in the event of a region-level failure of all of the application’s components. It is acceptable for the application to be in read-only mode for up to 1 hour. The company plans to configure two Amazon Route 53 record sets, one for each of the regions.
How should the company complete the configuration to meet its requirements while providing the lowest latency for the application end-users? (Choose two.)

  • A. Use failover routing and configure the us-east-1 record set as primary and the eu-west-1 record set as secondary. Configure an HTTP health check for the web application in us-east-1, and associate it to the us-east-1 record set.
  • B. Use weighted routing and configure each record set with a weight of 50. Configure an HTTP health check for each region, and attach it to the record set for that region.
  • C. Use latency-based routing for both record sets. Configure a health check for each region and attach it to the record set for that region.
  • D. Configure an Amazon CloudWatch alarm for the health checks in us-east-1, and have it invoke an AWS Lambda function that promotes the read replica in eu-west-1.
  • E. Configure an Amazon RDS event notification to react to the failure of the database in us-east-1 by invoking an AWS Lambda function that promotes the read replica in eu-west-1.

Answer: CE

Explanation:
https://docs.aws.amazon.com/lambda/latest/dg/services-rds.html
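
A sketch of the latency-based record sets with per-region health checks from option C. The zone ID, record name, endpoint DNS names, and health check IDs are all placeholders.

    import boto3

    r53 = boto3.client("route53")

    # One latency record per region, each tied to its own health check.
    for region, alb_dns, health_check_id in [
        ("us-east-1", "app-east.us-east-1.elb.amazonaws.com", "hc-east-id"),
        ("eu-west-1", "app-west.eu-west-1.elb.amazonaws.com", "hc-west-id"),
    ]:
        r53.change_resource_record_sets(
            HostedZoneId="Z123EXAMPLE",
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": region,       # distinguishes the two records
                    "Region": region,              # enables latency-based routing
                    "HealthCheckId": health_check_id,
                    "ResourceRecords": [{"Value": alb_dns}],
                },
            }]},
        )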

NEW QUESTION 17
A company runs an application on a fleet of Amazon EC2 instances. The application requires low-latency random access to 100 GB of data and must be able to access the data at up to 3,000 IOPS. A Development team has configured the EC2 launch template to provision a 100-GB Provisioned IOPS (PIOPS) Amazon EBS volume with 3,000 IOPS provisioned. A Solutions Architect is tasked with lowering costs without impacting performance and durability.
Which action should be taken?

  • A. Create an Amazon EFS file system with the performance mode set to Max I/O. Configure the EC2 operating system to mount the EFS file system.
  • B. Create an Amazon EFS file system with the throughput mode set to Provisioned. Configure the EC2 operating system to mount the EFS file system.
  • C. Update the EC2 launch template to allocate a new 1-TB EBS General Purpose SSD (gp2) volume.
  • D. Update the EC2 launch template to exclude the PIOPS volume. Configure the application to use local instance storage.

Answer: A

NEW QUESTION 18
A company is having issues with a newly deployed serverless infrastructure that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB.
In a steady state, the application performs as expected. However, during peak load, tens of thousands of simultaneous invocations are needed and user requests fail multiple times before succeeding. The company has checked the logs for each component, focusing specifically on Amazon CloudWatch Logs for Lambda. There are no errors logged by the services or applications.
What might cause this problem?

  • A. Lambda has very little memory assigned, which causes the function to fail at peak load.
  • B. Lambda is in a subnet that uses a NAT gateway to reach the internet, and the function instance does not have sufficient Amazon EC2 resources in the VPC to scale with the load.
  • C. The throttle limit set on API Gateway is very low; during peak load, the additional requests are not making their way through to Lambda.
  • D. DynamoDB is set up in an auto scaling mode. During peak load, DynamoDB adjusts capacity and throughput successfully.

Answer: A

NEW QUESTION 19
A company is currently using AWS CodeCommit for its source control and AWS CodePipeline for continuous integration. The pipeline has a build stage for building the artifacts, which are then staged in an Amazon S3 bucket.
The company has identified various improvement opportunities in the existing process, and a Solutions Architect has been given the following requirements:
  • Create a new pipeline to support feature development
  • Support feature development without impacting production applications
  • Incorporate continuous testing with unit tests
  • Isolate development and production artifacts
  • Support the capability to merge tested code into production code
How should the Solutions Architect achieve these requirements?

  • A. Trigger a separate pipeline from CodeCommit feature branches. Use AWS CodeBuild for running unit tests. Use CodeBuild to stage the artifacts within an S3 bucket in a separate testing account.
  • B. Trigger a separate pipeline from CodeCommit feature branches. Use AWS Lambda for running unit tests. Use AWS CodeDeploy to stage the artifacts within an S3 bucket in a separate testing account.
  • C. Trigger a separate pipeline from CodeCommit tags. Use Jenkins for running unit tests. Create a stage in the pipeline with S3 as the target for staging the artifacts within an S3 bucket in a separate testing account.
  • D. Create a separate CodeCommit repository for feature development and use it to trigger the pipeline. Use AWS Lambda for running unit tests. Use AWS CodeBuild to stage the artifacts within different S3 buckets in the same production account.

Answer: A

Explanation:
https://docs.aws.amazon.com/codebuild/latest/userguide/how-to-create-pipeline.html

NEW QUESTION 20
A company stores sales transaction data in Amazon DynamoDB tables. To detect anomalous behaviors and respond quickly, all changes to the items stored in the DynamoDB tables must be logged within 30 minutes. Which solution meets the requirements?

  • A. Copy the DynamoDB tables into Apache Hive tables on Amazon EMR every hour and analyze them for anomalous behaviors. Send Amazon SNS notifications when anomalous behaviors are detected.
  • B. Use AWS CloudTrail to capture all the APIs that change the DynamoDB tables. Send SNS notifications when anomalous behaviors are detected using CloudTrail event filtering.
  • C. Use Amazon DynamoDB Streams to capture and send updates to AWS Lambda. Create a Lambda function to output records to Amazon Kinesis Data Streams. Analyze any anomalies with Amazon Kinesis Data Analytics. Send SNS notifications when anomalous behaviors are detected.
  • D. Use event patterns in Amazon CloudWatch Events to capture DynamoDB API call events with an AWS Lambda function as a target to analyze behavior. Send SNS notifications when anomalous behaviors are detected.

Answer: C

Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
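
A minimal sketch of the stream-consuming Lambda in answer C, assuming a DynamoDB Streams event source mapping; the downstream Kinesis hand-off is reduced to a print for brevity.

    def handler(event, context):
        # Each record describes one item-level change captured by DynamoDB Streams.
        for record in event["Records"]:
            change_type = record["eventName"]               # INSERT, MODIFY, or REMOVE
            keys = record["dynamodb"]["Keys"]
            new_image = record["dynamodb"].get("NewImage")  # absent for REMOVE events
            print(change_type, keys, new_image)  # forward to Kinesis Data Streams in the full design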

NEW QUESTION 21
......

Recommend!! Get the full SAP-C01 dumps in VCE and PDF from Dumpscollection. Welcome to download: http://www.dumpscollection.net/dumps/SAP-C01/ (New 179 Q&As Version)