We offer aws solution architect associate exam dumps. "AWS Certified Solutions Architect - Associate", also known as the AWS-Solution-Architect-Associate exam, is an Amazon certification. This set of posts, Passing the AWS-Solution-Architect-Associate exam with aws solution architect associate questions, will help you answer those questions. The aws solution architect associate questions cover all the knowledge points of the real exam. 100% real aws solution architect associate exam dumps, revised by experts!

Online AWS-Solution-Architect-Associate free questions and answers, new version:

NEW QUESTION 1
What is a placement group in Amazon EC2?

  • A. It is a group of EC2 instances within a single Availability Zone.
  • B. It is the edge location of your web content.
  • C. It is the AWS region where you run the EC2 instance of your web content.
  • D. It is a group used to span multiple Availability Zones.

Answer: A

Explanation: A placement group is a logical grouping of instances within a single Availability Zone. Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
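For illustration only, here is a minimal boto3 sketch (not part of the original question) that creates a cluster placement group and launches instances into it; the group name, AMI ID, and instance type are placeholder values:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group keeps instances close together inside a
# single Availability Zone for low-latency networking.
ec2.create_placement_group(GroupName="demo-cluster-pg", Strategy="cluster")

# Launch instances into the placement group; they all land in the
# same Availability Zone.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="c5.large",
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "demo-cluster-pg"},
)
```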

NEW QUESTION 2
A user is making a scalable web application with compartmentalization. The user wants the log module to be accessible by all the application functionalities in an asynchronous way. Each module of the application sends data to the log module, and the log module processes the logs based on resource availability. Which AWS service supports this functionality?

  • A. AWS Simple Queue Service.
  • B. AWS Simple Notification Service.
  • C. AWS Simple Workflow Service.
  • D. AWS Simple Email Service

Answer: A

Explanation: Amazon Simple Queue Service (SQS) is a highly reliable distributed messaging system for storing messages as they travel between computers. By using Amazon SQS, developers can simply move data between distributed application components. It is used to achieve compartmentalization or loose coupling. In this case all the modules will send a message to the logger queue and the data will be processed from the queue as resources become available.
Reference: http://media.amazonwebservices.com/AWS_Building_Fault_Tolerant_Applications.pdf
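As a hedged sketch of this pattern, the following boto3 snippet shows application modules pushing log entries to an SQS queue and the log module draining it when resources allow; the queue name and message body are illustrative placeholders:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Hypothetical queue that decouples the log module from the rest of
# the application.
queue_url = sqs.create_queue(QueueName="app-log-queue")["QueueUrl"]

# Any application module pushes its log entry and moves on
# (asynchronous, loosely coupled).
sqs.send_message(QueueUrl=queue_url, MessageBody="order-module: order 42 created")

# The log module drains the queue whenever it has spare capacity.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5
)
for msg in messages.get("Messages", []):
    print("processing log entry:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```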

NEW QUESTION 3
IAM's Policy Evaluation Logic always starts with a default _ for every request, except for those that use the AWS account's root security credentials.

  • A. Permit
  • B. Deny
  • C. Cancel

Answer: B

NEW QUESTION 4
Can you specify the security group that you created for a VPC when you launch an instance in EC2-Classic?

  • A. No, you can specify the security group created for EC2-Classic when you launch a VPC instance.
  • B. No
  • C. Yes
  • D. No, you can specify the security group created for EC2-Classic to a non-VPC based instance only.

Answer: B

Explanation: If you're using EC2-Classic, you must use security groups created specifically for EC2-Classic. When you launch an instance in EC2-Classic, you must specify a security group in the same region as the instance. You can't specify a security group that you created for a VPC when you launch an instance in EC2-Classic.
Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html#ec2-classic-security-groups

NEW QUESTION 5
When you resize the Amazon RDS DB instance, Amazon RDS will perform the upgrade during the next maintenance window. If you want the upgrade to be performed now, rather than waiting for the maintenance window, specify the _ option.

  • A. Apply Now
  • B. Apply Soon
  • C. Apply This
  • D. Apply Immediately

Answer: D
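A minimal boto3 sketch of the corresponding API call, assuming a placeholder instance identifier and target instance class:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Resize the DB instance now rather than waiting for the next
# maintenance window.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",   # placeholder identifier
    DBInstanceClass="db.m5.large",
    ApplyImmediately=True,
)
```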

NEW QUESTION 6
Your application is using an ELB in front of an Auto Scaling group of web/application servers deployed across two AZs and a Multi-AZ RDS instance for data persistence.
The database CPU is often above 80% usage and 90% of I/O operations on the database are reads. To improve performance you recently added a single-node Memcached ElastiCache cluster to cache frequent DB query results. In the coming weeks the overall workload is expected to grow by 30%.
Do you need to change anything in the architecture to maintain the high availability of the application with the anticipated additional load? Why?

  • A. Yes, you should deploy two Memcached ElastiCache Clusters in different AZs because the RDS instance will not be able to handle the load if the cache node fails.
  • B. No, if the cache node fails you can always get the same data from the DB without having any availability impact.
  • C. No, if the cache node fails the automated ElastiCache node recovery feature will prevent any availability impact.
  • D. Yes, you should deploy the Memcached ElastiCache Cluster with two nodes in the same AZ as the RDS DB master instance to handle the load if one cache node fails.

Answer: A

Explanation: ElastiCache for Memcached
The primary goal of caching is typically to offload reads from your database or other primary data source. In most apps, you have hot spots of data that are regularly queried, but only updated periodically. Think of the front page of a blog or news site, or the top 100 leaderboard in an online game. In this type of case, your app can receive dozens, hundreds, or even thousands of requests for the same data before it's updated again. Having your caching layer handle these queries has several advantages. First, it's considerably cheaper to add an in-memory cache than to scale up to a larger database cluster. Second, an in-memory cache is also easier to scale out, because it's easier to distribute an in-memory cache horizontally than a relational database.
Last, a caching layer provides a request buffer in the event of a sudden spike in usage. If your app or game ends up on the front page of Reddit or the App Store, it's not unheard of to see a spike that is 10 to 100 times your normal application load. Even if you autoscale your application instances, a 10x request spike will likely make your database very unhappy.
Let's focus on ElastiCache for Memcached first, because it is the best fit for a caching focused solution. We'll revisit Redis later in the paper, and weigh its advantages and disadvantages.
Architecture with ElastiCache for Memcached
When you deploy an ElastiCache Memcached cluster, it sits in your application as a separate tier alongside your database. As mentioned previously, Amazon ElastiCache does not directly communicate with your database tier, or indeed have any particular knowledge of your database. A simplified deployment for a web application looks something like this:
[Architecture diagram: web application tier with an ElastiCache Memcached cluster alongside the database]
In this architecture diagram, the Amazon EC2 application instances are in an Auto Scaling group, located behind a load balancer using Elastic Load Balancing, which distributes requests among the instances. As requests come into a given EC2 instance, that EC2 instance is responsible for communicating with ElastiCache and the database tier. For development purposes, you can begin with a single ElastiCache node to test your application, and then scale to additional cluster nodes by modifying the ElastiCache cluster. As you add additional cache nodes, the EC2 application instances are able to distribute cache keys across multiple ElastiCache nodes. The most common practice is to use client-side sharding to distribute keys across cache nodes, which we will discuss later in this paper.
[Architecture diagram: application instances distributing cache keys across multiple ElastiCache nodes]
When you launch an ElastiCache cluster, you can choose the Availability Zone(s) that the cluster lives in. For best performance, you should configure your cluster to use the same Availability Zones as your application servers. To launch an ElastiCache cluster in a specific Availability Zone, make sure to specify the Preferred Zone(s) option during cache cluster creation. The Availability Zones that you specify will be where ElastiCache will launch your cache nodes. We recommend that you select Spread Nodes Across Zones, which tells ElastiCache to distribute cache nodes across these zones as evenly as possible. This distribution will mitigate the impact of an Availability Zone disruption on your ElastiCache nodes. The trade-off is that some of the requests from your application to ElastiCache will go to a node in a different Availability Zone, meaning latency will be slightly higher.
For more details, refer to Creating a Cache Cluster in the Amazon ElastiCache User Guide.
As mentioned at the outset, ElastiCache can be coupled with a wide variety of databases. Here is an example architecture that uses Amazon DynamoDB instead of Amazon RDS and MySQL:
[Architecture diagram: DynamoDB as the primary data store with ElastiCache as the cache tier]
This combination of DynamoDB and ElastiCache is very popular with mobile and game companies, because DynamoDB allows for higher write throughput at lower cost than traditional relational databases. In addition, DynamoDB uses a key-value access pattern similar to ElastiCache, which also simplifies the programming model. Instead of using relational SQL for the primary database but then key-value patterns for the cache, both the primary database and cache can be programmed similarly.
In this architecture pattern, DynamoDB remains the source of truth for data, but application reads are offloaded to ElastiCache for a speed boost.
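As a hedged illustration of option A, the following boto3 sketch creates a two-node Memcached cluster spread across two Availability Zones; the cluster ID, node type, and zone names are placeholders:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Two-node Memcached cluster with the nodes spread across
# Availability Zones, so a single-AZ disruption does not take out
# the whole cache tier.
elasticache.create_cache_cluster(
    CacheClusterId="web-cache",              # placeholder cluster id
    Engine="memcached",
    CacheNodeType="cache.t3.micro",
    NumCacheNodes=2,
    AZMode="cross-az",
    PreferredAvailabilityZones=["us-east-1a", "us-east-1b"],
)
```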

NEW QUESTION 7
After moving an E-Commerce website for a client from a dedicated server to AWS you have also set up auto scaling to perform health checks on the instances in your group and replace instances that fail these checks. Your client has come to you with his own health check system that he wants you to use as it has proved to be very useful prior to his site running on AWS. What do you think would be an appropriate response to this given all that you know about auto scaling?

  • A. It is not possible to implement your own health check system. You need to use AWS's health check system.
  • B. It is not possible to implement your own health check system due to compatibility issues.
  • C. It is possible to implement your own health check system and then send the instance's health information directly from your system to CloudWatch.
  • D. It is possible to implement your own health check system and then send the instance's health information directly from your system to CloudWatch, but only in the US East (N. Virginia) region.

Answer: C

Explanation: Auto Scaling periodically performs health checks on the instances in your group and replaces instances that fail these checks. By default, these health checks use the results of EC2 instance status checks to determine the health of an instance. If you use a load balancer with your Auto Scaling group, you can optionally choose to include the results of Elastic Load Balancing health checks.
Auto Scaling marks an instance unhealthy if the calls to the Amazon EC2 action DescribeInstanceStatus return any state other than running, the system status shows impaired, or the calls to the Elastic Load Balancing action DescribeInstanceHealth return OutOfService in the instance state field.
After an instance is marked unhealthy because of an Amazon EC2 or Elastic Load Balancing health check, it is scheduled for replacement.
You can customize the health check conducted by your Auto Scaling group by specifying additional checks or by having your own health check system and then sending the instance's health information directly from your system to Auto Scaling.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/healthcheck.html
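A minimal boto3 sketch of the last point: your own health check system reports an instance as unhealthy directly to Auto Scaling so the group replaces it (the instance ID is a placeholder):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# After your own health check system decides an instance is bad,
# report it to Auto Scaling so the group schedules a replacement.
autoscaling.set_instance_health(
    InstanceId="i-0123456789abcdef0",    # placeholder instance id
    HealthStatus="Unhealthy",
    ShouldRespectGracePeriod=True,
)
```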

NEW QUESTION 8
While performing the volume status checks, if the status is insufficient-data, what does it mean?

  • A. the checks may still be in progress on the volume
  • B. the check has passed
  • C. the check has failed

Answer: A
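For illustration, a short boto3 sketch that reads the volume status checks; the volume ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Inspect the volume status checks; a status of "insufficient-data"
# means the checks may still be in progress on the volume.
response = ec2.describe_volume_status(VolumeIds=["vol-0123456789abcdef0"])
for volume in response["VolumeStatuses"]:
    print(volume["VolumeId"], volume["VolumeStatus"]["Status"])
```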

NEW QUESTION 9
You have deployed a web application targeting a global audience across multiple AWS Regions under the domain name example.com. You decide to use Route 53 Latency-Based Routing to serve web requests to users from the region closest to the user. To provide business continuity in the event of server downtime you configure weighted record sets associated with two web servers in separate Availability Zones per region. During a DR test you notice that when you disable all web servers in one of the regions Route 53 does not automatically direct all users to the other region. What could be happening? (Choose 2 answers)

  • A. Latency resource record sets cannot be used in combination with weighted resource record sets.
  • B. You did not set up an HTTP health check for one or more of the weighted resource record sets associated with the disabled web servers.
  • C. The value of the weight associated with the latency alias resource record set in the region with the disabled servers is higher than the weight for the other region.
  • D. One of the two working web servers in the other region did not pass its HTTP health check.
  • E. You did not set "Evaluate Target Health" to "Yes" on the latency alias resource record set associated with example.com in the region where you disabled the servers.

Answer: BE

Explanation: How Health Checks Work in Complex Amazon Route 53 Configurations
Checking the health of resources in complex configurations works much the same way as in simple configurations. However, in complex configurations, you use a combination of alias resource record sets (including weighted alias, latency alias, and failover alias) and nonalias resource record sets to build a decision tree that gives you greater control over how Amazon Route 53 responds to requests.
For more information, see How Health Checks Work in Simple Amazon Route 53 Configurations.
For example, you might use latency alias resource record sets to select a region close to a user and use weighted resource record sets for two or more resources within each region to protect against the failure of a single endpoint or an Availability Zone. The following diagram shows this configuration.
Here's how Amazon EC2 and Amazon Route 53 are configured:
You have Amazon EC2 instances in two regions, us-east-1 and ap-southeast-2. You want Amazon Route 53 to respond to queries by using the resource record sets in the region that provides the lowest latency for your customers, so you create a latency alias resource record set for each region.
(You create the latency alias resource record sets after you create resource record sets for the individual Amazon EC2 instances.)
Within each region, you have two Amazon EC2 instances. You create a weighted resource record set for each instance. The name and the type are the same for both of the weighted resource record sets in each region.
When you have multiple resources in a region, you can create weighted or failover resource record sets for your resources. You can also create even more complex configurations by creating weighted alias or failover alias resource record sets that, in turn, refer to multiple resources.
Each weighted resource record set has an associated health check. The IP address for each health check matches the IP address for the corresponding resource record set. This isn't required, but it's the most common configuration.
For both latency alias resource record sets, you set the value of Evaluate Target Health to Yes.
You use the Evaluate Target Health setting for each latency alias resource record set to make Amazon Route 53 evaluate the health of the alias targets-the weighted resource record sets-and respond accordingly.
The preceding diagram illustrates the following sequence of events:
Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
Amazon Route 53 selects a weighted resource record set based on weight. Evaluate Target Health is Yes for the latency alias resource record set, so Amazon Route 53 checks the health of the selected weighted resource record set.
The health check failed, so Amazon Route 53 chooses another weighted resource record set based on weight and checks its health. That resource record set also is unhealthy.
Amazon Route 53 backs out of that branch of the tree, looks for the latency alias resource record set with the next-best latency, and chooses the resource record set for ap-southeast-2.
Amazon Route 53 again selects a resource record set based on weight, and then checks the health of the selected resource record set. The health check passed, so Amazon Route 53 returns the applicable value in response to the query.
What Happens When You Associate a Health Check with an Alias Resource Record Set?
You can associate a health check with an alias resource record set instead of or in addition to setting the value of Evaluate Target Health to Yes. However, it's generally more useful if Amazon Route 53 responds to queries based on the health of the underlying resources: the HTTP servers, database servers, and
other resources that your alias resource record sets refer to. For example, suppose the following configuration:
You assign a health check to a latency alias resource record set for which the alias target is a group of weighted resource record sets.
You set the value of Evaluate Target Health to Yes for the latency alias resource record set.
In this configuration, both of the following must be true before Amazon Route 53 will return the applicable value for a weighted resource record set:
The health check associated with the latency alias resource record set must pass.
At least one weighted resource record set must be considered healthy, either because it's associated with a health check that passes or because it's not associated with a health check. In the latter case, Amazon Route 53 always considers the weighted resource record set healthy.
If the health check for the latency alias resource record set fails, Amazon Route 53 stops responding to queries using any of the weighted resource record sets in the alias target, even if they're all healthy. Amazon Route 53 doesn't know the status of the weighted resource record sets because it never looks past the failed health check on the alias resource record set.
What Happens When You Omit Health Checks?
In a complex configuration, it's important to associate health checks with all of the non-alias resource record sets. Let's return to the preceding example, but assume that a health check is missing on one of the weighted resource record sets in the us-east-1 region:
Here's what happens when you omit a health check on a non-alias resource record set in this configuration:
Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
Amazon Route 53 looks up the alias target for the latency alias resource record set, and checks the status of the corresponding health checks. The health check for one weighted resource record set failed, so that resource record set is omitted from consideration.
The other weighted resource record set in the alias target for the us-east-1 region has no health check. The corresponding resource might or might not be healthy, but without a health check, Amazon Route 53 has no way to know. Amazon Route 53 assumes that the resource is healthy and returns the applicable value in response to the query.
What Happens When You Set Evaluate Target Health to No?
In general, you also want to set Evaluate Target Health to Yes for all of the alias resource record sets. In the following example, all of the weighted resource record sets have associated health checks, but Evaluate Target Health is set to No for the latency alias resource record set for the us-east-1 region:
Here's what happens when you set Evaluate Target Health to No for an alias resource record set in this configuration:
Amazon Route 53 receives a query for example.com. Based on the latency for the user making the request, Amazon Route 53 selects the latency alias resource record set for the us-east-1 region.
Amazon Route 53 determines what the alias target is for the latency alias resource record set, and checks the corresponding health checks. They're both failing.
Because the value of Evaluate Target Health is No for the latency alias resource record set for the us-east-1 region, Amazon Route 53 must choose one resource record set in this branch instead of backing out of the branch and looking for a healthy resource record set in the ap-southeast-2 region.
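As a hedged sketch of the configuration described above (the hosted zone ID, health check ID, set identifiers, and IP address are all placeholders), the following boto3 call creates one health-checked weighted record and a latency alias record with Evaluate Target Health enabled, matching answers B and E:

```python
import boto3

route53 = boto3.client("route53")
zone_id = "Z1EXAMPLE"   # placeholder hosted zone id

route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={"Changes": [
        {   # weighted record for one web server in us-east-1, tied to a health check (answer B)
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "us-east-1.example.com.",
                "Type": "A",
                "SetIdentifier": "us-east-1-server-a",
                "Weight": 50,
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",  # placeholder
            },
        },
        {   # latency alias for the region, evaluating the health of its alias target (answer E)
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",
                "SetIdentifier": "us-east-1-latency",
                "Region": "us-east-1",
                "AliasTarget": {
                    "HostedZoneId": zone_id,
                    "DNSName": "us-east-1.example.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]},
)
```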

NEW QUESTION 10
You have a periodic image analysis application that gets some files as input, analyzes them, and for each file writes some data in output to a text file. The number of input files per day is high and concentrated in a few hours of the day.
Currently you have a server on EC2 with a large EBS volume that hosts the input data and the results. It takes almost 20 hours per day to complete the process.
What services could be used to reduce the elaboration time and improve the availability of the solution?

  • A. S3 to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.
  • B. EBS with Provisioned IOPS (PIOPS) to store I/O files. SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
  • C. S3 to store I/O files, SNS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the number of SNS notifications.
  • D. EBS with Provisioned IOPS (PIOPS) to store I/O files. SQS to distribute elaboration commands to a group of hosts working in parallel. Auto Scaling to dynamically size the group of hosts depending on the length of the SQS queue.

Answer: D

Explanation: Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once attached, you can create a file system on top of these volumes, run a database, or use them in any other way you would use a block device. Amazon EBS volumes are placed in a specific Availability Zone, where they are automatically replicated to protect you from the failure of a single component.
Amazon EBS provides three volume types: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. The three volume types differ in performance characteristics and cost, so you can choose the right storage performance and price for the needs of your applications. All EBS volume types offer the same durable snapshot capabilities and are designed for 99.999% availability.
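For illustration, a minimal boto3 sketch that creates a Provisioned IOPS (io1) volume; the Availability Zone, size, and IOPS values are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioned IOPS (io1) volume with a fixed IOPS rate.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,          # GiB
    VolumeType="io1",
    Iops=4000,
)
```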

NEW QUESTION 11
You have a load balancer configured for VPC, and all back-end Amazon EC2 instances are in service. However, your web browser times out when connecting to the load balancer's DNS name. Which options are probable causes of this behavior? Choose 2 answers

  • A. The load balancer was not configured to use a public subnet with an Internet gateway configured
  • B. The Amazon EC2 instances do not have a dynamically allocated private IP address
  • C. The security groups or network ACLs are not properly configured for web traffic.
  • D. The load balancer is not configured in a private subnet with a NAT instance.
  • E. The VPC does not have a VGW configured.

Answer: AC

NEW QUESTION 12
Can I use Provisioned IOPS with VPC?

  • A. Only Oracle based RDS
  • B. No
  • C. Only with MSSQL based RDS
  • D. Yes for all RDS instances

Answer: D

NEW QUESTION 13
Amazon EC2 provides a repository of public data sets that can be seamlessly integrated into AWS cloud-based applications. What is the monthly charge for using the public data sets?

  • A. A one-time charge of $10 for all the datasets.
  • B. $1 per dataset per month
  • C. $10 per month for all the datasets
  • D. There is no charge for using the public data sets

Answer: D

NEW QUESTION 14
Your company has multiple IT departments, each with their own VPC. Some VPCs are located within the same AWS account, and others in a different AWS account. You want to peer together all VPCs to enable the IT departments to have full access to each other's resources. There are certain limitations placed on VPC peering. Which of the following statements is incorrect in relation to VPC peering?

  • A. Private DNS values cannot be resolved between instances in peered VPCs.
  • B. You can have up to 3 VPC peering connections between the same two VPCs at the same time.
  • C. You cannot create a VPC peering connection between VPCs in different regions.
  • D. You have a limit on the number of active and pending VPC peering connections that you can have per VPC.

Answer: B

Explanation: To create a VPC peering connection with another VPC, you need to be aware of the following limitations and rules:
You cannot create a VPC peering connection between VPCs that have matching or overlapping CIDR blocks.
You cannot create a VPC peering connection between VPCs in different regions.
You have a limit on the number of active and pending VPC peering connections that you can have per VPC. VPC peering does not support transitive peering relationships; in a VPC peering connection, your VPC will not have access to any other VPCs that the peer VPC may be peered with. This includes VPC peering connections that are established entirely within your own AWS account.
You cannot have more than one VPC peering connection between the same two VPCs at the same time. The Maximum Transmission Unit (MTU) across a VPC peering connection is 1500 bytes.
A placement group can span peered VPCs; however, you will not get full-bisection bandwidth between instances in peered VPCs.
Unicast reverse path forwarding in VPC peering connections is not supported.
You cannot reference a security group from the peer VPC as a source or destination for ingress or egress rules in your security group. Instead, reference CIDR blocks of the peer VPC as the source or destination of your security group's ingress or egress rules.
Private DNS values cannot be resolved between instances in peered VPCs. Reference:
http://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/vpc-peering-overview.html#vpc-peering-limitations
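As a hedged sketch, the following boto3 calls request and accept a VPC peering connection; the VPC IDs and account ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection between two VPCs; the requester and
# accepter VPC ids (and the peer account id) are placeholders.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",
    PeerVpcId="vpc-22222222",
    PeerOwnerId="123456789012",   # omit if both VPCs are in the same account
)

# The owner of the peer VPC must then accept the request.
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)
```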

NEW QUESTION 15
How are the EBS snapshots saved on Amazon S3?

  • A. Exponentially
  • B. Incrementally
  • C. EBS snapshots are not stored in Amazon S3
  • D. Decrementally

Answer: B
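For illustration, a minimal boto3 sketch that takes a snapshot of an EBS volume; each snapshot after the first stores only the blocks that changed since the previous snapshot, and the data is stored in Amazon S3 (the volume ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Incremental snapshot of an existing EBS volume.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # placeholder volume id
    Description="nightly incremental snapshot",
)
```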

NEW QUESTION 16
You want to establish a dedicated network connection from your premises to AWS in order to save money by transferring data directly to AWS rather than through your internet service provider. You are sure there must be some other benefits beyond cost savings. Which of the following would not be considered a benefit if you were to establish such a connection?

  • A. Elasticity
  • B. Compatibility with all AWS services.
  • C. Private connectivity to your Amazon VPC.
  • D. Everything listed is a benefit.

Answer: D

Explanation: AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS.
Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
You could expect the following benefits if you use AWS Direct Connect:
  • Reduced bandwidth costs
  • Consistent network performance
  • Compatibility with all AWS services
  • Private connectivity to your Amazon VPC
  • Elasticity
  • Simplicity
Reference: http://aws.amazon.com/directconnect/

100% Valid and Newest Version AWS-Solution-Architect-Associate Questions & Answers shared by 2passeasy, Get Full Dumps HERE: https://www.2passeasy.com/dumps/AWS-Solution-Architect-Associate/ (New 672 Q&As)