Examcollection offers a free demo for the DBS-C01 exam. "AWS Certified Database - Specialty", also known as the DBS-C01 exam, is an Amazon Web Services certification. This set of posts, Passing the Amazon Web Services DBS-C01 exam, will help you answer those questions. The DBS-C01 Questions & Answers covers all the knowledge points of the real exam. 100% real Amazon Web Services DBS-C01 exam questions, revised by experts!

We also have free DBS-C01 demo questions for you:

NEW QUESTION 1
A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.
Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

  • A. Update the log_connections parameter in the default parameter group
  • B. Create a custom parameter group, update the log_connections parameter, and associate the parameter group with the DB instance
  • C. Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days
  • D. Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days
  • E. Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file

Answer: BC
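
For reference, a minimal boto3 sketch of options B and C; the instance name, parameter group name, and engine family are hypothetical:

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# B: custom parameter group with connection logging turned on
rds.create_db_parameter_group(
    DBParameterGroupName="pg-conn-logging",      # hypothetical name
    DBParameterGroupFamily="postgres11",         # match the engine version
    Description="Enable connection logging",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="pg-conn-logging",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",              # dynamic parameter, no reboot
    }],
)
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",                 # hypothetical instance
    DBParameterGroupName="pg-conn-logging",
    # C: publish the postgresql log to CloudWatch Logs
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
    ApplyImmediately=True,
)
# C: keep the exported log events for 180 days
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/mydb/postgresql",
    retentionInDays=180,
)
```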

NEW QUESTION 2
A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. Looking at the tables, a Database Specialist notices that much of the data is months old, dating back to when the application was first deployed.
What can the Database Specialist do to reduce the overall cost?

  • A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
  • B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
  • C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
  • D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Answer: C
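
A minimal boto3 sketch of the TTL approach; the table and attribute names are hypothetical:

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL; DynamoDB deletes items whose "expires_at" attribute
# (epoch seconds) is in the past, at no extra cost.
dynamodb.update_time_to_live(
    TableName="transactions",                    # hypothetical table
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# When writing items, stamp each one with its expiration time.
expires_at = int(time.time()) + 2 * 24 * 3600   # now + 2 days
```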

NEW QUESTION 3
A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database.
Which approach will MOST effectively meet these requirements?

  • A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the source Oracle database schemas to the target Aurora DB cluster. Verify the data types of the columns.
  • B. Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
  • C. Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
  • D. Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.

Answer: D
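
A sketch of enabling DMS data validation on an existing task through its task settings, assuming a hypothetical task ARN; boto3 expects the settings as a JSON string:

```python
import json
import boto3

dms = boto3.client("dms")

# Data validation makes DMS row-compare source and target after full
# load and during CDC, reporting mismatches per table.
settings = {"ValidationSettings": {"EnableValidation": True, "ThreadCount": 5}}
dms.modify_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",  # hypothetical
    ReplicationTaskSettings=json.dumps(settings),
)
```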

NEW QUESTION 4
A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.
What is the quickest way for the company to gather data on the migration compatibility?

  • A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing row counts from source and target tables.
  • B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the objects migrated by comparing the row counts from source and target tables.
  • C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
  • D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Answer: D

NEW QUESTION 5
An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.
What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

  • A. Increase the size of the DB instance storage
  • B. Change the underlying EBS storage type to General Purpose SSD (gp2)
  • C. Disable EBS optimization on the DB instance
  • D. Change the DB instance to an instance class with a higher maximum bandwidth

Answer: D
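
A sketch of the fix with a hypothetical instance identifier; the symptoms (IOPS underused, read throughput pinned at its maximum) point to an instance bandwidth bottleneck rather than an IOPS, CPU, or memory one:

```python
import boto3

rds = boto3.client("rds")

# Move to an instance class with a higher maximum EBS bandwidth.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",                 # hypothetical
    DBInstanceClass="db.r5.4xlarge",             # example larger class
    ApplyImmediately=True,
)
```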

NEW QUESTION 6
A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses.
What should a Database Specialist do to meet these requirements with minimal effort?

  • A. Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
  • B. Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
  • C. Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
  • D. Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.

Answer: B
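
A minimal sketch of option B for one instance, with hypothetical names; the retention call then runs over every RDS log group:

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Publish the engine logs to CloudWatch Logs.
rds.modify_db_instance(
    DBInstanceIdentifier="mysql-db",             # hypothetical
    CloudwatchLogsExportConfiguration={
        "EnableLogTypes": ["error", "general", "slowquery"]},
    ApplyImmediately=True,
)

# Cap retention at 90 days on every RDS log group.
paginator = logs.get_paginator("describe_log_groups")
for page in paginator.paginate(logGroupNamePrefix="/aws/rds/"):
    for group in page["logGroups"]:
        logs.put_retention_policy(
            logGroupName=group["logGroupName"], retentionInDays=90)
```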

NEW QUESTION 7
A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed.
Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)

  • A. Review the stack drift before modifying the template
  • B. Create and review a change set before applying it
  • C. Export the database resources as stack outputs
  • D. Define the database resources in a nested stack
  • E. Set a stack policy for the database resources

Answer: BE
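
A sketch of both steps with hypothetical stack and resource names; the stack policy (E) blocks updates to the database resource, and the change set (B) lets the teams review planned changes before execution:

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# E: deny Update actions against the RDS resource's logical ID.
policy = {
    "Statement": [
        {"Effect": "Allow", "Action": "Update:*",
         "Principal": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "Update:*",
         "Principal": "*", "Resource": "LogicalResourceId/ProductionDatabase"},
    ]
}
cfn.set_stack_policy(StackName="webapp-stack",       # hypothetical stack
                     StackPolicyBody=json.dumps(policy))

# B: stage the Application team's template as a change set for review.
cfn.create_change_set(
    StackName="webapp-stack",
    ChangeSetName="load-test-capacity",
    TemplateBody=open("template.yaml").read(),
    Capabilities=["CAPABILITY_IAM"],
)
```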

NEW QUESTION 8
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.
Where should the AWS DMS replication instance be placed for the MOST optimal performance?

  • A. In the same Region and VPC of the source DB instance
  • B. In the same Region and VPC as the target DB instance
  • C. In the same VPC and Availability Zone as the target DB instance
  • D. In the same VPC and Availability Zone as the source DB instance

Answer: B
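
A sketch of creating the replication instance in the target Region and VPC, with hypothetical identifiers:

```python
import boto3

# Place the replication instance in us-west-2, inside a subnet group
# that belongs to the target DB instance's VPC.
dms = boto3.client("dms", region_name="us-west-2")
dms.create_replication_instance(
    ReplicationInstanceIdentifier="oracle-to-postgres",     # hypothetical
    ReplicationInstanceClass="dms.c5.xlarge",
    ReplicationSubnetGroupIdentifier="target-vpc-subnets",  # hypothetical subnet group
)
```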

NEW QUESTION 9
A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas.
How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?

  • A. Set the TCP keepalive parameters low
  • B. Call the AWS CLI failover-db-cluster command
  • C. Enable Enhanced Monitoring on the DB cluster
  • D. Start a database activity stream on the DB cluster

Answer: A
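
A sketch of an aggressive client-side keepalive configuration using psycopg2; the endpoint and credentials are placeholders. Low keepalive values let clients detect a failed primary in seconds instead of waiting for OS defaults:

```python
import psycopg2

conn = psycopg2.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical
    dbname="app",
    user="app_user",
    password="...",            # placeholder
    connect_timeout=2,
    keepalives=1,
    keepalives_idle=1,         # seconds of idle before the first probe
    keepalives_interval=1,     # seconds between probes
    keepalives_count=5,        # failed probes before the connection drops
)
```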

NEW QUESTION 10
A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions.
Which solution would meet these requirements and deploy the DynamoDB tables?

  • A. Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
  • B. Create an AWS CloudFormation template and deploy the template to all the Regions.
  • C. Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
  • D. Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.

Answer: C
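
A sketch of the stack set approach with hypothetical names; updating the stack set later propagates configuration changes to every Region:

```python
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack_set(
    StackSetName="high-scores",                  # hypothetical
    TemplateBody=open("dynamodb-table.yaml").read(),
)
cfn.create_stack_instances(
    StackSetName="high-scores",
    Accounts=["123456789012"],                   # hypothetical account
    Regions=["us-east-1", "eu-west-1", "ap-northeast-1"],
)
```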

NEW QUESTION 11
A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.
Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

  • A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.
  • B. Increase the size of the ElastiCache cluster nodes to a larger instance size.
  • C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.
  • D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.

Answer: B
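
A sketch of scaling the node type up, with a hypothetical replication group ID; with cluster mode disabled, all writes go to the single primary, so write headroom comes from a larger node:

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.modify_replication_group(
    ReplicationGroupId="leaderboard",            # hypothetical
    CacheNodeType="cache.r5.2xlarge",            # larger node class
    ApplyImmediately=True,
)
```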

NEW QUESTION 12
A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.
How should a Database Specialist ensure DynamoDB can handle the increased traffic?

  • A. Ensure the table is always provisioned to meet peak needs
  • B. Allow burst capacity to handle the additional load
  • C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
  • D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Answer: D
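
A sketch of pre-provisioning with hypothetical numbers; the same call with lower values dials capacity back down after the event:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Raise provisioned capacity ahead of the known 3-day event; burst
# capacity alone cannot absorb a sustained 10x increase.
dynamodb.update_table(
    TableName="orders",                          # hypothetical
    ProvisionedThroughput={
        "ReadCapacityUnits": 10000,              # ~10x the normal baseline
        "WriteCapacityUnits": 10000,
    },
)
```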

NEW QUESTION 13
A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.
Which solution meets these requirements in the MOST efficient way?

  • A. Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
  • B. Use Amazon DynamoDB as the database and use DynamoDB Accelerator
  • C. Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache
  • D. Use Amazon DynamoDB as the database and use Amazon API Gateway

Answer: B
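
A sketch of reading through DAX, assuming the amazon-dax-client package and a hypothetical cluster endpoint; DAX is a write-through cache that serves repeat reads in microseconds with the same item-level API as DynamoDB:

```python
import boto3
from amazondax import AmazonDaxClient   # pip install amazon-dax-client

session = boto3.session.Session()
dax = AmazonDaxClient(
    session,
    endpoints=["mydax.abc123.dax-clusters.us-east-1.amazonaws.com:8111"],  # hypothetical
)
resp = dax.get_item(
    TableName="rides",                           # hypothetical table
    Key={"ride_id": {"S": "r-1001"}},
)
```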

NEW QUESTION 14
The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.
Which approach will meet these requirements?

  • A. Use pg_audit to generate audit logs and send the logs to the Security team.
  • B. Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
  • C. Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
  • D. Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.

Answer: C
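
A sketch of starting an activity stream on the cluster; the ARN and key alias are hypothetical. The stream pushes KMS-encrypted audit events into Amazon Kinesis for external consumers:

```python
import boto3

rds = boto3.client("rds")

rds.start_activity_stream(
    ResourceArn="arn:aws:rds:us-east-1:123456789012:cluster:prod-aurora",  # hypothetical
    Mode="async",                        # async favors database performance
    KmsKeyId="alias/audit-stream-key",   # hypothetical CMK
    ApplyImmediately=True,
)
```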

NEW QUESTION 15
A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.
Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

  • A. Enable in-transit and at-rest encryption on the ElastiCache cluster.
  • B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
  • C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
  • D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
  • E. Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster’s security group.
  • F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.

Answer: ACF
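
A sketch of options A and F at cluster creation time, with hypothetical names; option C is enforced separately through the security group rules:

```python
import boto3

elasticache = boto3.client("elasticache")

elasticache.create_replication_group(
    ReplicationGroupId="shared-cache",           # hypothetical
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    CacheNodeType="cache.r5.large",
    NumNodeGroups=3,                             # cluster mode enabled (sharded)
    ReplicasPerNodeGroup=1,
    Port=6379,
    TransitEncryptionEnabled=True,               # A: in-transit encryption
    AtRestEncryptionEnabled=True,                # A: at-rest encryption
    AuthToken="use-a-long-random-secret",        # F: Redis AUTH token (placeholder)
)
```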

NEW QUESTION 16
A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created.
What is the most likely reason for this?

  • A. The source DB instance has to be converted to Single-AZ first to create a read replica from it.
  • B. Enhanced Monitoring is not enabled on the source DB instance.
  • C. The minor MySQL version in the source DB instance does not support read replicas.
  • D. Automated backups are not enabled on the source DB instance.

Answer: D
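
A sketch of the remedy with hypothetical identifiers; a backup retention period of 0 disables automated backups and prevents read replica creation:

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="mysql-prod",           # hypothetical
    BackupRetentionPeriod=7,                     # any value > 0 enables backups
    ApplyImmediately=True,
)
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mysql-prod-replica",
    SourceDBInstanceIdentifier="mysql-prod",
)
```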

NEW QUESTION 17
A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL.
The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop.
How should the Database Specialist edit the script to fix this issue?

  • A. Stop the source instances before stopping their read replicas
  • B. Delete each read replica before stopping its corresponding source instance
  • C. Stop the read replicas before stopping their source instances
  • D. Use the AWS CLI to stop each read replica and source instance at the same time

Answer: B
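
A sketch of the corrected script logic; RDS refuses to stop an instance that still has read replicas, so the replicas are deleted first (a real script should wait for each deletion to finish before stopping the source):

```python
import boto3

rds = boto3.client("rds")

for db in rds.describe_db_instances()["DBInstances"]:
    # Delete this instance's read replicas before stopping it.
    for replica_id in db.get("ReadReplicaDBInstanceIdentifiers", []):
        rds.delete_db_instance(
            DBInstanceIdentifier=replica_id,
            SkipFinalSnapshot=True,              # replicas can be recreated later
        )
    # Stop only instances that are not themselves replicas.
    if not db.get("ReadReplicaSourceDBInstanceIdentifier"):
        rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
```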

NEW QUESTION 18
A Database Specialist is creating a new Amazon Neptune DB cluster and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:
“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”
Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

  • A. Check that Amazon S3 has an IAM role granting read access to Neptune
  • B. Check that an Amazon S3 VPC endpoint exists
  • C. Check that a Neptune VPC endpoint exists
  • D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
  • E. Check that Neptune has an IAM role granting read access to Amazon S3

Answer: BE
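
A sketch of a bulk loader request, assuming a hypothetical cluster endpoint and an IAM role attached to the Neptune cluster; the load also requires an S3 gateway VPC endpoint in the cluster's VPC:

```python
import requests

loader = "https://my-neptune.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/loader"  # hypothetical
resp = requests.post(loader, json={
    "source": "s3://mybucket/graphdata/",
    "format": "csv",
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # hypothetical role
    "region": "us-east-1",
    "failOnError": "TRUE",
})
print(resp.json())    # returns a loadId that can be polled for status
```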

NEW QUESTION 19
A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume.
Which AWS services should the Database Specialist consider? (Choose two.)

  • A. Amazon DynamoDB
  • B. Amazon Redshift
  • C. Amazon Neptune
  • D. Amazon Elasticsearch Service
  • E. Amazon ElastiCache

Answer: AE

NEW QUESTION 20
A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

  • A. Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
  • B. Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
  • C. Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
  • D. Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.

Answer: C
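
A sketch of the console steps expressed in boto3, with hypothetical names; the copy grant is created against a KMS key in the destination Region, then referenced when snapshot copy is enabled on the source cluster:

```python
import boto3

# In the destination Region: create the snapshot copy grant.
redshift_dest = boto3.client("redshift", region_name="us-west-2")
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",       # hypothetical
    KmsKeyId="alias/redshift-dr-key",            # hypothetical key in us-west-2
)

# In the source Region: enable cross-Region snapshot copy using the grant.
redshift_src = boto3.client("redshift", region_name="us-east-1")
redshift_src.enable_snapshot_copy(
    ClusterIdentifier="finance-cluster",         # hypothetical
    DestinationRegion="us-west-2",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)
```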

NEW QUESTION 21
A company has a web-based survey application that uses Amazon DynamoDB. During peak usage, when survey responses are being collected, a Database Specialist sees the ProvisionedThroughputExceededException error.
What can the Database Specialist do to resolve this error? (Choose two.)

  • A. Change the table to use Amazon DynamoDB Streams
  • B. Purchase DynamoDB reserved capacity in the affected Region
  • C. Increase the write capacity units for the specific table
  • D. Change the table capacity mode to on-demand
  • E. Change the table type to throughput optimized

Answer: CD
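
A sketch of option D with a hypothetical table name; on-demand mode removes provisioned limits entirely, while option C would instead raise WriteCapacityUnits within provisioned mode:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="survey-responses",                # hypothetical
    BillingMode="PAY_PER_REQUEST",               # switch to on-demand capacity
)
```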

NEW QUESTION 22
A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest.
Which step will provide additional security?

  • A. Set up NACLs that allow the entire EC2 subnet to access the DB instance
  • B. Disable the master user account
  • C. Set up a security group that blocks SSH to the DB instance
  • D. Set up RDS to use SSL for data in transit

Answer: D
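
A sketch of a TLS-verified client connection using PyMySQL; the endpoint and CA bundle path are placeholders (the bundle is downloadable from AWS). The server side can additionally enforce TLS with the require_secure_transport parameter:

```python
import pymysql

conn = pymysql.connect(
    host="finance-db.abc123.us-east-1.rds.amazonaws.com",  # hypothetical
    user="app_user",
    password="...",                              # placeholder
    database="finance",
    ssl={"ca": "/opt/certs/global-bundle.pem"},  # RDS CA bundle path
)
```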

NEW QUESTION 23
A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.
Which combination of actions should the Database Specialist take? (Choose three.)

  • A. Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.
  • B. Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.
  • C. Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.
  • D. Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.
  • E. Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.
  • F. Configure the AWS Managed Microsoft AD domain controller Security Group.

Answer: BCF
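
A sketch of step B with hypothetical identifiers; the instance is joined to the AWS Managed Microsoft AD (which holds the trust to the corporate AD) without being stopped, and Windows-authenticated logins are then created inside SQL Server:

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-prod",           # hypothetical
    Domain="d-1234567890",                           # hypothetical directory ID
    DomainIAMRoleName="rds-directoryservice-role",   # role permitting the join
    ApplyImmediately=True,
)
# Then, inside SQL Server: CREATE LOGIN [CORP\jane] FROM WINDOWS;
```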

NEW QUESTION 24
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.
Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?

  • A. Deploy multiple read replicas and have the team members make changes to separate replica instances
  • B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
  • C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
  • D. Enable the Amazon RDS for MySQL Backtrack feature

Answer: C
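
A sketch of a backtrack call with hypothetical values; the cluster must have been created or modified with a nonzero backtrack window, after which it can be rewound in place in minutes, without a restore:

```python
import datetime
import boto3

rds = boto3.client("rds")

rds.backtrack_db_cluster(
    DBClusterIdentifier="dev-aurora-mysql",      # hypothetical
    BacktrackTo=datetime.datetime(
        2020, 1, 15, 9, 30, tzinfo=datetime.timezone.utc),
)
```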

NEW QUESTION 25
A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database.
The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company’s code repository. The company also needs to meet a compliance requirement by routinely rotating its database master password for production.
What is the most secure solution to store the master password?

  • A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.
  • B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.
  • C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
  • D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.

Answer: C
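
A sketch of the dynamic reference in a template body, with a hypothetical secret name; the password never appears in the version-controlled template, and Secrets Manager handles rotation:

```python
import json
import boto3

template = {
    "Resources": {
        "AuroraCluster": {
            "Type": "AWS::RDS::DBCluster",
            "Properties": {
                "Engine": "aurora-mysql",
                "MasterUsername": "{{resolve:secretsmanager:prod/aurora:SecretString:username}}",
                "MasterUserPassword": "{{resolve:secretsmanager:prod/aurora:SecretString:password}}",
            },
        }
    }
}
boto3.client("cloudformation").create_stack(
    StackName="aurora-prod", TemplateBody=json.dumps(template))
```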

NEW QUESTION 26
......

Recommended! Get the full DBS-C01 dumps in VCE and PDF from 2passeasy. Welcome to download: https://www.2passeasy.com/dumps/DBS-C01/ (New 85 Q&As Version)