Practice Test

True or False: The read-intensive data access pattern requires more IOPS than the write-intensive data access pattern.

  • True
  • False

Answer: False

Explanation: The write-intensive data access pattern typically requires more IOPS (input/output operations per second) than the read-intensive one. Writes usually involve extra work on the storage side, such as updating indexes, journaling, and replicating data, so each write consumes more I/O than a comparable read.

What type of data access pattern would be suitable for a logging system that constantly writes logs to storage?

  • A. Read-intensive
  • B. Write-intensive
  • C. Random
  • D. Sequential

Answer: B. Write-intensive

Explanation: A logging system that constantly writes logs to storage is a typical example of a write-intensive system.

True or False: A database that frequently reads data but seldom writes can be categorized as a write-intensive data access pattern.

  • True
  • False

Answer: False

Explanation: Such a database is an example of a read-intensive data access pattern, as it predominantly involves read operations.

Multiple Select: Which AWS services are suitable for write-intensive data access patterns?

  • A. Amazon S3
  • B. Amazon DynamoDB
  • C. AWS Lambda
  • D. Amazon RDS
  • E. Amazon EC2

Answer: B. Amazon DynamoDB, D. Amazon RDS

Explanation: Amazon DynamoDB and Amazon RDS are managed database services that provide high-performance write capabilities, making them suitable for write-intensive data access patterns.

True or False: When selecting between read-intensive and write-intensive data access patterns, the size of the data doesn’t matter.

  • True
  • False

Answer: False

Explanation: The size of the data does matter. For example, large volumes of data may be better suited to a read-intensive data access pattern, while smaller volumes may be better suited to a write-intensive one.

For a social media application that frequently pulls data, which type of data access pattern is most suitable?

  • A. Read-intensive
  • B. Write-intensive
  • C. Random
  • D. Sequential

Answer: A. Read-intensive

Explanation: As the application frequently pulls (reads) data, a read-intensive data access pattern would be most suitable.

Which of the following AWS services would be suitable for large-scale, read-intensive data access patterns?

  • A. Amazon S3
  • B. Amazon SQS
  • C. Amazon Redshift
  • D. AWS Glue

Answer: C. Amazon Redshift

Explanation: Amazon Redshift is designed to handle large-scale, read-intensive data workloads.

True or False: A read-heavy data access pattern on DynamoDB should have provisioned read capacity units higher than write capacity units.

  • True
  • False

Answer: True

Explanation: If your application is read-heavy, you should provision more read capacity units for it.
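
For instance, a read-heavy table might be created with substantially more read than write capacity. Below is a minimal boto3 sketch; the table name, key schema, and capacity figures are hypothetical.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical read-heavy table: 100 read capacity units vs. 10 write capacity units.
dynamodb.create_table(
    TableName="user-profiles",  # hypothetical name
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,  # provisioned higher for read-heavy access
        "WriteCapacityUnits": 10,
    },
)
```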

Which access pattern is suitable for a real-time bidding system where bid and ask prices are constantly updated?

  • A. Read-Intensive
  • B. Write-Intensive
  • C. Random
  • D. Sequential

Answer: B. Write-Intensive

Explanation: A real-time bidding system that constantly updates data is indicative of a write-intensive access pattern.

True or False: Provisioned throughput in DynamoDB can be adjusted in response to predictable or unpredictable changes in read and write activity.

  • True
  • False

Answer: True

Explanation: Provisioned throughput in DynamoDB is designed to be flexible, so it can be adjusted as an application’s read and write activity changes, whether those changes are predictable or not.
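
For predictable changes, a table’s provisioned throughput can be raised or lowered with an UpdateTable call (DynamoDB auto scaling can handle less predictable traffic automatically). A minimal boto3 sketch, reusing the hypothetical table above:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Scale reads up ahead of an anticipated traffic spike.
dynamodb.update_table(
    TableName="user-profiles",  # hypothetical name
    ProvisionedThroughput={
        "ReadCapacityUnits": 500,
        "WriteCapacityUnits": 10,
    },
)
```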

Which among the following is a good strategy to handle write-intensive workloads on Amazon DynamoDB?

  • A. Increase read capacity units
  • B. Choose on-demand capacity mode
  • C. Vertically scale your EC2 instances
  • D. Use S3 instead of DynamoDB

Answer: B. Choose on-demand capacity mode

Explanation: The on-demand capacity mode automatically allocates capacity as required, making it a good choice for handling variable workloads with high write activity.
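
A minimal boto3 sketch of creating a table in on-demand mode (the table name and key are hypothetical); note that the BillingMode setting replaces any ProvisionedThroughput configuration:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# On-demand mode: DynamoDB allocates capacity automatically per request.
dynamodb.create_table(
    TableName="bid-events",  # hypothetical name
    AttributeDefinitions=[{"AttributeName": "bid_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "bid_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",  # on-demand capacity mode
)
```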

Interview Questions

What is storage tiering in AWS?

Storage tiering in AWS is a method of storing data across various types of storage media with different performance characteristics to balance cost and performance. This could include Amazon S3 Standard for frequently accessed data, S3 Standard-IA for less frequently accessed data, and S3 Glacier or S3 Glacier Deep Archive for long-term archiving.

What does cold tiering imply in terms of Object Storage on AWS?

Cold tiering refers to the use of the Amazon S3 Glacier or S3 Glacier Deep Archive storage classes, which are designed for long-term storage of data that is accessed infrequently and can tolerate retrieval latency.

How are S3 storage classes used in relation to storage tiering?

Amazon S3 storage classes provide options for tiered storage. For instance, S3 Standard is used for frequently accessed data, S3 Intelligent-Tiering optimizes costs by automatically moving data to the most cost-effective access tier, and the S3 Glacier and S3 Glacier Deep Archive tiers are used for lower-cost long-term archiving.
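
A storage class can also be chosen per object at upload time. A minimal boto3 sketch with a hypothetical bucket and keys:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # hypothetical bucket name

# The same data can land in different tiers depending on how it will be accessed.
s3.put_object(Bucket=bucket, Key="hot/report.csv", Body=b"example data",
              StorageClass="STANDARD")             # frequent access
s3.put_object(Bucket=bucket, Key="auto/report.csv", Body=b"example data",
              StorageClass="INTELLIGENT_TIERING")  # unknown or changing access
s3.put_object(Bucket=bucket, Key="cold/report.csv", Body=b"example data",
              StorageClass="DEEP_ARCHIVE")         # long-term archive
```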

What AWS service could be used to automate storage tiering?

Amazon S3 Lifecycle policies can be used to automate moving objects between storage tiers based on the age of the data.
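
A minimal boto3 sketch of such an age-based lifecycle rule (the bucket name and prefix are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-logs",
            "Filter": {"Prefix": "logs/"},  # apply only to objects under logs/
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # warm after 30 days
                {"Days": 90, "StorageClass": "GLACIER"},      # cold after 90 days
            ],
        }],
    },
)
```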

What is the retrieval time for data stored in AWS cold storage tiers like Glacier?

Amazon S3 Glacier retrieval times range from a few minutes to several hours, depending on the retrieval option selected (Expedited, Standard, or Bulk).

What are the pricing implications of accessing data in cold storage tiers?

Accessing data in cold storage tiers such as S3 Glacier or S3 Glacier Deep Archive incurs additional data retrieval costs, which vary depending on the retrieval speed option selected.

In which scenario can the S3 One Zone-IA storage class be ideally used?

S3 One Zone-IA is ideal for use cases with infrequently accessed, non-critical data that can withstand the loss of a single Availability Zone.

How does S3 Intelligent-Tiering storage class simplify storage tiering?

S3 Intelligent-Tiering automatically moves data between two access tiers – one for frequent access and one for infrequent access – based on observed access patterns. This optimization comes with no retrieval fees, which can result in significant cost savings.

Does S3 Intelligent-Tiering storage class move data to Glacier automatically?

No, by default S3 Intelligent-Tiering only moves data automatically between the frequent and infrequent access tiers. Moving data to Glacier or Glacier Deep Archive must be configured separately, either by opting in to the Intelligent-Tiering archive access tiers or with a lifecycle rule.

What is the optimal AWS storage class for data archiving and backup?

For long-term archiving and backup where immediate retrieval is not a concern, Amazon S3 Glacier and S3 Glacier Deep Archive are the most cost-effective storage classes.

What is Amazon S3’s archival storage solution?

Amazon S3 Glacier is the archival storage solution offered by Amazon S3; it is highly durable and extremely low-cost.

What is a lifecycle configuration rule in Amazon S3?

A lifecycle configuration rule defines the actions that Amazon S3 applies to a group of objects. You can use lifecycle configuration rules to transition objects to other storage classes, archive them, or expire (delete) them.
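
A single rule can combine these actions. A boto3 sketch (the bucket, prefix, and retention periods are hypothetical) that first archives objects and later deletes them:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # hypothetical name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            # Archive after one year...
            "Transitions": [{"Days": 365, "StorageClass": "DEEP_ARCHIVE"}],
            # ...then delete after roughly seven years.
            "Expiration": {"Days": 2555},
        }],
    },
)
```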

How long does it take to recover my data from AWS Glacier?

Recovery times for Amazon S3 Glacier range from a few minutes to up to 12 hours, depending on the type of retrieval request made – Expedited, Standard, or Bulk.
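
Objects in a Glacier storage class must be restored before they can be read, and the restore request names the retrieval tier. A minimal boto3 sketch (the bucket and key are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# Make an archived object temporarily readable for 7 days.
s3.restore_object(
    Bucket="example-bucket",         # hypothetical name
    Key="backups/2020/archive.tar",  # hypothetical key
    RestoreRequest={
        "Days": 7,
        # Tier may be "Expedited", "Standard", or "Bulk".
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```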

Is it possible to move a subset of my data to different storage tiers based on a schedule?

Yes, with Amazon S3 lifecycle policies you can define rules that transition objects between storage classes based on object age or on a specific date.

Is the data stored in Amazon S3 Glacier secure and durable?

Yes, Amazon S3 Glacier is designed for 99.999999999% (11 nines) durability, and data is protected at rest using the 256-bit Advanced Encryption Standard (AES-256).
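
For example, server-side encryption with AES-256 (SSE-S3) can be requested explicitly when uploading an object to a Glacier storage class; a minimal boto3 sketch with hypothetical names:

```python
import boto3

s3 = boto3.client("s3")

# Store an archive object encrypted at rest with SSE-S3 (AES-256).
s3.put_object(
    Bucket="example-bucket",     # hypothetical name
    Key="archive/records.json",  # hypothetical key
    Body=b"example data",
    StorageClass="GLACIER",
    ServerSideEncryption="AES256",
)
```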
