FREE DevOps Engineering on AWS Certification Trivia Questions and Answers
A corporation stores the source code for an application in AWS CodeCommit. The firm is building a CI/CD pipeline for the application with AWS CodePipeline. The pipeline must start automatically when changes are made to the main branch of the CodeCommit repository. Because changes occur daily, the pipeline must be as responsive as possible.
What should a DevOps engineer do to meet these requirements?
The update to the main branch generates an event, and an Amazon EventBridge rule that matches the event starts the pipeline. Option D's periodic checks would work, but they would not start the pipeline until the next periodic check occurs. Option B describes a feature that AWS CodeCommit does not support. Option A is not a valid way to start the pipeline.
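For illustration, a minimal boto3 sketch of such an EventBridge rule, assuming placeholder repository, pipeline, and role ARNs:

```python
import json
import boto3

events = boto3.client("events")

# Match pushes (reference updates) to the main branch of the repository.
pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:us-east-1:111122223333:my-app-repo"],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["main"],
    },
}

events.put_rule(
    Name="start-pipeline-on-main-push",
    EventPattern=json.dumps(pattern),
)

# The role must allow events.amazonaws.com to call
# codepipeline:StartPipelineExecution on the pipeline.
events.put_targets(
    Rule="start-pipeline-on-main-push",
    Targets=[{
        "Id": "pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:111122223333:my-app-pipeline",
        "RoleArn": "arn:aws:iam::111122223333:role/eventbridge-start-pipeline",
    }],
)
```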
A development team is working on an application that will serve a large number of users across three AWS Regions. To provide low-latency data access, the application will use an Amazon DynamoDB table that must be available in all three Regions. When the table is updated in one Region, the changes must propagate seamlessly to the other Regions.
What should a DevOps engineer do to configure the table to meet these requirements with the LEAST operational overhead?
Amazon DynamoDB global tables begin as single-Region tables and can be expanded into multi-Region, multi-active tables. Global tables give Region-specific workloads low-latency data access without requiring you to build or manage a replication solution.
Option D is inappropriate because using a separate table in each Region would require adopting an additional replication solution. Option C is inappropriate because building and managing a synchronization mechanism across the tables would add unnecessary operational overhead. Option A is incorrect because global tables are multi-Region, multi-active tables, not read replicas.
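A minimal boto3 sketch, assuming placeholder table and Region names, of creating a table with streams enabled and then adding replicas to make it a global table (version 2019.11.21):

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Create the base table; global tables require DynamoDB Streams
# with the NEW_AND_OLD_IMAGES view type.
ddb.create_table(
    TableName="orders",
    AttributeDefinitions=[{"AttributeName": "pk", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "pk", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)
ddb.get_waiter("table_exists").wait(TableName="orders")

# Add replicas in the other two Regions; DynamoDB manages replication.
ddb.update_table(
    TableName="orders",
    ReplicaUpdates=[
        {"Create": {"RegionName": "eu-west-1"}},
        {"Create": {"RegionName": "ap-southeast-1"}},
    ],
)
```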
A business has a legacy API that is served by a fleet of Amazon EC2 instances behind a public Application Load Balancer (ALB). The ALB has access logging enabled and stores the logs in Amazon S3. The API is accessed using the hostname api.example.com, which the firm manages through Amazon Route 53.
Developers have rebuilt five API endpoints, using a separate AWS Lambda function for each endpoint. A DevOps engineer wants to test the new Lambda functions with a small number of random users. To guarantee compatibility with an existing log processing service, the test must not change the ALB access logs.
How should the DevOps engineer conduct the test to meet these objectives?
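One approach that keeps every request flowing through the ALB (so its access logs are unchanged) is a weighted forward action on the ALB listener with a Lambda target group. A minimal boto3 sketch, assuming placeholder ARNs and a single rebuilt endpoint:

```python
import boto3

elbv2 = boto3.client("elbv2")
lam = boto3.client("lambda")

LAMBDA_ARN = "arn:aws:lambda:us-east-1:111122223333:function:endpoint-v2"

# A Lambda target group holds exactly one function as its target.
tg = elbv2.create_target_group(Name="endpoint-v2-tg", TargetType="lambda")
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# The ALB must be allowed to invoke the function.
lam.add_permission(
    FunctionName=LAMBDA_ARN,
    StatementId="alb-invoke",
    Action="lambda:InvokeFunction",
    Principal="elasticloadbalancing.amazonaws.com",
    SourceArn=tg_arn,
)
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": LAMBDA_ARN}])

# Weighted forward: 95% of requests to the existing EC2 target group,
# 5% to the Lambda target group, all through the same ALB.
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/api-alb/abc123/def456",
    DefaultActions=[{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/ec2-tg/abc123", "Weight": 95},
                {"TargetGroupArn": tg_arn, "Weight": 5},
            ]
        },
    }],
)
```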
A business is building an application with AWS CodeBuild. Company policy requires all build artifacts to be encrypted at rest. Access to the artifacts must be limited to IAM users in an operations IAM group who are authorized to assume an operations IAM role.
Which solution meets these requirements?
The Deny statement with the NotPrincipal element set to the operations IAM role denies access to the S3 bucket for every request except those made with the role. According to the scenario, the operations role has a permissions policy that allows access to the bucket.
Options C and D are inappropriate because the bucket policy refers to an IAM group rather than a role. Option D is also wrong because AWS recommends using default encryption rather than a bucket policy to enforce encryption. Option C additionally allows artifacts to be temporarily stored at rest without encryption. Option A is wrong because the AWS Key Management Service (AWS KMS) Encrypt API action is useful for encrypting plaintext values such as passwords, but not for encrypting a build artifact file, archive, or object.
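A minimal boto3 sketch of this pattern, assuming placeholder account, bucket, role, and session names. Note that AWS guidance for NotPrincipal with Deny is to list the role ARN, the assumed-role session ARN(s), and the account root, because wildcards are not allowed in NotPrincipal:

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "build-artifacts-bucket"

# Default encryption: artifacts land encrypted at rest with SSE-KMS.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)

# Deny every principal except the operations role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "NotPrincipal": {
            "AWS": [
                "arn:aws:iam::111122223333:role/operations-role",
                "arn:aws:sts::111122223333:assumed-role/operations-role/ops-session",
                "arn:aws:iam::111122223333:root",
            ]
        },
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
    }],
}
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```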
A business runs an application on Amazon EC2 instances that use the most recent version of the Amazon Linux 2 AMI. When new security updates are released, server administrators manually take the affected instances out of service, patch them, and place them back into service.
A new security policy requires the company to apply security updates within 7 days of their release. The company's security staff must be able to verify that all EC2 instances comply with this policy, and the patching must occur during the period that is least disruptive to users.
Which solution will automatically meet these requirements?
Patch Manager, a capability of AWS Systems Manager, automatically applies security patches during a maintenance window, based on a list of approved patches that you define in a patch baseline. The company's security staff can check the patch compliance of the instances in the Systems Manager console or retrieve a summary with the AWS CLI.
Option D is wrong because AWS CodeBuild builds artifacts from your source code; it does not deploy patches to instances. Option B is wrong because the Systems Manager Agent (SSM Agent), which is preinstalled on Amazon Linux 2, does not need to be scheduled to fetch the patches; only a Systems Manager maintenance window needs to be associated with the patching configuration. Option A is wrong because it provides no way for the security team to verify patch compliance, and the cron job is also a single point of failure.
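A minimal boto3 sketch, with placeholder names, schedule, and tags, of a maintenance window that runs the AWS-RunPatchBaseline document against tagged instances:

```python
import boto3

ssm = boto3.client("ssm")

# Weekly window during low-traffic hours (Sunday 02:00 UTC, 3 hours).
window = ssm.create_maintenance_window(
    Name="weekly-security-patching",
    Schedule="cron(0 2 ? * SUN *)",
    Duration=3,
    Cutoff=1,
    AllowUnassociatedTargets=False,
)
window_id = window["WindowId"]

# Target instances by their Patch Group tag.
target = ssm.register_target_with_maintenance_window(
    WindowId=window_id,
    ResourceType="INSTANCE",
    Targets=[{"Key": "tag:Patch Group", "Values": ["production"]}],
)

# Run Patch Manager's Install operation against those targets.
ssm.register_task_with_maintenance_window(
    WindowId=window_id,
    Targets=[{"Key": "WindowTargetIds", "Values": [target["WindowTargetId"]]}],
    TaskArn="AWS-RunPatchBaseline",
    TaskType="RUN_COMMAND",
    MaxConcurrency="10%",
    MaxErrors="5%",
    TaskInvocationParameters={
        "RunCommand": {"Parameters": {"Operation": ["Install"]}}
    },
)

# Security staff can verify patch compliance afterwards, for example:
summary = ssm.list_compliance_summaries()
```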
A DevOps engineer must create a blue/green deployment process for an application on AWS. The DevOps engineer must shift traffic between the environments gradually.
The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run as part of an Amazon EC2 Auto Scaling group. Data is stored in an Amazon RDS Multi-AZ DB instance, and external DNS is provided by Amazon Route 53.
Which combination of steps should the DevOps engineer take to meet these requirements?
A blue/green deployment has two distinct environments. The blue environment contains Amazon EC2 instances in an Auto Scaling group that run the current production version of the application. The green environment contains EC2 instances in a different Auto Scaling group that run the updated version of the application. Because each Auto Scaling group sits behind its own Application Load Balancer (ALB), you can create two alias records as endpoints in Amazon Route 53 and use a weighted routing policy to gradually shift traffic from the blue ALB to the green ALB. Unless the new release requires schema changes, it is advisable to point both environments at the same database to ensure data consistency throughout the cutover.
Option F is wrong because two ALBs are required as endpoints for Route 53 to shift traffic gradually. Option D is wrong because a failover routing policy sends all traffic to a single endpoint until a health check detects a failure; it therefore cannot shift traffic gradually. Option A is wrong because the standby instance of an Amazon RDS Multi-AZ DB instance is not available for reads or writes.
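A minimal boto3 sketch of the weighted alias records, assuming placeholder hosted zone, record name, and ALB values; shifting traffic is then just re-running the call with new weights:

```python
import boto3

route53 = boto3.client("route53")

def set_weights(blue_weight: int, green_weight: int) -> None:
    """Upsert weighted alias records for the blue and green ALBs."""
    changes = []
    for set_id, alb_dns, weight in [
        ("blue", "blue-alb-123.us-east-1.elb.amazonaws.com", blue_weight),
        ("green", "green-alb-456.us-east-1.elb.amazonaws.com", green_weight),
    ]:
        changes.append({
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": set_id,  # distinguishes the weighted records
                "Weight": weight,
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ALB's hosted zone ID
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        })
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE", ChangeBatch={"Changes": changes}
    )

set_weights(blue_weight=90, green_weight=10)  # start the gradual shift
```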
All of a company's AWS accounts use AWS CloudTrail, and all trails deliver their logs to the same Amazon S3 bucket. The organization uses S3 event notifications and an AWS Lambda function to deliver selected events to a third-party logging solution.
The business has engaged a security services provider to set up a security operations center. The security services provider wants to receive the CloudTrail logs through an Amazon Simple Queue Service (Amazon SQS) queue.
To send events to the third-party logging solution, the business must continue to use S3 event notifications and the Lambda function.
What is the MOST effective strategy to meet these requirements?
You can change the S3 event notification destination to an Amazon Simple Notification Service (Amazon SNS) topic to build a fanout messaging scenario that delivers each Amazon S3 event notification to multiple consumers. You can subscribe several consumers to the topic, including the AWS Lambda function and the Amazon Simple Queue Service (Amazon SQS) queue, without modifying the Lambda function code.
Option D is not acceptable because overlapping notification event prefixes and suffixes would result in a "Configuration is ambiguously defined" error. Option B is invalid because Amazon Kinesis Data Streams is not a supported S3 event notification destination. Option A is unacceptable because it is an incomplete solution: you cannot subscribe an SQS queue directly to an Amazon CloudWatch Logs log group.
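A minimal boto3 sketch of the fanout, assuming placeholder bucket, topic, queue, and function names, and assuming the topic's access policy already allows S3 to publish and the queue and function policies allow SNS to deliver:

```python
import boto3

sns = boto3.client("sns")
s3 = boto3.client("s3")

topic_arn = sns.create_topic(Name="cloudtrail-log-events")["TopicArn"]

# Fan out: both the existing Lambda function and the provider's SQS
# queue consume the same S3 event notifications.
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="lambda",
    Endpoint="arn:aws:lambda:us-east-1:111122223333:function:forward-to-logger",
)
sns.subscribe(
    TopicArn=topic_arn,
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:111122223333:soc-cloudtrail-queue",
)

# Point the bucket's event notifications at the SNS topic instead of
# invoking the Lambda function directly.
s3.put_bucket_notification_configuration(
    Bucket="org-cloudtrail-logs",
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```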