
AWS Certified Solutions Architect – Associate average salary

The AWS Certified Solutions Architect – Associate average salary is $149,446/year.

In this blog, we will help you prepare for the AWS Solution Architect Associate certification exam, give you some facts and summaries, and provide the AWS Solution Architect Associate Top Questions and Answers Dump.


AWS Certified Solutions Architect – Associate (SAA-C03) Exam Guide


The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role.
The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.

The exam also validates a candidate’s ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.

Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.

Content outline:
• Domain 1: Design Secure Architectures (30%)
• Domain 2: Design Resilient Architectures (26%)
• Domain 3: Design High-Performing Architectures (24%)
• Domain 4: Design Cost-Optimized Architectures (20%)

Domain 1: Design Secure Architectures
This exam domain is focused on securing your architectures on AWS and comprises 30% of the exam. Task statements include:

Task Statement 1: Design secure access to AWS resources.
Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model

Skills in:
• Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups, roles, and policies
• Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs])
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles

Task Statement 2: Design secure workloads and applications.

Knowledge of:
• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)

Skills in:
• Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways)
• Determining network segmentation strategies (for example, using public subnets and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)

Task Statement 3: Determine appropriate data security controls.

Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management

Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates

Domain 2: Design Resilient Architectures
This exam domain is focused on designing resilient architectures on AWS and comprises 26% of the exam. Task statements include:

Task Statement 1: Design scalable and loosely coupled architectures.
Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file, block)
• The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)

Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database technologies based on requirements
• Using purpose-built AWS services for workloads

Task Statement 2: Design highly available and/or fault-tolerant architectures.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)


Skills in:
• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible)
• Using purpose-built AWS services for workloads

Domain 3: Design High-Performing Architectures
This exam domain is focused on designing high-performing architectures on AWS and comprises 24% of the exam. Task statements include:

Task Statement 1: Determine high-performing and/or scalable storage solutions.

Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file, block)

Skills in:
• Determining storage services and configurations that meet performance demands
• Determining storage services that can scale to accommodate future needs

Task Statement 2: Design high-performing and elastic compute solutions.
Knowledge of:
• AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)

Skills in:
• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements

Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with write-intensive)
• Database capacity planning (for example, capacity units, instance types, Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with non-relational, in-memory)

Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB)
• Integrating caching to meet business requirements

Task Statement 4: Determine high-performing and/or scalable network architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)

Skills in:
• Creating a network topology for various architectures (for example, global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business requirements
• Selecting the appropriate load balancing strategy


Task Statement 5: Determine high-performing data ingestion and transformation solutions.
Knowledge of:
• Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon Kinesis)

Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet)

Domain 4: Design Cost-Optimized Architectures
This exam domain is focused on optimizing solutions for cost-effectiveness on AWS and comprises 20% of the exam. Task statements include:

Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object storage)
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file, block)

Skills in:
• Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles (see the lifecycle sketch below)
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload
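
For illustration, a minimal AWS CLI sketch of an S3 lifecycle configuration (the bucket name, prefix, and day counts are placeholder assumptions):

    # Transition objects under logs/ to cheaper tiers over time, then expire them
    aws s3api put-bucket-lifecycle-configuration --bucket example-bucket \
      --lifecycle-configuration '{
        "Rules": [{
          "ID": "archive-then-expire",
          "Status": "Enabled",
          "Filter": { "Prefix": "logs/" },
          "Transitions": [
            { "Days": 30, "StorageClass": "STANDARD_IA" },
            { "Days": 90, "StorageClass": "GLACIER" }
          ],
          "Expiration": { "Days": 365 }
        }]
      }'

This single rule moves objects to an infrequent-access tier after 30 days, archives them after 90, and expires them after a year.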

Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)

Skills in:
• Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload

Task Statement 3: Design cost-optimized database solutions.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)

Skills in:
• Designing appropriate backup and retention policies (for example, snapshot frequency)
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series format, columnar format)
• Migrating database schemas and data to different locations and/or different database engines

Task Statement 4: Design cost-optimized network architectures.
Knowledge of:
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering)
• Network services with appropriate use cases (for example, DNS)

Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)

Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order.
AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage

AWS Services and Features
There are lots of new services and feature updates in scope for the new AWS Certified Solutions Architect Associate certification! Here’s a list of some of the new services that will be in scope for the new version of the exam:

Analytics:
• Amazon Athena
• AWS Data Exchange
• AWS Data Pipeline
• Amazon EMR
• AWS Glue
• Amazon Kinesis
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon OpenSearch Service (Amazon Elasticsearch Service)
• Amazon QuickSight
• Amazon Redshift

Application Integration:
• Amazon AppFlow
• AWS AppSync
• Amazon EventBridge (Amazon CloudWatch Events)
• Amazon MQ
• Amazon Simple Notification Service (Amazon SNS)
• Amazon Simple Queue Service (Amazon SQS)
• AWS Step Functions

AWS Cost Management:
• AWS Budgets
• AWS Cost and Usage Report
• AWS Cost Explorer
• Savings Plans

Compute:
• AWS Batch
• Amazon EC2
• Amazon EC2 Auto Scaling
• AWS Elastic Beanstalk
• AWS Outposts
• AWS Serverless Application Repository
• VMware Cloud on AWS
• AWS Wavelength

Containers:
• Amazon Elastic Container Registry (Amazon ECR)
• Amazon Elastic Container Service (Amazon ECS)
• Amazon ECS Anywhere
• Amazon Elastic Kubernetes Service (Amazon EKS)
• Amazon EKS Anywhere
• Amazon EKS Distro

Database:
• Amazon Aurora
• Amazon Aurora Serverless
• Amazon DocumentDB (with MongoDB compatibility)
• Amazon DynamoDB
• Amazon ElastiCache
• Amazon Keyspaces (for Apache Cassandra)
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon RDS
• Amazon Redshift
• Amazon Timestream

Developer Tools:
• AWS X-Ray

Front-End Web and Mobile:
• AWS Amplify
• Amazon API Gateway
• AWS Device Farm
• Amazon Pinpoint

Machine Learning:
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• Amazon Lex
• Amazon Polly
• Amazon Rekognition
• Amazon SageMaker
• Amazon Textract
• Amazon Transcribe
• Amazon Translate

Management and Governance:
• AWS Auto Scaling
• AWS CloudFormation
• AWS CloudTrail
• Amazon CloudWatch
• AWS Command Line Interface (AWS CLI)
• AWS Compute Optimizer
• AWS Config
• AWS Control Tower
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Management Console
• AWS Organizations
• AWS Personal Health Dashboard
• AWS Proton
• AWS Service Catalog
• AWS Systems Manager
• AWS Trusted Advisor
• AWS Well-Architected Tool

Media Services:
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams

Migration and Transfer:
• AWS Application Discovery Service
• AWS Application Migration Service (CloudEndure Migration)
• AWS Database Migration Service (AWS DMS)
• AWS DataSync
• AWS Migration Hub
• AWS Server Migration Service (AWS SMS)
• AWS Snow Family
• AWS Transfer Family

Networking and Content Delivery:
• Amazon CloudFront
• AWS Direct Connect
• Elastic Load Balancing (ELB)
• AWS Global Accelerator
• AWS PrivateLink
• Amazon Route 53
• AWS Transit Gateway
• Amazon VPC
• AWS VPN

Security, Identity, and Compliance:
• AWS Artifact
• AWS Audit Manager
• AWS Certificate Manager (ACM)
• AWS CloudHSM
• Amazon Cognito
• Amazon Detective
• AWS Directory Service
• AWS Firewall Manager
• Amazon GuardDuty
• AWS Identity and Access Management (IAM)
• Amazon Inspector
• AWS Key Management Service (AWS KMS)
• Amazon Macie
• AWS Network Firewall
• AWS Resource Access Manager (AWS RAM)
• AWS Secrets Manager
• AWS Security Hub
• AWS Shield
• AWS Single Sign-On
• AWS WAF

Serverless:
• AWS AppSync
• AWS Fargate
• AWS Lambda

Storage:
• AWS Backup
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx (for all types)
• Amazon S3
• Amazon S3 Glacier
• AWS Storage Gateway

Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content.

Analytics:
• Amazon CloudSearch

Application Integration:
• Amazon Managed Workflows for Apache Airflow (Amazon MWAA)

AR and VR:
• Amazon Sumerian

Blockchain:
• Amazon Managed Blockchain

Compute:
• Amazon Lightsail

Database:
• Amazon RDS on VMware

Developer Tools:
• AWS Cloud9
• AWS Cloud Development Kit (AWS CDK)
• AWS CloudShell
• AWS CodeArtifact
• AWS CodeBuild
• AWS CodeCommit
• AWS CodeDeploy
• Amazon CodeGuru
• AWS CodeStar
• Amazon Corretto
• AWS Fault Injection Simulator (AWS FIS)
• AWS Tools and SDKs

Front-End Web and Mobile:
• Amazon Location Service

Game Tech:
• Amazon GameLift
• Amazon Lumberyard

Internet of Things:
• All services

Which new AWS services will be covered in the SAA-C03?
AWS Data Exchange, AWS Data Pipeline, AWS Lake Formation, Amazon Managed Streaming for Apache Kafka, Amazon AppFlow, AWS Outposts, VMware Cloud on AWS, AWS Wavelength, Amazon Neptune, Amazon Quantum Ledger Database, Amazon Timestream, AWS Amplify, Amazon Comprehend, Amazon Forecast, Amazon Fraud Detector, Amazon Kendra, AWS License Manager, Amazon Managed Grafana, Amazon Managed Service for Prometheus, AWS Proton, Amazon Elastic Transcoder, Amazon Kinesis Video Streams, AWS Application Discovery Service, AWS WAF, and AWS AppSync.

Get the AWS SAA-C03 Exam Prep App on: iOS – Android – Windows 10/11

Solution architecture is the practice of defining and describing the architecture of a system delivered in the context of a specific solution; as such, it may encompass a description of an entire system or only its specific parts. The definition of a solution architecture is typically led by a solution architect.


The AWS Certified Solutions Architect – Associate examination is intended for individuals who perform a solutions architect role and have one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS.

AWS Solution Architect Associate Exam Facts and Summaries (SAA-C03)

  1. Take an AWS Training Class
  2. Study AWS Whitepapers and FAQs: AWS Well-Architected webpage (various whitepapers linked)
  3. If you are running an application in a production environment and must add a new EBS volume with data from a snapshot, what could you do to avoid degraded performance during the volume’s first use? Initialize the data by reading each storage block on the volume.

    Volumes created from an EBS snapshot must be initialized. Initializing occurs the first time a storage block on the volume is read, and performance can be impacted by up to 50%. You can avoid this impact in production environments by pre-warming the volume, that is, by reading all of its blocks, as sketched below.
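
A minimal pre-warming sketch from the instance’s shell, assuming the new volume is attached as /dev/xvdf (the device name is an example):

    # Read every block once so the first-use penalty is paid before production traffic
    sudo dd if=/dev/xvdf of=/dev/null bs=1M
    # Or use fio to read the volume with more parallelism on large volumes
    sudo fio --filename=/dev/xvdf --rw=read --bs=1M --iodepth=32 \
        --ioengine=libaio --direct=1 --name=volume-initialize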

  4. If you are running a legacy application that has hard-coded static IP addresses and it is running on an EC2 instance, what is the best failover solution that allows you to keep the same IP address on a new instance?
    Elastic IP addresses (EIPs) are designed to be attached/detached and moved from one EC2 instance to another. They are a great solution for keeping a static IP address and moving it to a new instance if the current instance fails. This will reduce or eliminate any downtime users may experience.
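
For illustration, moving an Elastic IP to a replacement instance with the AWS CLI (the allocation ID and instance ID are placeholders):

    # Allocate the address once
    aws ec2 allocate-address --domain vpc
    # On failover, re-point the same address at the replacement instance
    aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 \
        --instance-id i-0fedcba9876543210 --allow-reassociation
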
  5. Which feature of Intel processors help to encrypt data without significant impact on performance?
    AES-NI
  6. You can mount to EFS from which two of the following?
    • On-prem servers running Linux
    • EC2 instances running Linux

    EFS is not compatible with Windows operating systems.

  7. When a file is encrypted and the stored data is not in transit, it is known as encryption at rest. What is an example of encryption at rest?

  8. When would vertical scaling be necessary? When an application is built as a single code base, otherwise known as a monolithic application.

  9. Fault-Tolerance allows for continuous operation throughout a failure, which can lead to a low Recovery Time Objective.  RPO vs RTO

  10. High-Availability means automating tasks so that an instance will quickly recover, which can lead to a low Recovery Time Objective.  RPO vs. RTO
  11. Frequent backups reduce the time between the last backup and recovery point, otherwise known as the Recovery Point Objective.  RPO vs. RTO
  12. Which represents the difference between Fault-Tolerance and High-Availability? High-Availability means the system will quickly recover from a failure event, and Fault-Tolerance means the system will maintain operations during a failure.
  13. From a security perspective, what is a principal? A principal is an entity that acts on a system. Both anonymous users and authenticated users fall under the definition of a principal.

  14. What are the two types of session data storage for an application session state? Stateless and stateful.

23. It is the customer’s responsibility to patch the operating system on an EC2 instance.

24. In designing an environment, what four main points should a Solutions Architect keep in mind? Cost efficiency, security, application session state, and undifferentiated heavy lifting: these four main points should frame the design of an environment.

26. What are the benefits of horizontal scaling?

Vertical scaling can be costly while horizontal scaling is cheaper.

Horizontal scaling suffers from none of the size limitations of vertical scaling.

Having horizontal scaling means you can easily route traffic to another instance of a server.

Reference: AWS Solution Architect Associate Exam Prep

Top 100 AWS Solution Architect Associate Exam Prep Questions and Answers Dump – SAA-C03


Q: Which AWS services can be used to store user session state? (Select TWO)

  • A. CloudWatch
  • B. DynamoDB
  • C. Elastic Load Balancing
  • D. ElastiCache
  • E. Storage Gateway

Reference: AWS Session management



Q1: A Solutions Architect is designing a critical business application with a relational database that runs on an EC2 instance. It requires a single EBS volume that can support up to 16,000 IOPS. Which Amazon EBS volume type can meet the performance requirements of this application?

  • A. EBS Provisioned IOPS SSD
  • B. EBS Throughput Optimized HDD
  • C. EBS General Purpose SSD
  • D. EBS Cold HDD


Q2: An application running on EC2 instances in a VPC must access data stored in Amazon S3 without the traffic traversing the public internet. How should the application access the data?

  • A. Access the data through an Internet Gateway.
  • B. Access the data through a VPN connection.
  • C. Access the data through a NAT Gateway.
  • D. Access the data through a VPC endpoint for Amazon S3.


Q3: An organization is building an Amazon Redshift cluster in their shared services VPC. The cluster will host sensitive data. How can the organization control which networks can access the cluster?

  • A. Run the cluster in a different VPC and connect through VPC peering.
  • B. Create a database user inside the Amazon Redshift cluster only for users on the network.
  • C. Define a cluster security group for the cluster that allows access from the allowed networks.
  • D. Only allow access to networks that connect with the shared services network via VPN.


Q4: A web application allows customers to upload orders to an S3 bucket. The resulting Amazon S3 events trigger a Lambda function that inserts a message to an SQS queue. A single EC2 instance reads messages from the queue, processes them, and stores them in a DynamoDB table partitioned by unique order ID. Next month traffic is expected to increase by a factor of 10, and a Solutions Architect is reviewing the architecture for possible scaling problems. Which component is MOST likely to need re-architecting to be able to scale to accommodate the new traffic?

  • A. Lambda function
  • B. SQS queue
  • C. EC2 instance
  • D. DynamoDB table


  • A. DynamoDB
  • B. Amazon S3
  • C. Amazon Aurora
  • D. Amazon Redshift


Q6: How can you improve the performance of EFS?

  • A. Use an instance-store backed EC2 instance.
  • B. Provision more throughput than is required.
  • C. Divide your file system into multiple smaller file systems.
  • D. Provision higher IOPS for your EFS.


Q7: If you are designing an application that requires fast (10–25 Gbps), low-latency connections between EC2 instances, what EC2 feature should you use?

  • A. Snapshots
  • B. Instance store volumes
  • C. Placement groups
  • D. IOPS provisioned instances.


Q8: A Solution Architect is designing an online shopping application running in a VPC on EC2 instances behind an ELB Application Load Balancer. The instances run in an Auto Scaling group across multiple Availability Zones. The application tier must read and write data to a customer managed database cluster. There should be no access to the database from the Internet, but the cluster must be able to obtain software patches from the Internet.

Which VPC design meets these requirements?

  • A. Public subnets for both the application tier and the database cluster
  • B. Public subnets for the application tier, and private subnets for the database cluster
  • C. Public subnets for the application tier and NAT Gateway, and private subnets for the database cluster
  • D. Public subnets for the application tier, and private subnets for the database cluster and NAT Gateway


Q9: What command should you run on a running instance if you want to view its user data (that is used at launch)?

  • A. curl http://254.169.254.169/latest/user-data
  • B. curl http://localhost/latest/meta-data/bootstrap
  • C. curl http://localhost/latest/user-data
  • D. curl http://169.254.169.254/latest/user-data
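
Note that on instances enforcing IMDSv2, the same lookup first needs a session token; a minimal sketch:

    # Request a short-lived token, then pass it in the metadata request header
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/user-data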


B. By purchasing more resources very far in advance

C. By purchasing more resources after demand has risen

D. It is not possible to predict demand

Q: Which of the following best describes the AWS Well-Architected Framework?

A. It’s a set of best practice areas, principles, and concepts that can help you implement effective AWS solutions.

B. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions tailored to your specific business.

C. It’s a set of best practice areas, principles, and concepts that can help you implement effective solutions from another web host.

D. It’s a set of best practice areas, principles, and concepts that can help you implement effective E-Commerce solutions.

Reference: AWS Well architected Framework

Q: Which of the following statements about AWS Regions and Availability Zones is true?

A. Availability Zones are isolated locations within regions

B. Region codes identify specific regions (example: US-EAST-2)

C. All AWS Regions contain the full set of AWS services.

D. An AWS Region is assigned based on the user’s location when creating an AWS account.


Q: Which of the following are pillars of the AWS Well-Architected Framework?

A. Reliability

B. Performance Efficiency

C. Structural Simplicity

D. Security

E. Operational Excellence


Q17: You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)

A. AWS VPN

B. AWS Global Accelerator

C. Direct Connect

D. API Gateway

E. CloudFront

Q18: A company has a media processing application deployed in a local data center.  Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS and the application code should continue working to access the file systems. Which method should you choose to create the file server?

A. Create a Windows File Server from Amazon WorkSpaces.

B. Configure a high performance Windows File System in Amazon EFS.

C. Create a Windows File Server in Amazon FSx.

D. Configure a secure enterprise storage through Amazon WorkDocs.


Q19: You are developing an application that uses the AWS SDK to get objects from AWS S3. The objects are large, and failures sometimes occur when getting them, especially when network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?

A. Enable multipart upload in the AWS SDK.

B. Use the “Range” HTTP header in a GET request to download the specified range bytes of an object.

C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.

D. Retrieve the whole S3 object through a single GET operation.
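
For reference, a ranged download can also be issued from the AWS CLI; a minimal sketch (the bucket, key, and output file names are placeholders):

    # Fetch only the first 1 MiB (bytes 0 through 1,048,575) of the object
    aws s3api get-object --bucket example-bucket --key big/object.bin \
        --range bytes=0-1048575 part-0.bin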

Q20: You have an application hosted in an Auto Scaling group, and an Application Load Balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group at 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which type of scaling policy is this?

A. Target tracking scaling policy.

B. Step scaling policy.

C. Simple scaling policy.

D. Scheduled scaling policy.
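
For illustration, a target tracking policy that holds average CPU at 60 percent might be created like this (the group and policy names are placeholders):

    aws autoscaling put-scaling-policy \
        --auto-scaling-group-name example-asg \
        --policy-name keep-cpu-at-60 \
        --policy-type TargetTrackingScaling \
        --target-tracking-configuration '{
          "PredefinedMetricSpecification": { "PredefinedMetricType": "ASGAverageCPUUtilization" },
          "TargetValue": 60.0
        }'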


Q21: You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?

A. Cluster placement strategy

B. Spread placement strategy.

C. Partition placement strategy.

D. Network placement strategy.

Q22: To improve the network performance, you launch a C5 Amazon Linux EC2 instance and enable enhanced networking by modifying the instance attribute with "aws ec2 modify-instance-attribute --instance-id <instance_id> --ena-support". Which mechanism does the EC2 instance use to enhance the networking capabilities?

A. Intel 82599 Virtual Function (VF) interface.

B. Elastic Fabric Adapter (EFA).

C. Elastic Network Adapter (ENA).

D. Elastic Network Interface (ENI).
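
To verify the result, you could check both on the instance and from the CLI; a minimal sketch (the instance ID is a placeholder):

    # On the instance: confirm the ENA kernel module is present
    modinfo ena
    # From the CLI: confirm the instance is flagged as ENA-enabled
    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query "Reservations[].Instances[].EnaSupport"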

Q23: You work for an online retailer where any downtime at all can cause a significant loss of revenue. You have architected your application to be deployed on an Auto Scaling group of EC2 instances behind a load balancer, and you have configured and deployed these resources using a CloudFormation template. The Auto Scaling group is configured with default settings and a simple CPU utilization scaling policy. You have also set up multiple Availability Zones for high availability. The load balancer does health checks against an HTML file generated by a script. When you begin performing load testing on your application, you notice in CloudWatch that the load balancer is not sending traffic to one of your EC2 instances. What could be the problem?

A. The EC2 instance has failed the load balancer health check.

B. The instance has not been registered with CloudWatch.

C. The EC2 instance has failed EC2 status checks.

D. You are load testing at a moderate traffic level and not all instances are needed.

Q24: Your company is using a hybrid configuration because there are some legacy applications that are not easily converted and migrated to AWS. With this configuration comes a typical scenario where the legacy apps must maintain the same private IP address and MAC address. You are attempting to convert the application to the cloud and have configured an EC2 instance to house the application. What you are currently testing is removing the ENI from the legacy instance and attaching it to the EC2 instance. You want to attempt a cold attach. What does this mean?

A. Attach ENI when it’s stopped.

B. Attach ENI before the public IP address is assigned.

C. Attach ENI to an instance when it’s running.

D. Attach ENI when the instance is being launched.

Q25: Your company has recently converted to a hybrid cloud environment and will slowly be migrating to a fully AWS cloud environment. The AWS side is in need of some steps to prepare for disaster recovery. A disaster recovery plan needs to be drawn up and disaster recovery drills need to be performed for compliance reasons. The company wants to establish Recovery Time and Recovery Point Objectives. The RTO and RPO can be pretty relaxed. The main point is to have a plan in place, with as much cost savings as possible. Which AWS disaster recovery pattern will best meet these requirements?

A. Warm Standby

B. Backup and restore

C. Multi Site

D. Pilot Light


Q26: An international travel company has an application which provides travel information and alerts to users all over the world. The application is hosted on groups of EC2 instances in Auto Scaling Groups in multiple AWS Regions. There are also load balancers routing traffic to these instances. In two countries, Ireland and Australia, there are compliance rules in place that dictate users connect to the application in eu-west-1 and ap-southeast-1. Which service can you use to meet this requirement?

A. Use Route 53 weighted routing.

B. Use Route 53 geolocation routing.

C. Configure CloudFront and the users will be routed to the nearest edge location.

D. Configure the load balancers to route users to the proper region.


Q26: You have taken over management of several instances in the company AWS environment. You want to quickly review scripts used to bootstrap the instances at runtime. A URL command can be used to do this. What can you append to the URL http://169.254.169.254/latest/ to retrieve this data?

A. user-data/

B. instance-demographic-data/

C. meta-data/

D. instance-data/


Q27: A software company has created an application to capture service requests from users and also enhancement requests. The application is deployed on an Auto Scaling group of EC2 instances fronted by an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but there are still requests being lost. The cost of these instances is becoming an issue. What step can the company take to ensure requests aren’t lost?

A. Use larger instances in the Auto Scaling group.

B. Use spot instances to save money.

C. Use an SQS queue with the Auto Scaling group to capture all requests.

D. Use a Network Load Balancer instead for faster throughput.


Q28: A company has an Auto Scaling group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. The company has a very aggressive Recovery Time Objective (RTO) in case of disaster. How long does a failover typically take to complete?

A. Under 10 minutes

B. Within an hour

C. Almost instantly

D. one to two minutes


Q29: You have two EC2 instances running in the same VPC, but in different subnets. You are removing the secondary ENI from an EC2 instance and attaching it to another EC2 instance. You want this to be fast and with limited disruption. So you want to attach the ENI to the EC2 instance when it’s running. What is this called?

A. hot attach

B. warm attach

C. cold attach

D. synchronous attach

Q30: You suspect that one of the AWS services your company is using has gone down. How can you check on the status of this service?

A. AWS Trusted Advisor

B. Amazon Inspector

C. AWS Personal Health Dashboard

D. AWS Organizations

Q31: You have configured an Auto Scaling Group of EC2 instances fronted by an Application Load Balancer and backed by an RDS database. You want to begin monitoring the EC2 instances using CloudWatch metrics. Which metric is not readily available out of the box?

A. CPU utilization

B. DiskReadOps

C. NetworkIn

D. Memory utilization

Q32: Several instances you are creating have a specific data requirement. The requirement states that the data on the root device needs to persist independently from the lifetime of the instance. After considering AWS storage options, which is the simplest way to meet these requirements?

A. Store your root device data on Amazon EBS.

B. Store the data on the local instance store.

C. Create a cron job to migrate the data to S3.

D. Send the data to S3 using S3 lifecycle rules.

Q33: A company has an Auto Scaling Group of EC2 instances hosting their retail sales application. Any significant downtime for this application can result in large losses of profit. Therefore the architecture also includes an Application Load Balancer and an RDS database in a Multi-AZ deployment. What will happen to preserve high availability if the primary database fails?

A. A Lambda function kicks off a CloudFormation template to deploy a backup database.

B. The CNAME is switched from the primary db instance to the secondary.

C. Route 53 points the CNAME to the secondary database instance.

D. The Elastic IP address for the primary database is moved to the secondary database.


Q56: A travel company has deployed a website which serves travel updates to users all over the world. The database behind this website is very read-heavy and can have latency issues at certain times of the year. What can you do to alleviate these latency issues?

A. Place CloudFront in front of the Database.

B. Add read replicas

C. Configure RDS Multi-AZ

D. Configure multi-Region RDS


Q57: A large financial institution is gradually moving their infrastructure and applications to AWS. The company has data needs that will utilize all of RDS, DynamoDB, Redshift, and ElastiCache. Which description best describes Amazon Redshift?

A. Key-value and document database that delivers single-digit millisecond performance at any scale.

B. Cloud-based relational database.

C. Can be used to significantly improve latency and throughput for many read-heavy application workloads.

D. Near real-time complex querying on massive data sets.


A. You will need an Application Load Balancer to meet this requirement.

B. All the AWS load balancers meet the requirement and perform the same.

C. You will select a Network Load Balancer to meet this requirement.

D. You will need a Classic Load Balancer to meet this requirement.


Q59: An organization of about 100 employees has performed the initial setup of users in IAM. All users except administrators have the same basic privileges. But now it has been determined that 50 employees will have extra restrictions on EC2. They will be unable to launch new instances or alter the state of existing instances. What will be the quickest way to implement these restrictions?

A. Create an IAM Role for the restrictions. Attach it to the EC2 instances.

B. Create the appropriate policy. Place the restricted users in the new policy.

C. Create the appropriate policy. With only 50 users, attach the policy to each user.

D. Create the appropriate policy. Create a new group for the restricted users. Place the restricted users in the new group and attach the policy to the group.

Q60: You are managing S3 buckets in your organization. This management of S3 extends to Amazon Glacier. For auditing purposes you would like to be informed if an object is restored to S3 from Glacier. What is the most efficient way you can do this?

A. Create a CloudWatch event for uploads to S3

B. Create an SNS notification for any upload to S3.

C. Configure S3 notifications for restore operations from Glacier.

D. Create a Lambda function which is triggered by restoration of object from Glacier to S3.
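
For illustration, configuring the bucket to publish restore-completed events to an SNS topic (the bucket name and topic ARN are placeholders):

    aws s3api put-bucket-notification-configuration --bucket example-bucket \
      --notification-configuration '{
        "TopicConfigurations": [{
          "TopicArn": "arn:aws:sns:us-east-1:123456789012:restore-alerts",
          "Events": ["s3:ObjectRestore:Completed"]
        }]
      }'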

Q61: Your company has gotten back results from an audit. One of the mandates from the audit is that your application, which is hosted on EC2, must encrypt the data before writing this data to storage. Which service could you use to meet this requirement?

A. AWS CloudHSM

B. Security Token Service

C. EBS encryption

D. AWS KMS

Q62: Recent worldwide events have dictated that you perform your duties as a Solutions Architect from home. You need to be able to manage several EC2 instances while working from home and have been testing the ability to ssh into these instances. One instance in particular has been a problem and you cannot ssh into this instance. What should you check first to troubleshoot this issue?

A. Make sure that the security group for the instance has ingress on port 80 from your home IP address.

B. Make sure that your VPC has a connected Virtual Private Gateway.

C. Make sure that the security group for the instance has ingress on port 22 from your home IP address.

D. Make sure that the Security Group for the instance has ingress on port 443 from your home IP address.
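
For reference, opening SSH from a single home IP could look like this (the group ID and address are placeholders):

    # Allow inbound TCP 22 only from one /32 address
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr 203.0.113.25/32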


Q62: A consultant is hired by a small company to configure an AWS environment. The consultant begins working with the VPC and launching EC2 instances within the VPC. The initial instances will be placed in a public subnet. The consultant begins to create security groups. What is true of the default security group?

A. You can delete this group, however, you can’t change the group’s rules.

B. You can delete this group or you can change the group’s rules.

C. You can’t delete this group, nor can you change the group’s rules.

D. You can’t delete this group, however, you can change the group’s rules.

Q63: You are evaluating the security settings within the main company VPC. There are several NACLs and security groups to evaluate and possibly edit. What is true regarding NACLs and security groups?

A. Network ACLs and security groups are both stateful.

B. Network ACLs and security groups are both stateless.

C. Network ACLs are stateless, and security groups are stateful.

D. Network ACLs are stateful, and security groups are stateless.

Q64: Your company needs to deploy an application in the company AWS account. The application will reside on EC2 instances in an Auto Scaling Group fronted by an Application Load Balancer. The company has been using Elastic Beanstalk to deploy the application due to limited AWS experience within the organization. The application now needs upgrades and a small team of subcontractors have been hired to perform these upgrades. What can be used to provide the subcontractors with short-lived access tokens that act as temporary security credentials to the company AWS account?

A. IAM Roles

B. AWS STS

C. IAM user accounts

D. AWS SSO
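
For illustration, temporary credentials can be issued by assuming a role through AWS STS; a minimal sketch (the role ARN and session name are placeholders):

    # Returns a short-lived AccessKeyId, SecretAccessKey, and SessionToken
    aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/ContractorUpgradeRole \
        --role-session-name contractor-session \
        --duration-seconds 3600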


Q65: The company you work for has reshuffled teams a bit and you’ve been moved from the AWS IAM team to the AWS Network team. One of your first assignments is to review the subnets in the main VPCs. What are two key concepts regarding subnets?

A. A subnet spans all the Availability Zones in a Region.

B. Private subnets can only hold databases.

C. Each subnet maps to a single Availability Zone.

D. Every subnet you create is associated with the main route table for the VPC.

E. Each subnet is associated with one security group.

Q66: Amazon Web Services offers 4 different levels of support. Which of the following are valid support levels? Choose 3

A. Enterprise

B. Developer

C. Corporate

D. Business

E. Free Tier

Q67: You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?

A. While processing a message, a consumer instance can amend the message visibility counter by a fixed amount.

B. When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.

C. When the consumer instance polls for new work the SQS service will allow it to wait a certain time for a message to be available before closing the connection.

D. While processing a message, a consumer instance can reset the message visibility by restarting the preset timeout counter.

E. When the consumer instance polls for new work, the consumer instance will wait a certain time until it has a full workload before closing the connection.

F. When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.
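
For reference, the visibility timeout can be raised on the queue as a whole or per message; a minimal sketch (the queue URL and receipt handle are placeholders):

    # Raise the default visibility timeout for all messages in the queue
    aws sqs set-queue-attributes \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/work-queue \
        --attributes VisibilityTimeout=600
    # Or extend it for one in-flight message a consumer is still processing
    aws sqs change-message-visibility \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/work-queue \
        --receipt-handle "$RECEIPT_HANDLE" \
        --visibility-timeout 600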

Q68: You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?

A. After a few minutes.

B. Immediately.

C. Straight away, but to the new instances only.

D. Straight away to the new instances, but old instances must be stopped and restarted before the new rules apply.

Q69: Amazon SQS keeps track of all tasks and events in an application.

A. True

B. False

Q70: Your Security Manager has hired a security contractor to audit your network and firewall configurations. The consultant doesn’t have access to an AWS account. You need to provide the required access for the auditing tasks, and answer a question about login details for the official AWS firewall appliance. Which of the following might you do? Choose 2

A. Create an IAM User with a policy that can Read Security Group and NACL settings.

B. Explain that AWS implements network security differently and that there is no such thing as an official AWS firewall appliance. Security Groups and NACLs are used instead.

C. Create an IAM Role with a policy that can Read Security Group and NACL settings.

D. Explain that AWS is a cloud service and that AWS manages the Network appliances.

E. Create an IAM Role with a policy that can Read Security Group and Route settings.
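
To make options A and C concrete, here is a sketch of a read-only auditing policy created with boto3; the policy name and the exact action list are illustrative assumptions, not the only valid choices:

```python
import json
import boto3

iam = boto3.client("iam")

# Read-only access to Security Group and Network ACL settings
audit_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeNetworkAcls",
            ],
            "Resource": "*",
        }
    ],
}

iam.create_policy(
    PolicyName="AuditorReadNetworkControls",  # hypothetical name
    PolicyDocument=json.dumps(audit_policy),
)
```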

Q71: How many internet gateways can I attach to my custom VPC?

A. 5

B. 3

C. 2

D. 1

Q72: How long can a message be retained in an SQS Queue?

A. 14 days

B. 1 day

C. 7 days

D. 30 days

Q73: Although your application customarily runs at 30% usage, you have identified a recurring usage spike (>90%) between 8pm and midnight daily. What is the most cost-effective way to scale your application to meet this increased need?

A. Manually deploy Reactive Event-based Scaling each night at 7:45.

B. Deploy additional EC2 instances to meet the demand.

C. Use scheduled scaling to boost your capacity at a fixed interval.

D. Increase the size of the Resource Group to meet demand.

Q74: To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?

A. The EBS volume was not large enough to store your data.

B. The instance failed to connect to the root volume on Monday.

C. The elastic block-level storage service failed over the weekend.

D. The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.

Q75: Select all the true statements on S3 URL styles: Choose 2

A. Virtual hosted-style URLs will eventually be deprecated in favor of Path-Style URLs for S3 bucket access.

B. Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.

C. Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.

D. DNS compliant names are NOT recommended for the URLs to access S3.

Q76: With EBS, I can ____. Choose 2

A. Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.

B. Create an unencrypted volume from an encrypted snapshot.

C. Create an encrypted volume from a snapshot of another encrypted volume.

D. Encrypt an existing volume.
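
As a concrete illustration of options A and C, a boto3 sketch that copies an unencrypted snapshot into an encrypted one and then creates an encrypted volume from the copy (the IDs, region, and AZ are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Copying a snapshot with Encrypted=True yields an encrypted copy,
# even when the source snapshot is unencrypted
copy = ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical unencrypted snapshot
    Encrypted=True,
    # KmsKeyId="alias/my-key",  # optional; defaults to the AWS-managed EBS key
)

# An encrypted volume can then be created from the encrypted copy
ec2.create_volume(
    SnapshotId=copy["SnapshotId"],
    AvailabilityZone="us-east-1a",
)
```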

Q77: You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to high availability and cost-effectiveness. The design calls for about 20 instances (c3.2xlarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which configuration should you plan on deploying?

A. Use a 2nd Network Interface to separate the SQS traffic from the storage traffic.

B. Choose a different instance type that better matches the traffic demand.

C. Spread the Instances over multiple AZs to minimize the traffic concentration and maximize fault-tolerance.

D. Deploy as a Cluster Placement Group as the aggregated burst traffic could be around 10 Gbps.

Q78: You are a solutions architect working for a cosmetics company. Your company has a busy Magento online store that consists of a two-tier architecture. The web servers are on EC2 instances deployed across multiple AZs, and the database is on a Multi-AZ RDS MySQL database instance. Your store is having a Black Friday sale in five days, and having reviewed the performance for the last sale you expect the site to start running very slowly during the peak load. You investigate and you determine that the database was struggling to keep up with the number of reads that the store was generating. Which solution would you implement to improve the application read performance the most?

A. Deploy an Amazon ElastiCache cluster with nodes running in each AZ.

B. Upgrade your RDS MySQL instance to use provisioned IOPS.

C. Add an RDS Read Replica in each AZ.

D. Upgrade the RDS MySQL instance to a larger type.

Q79: Which native AWS service will act as a file system mounted on an S3 bucket?

A. Amazon Elastic Block Store

B. File Gateway

C. Amazon S3

D. Amazon Elastic File System

Q80: You have been evaluating the NACLs in your company. Most of the NACLs are configured the same:

100 All Traffic Allow
200 All Traffic Deny
* All Traffic Deny

If a request comes in, how will it be evaluated?

A. The default will deny traffic.

B. The request will be allowed.

C. The highest numbered rule will be used, a deny.

D. All rules will be evaluated and the end result will be Deny.

Q81: You have been given an assignment to configure Network ACLs in your VPC. Before configuring the NACLs, you need to understand how the NACLs are evaluated. How are NACL rules evaluated?

A. NACL rules are evaluated by rule number from lowest to highest and executed immediately when a matching rule is found.

B. NACL rules are evaluated by rule number from highest to lowest, and executed immediately when a matching rule is found.

C. All NACL rules that you configure are evaluated before traffic is passed through.

D. NACL rules are evaluated by rule number from highest to lowest, and all are evaluated before traffic is passed through.


Q82: Your company has gone through an audit with a focus on data storage. You are currently storing historical data in Amazon Glacier. One of the results of the audit is that a portion of the infrequently-accessed historical data must be able to be accessed immediately upon request. Where can you store this data to meet this requirement?

A. S3 Standard

B. Leave infrequently-accessed data in Glacier.

C. S3 Standard-IA

D. Store the data in EBS

Q84: After an IT Steering Committee meeting, you have been put in charge of configuring a hybrid environment for the company’s compute resources. You weigh the pros and cons of various technologies, such as VPN and Direct Connect, and based on the requirements you have decided to configure a VPN connection. What features and advantages can a VPN connection provide?

Q86: Your company has decided to go to a hybrid cloud environment. Part of this effort will be to move a large data warehouse to the cloud. The warehouse is 50TB, and will take over a month to migrate given the current bandwidth available. What is the best option available to perform this migration considering both cost and performance aspects?

Q87: You have been assigned the review of the security in your company AWS cloud environment. Your final deliverable will be a report detailing potential security issues. One of the first things that you need to describe is the responsibilities of the company under the shared responsibility model. Which measure is the customer’s responsibility?

Q88: You work for a busy real estate company, and you need to protect your data stored on S3 from accidental deletion. Which of the following actions might you take to achieve this? Choose 2

A. Create a bucket policy that prohibits anyone from deleting things from the bucket.

B. Enable S3 – Infrequent Access Storage (S3 – IA).

C. Enable versioning on the bucket. If a file is accidentally deleted, delete the delete marker.

D. Configure MFA-protected API access.

E. Use pre-signed URL’s so that users will not be able to accidentally delete data.

Q89: AWS intends to shut down your spot instance; which of these scenarios is possible? Choose 3

A. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown.

B. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, and you delay it by sending a ‘Delay300’ instruction before the forced shutdown takes effect.

C. AWS sends a notification of termination and you receive it 120 seconds before the intended forced shutdown, but AWS does not action the shutdown.

D. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but you block the shutdown because you used ‘Termination Protection’ when you initialized the instance.

E. AWS sends a notification of termination and you receive it 120 seconds before the forced shutdown, but the defined duration period (also known as Spot blocks) hasn’t ended yet.

F. AWS sends a notification of termination, but you do not receive it within the 120 seconds and the instance is shut down.

Q90: What does the “EAR” in a policy document stand for?

A. Effects, APIs, Roles

B. Effect, Action, Resource

C. Ewoks, Always, Romanticize

D. Every, Action, Reasonable

Q92: You can use _ to build a schema for your data, and _ to query the data that’s stored in S3.

A. Glue, Athena

B. EC2, SQS

C. EC2, Glue

D. Athena, Lambda

Q93: What type of work does EMR perform?

A. Data processing information (DPI) jobs.

B. Big data (BD) jobs.

C. Extract, transform, and load (ETL) jobs.

D. Huge amounts of data (HAD) jobs

Q94: _____ allows you to transform data using SQL as it’s being passed through Kinesis.

A. RDS

B. Kinesis Data Analytics

C. Redshift

D. DynamoDB

Q95 [SAA-C03]: A company runs a public-facing three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances for the application tier running in private subnets need to download software patches from the internet. However, the EC2 instances cannot be directly accessible from the internet. Which actions should be taken to allow the EC2 instances to download the needed patches? (Select TWO.)

A. Configure a NAT gateway in a public subnet.

B. Define a custom route table with a route to the NAT gateway for internet traffic and associate it with the private subnets for the application tier.

C. Assign Elastic IP addresses to the EC2 instances.

D. Define a custom route table with a route to the internet gateway for internet traffic and associate it with the private subnets for the application tier.

E. Configure a NAT instance in a private subnet.
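
A rough boto3 sketch of the NAT gateway plus private route table pattern that the correct answers describe (the subnet and route table IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# The NAT gateway lives in a public subnet and needs an Elastic IP
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public1234",        # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)

# Route internet-bound traffic from the private subnets through the NAT gateway
ec2.create_route(
    RouteTableId="rtb-0private5678",      # hypothetical route table for the private subnets
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)
```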

Q96 [SAA-C03]: A solutions architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The applications running on the EC2 instances store data in instance memory that must be present when the instances resume operation. Which approach should the solutions architect recommend to shut down and resume the EC2 instances?

A. Modify the application to store the data on instance store volumes. Reattach the volumes while restarting them.

B. Snapshot the EC2 instances before stopping them. Restore the snapshot after restarting the instances.

C. Run the applications on EC2 instances enabled for hibernation. Hibernate the instances before the 2-week company shutdown.

D. Note the Availability Zone for each EC2 instance before stopping it. Restart the instances in the same Availability Zones after the 2-week company shutdown.

Reference: Hibernating – 
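
A minimal boto3 sketch of hibernation, assuming the instance was launched with hibernation enabled and meets the prerequisites (the instance ID is hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Hibernation must be enabled at launch (HibernationOptions={"Configured": True})
# and requires an encrypted root EBS volume large enough to hold the instance RAM.
ec2.stop_instances(
    InstanceIds=["i-0123456789abcdef0"],  # hypothetical instance
    Hibernate=True,  # RAM contents are saved to the root volume and restored on start
)
```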


Q97 [SAA-C03]: A company plans to run a monitoring application on an Amazon EC2 instance in a VPC. Connections are made to the EC2 instance using the instance’s private IPv4 address. A solutions architect needs to design a solution that will allow traffic to be quickly directed to a standby EC2 instance if the application fails and becomes unreachable. Which approach will meet these requirements?

A) Deploy an Application Load Balancer configured with a listener for the private IP address and register the primary EC2 instance with the load balancer. Upon failure, de-register the instance and register the standby EC2 instance.
B) Configure a custom DHCP option set. Configure DHCP to assign the same private IP address to the standby EC2 instance when the primary EC2 instance fails.
C) Attach a secondary elastic network interface to the EC2 instance configured with the private IP address. Move the network interface to the standby EC2 instance if the primary EC2 instance becomes unreachable.
D) Associate an Elastic IP address with the network interface of the primary EC2 instance. Disassociate the Elastic IP from the primary instance upon failure and associate it with a standby EC2 instance.

Q98 [SAA-C03]: An analytics company is planning to offer a web analytics service to its users. The service will require that the users’ webpages include a JavaScript script that makes authenticated GET requests to the company’s Amazon S3 bucket. What must a solutions architect do to ensure that the script will successfully execute?

A. Enable cross-origin resource sharing (CORS) on the S3 bucket.

B. Enable S3 Versioning on the S3 bucket.

C. Provide the users with a signed URL for the script.

D. Configure an S3 bucket policy to allow public execute privileges.

Reference: Amazon S3 can be configured with CORS – 
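
A sketch of what enabling CORS on the bucket could look like with boto3; the bucket name and rule values are illustrative assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Allow browsers on other origins to make authenticated GET requests to the bucket
cors_config = {
    "CORSRules": [
        {
            "AllowedOrigins": ["*"],          # or a list of specific customer domains
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["Authorization"],
            "MaxAgeSeconds": 3000,
        }
    ]
}

s3.put_bucket_cors(
    Bucket="analytics-script-bucket",         # hypothetical bucket
    CORSConfiguration=cors_config,
)
```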

Q99 [SAA-C03]: A company’s security team requires that all data stored in the cloud be encrypted at rest at all times using encryption keys stored on premises. Which encryption options meet these requirements? (Select TWO.)

A. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3).
B. Use server-side encryption with AWS KMS managed encryption keys (SSE-KMS).
C. Use server-side encryption with customer-provided encryption keys (SSE-C).
D. Use client-side encryption to provide at-rest encryption.
E. Use an AWS Lambda function invoked by Amazon S3 events to encrypt the data using the customer’s keys.

Reference: Server-side encryption with customer-provided keys (SSE-C) – 
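
A hedged boto3 sketch of SSE-C, where the customer supplies the key on every request; the bucket, key, and key-handling details are hypothetical (in practice the key would come from the on-premises key store):

```python
import os
import boto3

s3 = boto3.client("s3")

# A 256-bit key supplied by the customer; AWS uses it to encrypt, then discards it
customer_key = os.urandom(32)  # placeholder for a key fetched from on premises

s3.put_object(
    Bucket="secure-records-bucket",       # hypothetical bucket
    Key="records/2022/ledger.csv",
    Body=b"...",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# The same key must be presented again on every read
obj = s3.get_object(
    Bucket="secure-records-bucket",
    Key="records/2022/ledger.csv",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
```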

Q100 [SAA-C03]: A company uses Amazon EC2 Reserved Instances to run its data processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would provide increased resource capacity as cost-effectively as possible. What should a solutions architect do to accomplish this?

A) Deploy On-Demand Instances during periods of high demand.
B) Create a second EC2 reservation for additional instances.
C) Deploy Spot Instances during periods of high demand.
D) Increase the EC2 instance size in the EC2 reservation to support the increased workload.

Reference: Spot Instances –  On-Demand instances

Q101 [SAA-C03]: A company runs an online voting system for a weekly live television program. During broadcasts, users submit hundreds of thousands of votes within minutes to a front-end fleet of Amazon EC2 instances that run in an Auto Scaling group. The EC2 instances write the votes to an Amazon RDS database. However, the database is unable to keep up with the requests that come from the EC2 instances. A solutions architect must design a solution that processes the votes in the most efficient manner and without downtime. Which solution meets these requirements?

A. Migrate the front-end application to AWS Lambda. Use Amazon API Gateway to route user requests to the Lambda functions.
B. Scale the database horizontally by converting it to a Multi-AZ deployment. Configure the front-end application to write to both the primary and secondary DB instances.
C. Configure the front-end application to send votes to an Amazon Simple Queue Service (Amazon SQS) queue. Provision worker instances to read the SQS queue and write the vote information to the database.
D. Use Amazon EventBridge (Amazon CloudWatch Events) to create a scheduled event to re-provision the database with larger, memory optimized instances during voting periods. When voting ends, re-provision the database to use smaller instances.

Reference: Decouple – 
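
A rough sketch of the queue-based decoupling in option C, using boto3; the queue name and the db_writer callable are hypothetical:

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="votes-queue")["QueueUrl"]  # hypothetical queue

# Front-end fleet: enqueue each vote instead of writing straight to the database
def record_vote(contestant_id: str, user_id: str) -> None:
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps({"contestant": contestant_id, "user": user_id}),
    )

# Worker fleet: drain the queue at a rate the database can sustain
def process_votes(db_writer) -> None:
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        db_writer(json.loads(msg["Body"]))  # write the vote to the database
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```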

Q102 [SAA-C03]: A company has a two-tier application architecture that runs in public and private subnets. Amazon EC2 instances running the web application are in the public subnet and an EC2 instance for the database runs on the private subnet. The web application instances and the database are running in a single Availability Zone (AZ). Which combination of steps should a solutions architect take to provide high availability for this architecture? (Select TWO.)

A. Create new public and private subnets in the same AZ.
B. Create an Amazon EC2 Auto Scaling group and Application Load Balancer spanning multiple AZs for the web application instances.
C. Add the existing web application instances to an Auto Scaling group behind an Application Load Balancer.
D. Create new public and private subnets in a new AZ. Create a database using an EC2 instance in the public subnet in the new AZ. Migrate the old database contents to the new database.
E. Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old database contents to the new DB instance.

Reference: Auto Scaling group with instances in two AZs behind the load balancer – 

Q103 [SAA-C03]: A website runs a custom web application that receives a burst of traffic each day at noon. The users upload new pictures and content daily, but have been complaining of timeouts. The architecture uses Amazon EC2 Auto Scaling groups, and the application consistently takes 1 minute to initiate upon boot up before responding to user requests. How should a solutions architect redesign the architecture to better respond to changing traffic?

A. Configure a Network Load Balancer with a slow start configuration.
B. Configure Amazon ElastiCache for Redis to offload direct requests from the EC2 instances.
C. Configure an Auto Scaling step scaling policy with an EC2 instance warmup condition.
D. Configure Amazon CloudFront to use an Application Load Balancer as the origin.

Reference: Step scaling policy – 

Q104 [SAA-C03]: An application running on AWS uses an Amazon Aurora Multi-AZ DB cluster deployment for its database. When evaluating performance metrics, a solutions architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. What should the solutions architect do to separate the read requests from the write requests?

A. Enable read-through caching on the Aurora database.
B. Update the application to read from the Multi-AZ standby instance.
C. Create an Aurora replica and modify the application to use the appropriate endpoints.
D. Create a second Aurora database and link it to the primary database as a read replica.

Question 106: A company plans to migrate its on-premises workload to AWS. The current architecture is composed of a Microsoft SharePoint server that uses a Windows shared file storage. The Solutions Architect needs to use a cloud storage solution that is highly available and can be integrated with Active Directory for access control and authentication. Which of the following options can satisfy the given requirement?

A. Create a file system using Amazon EFS and join it to an Active Directory domain.
B. Create a file system using Amazon FSx for Windows File Server and join it to an Active Directory domain in AWS.
C. Create a Network File System (NFS) file share using AWS Storage Gateway.
D. Launch an Amazon EC2 Windows Server to mount a new S3 bucket as a file volume.

Question 108: A Forex trading platform, which frequently processes and stores global financial data every minute, is hosted in your on-premises data center and uses an Oracle database. Due to a recent cooling problem in their data center, the company urgently needs to migrate their infrastructure to AWS to improve the performance of their applications. As the Solutions Architect, you are responsible for ensuring that the database is properly migrated and remains available in case of database server failure in the future. Which of the following is the most suitable solution to meet the requirement?

A. Create an Oracle database in RDS with Multi-AZ deployments.
B. Launch an Oracle database instance in RDS with Recovery Manager (RMAN) enabled.
C. Launch an Oracle Real Application Clusters (RAC) in RDS.
D. Convert the database schema using the AWS Schema Conversion Tool and AWS Database Migration Service. Migrate the Oracle database to a non-cluster Amazon Aurora with a single instance.

Question 109: A data analytics company, which uses machine learning to collect and analyze consumer data, is using a Redshift cluster as its data warehouse. You are instructed to implement a disaster recovery plan for their systems to ensure business continuity even in the event of an AWS region outage. Which of the following is the best approach to meet this requirement?

A. Do nothing because Amazon Redshift is a highly available, fully-managed data warehouse which can withstand an outage of an entire AWS region.
B. Enable Cross-Region Snapshots Copy in your Amazon Redshift Cluster.
C. Create a scheduled job that will automatically take the snapshot of your Redshift Cluster and store it to an S3 bucket. Restore the snapshot in case of an AWS region outage.
D. Use Automated snapshots of your Redshift Cluster.

Question 109: A start-up company has an EC2 instance that is hosting a web application. The volume of users is expected to grow in the coming months and hence, you need to add more elasticity and scalability in your AWS architecture to cope with the demand. Which of the following options can satisfy the above requirement for the given scenario? (Select TWO.)

A. Set up two EC2 instances and then put them behind an Elastic Load balancer (ELB).
B. Set up two EC2 instances deployed using Launch Templates and integrated with AWS Glue.
C. Set up an S3 Cache in front of the EC2 instance.
D. Set up two EC2 instances and use Route 53 to route traffic based on a Weighted Routing Policy.
E. Set up an AWS WAF behind your EC2 Instance.

Question 110: A company plans to deploy a Docker-based batch application in AWS. The application will be used to process both mission-critical data as well as non-essential batch jobs. Which of the following is the most cost-effective option to use in implementing this architecture?

A. Use ECS as the container management service then set up Reserved EC2 Instances for processing both mission-critical and non-essential batch jobs.
B. Use ECS as the container management service then set up a combination of Reserved and Spot EC2 Instances for processing mission-critical and non-essential batch jobs respectively.
C. Use ECS as the container management service then set up On-Demand EC2 Instances for processing both mission-critical and non-essential batch jobs.
D. Use ECS as the container management service then set up Spot EC2 Instances for processing both mission-critical and non-essential batch jobs.

Question 112: An online stocks trading application that stores financial data in an S3 bucket has a lifecycle policy that moves older data to Glacier every month. There is a strict compliance requirement where a surprise audit can happen at any time and you should be able to retrieve the required data in under 15 minutes under all circumstances. Your manager instructed you to ensure that retrieval capacity is available when you need it and can handle up to 150 MB/s of retrieval throughput. Which of the following should you do to meet the above requirement? (Select TWO.)

A. Retrieve the data using Amazon Glacier Select.
B. Use Bulk Retrieval to access the financial data.
C. Purchase provisioned retrieval capacity.
D. Use Expedited Retrieval to access the financial data.
E. Specify a range, or portion, of the financial data archive to retrieve.

Question 113: An organization stores and manages financial records of various companies in its on-premises data center, which is almost out of space. The management decided to move all of their existing records to a cloud storage service. All future financial records will also be stored in the cloud. For additional security, all records must be prevented from being deleted or overwritten. Which of the following should you do to meet the above requirement?
A. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon S3 and enable object lock.
B. Use AWS DataSync to move the data. Store all of your data in Amazon EFS and enable object lock.
C. Use AWS Storage Gateway to establish hybrid cloud storage. Store all of your data in Amazon EBS and enable object lock.
D. Use AWS DataSync to move the data. Store all of your data in Amazon S3 and enable object lock.
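
A boto3 sketch of S3 Object Lock, which has to be enabled when the bucket is created; the bucket name, key, and retention date are hypothetical:

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Object Lock can only be turned on at bucket creation time
s3.create_bucket(
    Bucket="financial-records-archive",   # hypothetical bucket
    ObjectLockEnabledForBucket=True,
)

# COMPLIANCE mode prevents deletion or overwrite by any user until the retain date
s3.put_object(
    Bucket="financial-records-archive",
    Key="records/company-a/2022.csv",
    Body=b"...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime(2030, 1, 1, tzinfo=timezone.utc),
)
```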

Question 114: A solutions architect is designing a solution to run a containerized web application by using Amazon Elastic Container Service (Amazon ECS). The solutions architect wants to minimize cost by running multiple copies of a task on each container instance. The number of task copies must scale as the load increases and decreases. Which routing solution distributes the load to the multiple tasks?

A. Configure an Application Load Balancer to distribute the requests by using path-based routing.
B. Configure an Application Load Balancer to distribute the requests by using dynamic host port mapping.
C. Configure an Amazon Route 53 alias record set to distribute the requests with a failover routing policy.
D. Configure an Amazon Route 53 alias record set to distribute the requests with a weighted routing policy.

Question 115: A Solutions Architect needs to deploy a mobile application that can collect votes for a popular singing competition. Millions of users from around the world will submit votes using their mobile phones. These votes must be collected and stored in a highly scalable and highly available data store which will be queried for real-time ranking. Which of the following combination of services should the architect use to meet this requirement?
A. Amazon Redshift and AWS Mobile Hub
B. Amazon DynamoDB and AWS AppSync
C. Amazon Relational Database Service (RDS) and Amazon MQ
D. Amazon Aurora and Amazon Cognito

Question 116: The usage of a company’s image-processing application is increasing suddenly with no set pattern. The application’s processing time grows linearly with the size of the image. The processing can take up to 20 minutes for large image files. The architecture consists of a web tier, an Amazon Simple Queue Service (Amazon SQS) standard queue, and message consumers that process the images on Amazon EC2 instances. When a high volume of requests occurs, the message backlog in Amazon SQS increases. Users are reporting delays in processing. A solutions architect must improve the performance of the application in compliance with cloud best practices. Which solution will meet these requirements?

A. Purchase enough Dedicated Instances to meet the peak demand. Deploy the instances for the consumers.
B. Convert the existing SQS standard queue to an SQS FIFO queue. Increase the visibility timeout.
C. Configure a scalable AWS Lambda function as the consumer of the SQS messages.
D. Create a message consumer that is an Auto Scaling group of instances. Configure the Auto Scaling group to scale based upon the ApproximateNumberOfMessages Amazon CloudWatch metric.
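
A sketch of option D with boto3; note that the published CloudWatch metric is named ApproximateNumberOfMessagesVisible. The Auto Scaling group name, queue name, and threshold are hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Step scaling policy that adds consumers when the backlog alarm fires
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-consumers",   # hypothetical ASG
    PolicyName="scale-out-on-backlog",
    PolicyType="StepScaling",
    AdjustmentType="ChangeInCapacity",
    StepAdjustments=[{"MetricIntervalLowerBound": 0, "ScalingAdjustment": 2}],
)

cloudwatch = boto3.client("cloudwatch")

# Alarm on the SQS backlog that triggers the scaling policy
cloudwatch.put_metric_alarm(
    AlarmName="sqs-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "image-jobs"}],  # hypothetical queue
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```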

Question 117: An application is hosted on an EC2 instance with multiple EBS Volumes attached and uses Amazon Neptune as its database. To improve data security, you encrypted all of the EBS volumes attached to the instance to protect the confidential data stored in the volumes. Which of the following statements are true about encrypted Amazon Elastic Block Store volumes? (Select TWO.)

Question 118: A reporting application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. For complex reports, the application can take up to 15 minutes to respond to a request. A solutions architect is concerned that users will receive HTTP 5xx errors if a report request is in process during a scale-in event. What should the solutions architect do to ensure that user requests will be completed before instances are terminated?

A. Enable sticky sessions (session affinity) for the target group of the instances.
B. Increase the instance size in the Application Load Balancer target group.
C. Increase the cooldown period for the Auto Scaling group to a greater amount of time than the time required for the longest running responses.
D. Increase the deregistration delay timeout for the target group of the instances to greater than 900 seconds.
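
A one-call boto3 sketch of option D; the target group ARN is hypothetical, while deregistration_delay.timeout_seconds is the real ELBv2 attribute key:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Give in-flight report requests up to an hour to finish before a target
# is deregistered during scale-in (the default delay is 300 seconds)
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/reports/0123456789abcdef",  # hypothetical ARN
    Attributes=[{"Key": "deregistration_delay.timeout_seconds", "Value": "3600"}],
)
```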

Question 119: A company used Amazon EC2 Spot Instances for a demonstration that is now complete. A solutions architect must remove the Spot Instances to stop them from incurring cost. What should the solutions architect do to meet this requirement?

A. Cancel the Spot request only.
B. Terminate the Spot Instances only.
C. Cancel the Spot request. Terminate the Spot Instances.
D. Terminate the Spot Instances. Cancel the Spot request.

Question 120: Which components are required to build a site-to-site VPN connection on AWS? (Select TWO.)
A. An Internet Gateway
B. A NAT gateway
C. A customer Gateway
D. A Virtual Private Gateway
E. Amazon API Gateway

Question 121: A company runs its website on Amazon EC2 instances behind an Application Load Balancer that is configured as the origin for an Amazon CloudFront distribution. The company wants to protect against cross-site scripting and SQL injection attacks. Which approach should a solutions architect recommend to meet these requirements?

A. Enable AWS Shield Advanced. List the CloudFront distribution as a protected resource.
B. Define an AWS Shield Advanced policy in AWS Firewall Manager to block cross-site scripting and SQL injection attacks.
C. Set up AWS WAF on the CloudFront distribution. Use conditions and rules that block cross-site scripting and SQL injection attacks.
D. Deploy AWS Firewall Manager on the EC2 instances. Create conditions and rules that block cross-site scripting and SQL injection attacks.

Question 122: A media company is designing a new solution for graphic rendering. The application requires up to 400 GB of storage for temporary data that is discarded after the frames are rendered. The application requires approximately 40,000 random IOPS to perform the rendering. What is the MOST cost-effective storage option for this rendering application?
A. A storage optimized Amazon EC2 instance with instance store storage
B. A storage optimized Amazon EC2 instance with a Provisioned IOPS SSD (io1 or io2) Amazon Elastic Block Store (Amazon EBS) volume
C. A burstable Amazon EC2 instance with a Throughput Optimized HDD (st1) Amazon Elastic Block Store (Amazon EBS) volume
D. A burstable Amazon EC2 instance with Amazon S3 storage over a VPC endpoint

Question 123: A company is deploying a new application that will consist of an application layer and an online transaction processing (OLTP) relational database. The application must be available at all times. However, the application will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods. Which solution meets these requirements MOST cost-effectively?
A. Run the application in containers with Amazon Elastic Container Service (Amazon ECS) on AWS Fargate. Use Amazon Aurora Serverless for the database.
B. Run the application on Amazon EC2 instances by using a burstable instance type. Use Amazon Redshift for the database.
C. Deploy the application and a MySQL database to Amazon EC2 instances by using AWS CloudFormation. Delete the stack at the beginning of the idle periods.
D. Deploy the application on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. Use Amazon RDS for MySQL for the database.


AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads. Based on six pillars — operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability — AWS Well-Architected provides a consistent approach for customers and partners to evaluate architectures, and implement designs that can scale over time.

1. Operational Excellence

The operational excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. You can find prescriptive guidance on implementation in the Operational Excellence Pillar whitepaper.

2. Security
The security pillar includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies. You can find prescriptive guidance on implementation in the Security Pillar whitepaper.

3. Reliability
The reliability pillar includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. You can find prescriptive guidance on implementation in the Reliability Pillar whitepaper.

4. Performance Efficiency
The performance efficiency pillar includes the ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve. You can find prescriptive guidance on implementation in the Performance Efficiency Pillar whitepaper.

5. Cost Optimization
The cost optimization pillar includes the ability to avoid or eliminate unneeded cost or suboptimal resources. You can find prescriptive guidance on implementation in the Cost Optimization Pillar whitepaper.

6. Sustainability

    • The ability to increase efficiency across all components of a workload by maximizing the benefits from the provisioned resources.
    • There are six best practice areas for sustainability in the cloud:
      • Region Selection – AWS Global Infrastructure
      • User Behavior Patterns – Auto Scaling, Elastic Load Balancing
      • Software and Architecture Patterns – AWS Design Principles
      • Data Patterns – Amazon EBS,  Amazon EFS, Amazon FSx, Amazon S3
      • Hardware Patterns – Amazon EC2, AWS Elastic Beanstalk
      • Development and Deployment Process – AWS CloudFormation
    • Key AWS service:

Source: 6 Pillars of the AWS Well-Architected Framework


The AWS Well-Architected Framework provides architectural best practices across the six pillars for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud.
The framework provides a set of questions that allows you to review an existing or proposed architecture. It also provides a set of AWS best practices for each pillar.
Using the Framework in your architecture helps you produce stable and efficient systems, which allows you to focus on functional requirements.


The reality, of course, today is that if you come up with a great idea you don’t get to go quickly to a successful product. There’s a lot of undifferentiated heavy lifting that stands between your idea and that success. The kinds of things that I’m talking about when I say undifferentiated heavy lifting are things like these: figuring out which servers to buy, how many of them to buy, what time line to buy them.

Eventually you end up with heterogeneous hardware and you have to match that. You have to think about backup scenarios if you lose your data center or lose connectivity to a data center. Eventually you have to move facilities. There’s negotiations to be done. It’s a very complex set of activities that really is a big driver of ultimate success.

But they are undifferentiated from, it’s not the heart of, your idea. We call this muck. And it gets worse because what really happens is you don’t have to do this one time. You have to drive this loop. After you get your first version of your idea out into the marketplace, you’ve done all that undifferentiated heavy lifting, you find out that you have to cycle back. Change your idea. The winners are the ones that can cycle this loop the fastest.

On every cycle of this loop you have this undifferentiated heavy lifting, or muck, that you have to contend with. I believe that for most companies, and it’s certainly true at Amazon, that 70% of your time, energy, and dollars go into the undifferentiated heavy lifting and only 30% of your energy, time, and dollars gets to go into the core kernel of your idea.

I think what people are excited about is that they’re going to get a chance they see a future where they may be able to invert those two. Where they may be able to spend 70% of their time, energy and dollars on the differentiated part of what they’re doing.

— Jeff Bezos, 2006


AWS Certified Solutions Architect Associate

So my exam was yesterday and I got the results in 24 hours. I think that’s how they review all SAA exams now, not showing the results right away anymore.

I scored 858. I was practicing with Stephane’s Udemy lectures and Bonso exam tests. My test results were as follows: Test 1: 63%, 93%; Test 2: 67%, 87%; Test 3: 81%; Test 4: 72%; Test 5: 75%; Test 6: 81%; Stephane’s test: 80%.

I was reading all question explanations (even the ones I got correct)

The actual exam was pretty much similar to these. The topics I got were:

  1. A lot of S3 (make sure you know all of it from head to toes)

  2. VPC peering

  3. DataSync and Database Migration Service in the same questions. Make sure you know the difference

  4. One EKS question

  5. 2-3 KMS questions

  6. Security group question

  7. A lot of RDS Multi-AZ

  8. SQS + SNS fan out pattern

  9. ECS microservice architecture question

  10. Route 53

  11. NAT gateway

And that’s all I can remember)

I took the extra 30 minutes because English is not my native language, and I had plenty of time to think and then review flagged questions.

Good luck with your exams guys!

AWS Certified Solutions Architect Associate

Hey guys, just giving my update so all of you guys working towards your certs can stay motivated as these success stories drove me to reach this goal.

Background: 12 years of military IT experience, never worked with the cloud. I’ve done 7 deployments (that is a lot in 12 years), at which point I came home from the last one burnt out with a family that barely knew me. I knew I needed a change, but had no clue where to start or what I wanted to do. I wasn’t really interested in IT but I knew it’d pay the bills. After seeing videos about people in IT working from home (which after 8+ years of being gone from home really appealed to me), I stumbled across a video about a Solutions Architect’s daily routine working from home, which got me interested in AWS.

It took me 68 days straight of hard work to pass this exam with confidence. No rest days, more than 120 pages of hand-written notes and hundreds and hundreds of flash cards.

In the beginning, I hopped on Stephane Maarek’s course for the CCP exam just to see if it was for me. I did the course in about a week and then, after doing some research on here, got the CCP practice exams from tutorialsdojo.com. Two weeks after starting the Udemy course, I passed the exam. By that point, I’d already done lots of research on the different career paths and the best way to study, etc.

Cantrill (10/10) – That same day, I hopped onto Cantrill’s course for the SAA and got to work. Somebody had mentioned that by doing his courses you’d be over-prepared for the exam. While I think a combination of material is really important for passing the certification with confidence, I can say without a doubt Cantrill’s courses got me 85-90% of the way there. His forum is also amazing, and it directly led to me talking with somebody who works at AWS about landing a job, which makes the money I spent on all of his courses A STEAL. As I continue my journey (up next is SA Pro), I will be using all of his courses.

Neal Davis (8/10) – After completing Cantrill’s course, I found myself needing a resource to reinforce all the material I’d just learned. AWS is an expansive platform and the many intricacies of the different services can be tricky. For this portion, I relied on Neal Davis’s Training Notes series. These training notes are a very condensed version of the information you’ll need to pass the exam, and with the proper context are very useful for finding the things you may have missed in your initial learning. I will be using his other Training Notes for my other exams as well.

TutorialsDojo (10/10) – These tests filled in the gaps and allowed me to spot my weaknesses and shore them up. I actually think my real exam was harder than these, but because I’d spent so much time on the material I got wrong, I was able to pass the exam with a safe score.

As I said, I was surprised at how difficult the exam was. A lot of my questions were related to DBs, and a lot of them gave no context as to whether the data being loaded into them was SQL or NoSQL, which made the choice selection a little frustrating. A lot of the questions have 2 VERY SIMILAR answers, and oftentimes the wording of the answers could be easy to misinterpret (such as when you are creating a Read Replica: do you attach it to the primary application DB that is slowing down because of read issues, or attach it to the service that is causing the primary DB to slow down). For context, I was scoring 95-100% on the TD exams prior to taking the test and managed an 823 on the exam, so I don’t know if I got unlucky with a hard test or if I’m not as prepared as I thought I was (i.e. over-thinking questions).

Anyways, up next is going back over the practical parts of the course as I gear up for the SA Pro exam. I will be taking my time with this one, and re-learning the Linux CLI in preparation for finding a new job.

PS if anybody on here is hiring, I’m looking! I’m the hardest worker I know and my goal is to make your company as streamlined and profitable as possible. 🙂

Practical knowledge is about 30% of it; the rest is Jayendra’s blog and practice dumps.

Buying Udemy courses alone doesn’t make you pass. I can say with certainty that without going through the dumps and Jayendra’s blog, it is not easy to clear the certification.

Read FAQs of S3, IAM, EC2, VPC, SQS, Autoscaling, Elastic Load Balancer, EBS, RDS, Lambda, API Gateway, ECS.

Read the Security Whitepaper and Shared Responsibility model.

Also very important: expect basic questions on the topics most recently introduced to the exam, such as Amazon Kinesis.

– ACloudGuru course with practice test’s

– Created my own cheat sheet in excel

– Practice questions on various website

– Few AWS services FAQ’s

Exam feedback:

– Some questions tested your understanding of which service to pick for the use case.

– many questions on VPC

– a couple of unexpected questions on AWS CloudHSM, AWS Systems Manager, and Amazon Athena

– encryption at rest and in transit services

– migration from on-premise to AWS

– backing up data within an AZ vs. regionally

I believe the time was sufficient.

Overall I feel AWS SAA was more challenging in theory than GCP Associate CE.

some resources I bookmarked:

Whitepapers contain important information about each service and are published by Amazon on its website. If you are preparing for the AWS certifications, it is very important to read some of the most recommended whitepapers before writing the exam.

The following is a list of whitepapers that are useful for preparing for the Solutions Architect exam. You will also find the list of whitepapers in the exam blueprint.

Data security questions can be among the more challenging, and it’s worth noting that you need to have a good understanding of the security processes described in the whitepaper titled “Overview of Security Processes”.

In the above list, the most important whitepapers are Overview of Security Processes and Storage Options in the Cloud. Read more here…

Big thanks to /u/acantril for his amazing course – AWS Certified Solutions Architect – Associate (SAA-C02) – the best IT course I’ve ever had – and I’ve done many on various other platforms:


If you’re on the fence with buying one of his courses, stop thinking and buy it, I guarantee you won’t regret it! Other materials used for study:

Study duration approximately ~3 months with the following regimen:

  • Daily study from 30min to 2hrs

    • Usually early morning before work

    • Sometimes on the train when commuting from/to work

    • Sometimes in the evening

    • Due to being a father/husband, study wasn’t always possible

  • All learned topics reviewed weekly

AWS Certified Solutions Architect Associate

I’ve been following this subreddit for a while and gotten some helpful tips, so I’d like to give back with my two cents. FYI, I passed the exam with a 788.

The exam materials that I used were the following:

  • AWS Certified Solutions Architect Associate All-in-One Exam Guide (Banerjee)

  • Stephane Maarek’s Udemy course, and his 6 practice exams

  • Adrian Cantrill’s online course (about 60% done)

  • TutorialsDojo’s exams

(My company has a Udemy business account, so I was able to use Stephane’s course/exams.)

I scheduled my exam at the end of March and started with Adrian’s. But I was dumb to think that I could go through his course within 3 weeks… I stopped about 12% of the way through, went to the textbook, and finished reading the all-in-one exam guide within a weekend. Then I started going through Stephane’s course. While working through it, I pushed the exam back to the end of April, because I knew I wouldn’t be ready by the time it came along.

Five days before the exam, I finished Stephane’s course, and then did the final exam in the course. I failed miserably (around 50%). So I did one of Stephane’s practice exams and did worse (42%). I thought maybe his exams were just slightly more difficult, so I went and bought Jon Bonso’s exams and got 60% on the first one. And then I realized, based on all the questions on the exams, that I was definitely lacking some fundamentals. I went back to Adrian’s course and things were definitely sticking more – I think it has to do with his explanations plus the more practical stuff. Unfortunately, I could not finish his course before the exam (because I was cramming), and by the day of the exam, I had only done four of Bonso’s six exams, barely passing one of them.

Please, don’t do what I did. I was desperate to get this thing over with. I wanted to move on and work on other things for my job search, but if you’re not in that situation, please don’t do this. I can’t for the love of god tell you about OAI and CloudFront and why that’s different from an S3 URL. The only thing I can remember is all the practical stuff I did with Adrian’s course. I’ll never forget how to create a VPC, because he makes you manually go through it. I’m not against Stephane’s course – each course is different in its own way (see the tips below).

So here’s what I recommend doing before writing the AWS exam:

  1. Don’t schedule your exam beforehand. Go through the materials you are using, and make sure you get at least 80% on all of Jon Bonso’s exams (I’d recommend maybe 90% or higher).

  2. If you like to learn things practically, I do recommend Adrian’s course. If you like to learn things conceptually, go with Stephane Maarek’s course. I find Stephane’s course more detailed when going through different architectures, but I can’t really say for sure because I didn’t finish Adrian’s course.

  3. Jon Bonso’s exams were about the same difficulty as the actual exam, but slightly more tricky. For example, many of the questions will give you two different situations, and you really have to figure out what is being asked, because the situations might contradict each other while the actual question is asking one specific thing. However, there were a few questions that were definitely obvious if you knew the service.

I’m upset that even though I passed the exam, I’m still lacking some practical skills, so I’m going to go through Adrian’s Developer course, but without cramming this time. If you actually learn the materials and practice them, they are definitely useful in the real world. I hope this helps you pass and actually learn the stuff.

P.S I vehemently disagree with Adrian in one thing in his course. doggogram.io is definitely better than catagram.io, although his cats are pretty cool

Testimonial: I passed the SAA-C02 exam!

I sat the exam at a PearsonVUE test centre and scored 816.

The exam had lots of questions around S3, RDS and storage. To be honest it was a bit of a blur but they are the ones I remember.

I was a bit worried before sitting the exam as I only hit 76% in the official AWS practice exam the night before, but it turned out alright in the end!

I have around 8 years of experience in IT but AWS was relatively new to me around 5 weeks ago.

Training Material Used

Firstly I ran through the u/stephanemaarek course which I found to pretty much cover all that was required!

I then used the u/Tutorials_Dojo practice exams. I took one before starting Stephane’s course to see where I was at with no training. I got 46% but I suppose a few of them were lucky guesses!

I then finished the course and took another test and hit around 65%. TD was great as they gave explanations for the answers. I then used this to go back to the course and go over my weak areas again.

I then couldn’t seem to get higher than the low 70s on the exams, so I went through u/neal-davis’s course, which was also great as it had an “Exam Cram” video at the end of each topic.

I also set up flashcards on BrainScape which helped me remember AWS services and what their function is.

All in all it was a great learning experience and I look forward to putting my skills into action!


Many FSx / EFS / Lustre questions

S3 use cases, storage tiers, and CloudFront were pretty prominent too

Only got one “figure out what’s wrong with this IAM policy” question

A handful of DynamoDB questions, and a handful on picking use cases between different database types or caching layers.

Other typical tips: When you’re unclear on which answer you should pick, or if they seem very similar, work on eliminating answers first. “It can’t be X because of Y” can help a lot.

Testimonial: Passed the AWS Solutions Architect Associate exam!

I prepared mostly from freely available resources as my basics were strong. I bought Jon Bonso’s tests on Udemy and they turned out to be super important while preparing for those particular types of questions (i.e. the questions which feel subjective, but aren’t), understanding the line of questioning, and the most suitable answers for some common scenarios.

Created a Notion notebook to note down those common scenarios, exceptions, what supports what, integrations, etc. Used that notebook and the cheat sheets on the Tutorials Dojo website for revision on the final day.

Found the exam a little tougher than Jon Bonso’s, but his practice tests on Udemy were crucial. Wouldn’t have passed without them.

Piece of advice for upcoming test aspirants: Get your basics right, especially networking. Understand properly how different services interact within a VPC. Focus on the last line of the question. It usually gives you a hint about what exactly is needed: whether you need cost optimization, performance efficiency, or high availability. Little to no operational effort means serverless. Understand all serverless services thoroughly.


Testimonial:  Passed Solutions Architect Associate (SAA-C02) Today!

I have almost no experience with AWS, except for completing the Certified Cloud Practitioner earlier this year. My work is pushing all IT employees to complete some cloud training and certifications, which is why I chose to do this.

How I Studied:
My company pays for acloudguru subscriptions for its employees, so I used that for the bulk of my learning. I took notes on 3×5 notecards on the key terms and concepts for review.

Once I scored passing grades on the ACG practice tests, I took the Jon Bonso tests on Udemy, which are much more difficult and fairly close to the difficulty of the actual exam. I scored 45%-74% on every Bonso practice test, and spent 1-2 hours after each test reviewing what I missed, supplementing my note cards, and taking time to understand my weak spots. I only took these tests once each, but in between each practice test, I would review all my note cards until I had the content largely memorized.

The Test:
This was one of the most difficult certification tests I’ve ever done. The exam was remote proctored with PearsonVUE (I used PSI for the CCP and didn’t like it as much). I felt like I was failing half the time. I marked about 25% of the questions for review, and I used up the entire allotted time. The questions are mostly about understanding which services interact with which other services, or which services are incompatible with the scenario. It was important for me to read through each response and eliminate the ones that don’t make sense. A lot of the responses mentioned AWS services that sound good but don’t actually work together (i.e. if it doesn’t make sense to have service X querying database Y, that probably isn’t the right answer). I can’t point to one domain that really needs to be studied more than any other. You need to know all of the content for the exam.

Final Thoughts:
The ACG practice tests are not a good metric for success on the actual SAA exam, and I would not have passed without Bonso’s tests showing me my weak spots. PearsonVUE is better than PSI. Make sure to study everything thoroughly and review excessively. You don’t necessarily need 5 different study sources and years of experience to be able to pass (although both of those definitely help). Good luck to anyone who took the time to read!


AWS Certified Solutions Architect Associate
So glad to pass my first AWS certification after 6 weeks of preparation.

My Preparation:

After a series of trial and error in picking the appropriate learning content, I eventually went with the community’s advice and took the course presented by the amazing u/stephanemaarek, in addition to the practice exams by Jon Bonso.
At this point, I can’t say anything that hasn’t been said already about how helpful they are. It’s a great combination of learning material, I appreciate the instructor’s work, and the community’s help in this sub.

Review:

Throughout the course I noted down the important points and used the course slides as a reference in the first review iteration. Before resorting to Udemy’s practice exams, I purchased a practice exam from another website, which I regret (not to defame the other vendor; I would simply recommend Udemy). Udemy’s practice exams were incredible, in that they made me aware of the points I hadn’t understood clearly. After each exam, I would go through both the incorrect answers and the questions I had marked for review, write down the topic for review, and read the explanation thoroughly. The explanations point to the respective documentation in AWS, which is a recommended read, especially if you don’t feel confident with the service. What I want to note is that I didn’t get satisfying marks on the first go at the practice exams (I averaged ~70%). Throughout the 6 practice exams, I aggregated a long list of topics to review and went back to the course slides and practice-exam explanations, in addition to the AWS documentation for the respective services.

On the second go I averaged 85%. The second attempt at the exams was important as a confidence boost, as I made sure I understood the services more clearly.

The takeaway:

Don’t feel disappointed if you get bad results at your practice-exams. Make sure to review the topics and give it another shot.

The AWS documentation is your friend! It is very clear and concise. My only regret is not having referenced the documentation enough after learning new services.

The exam:

I scheduled the exam using PSI. I was very confident going into the exam, but going through such an exam environment for the first time put me under pressure. Partly, this was because I didn’t feel comfortable being monitored (I was afraid of being disqualified if I moved or covered my mouth), but mostly it was because there was a lot at stake on my side, and I had to pass on the first go. The questions were harder than expected, but I tried to analyze each question more carefully and eliminate the invalid answers.

I was very nervous and kept reviewing flagged questions up to the last minute. Luckily, I pulled through.

The takeaway:

The proctors are friendly; just make sure you feel comfortable in the exam place, and use the practice exams to prepare for the actual exam’s environment. That includes sitting in a straight posture, not talking or whispering, and not looking away.

Make sure to organize the time dedicated to each question well, and don’t let yourself get distracted by being monitored like I did.

Don’t skip the questions that you are not sure of. Try to select the most probable answer, then flag the question. This will make the very stressful, last-minute review easier.

Spread the instances over multiple AZs to minimize traffic concentration and maximize fault tolerance. With a multi-AZ configuration, an additional reliability point is scored because the entire Availability Zone itself is ruled out as a single point of failure. This ensures high availability. Wherever possible, use simple solutions, such as spreading the load out, rather than expensive high-tech solutions.

To save money, you quickly stored some data in one of the attached volumes of an EC2 instance and stopped it for the weekend. When you returned on Monday and restarted your instance, you discovered that your data was gone. Why might that be?

The volume was ephemeral, block-level storage. Data on an instance store volume is lost if an instance is stopped.

The most likely answer is that the EC2 instance had an instance store volume attached to it. Instance store volumes are ephemeral, meaning that data in attached instance store volumes is lost if the instance stops.

Reference: Instance store lifetime

Your company likes the idea of storing files on AWS. However, low-latency service of the last few days of files is important to customer service. Which Storage Gateway configuration would you use to achieve both of these ends?

A file gateway simplifies file storage in Amazon S3, integrates to existing applications through industry-standard file system protocols, and provides a cost-effective alternative to on-premises storage. It also provides low-latency access to data through transparent local caching.

Cached volumes allow you to store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.

You’ve been commissioned to develop a high-availability application with a stateless web tier. Identify the most cost-effective means of reaching this end.

Use an Elastic Load Balancer, a multi-AZ deployment of an Auto-Scaling group of EC2 Spot instances (primary) running in tandem with an Auto-Scaling group of EC2 On-Demand instances (secondary), and DynamoDB.

With proper scripting and scaling policies, running EC2 On-Demand instances behind the Spot instances delivers the most cost-effective solution, because On-Demand instances will only spin up if the Spot instances are not available. DynamoDB lends itself to supporting stateless web/app installations better than RDS.

You are building a NAT Instance in an m3.medium using the AWS Linux2 distro with amazon-linux-extras installed. Which of the following do you need to set?

Ensure that “Source/Destination Checks” is disabled on the NAT instance. With a NAT instance, the most common oversight is forgetting to disable Source/Destination Checks. Note: This is a legacy topic, and while it may appear on the AWS exam, it will only do so infrequently.

You are reviewing Change Control requests and you note that there is a proposed change designed to reduce errors due to SQS Eventual Consistency by updating the “DelaySeconds” attribute. What does this mean?

When a new message is added to the SQS queue, it will be hidden from consumer instances for a fixed period.

Delay queues let you postpone the delivery of new messages to a queue for a number of seconds, for example, when your consumer application needs additional time to process messages. If you create a delay queue, any messages that you send to the queue remain invisible to consumers for the duration of the delay period. The default (minimum) delay for a queue is 0 seconds. The maximum is 15 minutes. To set delay seconds on individual messages, rather than on an entire queue, use message timers to allow Amazon SQS to use the message timer’s DelaySeconds value instead of the delay queue’s DelaySeconds value. Reference: Amazon SQS delay queues.
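
As a rough illustration (the queue URL below is a hypothetical placeholder), setting the delay with the AWS CLI looks like this:

# Delay every new message in the queue by 45 seconds
aws sqs set-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --attributes DelaySeconds=45

# Or delay a single message with a message timer, overriding the queue default
aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --message-body "example payload" \
    --delay-seconds 120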

Amazon SQS keeps track of all tasks and events in an application: True or False?

False. Amazon SWF (not Amazon SQS) keeps track of all tasks and events in an application. Amazon SQS requires you to implement your own application-level tracking, especially if your application uses multiple queues. Amazon SWF FAQs.

You work for a company, and you need to protect your data stored on S3 from accidental deletion. Which actions might you take to achieve this?

Enable versioning on the bucket and protect the objects by configuring MFA-protected API access.

AWS has removed the firewall appliance from the hub of the network and implemented the firewall functionality as stateful Security Groups and stateless subnet NACLs. This is not a new concept in networking, but it is rarely implemented at this scale.

Create an IAM user for the auditor and explain that the firewall functionality is implemented as stateful Security Groups and stateless subnet NACLs.

Amazon ElastiCache can fulfill a number of roles. Which operations can be implemented using ElastiCache for Redis?

Amazon ElastiCache offers a fully managed Memcached and Redis service. Although the name only suggests caching functionality, the Redis service in particular can offer a number of operations such as Pub/Sub, Sorted Sets and an In-Memory Data Store. However, Amazon ElastiCache for Redis doesn’t support multithreaded architectures.

You have been asked to deploy an application on a small number of EC2 instances. The application must be placed across multiple Availability Zones and should also minimize the chance of underlying hardware failure. Which actions would provide this solution?

Deploy the EC2 servers in a Spread Placement Group.

Spread Placement Groups are recommended for applications that have a small number of critical instances which need to be kept separate from each other. Launching instances in a Spread Placement Group reduces the risk of simultaneous failures that might occur when instances share the same underlying hardware. Spread Placement Groups provide access to distinct hardware, and are therefore suitable for mixing instance types or launching instances over time. In this case, deploying the EC2 instances in a Spread Placement Group is the only correct option.

You manage a NodeJS messaging application that lives on a cluster of EC2 instances. Your website occasionally experiences brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. As a result, you’re losing recently submitted messages from end-users. You use Auto Scaling to deploy additional resources to handle the load during spikes, but the new instances don’t spin-up fast enough to prevent the existing application servers from freezing. Can you provide the most cost-effective solution in preventing the loss of recently submitted messages?

Use Amazon SQS to decouple the application components and keep the messages in queue until the extra Auto-Scaling instances are available.

Neither increasing the size of your EC2 instances nor maintaining additional EC2 instances is cost-effective, and pre-warming an ELB would only help if these spikes in traffic were predictable. The cost-effective solution to the unpredictable spikes in traffic is to use SQS to decouple the application components.

True statements on S3 URL styles

Virtual-host-style URLs (such as: https://bucket-name.s3.Region.amazonaws.com/key name) are supported by AWS.

Path-Style URLs (such as https://s3.Region.amazonaws.com/bucket-name/key name) are supported by AWS.

You can use a curl or GET command to retrieve the latest instance metadata from http://169.254.169.254/latest/meta-data/.
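
For example, from inside an instance you could run something like the following (IMDSv2 shown, which requires requesting a session token first):

# Request an IMDSv2 session token, valid here for 6 hours
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# List the available metadata categories, then fetch a specific value
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/instance-id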

What data formats are used to create CloudFormation templates?

JSON and YAML.

You have launched a NAT instance into a public subnet, and you have configured all relevant security groups, network ACLs, and routing policies to allow this NAT to function. However, EC2 instances in the private subnet still cannot communicate out to the internet. What troubleshooting steps should you take to resolve this issue?

Disable the Source/Destination Check on your NAT instance.

A NAT instance sends and retrieves traffic on behalf of instances in a private subnet. As a result, source/destination checks on the NAT instance must be disabled to allow the sending and receiving traffic for the private instances. Route 53 resolves DNS names, so it would not help here. Traffic that is originating from your NAT instance will not pass through an ELB. Instead, it is sent directly from the public IP address of the NAT Instance out to the Internet.
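
A minimal sketch of that fix with the AWS CLI (the instance ID is a placeholder):

# Disable source/destination checking so the NAT instance can forward traffic
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check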

You need a storage service that delivers the lowest-latency access to data for a database running on a single EC2 instance. Which of the following AWS storage services is suitable for this use case?

Amazon EBS is a block level storage service for use with Amazon EC2. Amazon EBS can deliver performance for workloads that require the lowest-latency access to data from a single EC2 instance. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS.

What are DynamoDB use cases?

Use cases include storing JSON data, BLOB data, and web session data.

You are reviewing Change Control requests, and you note that there is a change designed to reduce costs by updating the Amazon SQS “WaitTimeSeconds” attribute. What does this mean?

When the consumer instance polls for new work, the SQS service will allow it to wait a certain time for one or more messages to be available before closing the connection.

Poor timing of SQS processes can significantly impact the cost effectiveness of the solution.

Long polling helps reduce the cost of using Amazon SQS by reducing the number of empty responses (when there are no messages available for a ReceiveMessage request) and eliminating false empty responses (when messages are available but aren’t included in a response).

Reference: Here
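
A minimal sketch of enabling long polling, either per queue or per call (the queue URL is a placeholder):

# Enable long polling on the whole queue: each poll waits up to 20 seconds
aws sqs set-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --attributes ReceiveMessageWaitTimeSeconds=20

# Or request long polling for a single receive call
aws sqs receive-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --wait-time-seconds 20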

You have been asked to decouple an application by utilizing SQS. The application dictates that messages on the queue CAN be delivered more than once, but must be delivered in the order they arrived, while reducing the number of empty responses. Which option is most suitable?

Configure a FIFO SQS queue and enable long polling.
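
As a sketch, creating such a queue with the AWS CLI might look like this (FIFO queue names must end in .fifo; the name here is made up):

# Create a FIFO queue with long polling enabled by default
aws sqs create-queue \
    --queue-name orders-queue.fifo \
    --attributes FifoQueue=true,ContentBasedDeduplication=true,ReceiveMessageWaitTimeSeconds=20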

You are a security architect working for a large antivirus company. The production environment has recently been moved to AWS and is in a public subnet. You are able to view the production environment over HTTP. However, when your customers try to update their virus definition files over a custom port, that port is blocked. You log in to the console and you allow traffic in over the custom port. How long will this take to take effect?

Immediately.

You need to restrict access to an S3 bucket. Which methods can you use to do so?

There are two ways of restricting access to S3: Access Control Lists (permissions) and bucket policies.

You are reviewing Change Control requests, and you note that there is a change designed to reduce wasted CPU cycles by increasing the value of your Amazon SQS “VisibilityTimeout” attribute. What does this mean?

When a consumer instance retrieves a message, that message will be hidden from other consumer instances for a fixed period.

Poor timing of SQS processes can significantly impact the cost effectiveness of the solution. To prevent other consumers from processing the message again, Amazon SQS sets a visibility timeout, a period of time during which Amazon SQS prevents other consumers from receiving and processing the message. The default visibility timeout for a message is 30 seconds. The minimum is 0 seconds. The maximum is 12 hours.
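
For illustration (the queue URL and receipt handle are placeholders), the timeout can be raised for the whole queue or extended for one in-flight message:

# Raise the default visibility timeout for the queue to 10 minutes
aws sqs set-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --attributes VisibilityTimeout=600

# Or extend the timeout of a single message you are still processing
aws sqs change-message-visibility \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
    --receipt-handle "$RECEIPT_HANDLE" \
    --visibility-timeout 600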

With EBS, I can ____.

Create an encrypted volume from a snapshot of another encrypted volume.

Create an encrypted snapshot from an unencrypted snapshot by creating an encrypted copy of the unencrypted snapshot.

You can create an encrypted volume from a snapshot of another encrypted volume.

Although there is no direct way to encrypt an existing unencrypted volume or snapshot, you can encrypt the data by creating an encrypted copy, either as a new volume or as a snapshot. Reference: Encrypting unencrypted resources.
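
A minimal sketch of the snapshot path (the snapshot ID is a placeholder; the default KMS key is used when none is specified):

# Create an encrypted copy of an unencrypted snapshot
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0123456789abcdef0 \
    --encrypted \
    --description "Encrypted copy of my unencrypted snapshot"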

Following advice from your consultant, you have configured your VPC to use dedicated hosting tenancy. Your VPC has an Amazon EC2 Auto Scaling group designed to launch or terminate Amazon EC2 instances on a regular basis in order to meet workload demands. A subsequent change to your application has rendered the performance gains from dedicated tenancy superfluous, and you would now like to recoup some of these greater costs. How do you revert the instance tenancy attribute of your VPC to default for newly launched EC2 instances?

Modify the instance tenancy attribute of your VPC from dedicated to default using the AWS CLI, an AWS SDK, or the Amazon EC2 API.

You can change the instance tenancy attribute of a VPC from dedicated to default. Modifying the instance tenancy of the VPC does not affect the tenancy of any existing instances in the VPC. The next time you launch an instance in the VPC, it has a tenancy of default, unless you specify otherwise during launch. You can modify the instance tenancy attribute of a VPC using the AWS CLI, an AWS SDK, or the Amazon EC2 API only. Reference: Change the tenancy of a VPC.
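
As a quick sketch (the VPC ID is a placeholder; note that the CLI only supports changing the tenancy to default):

# Revert the VPC so newly launched instances default to shared tenancy
aws ec2 modify-vpc-tenancy \
    --vpc-id vpc-0123456789abcdef0 \
    --instance-tenancy default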

How do DynamoDB indices work?

What is Amazon DynamoDB?

Amazon DynamoDB is a fast, fully managed NoSQL database service. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data and serve any level of request traffic.

DynamoDB is used to create tables that store and retrieve any amount of data and serve any level of request traffic.

  • DynamoDB uses SSDs to store data.
  • Provides automatic and synchronous data replication.
  • Maximum item size is 400 KB.
  • Supports cross-region replication.

DynamoDB Core Concepts:

  • The fundamental concepts around DynamoDB are:
    • Tables – collections of data.
    • Items – the individual entries in a table.
    • Attributes – the properties associated with the entries.
  • Primary Keys.
  • Secondary Indexes.
  • DynamoDB Streams.

Secondary Indexes:

  • A secondary index is a data structure that contains a subset of attributes from a table, along with an alternate key that supports Query operations.
  • Every secondary index is related to only one table, from which it obtains data. This is called the base table of the index.
  • When you create an index, you define an alternate key for it (a partition key and sort key). DynamoDB copies the designated attributes into the index, including the primary key attributes derived from the table.
  • After this is done, you use Query/Scan on the index in the same way as you would on a table.

Every secondary index is automatically maintained by DynamoDB.

DynamoDB Indexes: DynamoDB supports two indexes:

  1. Local Secondary Index (LSI) – The index has the same partition key as the base table but a different sort key.
  2. Global Secondary Index (GSI) – The index has a partition key and sort key that can be different from those on the base table.

When creating more than one table with secondary indexes, you must do it sequentially: create the tables one after another. When you create the first table, wait for it to become active.

Once that table is active, create the next table and wait for it to become active, and so on. If you try to create multiple such tables concurrently, DynamoDB will return a LimitExceededException.
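
A small sketch of how that sequencing might look with the AWS CLI (the table name is made up):

# After issuing create-table for TableOne, block until it reaches ACTIVE
aws dynamodb wait table-exists --table-name TableOne
# ...only then issue the create-table call for the next table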

You must specify the following, for every secondary index:

  • Type – You must state the type of index you are creating, whether it is a Global Secondary Index or a Local Secondary Index.
  • Name – You must specify a name for the index. The rules for naming indexes are the same as those for the table it is attached to. You can reuse an index name across different base tables.
  • Key – The key schema for the index requires that every key attribute in the index be a top-level attribute of type string, number, or binary. Other data types, including documents and sets, are not allowed. Other requirements depend on the type of index you choose.
    • For a GSI – The partition key can be any scalar attribute of the base table.

The sort key is optional, and it too can be any scalar attribute of the base table.

  • For an LSI – The partition key must be the same as the base table’s partition key.

The sort key must be a non-key table attribute.

  • Additional attributes – These are in addition to the table’s key attributes and are automatically projected into the index. They can be of any data type, including scalars, documents, and sets.
  • Throughput – The throughput settings for the index, where required, are:
    • GSI – Specify read and write capacity unit settings. These provisioned throughput settings are independent of the base table’s settings.
    • LSI – You do not need to specify read and write capacity unit settings. Any read and write operations on a local secondary index are drawn from the provisioned throughput settings of the base table.

You can create up to 20 Global Secondary Indexes (the current default quota; it was previously 5) and 5 Local Secondary Indexes per table; see the CLI sketch below for how both types are declared. When a table is deleted, all indexes connected with that table are also deleted.

You can use the Scan or Query operation to fetch data from the table, and DynamoDB will return the results in ascending or descending order.


(Source)
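
Putting the index concepts above together, a hedged AWS CLI sketch of creating a table with one LSI and one GSI might look like this (the table, attribute, and index names are made up; on-demand billing is used so the GSI needs no provisioned throughput):

aws dynamodb create-table \
    --table-name GameScores \
    --attribute-definitions \
        AttributeName=UserId,AttributeType=S \
        AttributeName=GameTitle,AttributeType=S \
        AttributeName=TopScore,AttributeType=N \
    --key-schema AttributeName=UserId,KeyType=HASH AttributeName=GameTitle,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST \
    --local-secondary-indexes 'IndexName=UserTopScoreIndex,KeySchema=[{AttributeName=UserId,KeyType=HASH},{AttributeName=TopScore,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
    --global-secondary-indexes 'IndexName=GameTitleIndex,KeySchema=[{AttributeName=GameTitle,KeyType=HASH},{AttributeName=TopScore,KeyType=RANGE}],Projection={ProjectionType=ALL}'

Note how the LSI reuses the table’s partition key (UserId) with a different sort key, while the GSI defines an entirely different partition key (GameTitle).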

What is NLB in AWS?

An NLB is a Network Load Balancer.

Network Load Balancer Overview: A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration. When you enable an Availability Zone for the load balancer, Elastic Load Balancing creates a load balancer node in the Availability Zone. By default, each load balancer node distributes traffic across the registered targets in its Availability Zone only. If you enable cross-zone load balancing, each load balancer node distributes traffic across the registered targets in all enabled Availability Zones. It is designed to handle tens of millions of requests per second while maintaining high throughput at ultra low latency, with no effort on your part. The Network Load Balancer is API-compatible with the Application Load Balancer, including full programmatic control of Target Groups and Targets. Here are some of the most important features:

  • Static IP Addresses – Each Network Load Balancer provides a single IP address for each Availability Zone in its purview. If you have targets in us-west-2a and other targets in us-west-2c, NLB will create and manage two IP addresses (one per AZ); connections to that IP address will spread traffic across the instances in all the VPC subnets in the AZ. You can also specify an existing Elastic IP for each AZ for even greater control. With full control over your IP addresses, a Network Load Balancer can be used in situations where IP addresses need to be hard-coded into DNS records, customer firewall rules, and so forth.
  • Zonality – The IP-per-AZ feature reduces latency with improved performance, improves availability through isolation and fault tolerance, and makes the use of Network Load Balancers transparent to your client applications. Network Load Balancers also attempt to route a series of requests from a particular source to targets in a single AZ while still providing automatic failover should those targets become unavailable.
  • Source Address Preservation – With Network Load Balancer, the original source IP address and source ports for the incoming connections remain unmodified, so application software need not support X-Forwarded-For, proxy protocol, or other workarounds. This also means that normal firewall rules, including VPC Security Groups, can be used on targets.
  • Long-running Connections – NLB handles connections with built-in fault tolerance, and can handle connections that are open for months or years, making them a great fit for IoT, gaming, and messaging applications.
  • Failover – Powered by Route 53 health checks, NLB supports failover between IP addresses within and across regions.
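
As a quick illustration, creating an internet-facing NLB with the AWS CLI might look like this (the subnet IDs are placeholders):

# Create an NLB spanning two subnets (one per Availability Zone)
aws elbv2 create-load-balancer \
    --name my-network-lb \
    --type network \
    --scheme internet-facing \
    --subnets subnet-0123456789abcdef0 subnet-0fedcba9876543210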

How many types of VPC endpoints are available?

There are two types of VPC endpoints: (1) interface endpoints and (2) gateway endpoints. Interface endpoints enable connectivity to services over AWS PrivateLink.
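
A hedged sketch of creating one of each with the AWS CLI (all resource IDs are placeholders):

# Gateway endpoint for S3: routes are added to the given route table
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0

# Interface endpoint (AWS PrivateLink), here for Kinesis Data Streams
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.kinesis-streams \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0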

What is the purpose of key pair with Amazon AWS EC2?

Amazon EC2 uses a key pair to encrypt and decrypt login information.

A sender uses a public key to encrypt data, which its receiver then decrypts using the matching private key. These two keys, public and private, are known as a key pair.

You need a key pair to be able to connect to your instances. The way this works on Linux and Windows instances is different.

First, when you launch a new instance, you assign a key pair to it. Then, when you log in to it, you use the private key.

The difference between Linux and Windows instances is that Linux instances do not have a password already set and you must use the key pair to log in to Linux instances. On the other hand, on Windows instances, you need the key pair to decrypt the administrator password. Using the decrypted password, you can use RDP and then connect to your Windows instance.

Amazon EC2 stores only the public key, and you can either generate it inside Amazon EC2 or you can import it. Since the private key is not stored by Amazon, it’s advisable to store it in a secure place as anyone who has this private key can log in on your behalf.
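
A rough end-to-end sketch with the AWS CLI (the key name, instance IP, instance ID, and default user are assumptions):

# Generate a key pair; only the public key is stored by AWS
aws ec2 create-key-pair \
    --key-name my-key \
    --query 'KeyMaterial' --output text > my-key.pem
chmod 400 my-key.pem

# Linux: log in with the private key (Amazon Linux default user assumed)
ssh -i my-key.pem ec2-user@203.0.113.10

# Windows: use the private key to decrypt the administrator password
aws ec2 get-password-data \
    --instance-id i-0123456789abcdef0 \
    --priv-launch-key my-key.pem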

AWS PrivateLink provides private connectivity between VPCs and services hosted on AWS or on-premises, securely on the Amazon network. By providing a private endpoint to access your services, AWS PrivateLink ensures your traffic is not exposed to the public internet.


There are two types of Security Groups based on where you launch your instance. When you launch your instance on EC2-Classic, you have to specify an EC2-Classic Security Group. On the other hand, when you launch an instance in a VPC, you have to specify an EC2-VPC Security Group. Now that we have a clear understanding of what we are comparing, let’s see their main differences:

EC2-Classic Security Group

  • When the instance is launched, you can only choose a Security Group that resides in the same region as the instance.
  • You cannot change the Security Group after the instance has launched (you may edit the rules)
  • They are not IPv6 Capable

EC2-VPC Security Group

  • You can change the Security Group after the instance has launched
  • They are IPv6 Capable

Generally speaking, they are not interchangeable and there are more capabilities on the EC2-VPC SGs. You may read more about them on Differences Between Security Groups for EC2-Classic and EC2-VPC

I think this is historical in nature. S3 and DynamoDB were the first services to support VPC endpoints. The release of those VPC endpoint features pre-dates two important services that subsequently enabled interface endpoints: Network Load Balancer and AWS PrivateLink.

  • Separate the Lambda handler from your core logic.
  • Take advantage of execution context reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources, which saves execution time and cost. To avoid potential data leaks across invocations, don’t use the execution context to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user.
  • Use AWS Lambda Environment Variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable.
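
For instance, a hedged sketch of the environment-variable approach from the CLI (the function and bucket names are hypothetical):

# Configure the target bucket as an environment variable instead of hard-coding it
aws lambda update-function-configuration \
    --function-name my-uploader-function \
    --environment 'Variables={BUCKET_NAME=my-app-uploads}'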

You can use VPC Flow Logs. The steps would be the following:

  • Enable VPC Flow Logs for the VPC your EC2 instance lives in. You can do this from the VPC console
  • Having VPC Flow Logs enabled will create a CloudWatch Logs log group
  • Find the Elastic Network Interface assigned to your EC2 instance. Also, get the private IP of your EC2 instance. You can do this from the EC2 console.
  • Find the CloudWatch Logs log stream for that ENI.
  • Search the log stream for records where your Windows instance’s IP is the destination IP, make sure the port is the one you’re looking for. You’ll see records that tell you if someone has been connecting to your EC2 instance. For example, there are bytes transferred, status=ACCEPT, log-status=OK. You will also know the source IP that connected to your instance.

I recommend using CloudWatch Logs Metric Filters, so you don’t have to do all this manually. Metric Filters will find the patterns I described in your CloudWatch Logs entries and will publish a CloudWatch metric. Then you can trigger an alarm that notifies you when someone logs in to your instance.
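
A minimal sketch of the first step above (the VPC ID, log group name, and IAM role ARN are placeholders; the role must allow log delivery):

# Publish all traffic records for the VPC to a CloudWatch Logs log group
aws ec2 create-flow-logs \
    --resource-type VPC \
    --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL \
    --log-group-name my-vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/FlowLogsRole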

Here are more details from the AWS Official Blog and the AWS documentation for VPC Flow Logs records:

VPC Flow Logs – Log and View Network Traffic Flows

Amazon Virtual Private Cloud

Also, there are 3rd-party tools that simplify all these steps for you and give you very nice visibility and alerts into what’s happening in your AWS network resources. I’ve tried Observable Networks and it’s great: Observable Networks

While enabling ports on AWS NAT gateway when you allow inbound traffic on port 80/443 , do you need to allow outbound traffic on the same ports or is it sufficient to allow outbound traffic on ephemeral ports (1024-65535)?

Typically outbound traffic is not blocked by NAT on any port, so you would not need to explicitly allow those, since they should already be allowed. Your firewall generally would have a rule to allow return traffic that was initiated outbound from inside your office.

According to Amazon’s documentation, it is impossible for one instance to sniff traffic bound for a different instance.

https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf

  • Packet sniffing by other tenants. It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While you can place your interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Attacks such as ARP cache poisoning do not work within Amazon EC2 and Amazon VPC. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another’s data, as a standard practice you should encrypt sensitive traffic.

But as you can see, they still recommend that you should maintain encryption inside your network. We have taken the approach of terminating SSL at the external interface of the ELB, but then initiating SSL from the ELB to our back-end servers, and even further, to our (RDS) databases. It’s probably belt-and-suspenders, but in my industry it’s needed. Heck, we have some interfaces that require HTTPS and a VPN.

What’s the use case for S3 Pre-signed URL for uploading objects?

I get the use case of allowing access to private/premium content in S3 using a presigned URL that can be used to view or download a file until the set expiration time. But what’s a real-life scenario in which a web app would need to generate a URL giving users temporary credentials to upload an object? Can’t the same be done by using the SDK and exposing a REST API at the backend?

I’m asking this since I want to build a POC for this functionality in Java, but I’m struggling to find a real-world use case for it.

Pre-signed URLs are used to provide short-term access to a private object in your S3 bucket. They work by appending an AWS access key, an expiration time, and a SigV4 signature as query parameters to the S3 object URL. There are two common use cases when you may want to use them:

  • Simple, occasional sharing of private files.
  • Frequent, programmatic access to view or upload a file in an application.

Imagine you may want to share a confidential presentation with a business partner, or you want to allow a friend to download a video file you’re storing in your S3 bucket. In both situations, you could generate a URL, and share it to allow the recipient short-term access.

There are a couple of different approaches for generating these URLs in an ad-hoc, one-off fashion, including:

  • Using the AWS Tools for Powershell.
  • Using the AWS CLI.

Source: Here
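
For the simple sharing case, generating a short-lived download URL is a one-liner (the bucket and key are placeholders). Note that the CLI’s presign command only produces GET URLs; upload (PUT) URLs are typically generated with an SDK:

# Create a URL that allows downloading the object for one hour
aws s3 presign s3://my-private-bucket/report.pdf --expires-in 3600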


FROM AWS:REINVENT 2021:

AWS on Air

Peter DeSantis Keynote

Join Peter DeSantis, Senior Vice President, Utility Computing and Apps, to learn how AWS has optimized its cloud infrastructure to run some of the world’s most demanding workloads and give your business a competitive edge.

Werner Vogels Keynote

Join Dr. Werner Vogels, CTO, Amazon.com, as he goes behind the scenes to show how Amazon is solving today’s hardest technology problems. Based on his experience working with some of the largest and most successful applications in the world, Dr. Vogels shares his insights on building truly resilient architectures and what that means for the future of software development.

Accelerating innovation with AI and ML

Applied artificial intelligence (AI) solutions, such as contact center intelligence (CCI), intelligent document processing (IDP), and media intelligence (MI), have had a significant market and business impact for customers, partners, and AWS. This session details how partners can collaborate with AWS to differentiate their products and solutions with AI and machine learning (ML). It also shares partner and customer success stories and discusses opportunities to help customers who are looking for turnkey solutions.

Application integration patterns for microservices

An implication of applying the microservices architectural style is that a lot of communication between components is done over the network. In order to achieve the full capabilities of microservices, this communication needs to happen in a loosely coupled manner. In this session, explore some fundamental application integration patterns based on messaging and connect them to real-world use cases in a microservices scenario. Also, learn some of the benefits that asynchronous messaging can have over REST APIs for communication between microservices.

Maintain application availability and performance with Amazon CloudWatch

Avoiding unexpected user behavior and maintaining reliable performance is crucial. This session is for application developers who want to learn how to maintain application availability and performance to improve the end user experience. Also, discover the latest on Amazon CloudWatch.

How Amazon.com transforms customer experiences through AI/ML

Amazon is transforming customer experiences through the practical application of AI and machine learning (ML) at scale. This session is for senior business and technology decision-makers who want to understand Amazon.com’s approach to launching and scaling ML-enabled innovations in its core business operations and toward new customer opportunities. See specific examples from various Amazon businesses to learn how Amazon applies AI/ML to shape its customer experience while improving efficiency, increasing speed, and lowering cost. Also hear the lessons the Amazon teams have learned from the cultural, process, and technical aspects of building and scaling ML capabilities across the organization.

Accelerating data-led migrations

Data has become a strategic asset. Customers of all sizes are moving data to the cloud to gain operational efficiencies and fuel innovation. This session details how partners can create repeatable and scalable solutions to help their customers derive value from their data, win new customers, and grow their business. It also discusses how to drive partner-led data migrations using AWS services, tools, resources, and programs, such as the AWS Migration Acceleration Program (MAP). Also, this session shares customer success stories from partners who have used MAP and other resources to help customers migrate to AWS and improve business outcomes.

Accelerate front-end web and mobile development with AWS Amplify

User-facing web and mobile applications are the primary touchpoint between organizations and their customers. To meet the ever-rising bar for customer experience, developers must deliver high-quality apps with both foundational and differentiating features. AWS Amplify helps front-end web and mobile developers build faster front to back. In this session, review Amplify’s core capabilities like authentication, data, and file storage and explore new capabilities, such as Amplify Geo and extensibility features for easier app customization with AWS services and better integration with existing deployment pipelines. Also learn how customers have been successful using Amplify to innovate in their businesses.

Watch udemy, inc. certificação google cloud associate engineer (gcp) + bonus

AWS Amplify: Build, deploy and scale web apps

AWS Amplify is a set of tools and services that makes it quick and easy for front-end web and mobile developers to build full-stack applications on AWS.

Amplify DataStore provides a programming model for leveraging shared and distributed data without writing additional code for offline and online scenarios, which makes working with distributed, cross-user data just as simple as working with local-only data.

AWS AppSync is a managed GraphQL API service

Amazon DynamoDB is a  serverless key-value and document database that’s highly scalable

Amazon S3 allows you to store static assets

DevOps revolution

While DevOps has not changed much, the industry has fundamentally transformed over the last decade. Monolithic architectures have evolved into microservices. Containers and serverless have become the default. Applications are distributed on cloud infrastructure across the globe. The technical environment and tooling ecosystem has changed radically from the original conditions in which DevOps was created. So, what’s next? In this session, learn about the next phase of DevOps: a distributed model that emphasizes swift development, observable systems, accountable engineers, and resilient applications.

Innovation Day

Innovation Day is a virtual event that brings together organizations and thought leaders from around the world to share how cloud technology has helped them capture new business opportunities, grow revenue, and solve the big problems facing us today, and in the future. Featured topics include building the first human basecamp on the moon, the next generation F1 car, manufacturing in space, the Climate Pledge from Amazon, and building the city of the future at the foot of Mount Fuji.

Latest AWS Products and Services announced at re:invent 2021

Graviton 3:  AWS today announced the newest generation of its Arm-based Graviton processors: the Graviton 3. The company promises that the new chip will be 25 percent faster than the last-generation chips, with 2x faster floating-point performances and a 3x speedup for machine-learning workloads. AWS also promises that the new chips will use 60 percent less power.

Trn1 to train models for various applications

AWS Mainframe Modernization: Cut mainframe migration time by 2/3

AWS Private 5G: Deploy and manage your own private 5G network (Set up and scale a private mobile network in days)

Transactions for Governed Tables in Lake Formation: Automatically manages conflicts and errors

Serverless and On-Demand Analytics for Redshift, EMR, MSK, and Kinesis

Amazon Sagemaker Canvas: Create ML predictions without any ML experience or writing any code

AWS IoT TwinMaker: Real Time system that makes it easy to create and use digital twins of real-world systems.

Amazon DevOps Guru for RDS: Automatically detect, diagnose, and resolve hard-to-find database issues.

Amazon DynamoDB Standard-Infrequent Access table class: Reduce costs by up to 60%. Maintain the same performance, durability, scaling, and availability as Standard.

AWS Database Migration Service Fleet Advisor: Accelerate database migration with automated inventory and migration: This service  makes it easier and faster to get your data to the cloud and match it with the correct database service. “DMS Fleet Advisor automatically builds an inventory of your on-prem database and analytics service by streaming data from on prem to Amazon S3. From there, we take it over. We analyze [the data] to match it with the appropriate amount of AWS Datastore and then provide customized migration plans.

Amazon Sagemaker Ground Truth Plus: Deliver high-quality training datasets fast, and reduce data labeling cost.

Amazon SageMaker Training Compiler: Accelerate model training by 50%

Amazon SageMaker Inference Recommender: Reduce time to deploy from weeks to hours

Amazon SageMaker Serverless Inference: Lower cost of ownership with pay-per-use pricing

Amazon Kendra Experience Builder: Deploy Intelligent search applications powered by Amazon Kendra with a few clicks.

Amazon Lex Automated Chatbot Designer: Drastically Simplifies bot design with advanced natural language understanding

Amazon SageMaker Studio Lab: A no cost, no setup access to powerful machine learning technology

AWS Cloud WAN: Build, manage and monitor global wide area networks

AWS Amplify Studio: Visually build complete, feature-rich apps in hours instead of weeks, with full control over the application code.

AWS Carbon Footprint Tool: Don’t forget to turn off the lights.

AWS Well-Architected Sustainability Pillar: Learn, measure, and improve  your workloads using environmental  best practices in cloud computing

AWS re:Post: Get Answers from AWS experts. A Reimagined Q&A Experience for the AWS Community

How do you build something completely new?

FROM AWS:REINVENT 2020:

Automate anything with AWS Systems Manager

You can automate any task that involves interaction with AWS and on-premises resources, including in multi-account and multi-Region environments, with AWS Systems Manager. In this session, learn more about three new Systems Manager launches at re:Invent—Change Manager, Fleet Manager, and Application Manager. In addition, learn how Systems Manager Automation can be used across multiple Regions and accounts, integrate with other AWS services, and extend to on-premises. This session takes a deep dive into how to author a custom runbook using an automation document, and how to execute automation anywhere.

Deliver cloud operations at scale with AWS Managed Services

Learn how you can quickly build scaled AWS operations tooling to meet some of the most complex and compliant operations system requirements.

Turbocharging query execution on Amazon EMR

Learn about the performance improvements made in Amazon EMR for Apache Spark and Presto, giving Amazon EMR one of the fastest runtimes for analytics workloads in the cloud. This session dives deep into how AWS generates smart query plans in the absence of accurate table statistics. It also covers adaptive query execution—a technique to dynamically collect statistics during query execution—and how AWS uses dynamic partition pruning to generate query predicates for speeding up table joins. You also learn about execution improvements such as data prefetching and pruning of nested data types.

Detect machine learning (ML) model drift in production

 Explore how state-of-the-art algorithms built into Amazon SageMaker are used to detect declines in machine learning (ML) model quality. One of the big factors that can affect the accuracy of models is the difference in the data used to generate predictions and what was used for training. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions. Amazon SageMaker Model Monitor automatically detects drift in deployed models and provides detailed alerts that help you identify the source of the problem so you can be more confident in your ML applications.

Amazon Lightsail: The easiest way to get started on AWS

Amazon Lightsail is AWS’s simple, virtual private server. In this session, learn more about Lightsail and its newest launches. Lightsail is designed for simple web apps, websites, and dev environments. This session reviews core product features, such as preconfigured blueprints, managed databases, load balancers, networking, and snapshots, and includes a demo of the most recent launches. Attend this session to learn more about how you can get up and running on AWS in the easiest way possible.

Deep dive into AWS Lambda security: Function isolation

This session dives into the security model behind AWS Lambda functions, looking at how you can isolate workloads, build multiple layers of  protection, and leverage fine-grained authorization. You learn about the  implementation, the open-source Firecracker technology that provides one of  the most important layers, and what this means for how you build on Lambda. You also see how AWS Lambda securely runs your functions packaged and  deployed as container images. Finally, you learn about SaaS, customization, and safe patterns for running your own customers’ code in your Lambda functions.

Red team vs. blue team in AWS: Learn to defend your cloud applications (sponsored by Check Point Software)

Unauthorized users and financially motivated third parties also have access to advanced cloud capabilities. This causes concerns and creates challenges for customers responsible for the security of their cloud assets. Join us as Roy Feintuch, chief technologist of cloud products, and Maya Horowitz, director of threat intelligence and research, face off in an epic battle of defense against unauthorized cloud-native attacks. In this session, Roy uses security analytics, threat hunting, and cloud intelligence solutions to dissect and analyze some sneaky cloud breaches so you can strengthen your cloud defense. This presentation is brought to you by Check Point Software, an AWS Partner.

Best practices for security governance in serverless applications

AWS provides services and features that your organization can  leverage to improve the security of a serverless application. However, as organizations grow and developers deploy more serverless applications, how do  you know if all of the applications are in compliance with your organization’s security policies? This session walks you through serverless security, and you learn about protections and guardrails that you can build  to avoid misconfigurations and catch potential security risks.


How Amazon.com automates cash identification & matching with AWS AI/ML

The Amazon Cash application service matches incoming customer payments with accounts and open invoices, while an email ingestion service (EIS) processes more than 1 million semi-structured and unstructured remittance emails monthly. In this session, learn how this EIS classifies the emails, extracts invoice data from the emails, and then identifies the right invoices to close on Amazon financial platforms. Dive deep on how these services automated 89.5% of cash applications using AWS AI & ML services. Hear about how these services will eliminate the manual effort of 1000 cash application analysts in the next 10 years.

Understanding AWS Lambda streaming events

Dive into the details of using Amazon Kinesis Data Streams and Amazon DynamoDB Streams as event sources for AWS Lambda. This session walks you through how AWS Lambda scales along with these two event sources. It also covers best practices and challenges, including how to tune streaming sources for optimum performance and how to effectively monitor them.

Building real-time applications using Apache Flink

Build real-time applications using Apache Flink with Apache Kafka and Amazon Kinesis Data Streams. Apache Flink is a framework and engine for building streaming applications for use cases such as real-time analytics and complex event processing. This session covers best practices for building low-latency applications with Apache Flink when reading data from either Amazon MSK or Amazon Kinesis Data Streams. It also covers best practices for running low-latency Apache Flink applications using Amazon Kinesis Data Analytics and discusses AWS’s open-source contributions to this use case.


App modernization on AWS with Apache Kafka and Confluent Cloud

Learn how you can accelerate application modernization and benefit from the open-source Apache Kafka ecosystem by connecting your legacy, on-premises systems to the cloud. In this session, hear real customer stories about timely insights gained from event-driven applications built on an event streaming platform from Confluent Cloud running on AWS, which stores and processes historical data and real-time data streams. Confluent makes Apache Kafka enterprise-ready using infinite Kafka storage with Amazon S3 and multiple private networking options including AWS PrivateLink, along with self-managed encryption keys for storage volume encryption with AWS Key Management Service (AWS KMS).

BI at hyperscale: Quickly build and scale dashboards with Amazon QuickSight

Data-driven business intelligence (BI) decision making is more important than ever in this age of remote work. An increasing number of organizations are investing in data transformation initiatives, including migrating data to the cloud, modernizing data warehouses, and building data lakes. But what about the last mile—connecting the dots for end users with dashboards and visualizations? Come to this session to learn how Amazon QuickSight allows you to connect to your AWS data and quickly build rich and interactive dashboards with self-serve and advanced analytics capabilities that can scale from tens to hundreds of thousands of users, without managing any infrastructure and only paying for what you use.

Is there an Updated SAA-C03 Practice Exam?

As of this writing, the official SAA-C03 practice exam is not yet available. It will probably take about 3 more months before AWS finally releases the official version of the SAA-C03 practice exam for the new AWS Certified Solutions Architect Associate. In the meantime, you can try the new SAA-C03 sample exam so you can get a better idea of what the topic coverage will be and how the scenarios will be presented.
This SAA-C03 sample exam PDF file can give you a hint of what the real SAA-C03 exam will look like in your upcoming test. In addition, the SAA-C03 sample questions also contain the necessary explanations and reference links that you can study.

Top-paying Cloud certifications:

AWS SAA App details and features

Tutorial by Neal Davis

In this AWS tutorial, we are going to discuss how we can make the best use of AWS services to build a highly scalable and fault-tolerant configuration of EC2 instances. The use of Load Balancers and Auto Scaling Groups falls under a number of best practices in AWS, including performance efficiency, reliability, and high availability.

Before we dive into this hands-on tutorial on how exactly we can build this solution, let’s have a brief recap on what an Auto Scaling group is, and what a Load balancer is.

Autoscaling group (ASG)

An Auto Scaling group (ASG) is a logical grouping of instances which can scale up and scale down depending on pre-configured settings. By setting the scaling policies of your ASG, you can choose how many EC2 instances are launched and terminated based on your application’s load. You can do this with manual, dynamic, scheduled, or predictive scaling.

Elastic Load Balancer (ELB)

An Elastic Load Balancer (ELB) is a name describing a number of services within AWS designed to distribute traffic across multiple EC2 instances in order to provide enhanced scalability, availability, security and more. The particular type of Load Balancer we will be using today is an Application Load Balancer (ALB). The ALB is a Layer 7 Load Balancer designed to distribute HTTP/HTTPS traffic across multiple nodes – with added features such as TLS termination, Sticky Sessions and Complex routing configurations.

Getting Started

First of all, we open our AWS management console and head to the EC2 management console.

We scroll down on the left-hand side and select ‘Launch Templates’. A Launch Template is a configuration template which defines the settings for EC2 instances launched by the ASG.

Under Launch Templates, we will select “Create launch template”.

We specify the name ‘MyTestTemplate’ and use the same text in the description.

Under the ‘Auto Scaling guidance’ box, tick the box which says ‘Provide guidance to help me set up a template that I can use with EC2 Auto Scaling’ and scroll down to launch template contents.

When it comes to choosing our AMI (Amazon Machine Image) we can choose the Amazon Linux 2 under ‘Quick Start’.

The Amazon Linux 2 AMI is free tier eligible, and easy to use for our demonstration purposes.

Next, we select the ‘t2.micro’ under instance types, as this is also free tier eligible.

Under Network Settings, we create a new Security Group called ExampleSG in our default VPC, allowing HTTP access to everyone. It should look like this.

We can then add our IAM Role we created earlier. Under Advanced Details, select your IAM instance profile.

Then we need to include some user data which will load a simple web server and web page onto our Launch Template when the EC2 instance launches.

Under ‘advanced details’, and in ‘User data’ paste the following code in the box.

#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from $(hostname -f)" > /var/www/html/index.html

Then simply click ‘Create Launch Template’ and we are done!

We are now able to build an Auto Scaling Group from our launch template.

On the same console page, select ‘Auto Scaling Groups’, and Create Auto Scaling Group.

We will call our Auto Scaling Group ‘ExampleASG’, and select the Launch Template we just created, then select next.

On the next page, keep the default VPC and select any default AZ and Subnet from the list and click next.

Under ‘Configure Advanced Options’ select ‘Attach to a new load balancer’ .

You will notice the settings below will change and we will now build our load balancer directly on the same page.

Select the Application Load Balancer, and leave the default Load Balancer name.

Choose an ‘Internet Facing’ Load balancer, select another AZ and leave all of the other defaults the same. It should look something like the following.

Under ‘Listeners and routing’, select ‘Create a target group’ and select the target group which was just created. It will be called something like ‘ExampleASG-1’. Click next.

Now we get to Group Size. This is where we specify the desired, minimum and maximum capacity of our Auto Scaling Group.

Set the capacities as follows:

Click ‘skip to review’, and click ‘Create Auto Scaling Group’.

You will now see the Auto Scaling Group building, and the capacity is updating.

After a short while, navigate to the EC2 Dashboard, and you will see that two EC2 instances have been launched!

To make sure our Auto Scaling group is working as it should – select any instance, and terminate the instance. After one instance has been terminated you should see another instance pending and go into a running state – bringing capacity back to 2 instances (as per our desired capacity).

If we also head over to the Load Balancer console, you will find our Application Load Balancer has been created.

If you select the load balancer and scroll down, you will find the DNS name of your ALB – it will look something like ‘ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com’.

If you enter the DNS name into your browser, you should get the following page:

The message will display a ‘Hello World’ message including the IP address of the EC2 instance which is serving up the webpage behind the load balancer.

If you refresh the page a few times, you should see that the IP address listed will change. This is because the load balancer is routing you to the other EC2 instance, validating that our simple webpage is being served from behind our ALB.
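
You can also verify this from a terminal instead of the browser; something like the following (using the example DNS name above) should alternate between the two instances’ hostnames:

# Hit the ALB several times and watch the serving instance change
for i in 1 2 3 4 5; do
    curl -s http://ExampleASG-1-1435567571.us-east-1.elb.amazonaws.com/
done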

The final step is to make sure you delete all of the resources you configured! Start by deleting the Auto Scaling Group, and ensure you delete your load balancer as well; this will ensure you don’t incur any charges.

Architectural Diagram

Below, you’ll find the architectural diagram of what we have built.

Learn how to Master AWS Cloud

Ultimate Training Packages – Our popular training bundles (on-demand video course + practice exams + ebook) will maximize your chances of passing your AWS certification the first time.

Membership – For unlimited access to our cloud training catalog, enroll in our monthly or annual membership program.

Challenge Labs – Build hands-on cloud skills in a secure sandbox environment. Learn, build, test and fail forward without risking unexpected cloud bills.

This post originally appeared on: https://digitalcloud.training/load-balancing-ec2-instances-in-an-autoscaling-group/

Download AWS Solution Architect Associate Exam Prep Quiz App for:

All Platforms (PWA) –  Android –  iOS – Windows 10  – Amazon Android


AWS Cloud Certifications Breaking News –  Testimonials – AWS Top Stories

Download AWS Solution Architect Associate Exam Prep Pro App (No Ads, Full version with answers) for:


Testimonial – I passed aws saa exam using this app

Android –  iOS – Windows 10 – Amazon Android

What are AWS STEP FUNCTIONS?


There are many trends within the current cloud computing industry that have a sway on the conversations which take place throughout the market. One of these key areas of discussion is ‘Serverless’.

Serverless application deployment is a way of provisioning infrastructure in a managed way, without having to build or maintain any servers – you launch the service and it works. Scaling, high availability, and automated processes are looked after by managed AWS serverless services. AWS Step Functions provides a useful way to coordinate the components of distributed applications and microservices using visual workflows.

What is AWS Step Functions?

AWS Step Functions lets developers build distributed applications, automate IT and business processes, and build data and machine learning pipelines using AWS services.

Using Step Functions workflows, developers can focus on higher-value business logic instead of worrying about failures, retries, parallelization, and service integrations. In other words, AWS Step Functions is a serverless workflow orchestration service that can make developers’ lives much easier.

Components and Integrations

AWS Step Functions consists of a few components, the first being the state machine.

What is a state machine?

The state machine model uses defined states and transitions to complete the task at hand. It is an abstract machine (system) that is in exactly one state at a time, but can switch between states via its transitions. Because every state and transition is declared explicitly, it is much harder to introduce an unintended infinite loop, which removes an entire – and often costly – source of errors.

With AWS Step Functions, you can define workflows as state machines, which simplifies complex code into easy-to-understand statements and diagrams. This makes building applications, and confirming they work as expected, much faster and easier.
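For illustration, here is a minimal, hedged sketch of creating a one-step state machine with boto3; the Lambda function ARN and IAM role ARN are placeholders you would replace:

```python
import json
import boto3

# Minimal sketch: define and create a one-step state machine.
# The Lambda ARN and IAM role ARN are placeholders.
sfn = boto3.client("stepfunctions")

definition = {
    "Comment": "A minimal hello-world workflow",
    "StartAt": "SayHello",
    "States": {
        "SayHello": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:SayHello",
            "End": True,
        }
    },
}

sfn.create_state_machine(
    name="HelloWorldStateMachine",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
```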

State

In a state machine, a state is referred to by its name, which can be any string, but must be unique within the state machine. State instances exist until their execution is complete.

An individual component of your state machine can be any of the following 8 types of states (a short sketch combining several of them follows the list):

  • Task state – Does some work in your state machine. From a Task state, AWS Step Functions can call Lambda functions directly
  • Choice state – Makes a choice between different branches of execution
  • Fail state – Stops execution and marks it as a failure
  • Succeed state – Stops execution and marks it as a success
  • Pass state – Simply passes its input to its output, or injects some fixed data
  • Wait state – Provides a delay for a certain amount of time, or until a specified time/date
  • Parallel state – Begins parallel branches of execution
  • Map state – Adds a for-each loop condition
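Here is the promised sketch: an Amazon States Language definition, expressed as a Python dict, combining Task, Choice, Succeed, and Fail states. The Lambda ARN and the ‘$.valid’ field are illustrative:

```python
import json

# Sketch of an Amazon States Language definition (as a Python dict) that
# combines Task, Choice, Succeed, and Fail states. The Lambda ARN and the
# '$.valid' field are illustrative placeholders.
definition = {
    "StartAt": "CheckOrder",
    "States": {
        "CheckOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:CheckOrder",
            "Next": "IsValid",
        },
        "IsValid": {
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.valid", "BooleanEquals": True, "Next": "Done"}
            ],
            "Default": "Rejected",
        },
        "Done": {"Type": "Succeed"},
        "Rejected": {"Type": "Fail", "Error": "InvalidOrder"},
    },
}

print(json.dumps(definition, indent=2))
```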

Limits

There are some service quotas which you need to be aware of when you are using AWS Step Functions – for example, limits on execution duration, state machine definition size, and request rates. Consult the AWS Step Functions documentation for the current values.

Use Cases and Examples

If you need to build workflows across multiple AWS services, then AWS Step Functions is a great tool for you. Serverless microservices can be orchestrated with Step Functions, data pipelines can be built, and security incidents can be handled. It is possible to use Step Functions both synchronously and asynchronously.

Instead of manually orchestrating multiple long-running ETL jobs or maintaining a separate application to do so, Step Functions can ensure that these jobs are executed in order and complete successfully.

Step Functions is also a great way to automate recurring tasks, such as applying patches, selecting infrastructure, and synchronizing data; it will scale automatically, respond to timeouts, and retry tasks when they fail.

With Step Functions, you can create responsive serverless applications and microservices with multiple AWS Lambda functions without writing code for workflow logic, parallel processes, error handling, or timeouts.

Additionally, you can orchestrate services and data that run on Amazon EC2 instances, containers, or on-premises servers.

Pricing

Step Functions counts a state transition each time a step of your workflow is executed, and you are charged for the total number of state transitions across all your state machines, including retries.

There is a Free Tier for AWS Step Functions of 4,000 state transitions per month. Beyond the Free Tier, state transitions cost a flat rate of $0.000025 per state transition.

Summary

In summary, Step Functions is a powerful tool which you can use to improve application development and the productivity of your developers. By migrating your workflow logic into the cloud you will benefit from lower costs and rapid deployment, and because this is a serverless service, you will remove undifferentiated heavy lifting from the application development process.

Interview Questions

Q: How does AWS Step Functions create a state machine?

A: A state machine is a collection of states which allows you to perform tasks – in the form of Lambda functions or other service integrations – in sequence, passing the output of one task to the next. You can add branching logic based on the output of a task to determine the next state.

Q: How can we share data in AWS Step Functions without passing it between the steps?

A: You can make use of InputPath and ResultPath. For example, in a ‘ValidationWaiting’ step you can set these properties in the state machine definition, as in the sketch below.

This way you can send the external service only the data it actually needs, and you won’t lose access to any data that was previously in the input.
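As a hedged illustration – the state name, resource, and paths are hypothetical:

```python
import json

# Hypothetical 'ValidationWaiting' task state. InputPath selects the subtree
# sent to the external service; ResultPath writes the result back into the
# state input rather than replacing it, so the rest of the input survives.
validation_waiting = {
    "ValidationWaiting": {
        "Type": "Task",
        "Resource": "arn:aws:states:::lambda:invoke",  # placeholder integration
        "InputPath": "$.validation",         # send only this portion of the input
        "ResultPath": "$.validationResult",  # merge the task result alongside it
        "Next": "AfterValidation",
    }
}

print(json.dumps(validation_waiting, indent=2))
```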

Q: How can I diagnose an error or a failure within AWS Step Functions?

A: The following are some possible causes of failure:

  1. State Machine Definition Issues.
  2. Task Failures due to exceptions thrown in a Lambda Function.
  3. Transient or Networking Issues.
  4. A task has surpassed its timeout threshold.
  5. Privileges are not set appropriately for a task to execute.

Source: This AWS Step Functions post originally appeared on: https://digitalcloud.training/

AWS Secrets Manager vs SSM Parameter Store

If you want to be an AWS cloud professional, you need to understand the differences between the myriad of services AWS offers. You also need an in-depth understanding of how to use the security services to ensure that your account infrastructure is highly secure and safe to use. This is job zero at AWS, and nothing is taken more seriously than security. AWS makes it really easy to implement security best practices and provides you with many tools to do so.

AWS Secrets Manager and SSM Parameter Store sound like very similar services on the surface. However, when you dig deeper and compare AWS Secrets Manager vs SSM Parameter Store, you will find some significant differences which help you understand exactly when to use each tool.

AWS Secrets Manager

AWS Secrets Manager is designed to provide encryption for confidential information (like database credentials and API keys) that needs to be guarded safely. Encryption is automatically enabled when creating a secret entry, and there are a number of additional features we are going to explore in this article.

Through AWS Secrets Manager, you can manage a wide range of secrets: database credentials, API keys, and other self-defined secrets are all eligible for this service.

If you are responsible for storing and managing secrets within your team, as well as ensuring that your company follows regulatory requirements, AWS Secrets Manager makes this possible by securely storing all secrets in one place. Secrets Manager also has a large degree of added functionality.
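As a quick, hedged illustration with boto3 – the secret name and value are placeholders, and encryption is applied automatically:

```python
import boto3

# Sketch: create a secret and read it back. The name and value are
# placeholders; Secrets Manager encrypts the value with KMS automatically.
secrets = boto3.client("secretsmanager")

secrets.create_secret(
    Name="prod/app/db-credentials",
    SecretString='{"username": "appuser", "password": "example-only"}',
)

value = secrets.get_secret_value(SecretId="prod/app/db-credentials")
print(value["SecretString"])
```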

SSM Parameter store

SSM Parameter Store is slightly different; the key differences become evident when you compare how the two services are used.

SSM Parameter Store focuses on a slightly wider set of requirements. Based on your compliance requirements, SSM Parameter Store can be used to store the secrets your code base needs, either encrypted or unencrypted.

By storing environment configuration data and other parameters, the service simplifies and optimizes the application deployment process. AWS Secrets Manager, by contrast, adds key rotation, cross-account access, and faster integration with other AWS services.
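For comparison, here is a minimal boto3 sketch for Parameter Store; the parameter name and value are placeholders, and SecureString values are encrypted with KMS:

```python
import boto3

# Sketch: store an encrypted parameter and read it back decrypted.
# The parameter name and value are placeholders.
ssm = boto3.client("ssm")

ssm.put_parameter(
    Name="/prod/app/db-password",
    Value="example-only",
    Type="SecureString",  # encrypted with KMS; use "String" for plaintext
    Overwrite=True,
)

param = ssm.get_parameter(Name="/prod/app/db-password", WithDecryption=True)
print(param["Parameter"]["Value"])
```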

Based on this explanation you may think that they both sound similar, so let’s break down the similarities and differences between these services.

Similarities

Managed Key/Value Store Services

Both services allow you to store values under a name or key. This is extremely useful because the deployment of an application can reference different parameters or different secrets based on the deployment environment, allowing customizable and highly integrable deployments of your applications.

Both Referenceable in CloudFormation

You can use the powerful Infrastructure as Code (IaC) tool AWS CloudFormation to build your applications programmatically, and values from either service can be referenced in your templates. This allows a seamless developer experience, without painful manual processes.
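As a hedged sketch, both services can be referenced from a template via CloudFormation dynamic references; the stack, parameter, and secret names below are placeholders:

```python
import boto3

# Sketch: a CloudFormation template (as a YAML string) that resolves values
# from both services at deploy time. Parameter and secret names are placeholders.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  Database:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      AllocatedStorage: '20'
      DBInstanceClass: '{{resolve:ssm:/prod/db/instance-class:1}}'
      MasterUsername: '{{resolve:secretsmanager:prod/db/creds:SecretString:username}}'
      MasterUserPassword: '{{resolve:secretsmanager:prod/db/creds:SecretString:password}}'
"""

boto3.client("cloudformation").create_stack(
    StackName="example-dynamic-refs", TemplateBody=template
)
```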

While SSM Parameter Store only allows one version of a parameter to be active at any given time, Secrets Manager allows multiple versions to exist at the same time when you are rotating a secret using staging labels.

Similar Encryption Options

They are both inherently very secure services, and you do not have to choose one over the other based on the encryption offered.

Through another AWS security service, KMS (the Key Management Service), IAM policies can be defined to control exactly which IAM users and roles have permission to decrypt the value. This restricts access to anyone who doesn’t need it, abides by the principle of least privilege, and helps you meet compliance standards.

Versioning

Versioning is the ability to save multiple, iteratively developed versions of something, allowing you to quickly restore lost versions and maintain multiple copies of the same item.

Both services support versioning of secret values within the service. This allows you to view multiple previous versions of your parameters. You can also optionally choose to promote a former version to be the current version, which can be useful as your application changes.

Given that there are lots of similarities between the two services, it is now time to compare the differences, along with some use cases for each service.

Differences

Cost

The costs are different across the services: SSM Parameter Store tends to cost less than Secrets Manager. Standard parameters are free in SSM – you won’t be charged for the first 10,000 parameters you store – however, advanced parameters will cost you. AWS Secrets Manager, by contrast, bills a fixed fee per secret per month, plus a fee for every 10,000 API calls.

This may factor into how you use each service and how you define your cloud spending strategy.

Password generation

A useful feature within AWS Secrets Manager allows us to generate random data during the creation phase, enabling the secure and auditable creation of strong, unique passwords that can subsequently be referenced in the same CloudFormation stack. This allows our applications to be fully built using IaC, and gives us all the benefits which that entails.

AWS Systems Manager Parameter Store, on the other hand, doesn’t work this way and doesn’t allow us to generate random data – we need to do it manually using the console or AWS CLI, and this can’t happen during the creation phase.
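For illustration, Secrets Manager exposes this capability directly through its API; a minimal boto3 sketch:

```python
import boto3

# Sketch: ask Secrets Manager to generate a strong random password.
secrets = boto3.client("secretsmanager")

response = secrets.get_random_password(
    PasswordLength=32,
    ExcludePunctuation=True,  # adjust the character set to your requirements
)
print(response["RandomPassword"])
```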

Rotation of Secrets

A powerful feature of AWS Secrets Manager is the ability to automatically rotate credentials based on a pre-defined schedule which you set. AWS Secrets Manager integrates this feature natively with many AWS services; automated rotation is simply not possible using AWS Systems Manager Parameter Store. You would have to refresh and update the data yourself, which involves a lot more manual setup, to achieve the same functionality that is supported natively by Secrets Manager.
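Here is a minimal boto3 sketch of enabling rotation on an existing secret; the secret name and rotation Lambda ARN are placeholders, and the Lambda must implement the rotation steps for your credential type:

```python
import boto3

# Sketch: enable automatic rotation on an existing secret. The secret name
# and rotation Lambda ARN are placeholders.
secrets = boto3.client("secretsmanager")

secrets.rotate_secret(
    SecretId="prod/app/db-credentials",
    RotationLambdaARN="arn:aws:lambda:us-east-1:123456789012:function:SecretsRotator",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```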

Cross-Account Access

Firstly, there is currently no way to attach a resource-based IAM policy to AWS Systems Manager Parameter Store (Standard type) parameters. This means that cross-account access is not possible with Parameter Store, and if you need this functionality you will have to configure an extensive workaround, or use AWS Secrets Manager.

Size of Secrets

Each of the options has a maximum size for each secret or parameter.

Secrets Manager can store secrets of up to 10 KB in size.

Standard parameters can store up to 4,096 characters (4 KB) per entry, and advanced parameters can store up to 8 KB per entry.

Multi-Region Deployment

Like many other features of AWS Secrets Manager, this functionality does not come with AWS SSM Parameter Store. You can’t easily replicate your parameters across multiple Regions, and you will need to implement an extensive workaround to make this work – whereas Secrets Manager supports replicating secrets to multiple Regions.

In terms of use cases, you may want to use AWS Secrets Manager to store your encrypted secrets with easy rotation. If you require a feature-rich solution for managing your secrets and staying compliant with your regulatory requirements, consider choosing AWS Secrets Manager.

On the other hand, you may want to choose SSM Parameter Store as a cheaper option for storing your encrypted or unencrypted values. Parameter Store provides more limited functionality, but it supports your application deployments by storing your parameters in a safe, cheap, and secure way.

Source: This post originally appeared on https://digitalcloud.training/aws-secrets-manager-vs-ssm-parameter-store/

 

Elastic Beanstalk vs CloudFormation vs OpsWorks

There is great power in using automation and AWS managed services to offload the heavy lifting in application building and infrastructure deployment.

With AWS, you have many choices when you don’t want to worry about the manual provisioning of infrastructure – today, we’ll focus on Elastic Beanstalk, CloudFormation and OpsWorks!

These three services are often misunderstood and confused with each other. Knowing the concrete differences between Elastic Beanstalk vs CloudFormation vs OpsWorks will support your career aspirations and help you succeed in your AWS cloud journey. We have created this cheat sheet on Elastic Beanstalk vs CloudFormation vs OpsWorks as a resource to clear up any misunderstanding once and for all.

We are going to break down and compare each of these services by category, and discuss similarities and differences between each of these seemingly similar services.

General Overview

Let’s start with some general features and key descriptions of Elastic Beanstalk vs CloudFormation vs OpsWorks.

AWS Elastic Beanstalk is a PaaS (Platform as a Service) tool: you upload your application code to the Elastic Beanstalk service, and it simply builds your application for you. It handles everything from load balancing and scaling to application monitoring and more.

AWS CloudFormation is an IaC tool (Infrastructure as code) that provisions AWS infrastructure through the deployment of CloudFormation templates. You write your template, upload it via CloudFormation, and the service will automate the build of whatever you define in your code.

AWS OpsWorks, on the other hand, is a configuration management tool providing managed support for the popular tools Chef and Puppet. There are three sub-services within OpsWorks: AWS OpsWorks for Chef Automate, AWS OpsWorks for Puppet Enterprise, and AWS OpsWorks Stacks.

Feature Comparison

Firstly, AWS Elastic Beanstalk takes your pre-written application code (in a number of different programming languages) which you simply upload via the Elastic Beanstalk console. Elastic Beanstalk will then provision the necessary components behind the scenes, and your application will work – with no knowledge of the underlying infrastructure required. Some of Elastic Beanstalk’s main features are as follows:

  • Built in application monitoring using Amazon CloudWatch and AWS X-Ray allow you to gain deep insights into your Beanstalk environments.
  • Elastic Beanstalk meets ISO, PCI, SOC 1, SOC 2, and SOC 3 compliance standards, along with the criteria for HIPAA eligibility.
  • You can use AWS Graviton (Arm64-based processors) for an optimal price-to-performance ratio.

AWS CloudFormation, on the other hand, is slightly different in that you don’t design applications per se – you design the infrastructure to run your applications on. CloudFormation templates don’t behave like application code; instead, you use either JSON or YAML to give the AWS CloudFormation APIs instructions to provision AWS services within your environment.
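As a tiny, hedged illustration of that workflow, here is a minimal YAML template (a single S3 bucket) handed to the CloudFormation API via boto3; the stack name is a placeholder:

```python
import boto3

# Sketch: a minimal YAML template (one S3 bucket) deployed via the
# CloudFormation API. The stack name is a placeholder.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
"""

boto3.client("cloudformation").create_stack(
    StackName="example-stack", TemplateBody=template
)
```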

In short, Elastic Beanstalk builds full applications, and CloudFormation builds infrastructure. Let’s view some of CloudFormation’s key features:

  • Allows for easy cross-Region and cross-account management from a single template, helping you stay as highly available as possible.
  • There are no manual steps which can lead to security vulnerabilities or errors, and you can use rollback functionality to ensure that if something isn’t right, it doesn’t get built.
  • With change sets, you can preview the changes that will be made when you update a stack, without actually updating the production stack.
  • It takes a lot of the heavy lifting off your hands by automating the creation, update, and deletion of your infrastructure.

OpsWorks, on the other hand, is simply a managed service for the popular configuration frameworks Chef and Puppet, used for deploying applications explicitly with these tools.

OpsWorks also has additional functionality that is explicitly tied to Chef and Puppet, such as self-healing and layers. Compared to CloudFormation, OpsWorks is more focused on orchestration and builds alongside software configuration, and less on what AWS resources are provisioned on your behalf and how.

What each service does and how to use it

The AWS Elastic Beanstalk service (being a PaaS product) builds full-scale applications and includes everything needed to run your code in a production environment. You write your application code like you normally would, simply add it into Elastic Beanstalk, and full applications are built – not just the infrastructure.

CloudFormation solely provisions the infrastructure, ready for you to populate with your applications. There is a wealth of documentation on writing CloudFormation templates; the downside is that writing effective templates involves some learning if you have only ever written Terraform, or haven’t had exposure to IaC yet.

OpsWorks is also capable of building automation into your cloud deployments through the three aforementioned sub-services, each of which provides different functionality:

  • AWS OpsWorks for Chef Automate is a hosted version of Chef Automate. It consists of a wide range of tools providing configuration control, and includes automatic patching, updating, and backing up of your servers.
  • AWS OpsWorks for Puppet Enterprise gives you access to all of the Puppet Enterprise features – it is also a fully managed version of a popular tool, the difference being that it is based on Puppet. It works seamlessly with your preconfigured Puppet code with minimal to no changes.
  • AWS OpsWorks Stacks allows you to arrange your architecture into layers for the appropriate environments (e.g. test, development, production), and you can interact with each layer independently.
  • With OpsWorks you can ensure your traffic is safely encrypted using SSL.

Use Cases

Broadly speaking: choose Elastic Beanstalk when you want to deploy an application without managing the underlying infrastructure, CloudFormation when you want repeatable, programmatic control over exactly which AWS resources are provisioned, and OpsWorks when your workflows are already built around Chef or Puppet.

Back to top