Managed Kubernetes Comparison: EKS vs GKE

Kubernetes is changing the tech space as it becomes increasingly prominent across various industries and environments. Kubernetes can now be found in on-premise data centers, cloud environments, edge solutions, and even space.

As a container orchestration system, Kubernetes automatically manages the availability and scalability of your containerized applications. Its architecture consists of various planes that make up what is known as a cluster. This cluster can be implemented (or deployed) in various ways, including adopting a CNCF-certified hosted or managed Kubernetes cluster.

This article explores and contrasts two of the most popular hosted clusters: Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE). We’ll compare the two on ease of setup and management, compatibility with Kubernetes version releases, support for government cloud, support for hybrid cloud models, cost, and developer community adoption.

GKE vs EKS



Overview of Managed Kubernetes Solutions

A managed Kubernetes solution involves a third party, such as a cloud vendor, taking on some or all responsibility for the setup, configuration, support, and operations of the cluster. Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service, and IBM Cloud Kubernetes Service are examples of managed Kubernetes clusters.

Managed Kubernetes solutions are useful for software teams that want to focus on the development, deployment, and optimization of their workloads. The process of managing and configuring clusters is complex, time-consuming, and requires proficient Kubernetes administration, especially for production environments.



Overview of GKE

Let’s look at the qualities your organization should consider before choosing GKE as its hosted cluster solution:



Cluster Configurations

GKE has two cluster configuration options (or modes, as they are called): Standard and Autopilot.

  • Standard mode: This mode allows software teams to manage the underlying infrastructure (node configurations) of their clusters.

GKE Standard cluster architecture

  • Autopilot mode: This mode offers software teams a hands-off experience of a Kubernetes cluster. GKE manages the provisioning and optimization of the cluster and its node pools.

GKE Autopilot cluster architecture



Setup and Configuration Management

Cluster setup and configuration can be a time-consuming and arduous process. In a cloud environment, you must also understand networking topologies since they form the backbone of cluster deployments.

For teams and operators looking for a solution with less operational overhead, GKE has the automated capabilities you’re looking for. This includes automated health checks and repairs on nodes, as well as automatic cluster and node upgrades for new version releases.



Service Mesh

Software teams deploying applications based on microservice architectures quickly find out that Kubernetes service level capabilities are insufficient in a number of ways.

Service meshes are dedicated infrastructure layers that address network and security issues at an application service level and help complement large complex workloads.

GKE offers Istio as a built-in add-on that can be enabled on a cluster. Istio is an open-source service mesh implementation that can help organizations secure large and critical workloads.



Kubernetes Versions and Upgrades

In comparison to EKS, GKE offers a wide variety of release versions depending on the release channel you select (stable, regular, or rapid). The rapid channel includes the latest version of Kubernetes (v1.22 at the time of this post).

GKE also has auto-upgrade capabilities for both clusters and nodes in Standard and Autopilot cluster modes.



No Government Cloud Support

Google doesn’t offer a government cloud solution for hosted clusters the way AWS does. Any software solutions that must meet the stringent security and regulatory requirements of government agencies will have to be built on Google’s standard regional offerings.



Exclusive to Cloud VMs

A majority of enterprises prefer a hybrid model over other cloud strategies; however, GKE only offers cluster architecture models that consist of Virtual Machines (VMs) in a cloud environment.

For organizations looking to distribute their workloads between nodes in on-premise data centers and the cloud, EKS would be more suitable.



Conditional Service Level Agreement (SLA)

When using a single zonal cluster, GKE is the more affordable solution, as the free tier effectively covers the cost of managing the control plane. However, this cluster type carries a weaker Service Level Agreement (SLA); for the stronger SLA you must opt for a regional cluster, which costs ten cents per hour for control plane management.

EKS offers SLA coverage at 99.95 percent, whereas GKE offers 99.5 percent for its zonal clusters and 99.95 percent for its regional clusters.



CLI Support

The GKE CLI is a sub-module of the official GCP CLI (gcloud). Once a user has installed gcloud and authenticated with gcloud init, they can proceed to perform lifecycle activities on their GKE clusters.
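A typical cluster lifecycle from the terminal might look like this (the cluster name and zone below are placeholders):

```shell
# Create a Standard cluster, fetch kubectl credentials for it, then tear it down.
gcloud container clusters create my-cluster --zone us-central1-a
gcloud container clusters get-credentials my-cluster --zone us-central1-a
gcloud container clusters delete my-cluster --zone us-central1-a
```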



Pricing

GKE clusters can be launched either in Standard mode or Autopilot mode. Both modes have an hourly charge of ten cents per cluster after the free tier.

From a pricing perspective, GKE differs from EKS in that it has a free tier: monthly credits that, when applied to a single zonal cluster or Autopilot cluster, fully cover the cluster management fee.
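At ten cents per cluster-hour, the management fee works out as follows (a quick back-of-the-envelope calculation; check Google's current pricing page for exact rates and the free tier credit amount):

```python
# Back-of-the-envelope GKE cluster management cost.
HOURLY_FEE = 0.10      # USD per cluster per hour, Standard or Autopilot
HOURS_PER_MONTH = 730  # average month

monthly_fee = HOURLY_FEE * HOURS_PER_MONTH
print(f"Management fee: ~${monthly_fee:.2f}/month per cluster")
```

The free tier credit roughly matches this figure, which is why a single zonal or Autopilot cluster ends up with no management fee.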



Use Cases

Based on the characteristics outlined above, GKE works particularly well in the following scenarios:

  • Minimal management overhead
  • High degree of operational automation
  • Wide support of Kubernetes versions (including an option for latest versions)
  • Cost-effective model for small clusters
  • Out-of-the-box service mesh integration (with Istio)



Overview of EKS

Now let’s take a look at EKS and the factors you should consider before choosing AWS’s hosted cluster solution.



Cluster Configurations

EKS has three cluster configuration options for launching or deploying your managed Kubernetes cluster in AWS. These three configurations are managed node groups, self-managed nodes, and Fargate.



Managed Node Groups

This launch configuration automates the provisioning and lifecycle management of the EC2 worker nodes for your EKS cluster. In this mode, AWS manages running and updating the EKS-optimized AMI on your nodes, applying labels to node resources, and draining nodes during updates.



Self-managed Worker Nodes

As the name implies, this option gives teams and operators the most flexibility for configuring and managing their nodes. It’s the DIY option among the launch configurations.

You can either launch Auto Scaling groups or individual EC2 instances and register them as worker nodes to your EKS cluster. This approach requires that all underlying nodes have the same instance type, the same Amazon Machine Image (AMI), and the same Amazon EKS node IAM role.



Serverless Worker Nodes with Fargate

AWS Fargate is a serverless engine that allows you to focus on optimizing your container workloads while it takes care of the provisioning and configuration of the infrastructure for your containers to run on.



EKS Anywhere

Businesses recognize the cloud as a great enabler and are using it to meet their needs in combination with on-premise data centers.

Amazon EKS recently launched Amazon EKS Anywhere, which enables businesses to deploy Kubernetes clusters on their own infrastructure (using VMware vSphere) while still being supported by AWS automated cluster management.

This deployment supports the hybrid cloud model, which in turn enables businesses to have operational consistency in their workloads, both on-premises and in the cloud. At this point in time, EKS doesn’t offer the option for using bare metal nodes, but AWS has stated that this feature is expected in 2022.



Integration with AWS Ecosystem

For years, AWS has been the leading cloud compute services provider. EKS can easily integrate with other AWS services, allowing enterprises to make use of other cloud compute resources that meet their requirements. If your business’ cloud strategy consists of resources in the AWS landscape, your Kubernetes workloads can be seamlessly integrated using EKS.



Developer Community

EKS has a vast developer community with the highest adoption and usage rate among managed Kubernetes cluster solutions. Because configuring and optimizing Kubernetes entails complex challenges, this community offers a great deal of value: it provides structure around common use cases, forms a knowledge base to query as you face problems, and offers examples from others using similar technologies.



Government Cloud Solution

AWS has a government cloud solution that enables you to run sensitive workloads securely while meeting the relevant compliance requirements. As a result, the power of Kubernetes can be used in the AWS ecosystem to support operations that fit this criterion.



Setup and Configuration Management

Compared to GKE, operating EKS from the console requires additional manual steps and configuration in order to provision the cluster. Software teams need the knowledge and proficiency to understand the underlying networking components of AWS and how they impact the cluster being provisioned.

Furthermore, components like the Calico CNI have to be installed manually, upgrades to the AWS VPC CNI are also manual, and EKS doesn’t support automatic node health checks and repairs.



Kubernetes Versions and Upgrades

EKS supports three or more minor Kubernetes releases at a time, but typically not the most recent one. In addition, Kubernetes version upgrades in EKS have to be initiated manually.

For software teams that want to stay on top of the latest security patches as well as work with the latest features, the limited options that EKS offers can make meeting certain requirements challenging.



CLI Support

Similar to GKE, EKS has full CLI support in the form of a sub-module of the official AWS CLI tool. When a software developer configures their AWS profile (that has the right permissions) with the CLI, they can proceed to perform operations on their EKS cluster.

Updating the local kube config file to contain the credentials for the Kubernetes cluster API endpoint can be done with the following command: aws eks update-kubeconfig --region <region> --name <cluster-name>.

In addition, Weaveworks produced an EKS CLI tool called eksctl, which implements and manages the lifecycle of EKS clusters in the form of infrastructure-as-code.
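For instance, creating, listing, and deleting a cluster with eksctl can look like this (the cluster name and region below are placeholders):

```shell
# Spin up an EKS cluster, list clusters in the region, then remove it when done.
eksctl create cluster --name my-cluster --region us-east-1
eksctl get cluster --region us-east-1
eksctl delete cluster --name my-cluster --region us-east-1
```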



Pricing

Amazon EKS charges ten cents per cluster per hour for management of the control plane. Any additional charges are incurred at the standard prices for the other AWS resources you consume (e.g., EC2 instances for worker nodes).

When Amazon EKS is run on AWS Fargate (serverless engine), the additional pricing (outside of the hourly rate for the control plane) is calculated based on the memory and vCPU usage of the underlying resources used to run the container workloads.
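To get a feel for Fargate pricing, here is an illustrative estimate; the per-vCPU and per-GB rates below are assumptions for a US region, so check the current AWS Fargate pricing page before relying on them:

```python
# Illustrative EKS-on-Fargate compute cost estimate (rates are assumptions).
VCPU_HOUR = 0.04048  # USD per vCPU per hour (assumed rate)
GB_HOUR = 0.004445   # USD per GB of memory per hour (assumed rate)

def fargate_monthly_cost(vcpus, memory_gb, hours=730):
    """Estimate monthly Fargate compute cost for one pod profile."""
    return (vcpus * VCPU_HOUR + memory_gb * GB_HOUR) * hours

# A pod requesting 0.5 vCPU and 1 GB of memory, running all month:
cost = fargate_monthly_cost(0.5, 1)
print(f"~${cost:.2f}/month (plus the $0.10/h control plane fee)")
```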

Unlike GKE, AWS doesn’t offer a limited free tier service for EKS.



Use Cases

Based on the characteristics outlined above, EKS works particularly well in the following scenarios:

  • Running workloads in a hybrid cloud model
  • Integrating workloads with AWS ecosystem
  • Desired support from a large community of practitioners
  • Running workloads in a dedicated government cloud environment



Conclusion

By design, managed Kubernetes solutions like EKS and GKE reduce the operational overhead and complexities that come with managing a Kubernetes cluster. Each cluster solution has pros and cons that organizations need to consider against their needs and workload requirements.

Software teams also need to consider an optimal way of deploying their infrastructure and application workloads. In this case, Qovery can help your teams become more autonomous and efficient. Qovery is a cloud-agnostic deployment platform that can help teams with Kubernetes cluster management, whether EKS or GKE, in a scalable way.



Exploring Google Analytics Realtime Data with Python

Google Analytics can provide a lot of insight into traffic and the users visiting your website. A lot of this data is available in a nice format in the web console, but what if you want to build your own diagrams and visualizations, process the data further, or just work with it programmatically? That’s where the Google Analytics API can help, and in this article we will look at how you can use it to query and process realtime analytics data with Python.



Exploring The API

Before jumping into a specific Google API, it might be a good idea to play around with a few of them first. Using Google’s API Explorer, you can find out which API will be most useful for you, and it will also help you determine which API to enable in the Google Cloud console.

We will start with the Real Time Reporting API, as we’re interested in realtime analytics data; its API explorer is available here. To find other interesting APIs, check out the reporting landing page, from where you can navigate to the other APIs and their explorers.

For this specific API to work, we need to provide at least two values – ids and metrics. The first of them is the so-called table ID, which is the ID of your Analytics profile. To find it, go to your Analytics dashboard, click Admin in the bottom left, then choose View Settings, where you will find the ID in the View ID field. For this API, you need to provide the ID formatted as ga:<TABLE_ID>.

The other value you will need is a metric. You can choose one from metrics columns here. For the realtime API, you will want either rt:activeUsers or rt:pageviews.

With those values set, we can click Execute and explore the data. If the data looks good and you determine that this is the API you need, then it’s time to enable it and set up the project for it.



Setting Up

To be able to access the API, we first need to create a project in Google Cloud. To do that, head over to the Cloud Resource Manager and click Create Project. Alternatively, you can also do it via the CLI with gcloud projects create $PROJECT_ID. After a few seconds, you will see the new project in the list.

Next, we need to enable the API for this project. You can find all the available APIs in the API Library. The one we’re interested in – the Google Analytics Reporting API – can be found here.

The API is now ready to be used, but we need credentials to access it. There are a couple of different types of credentials, depending on the type of application. Most of them are suited for applications that require user consent, such as client-side or Android/iOS apps. The one that fits our use case (querying data and processing it locally) is a service account.

To create a service account, go to the Credentials page, click Create Credentials, and choose Service Account. Give it a name and make note of the service account ID (the second field); we’ll need it in a second. Click Create and Continue (there’s no need to give the service account accesses or permissions).

Next, on the Service Accounts page, choose your newly created service account and go to the Keys tab. Click Add Key and then Create New Key. Choose the JSON format and download it. Make sure to store it securely, as it can be used to access your project in your Google Cloud account.

With that done, we now have a project with the API enabled and a service account with credentials to access it programmatically. This service account, however, doesn’t have access to your Google Analytics view, so it cannot query your data. To fix this, you need to add the previously mentioned service account ID (XXXX@some-project-name.iam.gserviceaccount.com) as a user in Google Analytics with Read & Analyse access – a guide for adding users can be found here.

Finally, we need to install the Python client libraries to use the APIs. We need two of them, one for authentication and one for the actual Google APIs:

pip install google-auth-oauthlib
pip install google-api-python-client



Basic Queries

With all that out of the way, let’s write our first query:

import os
from googleapiclient.discovery import build
from google.oauth2 import service_account

KEY_PATH = os.getenv('SA_KEY_PATH', 'path-to-secrets.json')
TABLE_ID = os.getenv('TABLE_ID', '123456789')
credentials = service_account.Credentials.from_service_account_file(KEY_PATH)

# Limit the credentials to the read-only Analytics scope.
scoped_credentials = credentials.with_scopes(
    ['https://www.googleapis.com/auth/analytics.readonly'])

with build('analytics', 'v3', credentials=scoped_credentials) as service:
    realtime_data = service.data().realtime().get(
        ids=f'ga:{TABLE_ID}', metrics='rt:pageviews', dimensions='rt:pagePath').execute()

    print(realtime_data)

We begin by authenticating to the API using the JSON credentials for our service account (downloaded earlier) and limiting the scope of the credentials to the read-only Analytics API. After that we build a service which is used to query the API – the build function takes the name of the API, its version, and the previously created credentials object. If you want to access a different API, see this list for the available names and versions.

Finally, we can query the API – we set ids, metrics, and optionally dimensions as we did with the API explorer earlier. You might be wondering where I found the methods of the service object (.data().realtime().get(...)) – they’re all documented here.

And when we run the code above, the print(...) will show us something like this (trimmed for readability):


  "query": 
    "ids": "ga:<TABLE_ID>",
    "dimensions": "rt:pagePath",
    "metrics": [
      "rt:pageviews"
    ]
  ,
  "profileInfo": 
    "profileName": "All Web Site Data",
    ...
  ,
  "totalsForAllResults": 
    "rt:pageviews": "23"
  ,
  "rows": [
    ["/", "2"],
    ["/404", "1"],
    ["/blog/18", "1"],
    ["/blog/25", "3"],
    ["/blog/28", "2"],
    ["/blog/3", "3"],
    ["/blog/51", "2"],
    ...
  ]

That works, but considering that the result is a dictionary, you will probably want to access its individual fields:

print(realtime_data["profileInfo"]["profileName"])
# All Web Site Data
print(realtime_data["query"]["metrics"])
# ['rt:pageviews']
print(realtime_data["query"]["dimensions"])
# rt:pagePath
print(realtime_data["totalResults"])
# 23
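Since rows is just a list of [dimension, metric] pairs, post-processing it is straightforward; for example, ranking pages by current pageviews (shown here against a sample response shaped like the trimmed output above):

```python
# Rank pages by current pageviews. `realtime_data` is a sample response
# shaped like the trimmed output above; metric values arrive as strings.
realtime_data = {
    "totalsForAllResults": {"rt:pageviews": "23"},
    "rows": [
        ["/", "2"], ["/404", "1"], ["/blog/18", "1"],
        ["/blog/25", "3"], ["/blog/28", "2"], ["/blog/3", "3"],
    ],
}

# Each row is [rt:pagePath, rt:pageviews].
views = {path: int(count) for path, count in realtime_data["rows"]}
top_pages = sorted(views.items(), key=lambda item: item[1], reverse=True)

print(top_pages[:3])  # [('/blog/25', 3), ('/blog/3', 3), ('/', 2)]
```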

The previous example shows usage of the realtime() method of the API, but there are two more we can make use of. The first of them is ga():

with build('analytics', 'v3', credentials=scoped_credentials) as service:
    ga_data = service.data().ga().get(
        ids=f'ga:{TABLE_ID}',
        metrics='ga:sessions', dimensions='ga:country',
        start_date='yesterday', end_date='today').execute()

    print(ga_data)
    # {'totalsForAllResults': {'ga:sessions': '878'}, 'rows': [['Angola', '1'], ['Argentina', '5']], ...}

This method returns historical (non-realtime) data from Google Analytics and also has more arguments that can be used for specifying time range, sampling level, segments, etc. This API also has additional required fields – start_date and end_date.

You probably also noticed that the metrics and dimensions for this method are a bit different – that’s because each API has its own set of metrics and dimensions. Those are always prefixed with the name of API – in this case ga:, instead of rt: earlier.

The third available method, .mcf(), is for Multi-Channel Funnels data, which is beyond the scope of this article. If it sounds useful to you, check out the docs.

One last thing to mention when it comes to basic queries is pagination. If you build queries that return a lot of data, you might end up exhausting your query limits and quotas or have problems processing all the data at once. To avoid this you can use pagination:

with build('analytics', 'v3', credentials=scoped_credentials) as service:
    ga_data = service.data().ga().get(
        ids=f'ga:{TABLE_ID}',
        metrics='ga:sessions', dimensions='ga:country',
        start_index='1', max_results='2',
        start_date='yesterday', end_date='today').execute()

    print(f'Items per page  = {ga_data["itemsPerPage"]}')
    # Items per page  = 2
    print(f'Total results   = {ga_data["totalResults"]}')
    # Total results   = 73

    # These only have values if other result pages exist.
    if ga_data.get('previousLink'):
        print(f'Previous Link  = {ga_data["previousLink"]}')
    if ga_data.get('nextLink'):
        print(f'Next Link      = {ga_data["nextLink"]}')
        #       Next Link      = https://www.googleapis.com/analytics/v3/data/ga?ids=ga:<TABLE_ID>&dimensions=ga:country&metrics=ga:sessions&start-date=yesterday&end-date=today&start-index=3&max-results=2

In the above snippet we added start_index='1' and max_results='2' to force pagination. This causes previousLink and nextLink to be populated, which can be used to request the previous and next pages, respectively. This, however, doesn’t work for realtime analytics using the realtime() method, as it lacks the needed arguments.
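A typical pattern is to keep advancing start_index until no nextLink is returned. The sketch below uses a stand-in fetch_page function in place of the real service.data().ga().get(...).execute() call, so the paging logic can be seen (and run) in isolation:

```python
def fetch_page(start_index, max_results):
    """Stand-in for service.data().ga().get(..., start_index=..., max_results=...).execute().
    Replace with the real API call; here it pages through 5 fake rows."""
    all_rows = [["row-%d" % i, "1"] for i in range(5)]
    page = all_rows[start_index - 1:start_index - 1 + max_results]  # start_index is 1-based
    result = {"rows": page, "totalResults": len(all_rows)}
    if start_index + max_results <= len(all_rows):
        result["nextLink"] = "..."  # real responses carry a full URL here
    return result

def fetch_all(max_results=2):
    """Collect rows from every page by following the pagination markers."""
    rows, start_index = [], 1
    while True:
        page = fetch_page(start_index, max_results)
        rows.extend(page.get("rows", []))
        if "nextLink" not in page:  # no more pages
            return rows
        start_index += max_results

print(len(fetch_all()))  # 5
```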



Metrics and Dimensions

The API itself is pretty simple; the highly customizable part is arguments such as metrics and dimensions. So, let’s take a closer look at all the arguments and their possible values to see how we can take full advantage of this API.

Starting with metrics, there are three main values to choose from – rt:activeUsers, rt:pageviews, and rt:screenViews:

  • rt:activeUsers gives you the number of users currently browsing your website, as well as their attributes
  • rt:pageviews tells you which pages are being viewed by users
  • rt:screenViews – same as pageviews, but only relevant within an application, e.g. Android or iOS

For each metric, a set of dimensions can be used to break down the data. There are far too many of them to list here, so let’s instead see some combinations of metrics and dimensions that you can plug into the above examples to get some interesting information about your website’s visitors:

  • metrics="rt:activeUsers", dimensions="rt:userType" – Differentiate currently active users based on whether they’re new or returning.
  • metrics="rt:pageviews", dimensions="rt:pagePath" – Current page views with breakdown by path.
  • metrics="rt:pageviews", dimensions="rt:medium,rt:trafficType" – Page views with breakdown by medium (e.g. email) and traffic type (e.g. organic).
  • metrics="rt:pageviews", dimensions="rt:browser,rt:operatingSystem" – Page views with breakdown by browser and operating system.
  • metrics="rt:pageviews", dimensions="rt:country,rt:city" – Page views with breakdown by country and city.

As you can see, there’s a lot of data that can be queried, and because of the sheer amount, it might be necessary to filter it. To filter the results, the filters argument can be used. The syntax is quite flexible and supports arithmetic and logical operators as well as regex queries. Let’s look at some examples:

  • rt:medium==ORGANIC – show only page visits from organic search
  • rt:pageviews>2 – show only results that have more than 2 page views
  • rt:country=~United.*,rt:country==Canada – show only visits from countries starting with “United” (UK, US) or from Canada (, acts as the OR operator; for AND, use ;).

For complete documentation on filters see this page.
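Since , means OR and ; means AND, composing filter expressions is just string joining. A tiny helper (hypothetical, not part of the client library) makes this explicit:

```python
def any_of(*filters):
    """OR-combine filter expressions (',' is the OR operator)."""
    return ",".join(filters)

def all_of(*filters):
    """AND-combine filter expressions (';' is the AND operator)."""
    return ";".join(filters)

# Organic visits from either the US or Canada, with more than 2 pageviews;
# pass the result as the filters= argument of the get(...) call.
f = all_of(any_of("rt:country==United States", "rt:country==Canada"),
           "rt:medium==ORGANIC",
           "rt:pageviews>2")
print(f)
# rt:country==United States,rt:country==Canada;rt:medium==ORGANIC;rt:pageviews>2
```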

Finally, to make the results a bit more readable or easier to process, you can also sort them using the sort argument. For ascending order, use e.g. sort=rt:pagePath; for descending order, prepend -, e.g. sort=-rt:pageTitle.



Beyond Realtime API

If you can’t find some data, or you’re missing some features in the Real Time Reporting API, you can try exploring other Google Analytics APIs. One of them is the Reporting API v4, which has some improvements over the older APIs.

It does, however, have a slightly different approach to building queries, so let’s look at an example to get you started:

with build('analyticsreporting', 'v4', credentials=scoped_credentials) as service:
    reports = service.reports().batchGet(body={
        "reportRequests": [
            {
                "viewId": f"ga:{TABLE_ID}",
                "dateRanges": [
                    {
                        "startDate": "yesterday",
                        "endDate": "today"
                    }
                ],
                "dimensions": [
                    {
                        "name": "ga:browser"
                    }
                ],
                "metrics": [
                    {
                        "expression": "ga:sessions"
                    }
                ]
            }
        ]
    }).execute()

    print(reports)

    print(reports)

As you can see, this API doesn’t provide a large number of arguments to populate; instead it has a single body argument, which takes a request body containing all the values we’ve seen previously.

If you want to dive deeper into this one, check out the samples in the documentation, which give a complete overview of its features.



Closing Thoughts

Even though this article only shows usage of the Analytics APIs, it should give you a general idea of how to use all Google APIs with Python, as all the APIs in the client library share the same general design. Additionally, the authentication shown earlier can be applied to any API; all you need to change is the scope.

While this article used the google-api-python-client library, Google also provides lightweight libraries for individual services and APIs at https://github.com/googleapis/google-cloud-python. At the time of writing, the specific library for Analytics is still in beta and lacks documentation, but when it becomes generally available (or more stable), you should consider exploring it.



Login with Google – ReactJs

Open App.js and import GoogleLogin from the package:

import GoogleLogin from 'react-google-login';

Now just add the Google Login button with your own client ID:

<GoogleLogin
  clientId="Your_own_client_ID.googleusercontent.com"
  buttonText="Login with Google"
  onSuccess={pass}
  onFailure={fail}
  cookiePolicy={'single_host_origin'}
/>

Then add the two handlers. For onSuccess:

const pass = (googleData) => {
  console.log(googleData);
};

and for onFailure:

const fail = (result) => {
  alert(result.error);
};

Read more: https://easycodesardar.blogspot.com/2021/11/login-with-google-reactjs.html



CAST AI vs. GKE Autopilot: Where to manage Kubernetes on GKE?

Bonus content: Detailed simulation of cluster costs with CAST AI vs. GKE Autopilot

Running Kubernetes is a complex task, but luckily teams using Google Cloud Platform can choose from a few solutions that make the job easier. 

Let’s take a closer look at two of them – CAST AI and the GKE Autopilot mode – to see which one is a better fit for efficient teams looking to automate their Kubernetes workloads and cut cloud costs.

CAST AI – full-scale GKE automation and cost optimization

GKE Autopilot – cluster automation for a hands-off GKE experience

CAST AI is a cloud-native platform that automatically analyzes, monitors, and optimizes Kubernetes environments. Companies across e-commerce and adtech use CAST AI to cut their cloud bills by 50% to even 90%.

GKE Autopilot is one of the two modes of operation GKE offers to its users. In Autopilot, the provider both provisions and manages the cluster’s underlying infrastructure to optimize the clusters running in GKE.

CAST AI vs. Google Autopilot – quick feature comparison

CAST AI vs. GKE Autopilot

Detailed feature comparison of Google Autopilot and CAST AI

  1. Observability, logging, and cost visibility
  2. Automated cost optimization
  3. Preemptible and Spot VMs automation
  4. Full multi cloud optimization
  5. Pricing

1. Observability, logging, and cost visibility

Cost visibility

CAST AI divides cloud costs into project, cluster, namespace, and deployment levels. You can track expenses down to individual microservices before calculating the total cost of your cluster. 

The solution uses industry-standard metrics that work with any cloud provider, not only Google Cloud Platform. 

Cost allocation is also an option in CAST AI – it’s done per cluster and per node. The team plans to add features like control plane, network, egress, storage, and other cost dimensions soon. 

GKE Autopilot comes with pre-configured use of Cloud Operations for GKE monitoring dashboards. Users can customize the system and workload logging to get the right metrics.

Multi cloud metrics

Many teams use more than one cloud platform today, so multi-cloud support is essential for visibility and optimization. 

CAST AI comes with a range of multi-cloud capabilities. It works with any cloud service provider and provides cross-cloud visibility thanks to universal metrics from Grafana and Kibana.

GKE Autopilot only displays metrics for clusters running in GKE. If you use Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS), you won’t be able to compare the metrics from all your clusters in one place.

2. Automated cost optimization 

Automated instance selection for best cost/performance

CAST AI selects the most cost-effective instance types and sizes to meet your application’s requirements while reducing cloud spending. When a cluster requires more nodes, the automation engine selects the instances with the highest performance at the lowest cost. Engineers don’t need to do anything here because everything is automated.

Since using the same instance shape for every node in a cluster can lead to overprovisioning, CAST AI also allows you to create multi-shape clusters. It gives the application the optimal mix of several instance types.

GKE Autopilot uses E2 standard and E2 shared-core machine types that fail to offer an optimal balance between cost and performance. These instances are overcommitted, shared-core, and latency insensitive. GKE Autopilot also doesn’t offer multi-shape clusters.

Horizontal and vertical pod autoscaling 

CAST AI automates pod scaling parameters to help companies reduce cloud waste. The Horizontal Pod Autoscaler determines the correct number of pod instances based on business KPIs. If no work needs to be done, the platform decreases the replica count of pods until it reaches 0 and then removes all pods.

CAST AI also ensures that the number of nodes in use is always adequate for the application’s needs, scaling nodes up and down dynamically.

GKE Autopilot automatically scales cluster resources based on the user’s pod specifications, but users need to configure Horizontal and Vertical Pod Autoscalers on their own. You can implement Horizontal pod autoscaling to automatically increase or decrease the number of pods via the standard Kubernetes CPU or memory metrics, or with custom metrics in Cloud Monitoring.

Note that GKE Autopilot is not a suitable solution for very small pods, since the minimum is 0.25 vCPU and 0.5 GB of RAM per pod. Tailoring exact requirements with the VPA is challenging as well, since 0.25 vCPU is used as the increment – you can’t assign 0.66 vCPU to a pod because only values like 0.5 or 0.75 vCPU are allowed.
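In practice this means Autopilot rounds pod CPU requests up to the nearest allowed increment. A sketch of that rounding, assuming the 0.25 vCPU increment and minimum described above:

```python
import math

def autopilot_cpu(requested, increment=0.25, minimum=0.25):
    """Round a pod CPU request up to Autopilot's allowed increments
    (increment and minimum as described in the text above)."""
    return max(minimum, math.ceil(requested / increment) * increment)

print(autopilot_cpu(0.66))  # 0.75 -- you pay for more CPU than you asked for
print(autopilot_cpu(0.1))   # 0.25 -- bumped up to the per-pod minimum
```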

3. Preemptible and Spot VMs automation

When compared to pay-as-you-go VM instances, Preemptible VM instances provide considerable cost savings – up to 91%! But Google Cloud Platform can reclaim these instances at any time, so teams wishing to take advantage of Preemptible VMs need to automate their processes.

In CAST AI, the replacement of interrupted Spot VMs is fully automated. Teams no longer need to worry about the capacity of their application running out. The platform continuously looks for the best instance alternatives and spins up new instances in milliseconds to provide high availability.

GKE Autopilot doesn’t support Preemptible VMs at the moment.

4. Full multi cloud optimization

As we enter the era of multi cloud, it’s more important than ever to monitor, manage, and optimize cloud costs across providers.

CAST AI provides a number of multi cloud capabilities to meet this need:

  • Active-Active Multi Cloud – the solution distributes apps and replicates data over many cloud services to ensure that even if one fails, the applications continue to operate, ensuring business continuity.
  • Traffic distribution – CAST AI distributes traffic among all cloud services in use and always picks healthy endpoints for global server load balancing.
  • Metrics across clouds – thanks to data from Grafana and Kibana, the platform provides cost allocation insights across cloud services.

GKE Autopilot doesn’t offer multi cloud support at the moment.

5. Pricing

To check for potential savings, users can run the CAST AI free Cluster Analyzer. The read-only agent evaluates their infrastructure and makes specific recommendations free of charge. Users can then implement these results manually or turn automatic cost optimization features on, choosing between two options (both with a free trial): Growth and Enterprise. Cost reductions of at least 50% are guaranteed using CAST AI.

GKE Autopilot clusters come at a flat fee of $0.10 per cluster per hour for every cluster after the free tier, plus the CPU, memory, and ephemeral storage compute resources provisioned for the pods. The control plane fee works out to roughly $72 per month for both Autopilot and standard GKE. But compared to standard GKE, the CPU and RAM costs in Autopilot are roughly double.

For example, an e2-standard-2 machine costs $0.075462 per hour. With Autopilot, the equivalent resources cost $0.1445536 per hour (calculated for the Northern Virginia region).
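A quick back-of-the-envelope calculation with those hourly rates (assuming the commonly used ~730-hour billing month) shows the size of the Autopilot premium:

```python
# Hourly rates from the comparison above (Northern Virginia region)
STANDARD_E2_STANDARD_2 = 0.075462  # $/h, standard GKE node pricing
AUTOPILOT_EQUIVALENT = 0.1445536   # $/h, same resources billed via Autopilot
HOURS_PER_MONTH = 730              # common monthly billing approximation

standard_monthly = STANDARD_E2_STANDARD_2 * HOURS_PER_MONTH
autopilot_monthly = AUTOPILOT_EQUIVALENT * HOURS_PER_MONTH
premium = autopilot_monthly / standard_monthly

print(f"Standard:  ${standard_monthly:.2f}/month")   # ~$55/month
print(f"Autopilot: ${autopilot_monthly:.2f}/month")  # ~$106/month
print(f"Autopilot premium: {premium:.2f}x")          # ~1.92x
```

So for identical provisioned resources, Autopilot bills close to twice the standard rate – which is the premium the pricing simulation below quantifies at cluster scale.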

Pricing simulation

Here’s a pricing simulation that explains the difference between the optimization results from applying CAST AI and GKE Autopilot.

Let’s start with a look at GKE Autopilot pricing:

CAST AI vs. GKE Autopilot pricing

In this scenario, we have a manually optimized cluster that wastes only 25% of its resources. We pay some $20k for the cluster, but the actual pod requests amount to roughly $15k. By switching to GKE Autopilot, the cluster cost rises to almost $30k. If you run the free analysis at CAST AI and implement its recommendations manually, you can cut the cost of your GKE cluster by 50%.

CAST AI vs. GKE Autopilot pricing

What if you’re dealing with a much larger waste volume? Using GKE Autopilot helps to reduce the costs significantly. But CAST AI brings even greater savings, as visualized in this example:

CAST AI vs. GKE Autopilot pricing

Overall winner: CAST AI

CAST AI vs. GKE Autopilot

Both GKE Autopilot and CAST AI are great solutions for automating many important features of workloads running on GKE.  

While GKE Autopilot offers several helpful automation features, it comes with many limitations. CAST AI provides teams with a rich array of automation features and customization opportunities for more flexibility. By picking the best VMs – including heavily discounted Preemptible VMs – CAST AI guarantees cloud cost savings of at least 50%.

Combined with unique multi cloud functionality and cloud-native architecture, this positions CAST AI as the top cloud cost optimization platform.

P.S. If you’d like to start with something more hands-on, run the free CAST AI Cost Analyzer to check how much you could save and how to get there.



Django Cloud Task Queue

I’m developing and maintaining a Python package to easily integrate your Django application with Google Cloud Tasks.

Some features:

  • Easily push tasks to Cloud Tasks using a decorator
  • Automatically route all tasks from a single endpoint
  • Easy scheduling with native Python datetime
  • Named tasks to avoid duplicates
  • Local development support (coming soon…)

Simple and beautiful task definition

from datetime import timedelta
from django.utils.timezone import now
from cloudtask import (
    CloudTaskRequest,
    task)

@task(queue='default')
def add(request: CloudTaskRequest, a: int = 5, b: int = 4) -> None:
    print(f'Running task with args a={a} and b={b}')
    print(a + b)

# pushing task to queue
add(a=2, b=4).delay()

# executing the task immediately without push to queue
add(a=30)()

# scheduling the task
at = now() + timedelta(days=1)
add(b=15).schedule(at=at)

See the repository and full documentation on my GitHub page.


If it was helpful or if you found it interesting, also leave an 🙋🏽‍♂️❤️ in the comments. Thank you, looking forward to the next article. Enjoy and follow my work.

Thanks,
Stay safe!



Cloud Technology News of the Month: August 2021

The summer might be slowly coming to an end, but here’s something to invigorate you: another portion of fresh cloud technology news. 

This series brings you up to speed with the latest releases, acquisitions, research, and hidden gems in the world of cloud computing – the stuff actually worth reading. 

Here’s what happened in the cloud world this August.

_____

Story of the month: Multi cloud is here, there’s no denying it anymore

HashiCorp recently published its inaugural State of Cloud Strategy Survey, which showed that multi cloud is the new normal.

The company surveyed 3,205 tech practitioners and decision-makers from companies of different sizes and industries and hailing from various locations around the world. 

Here are the most interesting findings:

Multi cloud is real

Multi-Cloud Adoption Pie Chart

Source: HashiCorp

76% of respondents are already working in multi cloud environments, using more than one public or private cloud. Multi cloud is no longer an aspirational idea – it’s an everyday reality. And since 86% of tech practitioners expect to be using multi cloud within the next two years, adoption will only grow.

Who goes multi cloud?

Unsurprisingly, multi cloud adoption is greatest among larger organizations – 90% of companies with more than 5,000 employees already use multi cloud. Still, 60% of small businesses (fewer than 100 employees) already have multi cloud environments, and 81% of them expect to embrace this approach within the next two years.

What drives multi cloud adoption?

Why are all of these companies adopting the multi cloud approach? The top reason lies in digital transformation programs. This is interesting, given the common assumption that it’s mostly about cost optimization and avoiding vendor lock-in.

Here are the top driving forces behind multi cloud:

  • 34% – digital transformation initiatives, 
  • 30% – avoiding single cloud vendor lock-in, 
  • 28% – cost reduction, 
  • 25% – ability to scale. 

Digital transformation was especially strong among enterprises as 50% of them pointed to this factor. But it also caught the attention of the financial services industry, where 41% of respondents consider it a top driver.

What are the business and technology factors driving your multi-cloud adoption?

Source: HashiCorp

What are the key inhibitors to multi cloud’s rise to fame?

Two things make moving to multi cloud hard: skill shortage and security

More than half (57%) of respondents consider skill shortage the top challenge hindering the building of multi cloud capabilities. Next come inconsistent workflows across cloud environments (33%) and siloed teams (29%).

Another problem is security, one of the top three inhibitors on many cloud journeys. Almost half (47%) of respondents said that security is an issue – be it governance, regulatory compliance and risk management, or data and privacy protection.

Top security concerns bar chart

And a final gem: 46% of tech leaders don’t think it’s COVID-19 that’s driving them to the cloud

Many ascribe the spread of cloud technologies to the pandemic’s impact on the global economy, but this seems to be an incomplete picture. Almost half of the survey respondents (46%) said that COVID-19 didn’t affect their move to the cloud, and another 19% said it had a low impact (speeding the shift by some 6-12 months).

This shows that in most organizations, cloud efforts were well underway before the pandemic started and are bound to continue in the post-pandemic future. 

Covid's cloud impact chart

Interestingly, in response to the pandemic, many companies embraced modern, cloud native technologies like:

  • Infrastructure as Code (49%), 
  • container orchestration (41%), 
  • network infrastructure automation (33%), 
  • and self-service infrastructure (32%).

At CAST AI, we believe that multi cloud is the future, leading to the democratization of cloud services and reduced vendor lock-in. That’s why our platform comes with a host of multi cloud features –  find out more about them here: How to spin a multi cloud application with CAST AI.

Source: HashiCorp 

_____

The Business of Cloud

Rumor has it Databricks – the cloud data company that raised $1 billion earlier this year – has agreed to a new funding deal valuing it at a smashing $38 billion. Morgan Stanley is to lead the investment round, said to bring at least $1.5 billion to the company. These figures prove that the cloud market is hotter than ever, and we’re bound to see more investment in cloud companies in the near future.

Source: Bloomberg

GE Appliances signed a multi-year partnership with Google Cloud to develop next-gen smart home technologies. The company will benefit from the cloud giant’s expertise in data, AI, machine learning, and smooth integration with other Google technologies like Android, Google Assistant, and Vision AI. Let’s keep a close eye on the IoT scene and see what comes out of this collaboration.

Source: Google Cloud 

61% of public cloud spending goes to AWS, Microsoft Azure, and Google Cloud, according to the analytics company Canalys. AWS now accounts for 31% of global cloud infrastructure spending, bringing in annual revenue of some $59 billion (that’s more than HP or Lenovo!). At a 22% market share, Microsoft Azure is the second-largest cloud provider – and it grew by more than half compared to Q2 2020.

Source: Canalys 

_____

Food for thought

The cloud gets political, this time in the tug-of-war between the US and China. The cloud is said to be China’s next objective – and things are certainly looking good for it. During the pandemic, Chinese cloud providers noted incredible growth, with Huawei more than doubling its global IaaS market share. Modern societies increasingly depend on the cloud and all the digital services it connects – from email to AI applications. It’s high time US policymakers started seeing the cloud as a strategic investment.

Source: Politico 

The UK government now officially advises that companies move to the cloud to curb carbon emissions. Cloud migration was listed by the Department for Business, Energy and Industrial Strategy (BEIS) as one of the steps businesses should take to fight climate change. This comes as part of the government’s broader push to inspire companies to support its net-zero emissions campaign, which assumes cutting carbon footprints in half by 2030.

If you’re interested in this topic, be sure to check out the session co-hosted by our Product Marketing Manager Annie Talvasto at the upcoming KubeCon + CloudNativeCon North America: How Event Driven Autoscaling in Kubernetes Can Combat Climate Change – Annie Talvasto, CAST AI & Adi Polak, Microsoft (more info here).

Source: Computer Weekly 

_____

New in CAST AI

And here are some new product features hot off the press:  

  • We released the first version of the cluster metrics endpoint that provides visibility into the CAST AI-captured metrics (explore the setup guide on Github). We will be expanding the list of exposed metrics, so stay tuned.
  • Our team implemented the Node Root Volume policy, which allows configuring root volume size based on CPU count. That way, nodes with a high CPU count can have a larger root disk allocated to them.
  • We enhanced the Spot instance policy for EKS and Kops, so you can provision the least interrupted instances, the most cost-effective ones, or just go with the default balanced approach. 
  • CAST AI agent v.0.20.0 was released – it now supports auto-discovery of GKE clusters, so there’s no need to enter any cluster details manually.
  • Cluster headroom and Node constraints policies are now separated and can be used simultaneously.
  • We made it easier for users to set correct node CPU and Memory constraints that adhere to the supported ratios.

P.S. Be the first one to optimize a GKE cluster with CAST AI. Connect your cluster, get a self-served savings report now and start saving. Not a GKE user? Share this link with someone who is.

_____





AWS Certified Cloud Practitioner CLF-C01 Exam Questions Part 2

Source:

AWS: https://www.awslagi.com

GCP: https://www.gcp-examquestions.com

  1. Which service provides a user the ability to warehouse data in the AWS Cloud?

    A. Amazon EFS
    B. Amazon Redshift
    C. Amazon RDS
    D. Amazon VPC
    

Answer: B

  1. A user is planning to migrate an application workload to the AWS Cloud. Which control becomes the responsibility of AWS once the migration is complete?

    A. Patching the guest operating system
    B. Maintaining physical and environmental controls
    C. Protecting communications and maintaining zone security
    D. Patching specific applications
    

Answer: B

  1. Which AWS service can be used to provide an on-demand, cloud-based contact center?

    A. AWS Direct Connect
    B. Amazon Connect
    C. AWS Support Center
    D. AWS Managed Services
    

Answer: B

  1. What tool enables customers without an AWS account to estimate costs for almost all AWS services?

    A. Cost Explorer
    B. TCO Calculator
    C. AWS Budgets
    D. Simple Monthly Calculator
    

Answer: D

  1. Which component must be attached to a VPC to enable inbound Internet access?

    A. NAT gateway
    B. VPC endpoint
    C. VPN connection
    D. Internet gateway
    

Answer: D

  1. Which pricing model would result in maximum Amazon Elastic Compute Cloud (Amazon EC2) savings for a database server that must be online for one year?

    A. Spot Instance
    B. On-Demand Instance
    C. Partial Upfront Reserved Instance
    D. No Upfront Reserved Instance
    

Answer: C

  1. A company has a MySQL database running on a single Amazon EC2 instance. The company now requires higher availability in the event of an outage. Which set of tasks would meet this requirement?

    A. Add an Application Load Balancer in front of the EC2 instance
    B. Configure EC2 Auto Recovery to move the instance to another Availability Zone
    C. Migrate to Amazon RDS and enable Multi-AZ
    D. Enable termination protection for the EC2 instance to avoid outages
    

Answer: C

  1. A company wants to ensure that AWS Management Console users are meeting password complexity requirements. How can the company configure password complexity?

    A. Using an AWS IAM user policy
    B. Using an AWS Organizations service control policy (SCP)
    C. Using an AWS IAM account password policy
    D. Using an AWS Security Hub managed insight
    

Answer: C

  1. Under the AWS shared responsibility model, which of the following is the customer’s responsibility?

    A. Patching guest OS and applications
    B. Patching and fixing flaws in the infrastructure
    C. Physical and environmental controls
    D. Configuration of AWS infrastructure devices
    

Answer: A

  1. Which of the following tasks is required to deploy a PCI-compliant workload on AWS?

    A. Use any AWS service and implement PCI controls at the application layer
    B. Use an AWS service that is in-scope for PCI compliance and raise an AWS support ticket to enable PCI compliance at the application layer
    C. Use any AWS service and raise an AWS support ticket to enable PCI compliance on that service
    D. Use an AWS service that is in scope for PCI compliance and apply PCI controls at the application layer
    

Answer: D

  1. Which are benefits of using Amazon RDS over Amazon EC2 when running relational databases on AWS? (Choose two.)

    A. Automated backups
    B. Schema management
    C. Indexing of tables
    D. Software patching
    E. Extract, transform, and load (ETL) management
    

Answer: A D

  1. What does the Amazon S3 Intelligent-Tiering storage class offer?

    A. Payment flexibility by reserving storage capacity
    B. Long-term retention of data by copying the data to an encrypted Amazon Elastic Block Store (Amazon EBS) volume
    C. Automatic cost savings by moving objects between tiers based on access pattern changes
    D. Secure, durable, and lowest cost storage for data archival
    

Answer: C

  1. A company has multiple data sources across the organization and wants to consolidate data into one data warehouse. Which AWS service can be used to meet this requirement?

    A. Amazon DynamoDB
    B. Amazon Redshift
    C. Amazon Athena
    D. Amazon QuickSight
    

Answer: B

  1. Which AWS service can be used to track resource changes and establish compliance?

    A. Amazon CloudWatch
    B. AWS Config
    C. AWS CloudTrail
    D. AWS Trusted Advisor
    

Answer: B

  1. A user has underutilized on-premises resources. Which AWS Cloud concept can BEST address this issue?

    A. High availability
    B. Elasticity
    C. Security
    D. Loose coupling
    

Answer: B

  1. A user has a stateful workload that will run on Amazon EC2 for the next 3 years. What is the MOST cost-effective pricing model for this workload?

    A. On-Demand Instances
    B. Reserved Instances
    C. Dedicated Instances
    D. Spot Instances
    

Answer: B

  1. A cloud practitioner needs an Amazon EC2 instance to launch and run for 7 hours without interruptions. What is the most suitable and cost-effective option for this task?

    A. On-Demand Instance
    B. Reserved Instance
    C. Dedicated Host
    D. Spot Instance
    

Answer: A

  1. Which of the following are benefits of using AWS Trusted Advisor? (Choose two.)

    A. Providing high-performance container orchestration
    B. Creating and rotating encryption keys
    C. Detecting underutilized resources to save costs
    D. Improving security by proactively monitoring the AWS environment
    E. Implementing enforced tagging across AWS resources
    

Answer: C D

  1. A developer has been hired by a large company and needs AWS credentials. Which are security best practices that should be followed? (Choose two.)

    A. Grant the developer access to only the AWS resources needed to perform the job.
    B. Share the AWS account root user credentials with the developer.
    C. Add the developer to the administrator’s group in AWS IAM.
    D. Configure a password policy that ensures the developer’s password cannot be changed.
    E. Ensure the account password policy requires a minimum length.
    

Answer: A E

  1. Which AWS storage service is designed to transfer petabytes of data in and out of the cloud?

    A. AWS Storage Gateway
    B. Amazon S3 Glacier Deep Archive
    C. Amazon Lightsail
    D. AWS Snowball
    

Answer: D

  1. Which AWS service allows for effective cost management of multiple AWS accounts?

    A. AWS Organizations
    B. AWS Trusted Advisor
    C. AWS Direct Connect
    D. Amazon Connect
    

Answer: A

  1. A company is piloting a new customer-facing application on Amazon Elastic Compute Cloud (Amazon EC2) for one month. What pricing model is appropriate?

    A. Reserved Instances
    B. Spot Instances
    C. On-Demand Instances
    D. Dedicated Hosts
    

Answer: C

  1. Which AWS tools automatically forecast future AWS costs?

    A. AWS Support Center
    B. AWS Total Cost of Ownership (TCO) Calculator
    C. AWS Simple Monthly Calculator
    D. Cost Explorer
    

Answer: D

  1. Under the AWS shared responsibility model, which of the following is a responsibility of AWS?

    A. Enabling server-side encryption for objects stored in S3
    B. Applying AWS IAM security policies
    C. Patching the operating system on an Amazon EC2 instance
    D. Applying updates to the hypervisor
    

Answer: D

  1. A user is able to set up a master payer account to view consolidated billing reports through:

    A. AWS Budgets.
    B. Amazon Macie.
    C. Amazon QuickSight.
    D. AWS Organizations.
    

Answer: D

  1. Performing operations as code is a design principle that supports which pillar of the AWS Well-Architected Framework?

    A. Performance efficiency
    B. Operational excellence
    C. Reliability
    D. Security
    

Answer: B

  1. Which design principle is achieved by following the reliability pillar of the AWS Well-Architected Framework?

    A. Vertical scaling
    B. Manual failure recovery
    C. Testing recovery procedures
    D. Changing infrastructure manually
    

Answer: C

  1. What is a characteristic of Convertible Reserved Instances (RIs)?

    A. Users can exchange Convertible RIs for other Convertible RIs from a different instance family.
    B. Users can exchange Convertible RIs for other Convertible RIs in different AWS Regions.
    C. Users can sell and buy Convertible RIs on the AWS Marketplace.
    D. Users can shorten the term of their Convertible RIs by merging them with other Convertible RIs.
    

Answer: A

  1. The user is fully responsible for which action when running workloads on AWS?

    A. Patching the infrastructure components
    B. Implementing controls to route application traffic
    C. Maintaining physical and environmental controls
    D. Maintaining the underlying infrastructure components
    

Answer: B

  1. An architecture design includes Amazon EC2, an Elastic Load Balancer, and Amazon RDS. What is the BEST way to get a monthly cost estimation for this architecture?

    A. Open an AWS Support case, provide the architecture proposal, and ask for a monthly cost estimation.
    B. Collect the published prices of the AWS services and calculate the monthly estimate.
    C. Use the AWS Simple Monthly Calculator to estimate the monthly cost.
    D. Use the AWS Total Cost of Ownership (TCO) Calculator to estimate the monthly cost.
    

Answer: C

  1. Which AWS service allows users to download security and compliance reports about the AWS infrastructure on demand?

    A. Amazon GuardDuty
    B. AWS Security Hub
    C. AWS Artifact
    D. AWS Shield
    

Answer: C

  1. Which AWS managed services can be used to extend an on-premises data center to the AWS network? (Choose two.)

    A. AWS VPN
    B. NAT gateway
    C. AWS Direct Connect
    D. Amazon Connect
    E. Amazon Route 53
    

Answer: A C

  1. Which requirement must be met for a member account to be unlinked from an AWS Organizations account?

    A. The linked account must be actively compliant with AWS System and Organization Controls (SOC).
    B. The payer and the linked account must both create AWS Support cases to request that the member account be unlinked from the organization.
    C. The member account must meet the requirements of a standalone account.
    D. The payer account must be used to remove the linked account from the organization.
    

Answer: C

  1. What AWS benefit refers to a customer’s ability to deploy applications that scale up and down to meet variable demand?

    A. Elasticity
    B. Agility
    C. Security
    D. Scalability
    

Answer: A

  1. During a compliance review, one of the auditors requires a copy of the AWS SOC 2 report. Which service should be used to submit this request?

    A. AWS Personal Health Dashboard
    B. AWS Trusted Advisor
    C. AWS Artifact
    D. Amazon S3
    

Answer: C

  1. A company wants to set up a highly available workload in AWS with a disaster recovery plan that will allow the company to recover in case of a regional service interruption. Which configuration will meet these requirements?

    A. Run on two Availability Zones in one AWS Region, using the additional Availability Zones in the AWS Region for the disaster recovery site.
    B. Run on two Availability Zones in one AWS Region, using another AWS Region for the disaster recovery site.
    C. Run on two Availability Zones in one AWS Region, using a local AWS Region for the disaster recovery site.
    D. Run across two AWS Regions, using a third AWS Region for the disaster recovery site.
    

Answer: B

  1. A company has a 500 TB image repository that needs to be transported to AWS for processing. Which AWS service can import this data MOST cost-effectively?

    A. AWS Snowball
    B. AWS Direct Connect
    C. AWS VPN
    D. Amazon S3
    

Answer: A

  1. Which AWS service can run a managed PostgreSQL database that provides online transaction processing (OLTP)?

    A. Amazon DynamoDB
    B. Amazon Athena
    C. Amazon RDS
    D. Amazon EMR
    

Answer: C

  1. Which of the following assist in identifying costs by department? (Choose two.)

    A. Using tags on resources
    B. Using multiple AWS accounts
    C. Using an account manager
    D. Using AWS Trusted Advisor
    E. Using Consolidated Billing
    

Answer: A B

  1. A company wants to allow full access to an Amazon S3 bucket for a particular user. Which element in the S3 bucket policy holds the user details that describe who needs access to the S3 bucket?

    A. Principal
    B. Action
    C. Resource
    D. Statement
    

Answer: A

  1. A company must store critical business data in Amazon S3 with a backup to another AWS Region. How can this be achieved?

    A. Use an Amazon CloudFront Content Delivery Network (CDN) to cache data globally
    B. Set up Amazon S3 cross-region replication to another AWS Region
    C. Configure the AWS Backup service to back up to the data to another AWS Region
    D. Take Amazon S3 bucket snapshots and copy that data to another AWS Region
    

Answer: B

  1. Which AWS Cloud service can send alerts to customers if custom spending thresholds are exceeded?

    A. AWS Budgets
    B. AWS Cost Explorer
    C. AWS Cost Allocation Tags
    D. AWS Organizations
    

Answer: A

  1. What is the recommended method to request penetration testing on AWS resources?

    A. Open a support case
    B. Fill out the Penetration Testing Request Form
    C. Request a penetration test from your technical account manager
    D. Contact your AWS sales representative
    

Answer: B

  1. A user needs to automatically discover, classify, and protect sensitive data stored in Amazon S3. Which AWS service can meet these requirements?

    A. Amazon Inspector
    B. Amazon Macie
    C. Amazon GuardDuty
    D. AWS Secrets Manager
    

Answer: B

  1. Which components are required to build a successful site-to-site VPN connection on AWS? (Choose two.)

    A. Internet gateway
    B. NAT gateway
    C. Customer gateway
    D. Transit gateway
    E. Virtual private gateway
    

Answer: C E

  1. Which Amazon EC2 pricing option is best suited for applications with short-term, spiky, or unpredictable workloads that cannot be interrupted?

    A. Spot Instances
    B. Dedicated Hosts
    C. On-Demand Instances
    D. Reserved Instances
    

Answer: C

  1. Which AWS cloud architecture principle states that systems should reduce interdependencies?

    A. Scalability
    B. Services, not servers
    C. Removing single points of failure
    D. Loose coupling
    

Answer: D

  1. What is the MOST effective resource for staying up to date on AWS security announcements?

    A. AWS Personal Health Dashboard
    B. AWS Secrets Manager
    C. AWS Security Bulletins
    D. Amazon Inspector
    

Answer: C

  1. Which AWS service offers persistent storage for a file system?

    A. Amazon S3
    B. Amazon EC2 instance store
    C. Amazon Elastic Block Store (Amazon EBS)
    D. Amazon ElastiCache
    

Answer: C

  1. Which of the following allows AWS users to manage cost allocations for billing?

    A. Tagging resources
    B. Limiting who can create resources
    C. Adding a secondary payment method
    D. Running all operations on a single AWS account
    

Answer: A

  1. Which of the following tasks can only be performed after signing in with AWS account root user credentials? (Choose two.)

    A. Closing an AWS account
    B. Creating a new IAM policy
    C. Changing AWS Support plans
    D. Attaching a role to an Amazon EC2 instance
    E. Generating access keys for IAM users
    

Answer: A C

  1. Fault tolerance refers to:

    A. the ability of an application to accommodate growth without changing design
    B. how well and how quickly an application’s environment can have lost data restored
    C. how secure your application is
    D. the built-in redundancy of an application’s components
    

Answer: D

  1. A company operating in the AWS Cloud requires separate invoices for specific environments, such as development, testing, and production. How can this be achieved?

    A. Use multiple AWS accounts
    B. Use resource tagging
    C. Use multiple VPCs
    D. Use Cost Explorer
    

Answer: A

  1. Which AWS service can be used in the application deployment process?

    A. AWS AppSync
    B. AWS Batch
    C. AWS CodePipeline
    D. AWS DataSync
    

Answer: C

  1. What can be used to reduce the cost of running Amazon EC2 instances? (Choose two.)

    A. Spot Instances for stateless and flexible workloads
    B. Memory optimized instances for high-compute workloads
    C. On-Demand Instances for high-cost and sustained workloads
    D. Reserved Instances for sustained workloads
    E. Spend limits set using AWS Budgets
    

Answer: A D

  1. A company is launching an e-commerce site that will store and process credit card data. The company requires information about AWS compliance reports and AWS agreements. Which AWS service provides on-demand access to these items?

    A. AWS Certificate Manager
    B. AWS Config
    C. AWS Artifact
    D. AWS CloudTrail
    

Answer: C

  1. Which AWS service or feature allows the user to manage cross-region application traffic?

    A. Amazon AppStream 2.0
    B. Amazon VPC
    C. Elastic Load Balancer
    D. Amazon Route 53
    

Answer: D

  1. Which AWS service can be used to track unauthorized API calls?

    A. AWS Config
    B. AWS CloudTrail
    C. AWS Trusted Advisor
    D. Amazon Inspector
    

Answer: B

  1. A user needs to regularly audit and evaluate the setup of all AWS resources, identify non-compliant accounts, and be notified when a resource changes. Which AWS service can be used to meet these requirements?

    A. AWS Trusted Advisor
    B. AWS Config
    C. AWS Resource Access Manager
    D. AWS Systems Manager
    

Answer: B

  1. A user is planning to launch two additional Amazon EC2 instances to increase availability. Which action should the user take?

    A. Launch the instances across multiple Availability Zones in a single AWS Region.
    B. Launch the instances as EC2 Reserved Instances in the same AWS Region and the same Availability Zone.
    C. Launch the instances in multiple AWS Regions, but in the same Availability Zone.
    D. Launch the instances as EC2 Spot Instances in the same AWS Region, but in different Availability Zones.
    

Answer: A

  1. A company’s application has flexible start and end times. Which Amazon EC2 pricing model will be the MOST cost-effective?

    A. On-Demand Instances
    B. Spot Instances
    C. Reserved Instances
    D. Dedicated Hosts
    

Answer: B

  1. Under the AWS shared responsibility model, what are the customer’s responsibilities? (Choose two.)

    A. Physical and environmental security
    B. Physical network devices including firewalls
    C. Storage device decommissioning
    D. Security of data in transit
    E. Data integrity authentication
    

Answer: D E

  1. A cloud practitioner has a data analysis workload that is infrequently executed and can be interrupted without harm. To optimize for cost, which Amazon EC2 purchasing option should be used?

    A. On-Demand Instances
    B. Reserved Instances
    C. Spot Instances
    D. Dedicated Hosts
    

Answer: C

  1. Which AWS container service will help a user install, operate, and scale the cluster management infrastructure?

    A. Amazon Elastic Container Registry (Amazon ECR)
    B. AWS Elastic Beanstalk
    C. Amazon Elastic Container Service (Amazon ECS)
    D. Amazon Elastic Block Store (Amazon EBS)
    

Answer: C

  1. Which of the following allows an application running on an Amazon EC2 instance to securely write data to an Amazon S3 bucket without using long term credentials?

    A. Amazon Cognito
    B. AWS Shield
    C. AWS IAM role
    D. AWS IAM user access key
    

Answer: C
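
For context on the IAM-role option: a role attached to the instance profile supplies short-lived, automatically rotated credentials, which is exactly how an application avoids long-term access keys. Below is a sketch of the trust policy that lets the EC2 service assume such a role; the structure follows AWS's JSON policy grammar, but this is illustrative, not a complete setup.

```python
# Sketch of an IAM role trust policy allowing the EC2 service to
# assume the role. Applications on the instance then receive
# temporary credentials instead of long-term access keys.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
```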

  1. A company with a Developer-level AWS Support plan provisioned an Amazon RDS database and cannot connect to it. Who should the developer contact for this level of support?

    A. AWS Support using a support case
    B. AWS Professional Services
    C. AWS technical account manager
    D. AWS consulting partners
    

Answer: A

  1. What is the purpose of having an internet gateway within a VPC?

    A. To create a VPN connection to the VPC
    B. To allow communication between the VPC and the Internet
    C. To impose bandwidth constraints on internet traffic
    D. To load balance traffic from the Internet across Amazon EC2 instances
    

Answer: B

  1. A company must ensure that its endpoint for a database instance remains the same after a single Availability Zone service interruption. The application needs to resume database operations without the need for manual administrative intervention. How can these requirements be met?

    A. Use multiple Amazon Route 53 routes to the standby database instance endpoint hosted on AWS Storage Gateway.
    B. Configure Amazon RDS Multi-Availability Zone deployments with automatic failover to the standby.
    C. Add multiple Application Load Balancers and deploy the database instance with AWS Elastic Beanstalk.
    D. Deploy a single Network Load Balancer to distribute incoming traffic across multiple Amazon CloudFront origins.
    

Answer: B

  1. Which AWS managed service can be used to distribute traffic between one or more Amazon EC2 instances?

    A. NAT gateway
    B. Elastic Load Balancing
    C. Amazon Athena
    D. AWS PrivateLink
    

Answer: B

  1. AWS Trusted Advisor provides recommendations on which of the following? (Choose two.)

    A. Cost optimization
    B. Auditing
    C. Serverless architecture
    D. Performance
    E. Scalability
    

Answer: A, D

  1. How can a company separate costs for network traffic, Amazon EC2, Amazon S3, and other AWS services by department?

    A. Add department-specific tags to each resource
    B. Create a separate VPC for each department
    C. Create a separate AWS account for each department
    D. Use AWS Organizations
    

Answer: C
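
Whichever separation mechanism is chosen, the end goal is the same: roll usage up per department. A toy sketch of grouping tagged usage records (the figures and department names are invented for illustration):

```python
# Invented usage records carrying a department tag; once costs are
# attributable per department, they can be summed per group.
usage = [
    {"service": "ec2", "dept": "finance", "cost": 120.0},
    {"service": "s3", "dept": "finance", "cost": 30.0},
    {"service": "ec2", "dept": "marketing", "cost": 80.0},
]

def cost_by_department(records):
    totals = {}
    for r in records:
        totals[r["dept"]] = totals.get(r["dept"], 0.0) + r["cost"]
    return totals
```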


AWS Certified Cloud Practitioner CLF-C01 Exam Questions Part 1

  • Under the shared responsibility model, which of the following is the customer responsible for?

    A. Ensuring that disk drives are wiped after use.
    B. Ensuring that firmware is updated on hardware devices.
    C. Ensuring that data is encrypted at rest.
    D. Ensuring that network cables are category six or higher.
    
  • Which services are parts of the AWS serverless platform?

    A.  Amazon EC2, Amazon S3, Amazon Athena
    B.  Amazon Kinesis, Amazon SQS, Amazon EMR
    C.  AWS Step Functions, Amazon DynamoDB, Amazon SNS
    D. Amazon Athena, Amazon Cognito, Amazon EC2
    
  • Which of the following services is in the category of AWS serverless platform?

    A. Amazon EMR
    B. Elastic Load Balancing
    C. AWS Lambda
    D. AWS Mobile Hub
    
  • An administrator needs to rapidly deploy a popular IT solution and start using it immediately. Where can the administrator find assistance?

    A. AWS Well-Architected Framework documentation
    B. Amazon CloudFront
    C. AWS CodeCommit
    D. AWS Quick Start reference deployments
    
  • One benefit of On-Demand Amazon Elastic Compute Cloud (Amazon EC2) pricing is:

    A. the ability to bid for a lower hourly cost.
    B. paying a daily rate regardless of time used.
    C. paying only for time used.
    D. pre-paying for instances and paying a lower hourly rate.
    
  • Which of the following tasks is the responsibility of AWS?

    A. Encrypting client-side data
    B. Configuring AWS Identity and Access Management (IAM) roles
    C. Securing the Amazon EC2 hypervisor
    D. Setting user password policies
    
  • Which is the MINIMUM AWS Support plan that provides technical support through phone calls?

    A. Enterprise
    B. Business
    C. Developer
    D. Basic
    
  • How should a customer forecast the future costs for running a new web application?

    A. Amazon Aurora Backtrack
    B. Amazon CloudWatch Billing Alarms
    C. AWS Simple Monthly Calculator
    D. AWS Cost and Usage report
    
  • A company will be moving from an on-premises data center to the AWS Cloud. What would be one financial difference after the move?

    A. Moving from variable operational expense (opex) to upfront capital expense (capex).
    B. Moving from upfront capital expense (capex) to variable capital expense (capex).
    C. Moving from upfront capital expense (capex) to variable operational expense (opex).
    D. Elimination of upfront capital expense (capex) and elimination of variable operational expense (opex)
    
  • A solution that is able to support growth in users, traffic, or data size with no drop in performance aligns with which cloud architecture principle?

    A. Think parallel
    B. Implement elasticity
    C. Decouple your components
    D. Design for failure
    
  • Which of the following can limit Amazon Simple Storage Service (Amazon S3) bucket access to specific users?

    A. A public and private key-pair
    B. Amazon Inspector
    C. AWS Identity and Access Management (IAM) policies
    D. Security Groups
    
  • What can AWS edge locations be used for? (Choose two.)

    A. Hosting applications
    B. Delivering content closer to users
    C. Running NoSQL database caching services
    D. Reducing traffic on the server by caching responses
    E. Sending notification messages to end users
    
  • How does AWS shorten the time to provision IT resources?

    A. It supplies an online IT ticketing platform for resource requests.
    B. It supports automatic code validation services.
    C. It provides the ability to programmatically provision existing resources.
    D. It automates the resource request process from a company’s IT vendor list.
    
  • Which AWS service can serve a static website?

    A. Amazon S3
    B. Amazon Route 53
    C. Amazon QuickSight
    D. AWS X-Ray
    
  • Which is the minimum AWS Support plan that includes Infrastructure Event Management without additional costs?

    A. Enterprise
    B. Business
    C. Developer
    D. Basic
    
  • Which AWS feature should a customer leverage to achieve high availability of an application?

    A. AWS Direct Connect
    B. Availability Zones
    C. Data centers
    D. Amazon Virtual Private Cloud (Amazon VPC)
    
  • In which scenario should Amazon EC2 Spot Instances be used?

    A. A company wants to move its main website to AWS from an on-premises web server.
    B. A company has a number of application services whose Service Level Agreement (SLA) requires 99.999% uptime.
    C. A company’s heavily used legacy database is currently running on-premises.
    D. A company has a number of infrequent, interruptible jobs that are currently using On-Demand Instances
    
  • Which of the following common IT tasks can AWS cover to free up company IT resources? (Choose two.)

    A. Patching databases software
    B. Testing application releases
    C. Backing up databases
    D. Creating database schema
    E. Running penetration tests
    
  • Which AWS services can be used to gather information about AWS account activity? (Choose two.)

    A. Amazon CloudFront
    B. AWS Cloud9
    C. AWS CloudTrail
    D. AWS CloudHSM
    E. Amazon CloudWatch
    
  • How do customers benefit from Amazon’s massive economies of scale?

    A. Periodic price reductions as the result of Amazon’s operational efficiencies
    B. New Amazon EC2 instance types providing the latest hardware
    C. The ability to scale up and down when needed
    D. Increased reliability in the underlying hardware of Amazon EC2 instances
    
  • If each department within a company has its own AWS account, what is one way to enable consolidated billing?

    A. Use AWS Budgets on each account to pay only to budget.
    B. Contact AWS Support for a monthly bill.
    C. Create an AWS Organization from the payer account and invite the other accounts to join.
    D. Put all invoices into one Amazon Simple Storage Service (Amazon S3) bucket, load data into Amazon Redshift, and then run a billing report.
    
  • Which of the following features can be configured through the Amazon Virtual Private Cloud (Amazon VPC) Dashboard? (Choose two.)

    A. Amazon CloudFront distributions
    B. Amazon Route 53
    C. Security Groups
    D. Subnets
    E. Elastic Load Balancing
    
  • Which options does AWS make available for customers who want to learn about security in the cloud in an instructor-led setting? (Choose two.)

    A. AWS Trusted Advisor
    B. AWS Online Tech Talks
    C. AWS Blog
    D. AWS Forums
    E. AWS Classroom Training
    
  • Which of the following is a component of the shared responsibility model managed entirely by AWS?

    A. Patching operating system software
    B. Encrypting data
    C. Enforcing multi-factor authentication
    D. Auditing physical data center assets
    
  • Which service is best for storing common database query results, which helps to alleviate database access load?

    A. Amazon Machine Learning
    B. Amazon SQS
    C. Amazon ElastiCache
    D. Amazon EC2 Instance Store
    
  • Amazon Relational Database Service (Amazon RDS) offers which of the following benefits over traditional database management?

    A. AWS manages the data stored in Amazon RDS tables.
    B. AWS manages the maintenance of the operating system.
    C. AWS automatically scales up instance types on demand.
    D. AWS manages the database type.
    
  • Which AWS support plan includes a dedicated Technical Account Manager?

    A. Developer
    B. Enterprise
    C. Business
    D. Basic
    
  • Which of the following is an important architectural design principle when designing cloud applications?

    A. Use multiple Availability Zones.
    B. Use tightly coupled components.
    C. Use open source software.
    D. Provision extra capacity
    
  • Which of the following services falls under the responsibility of the customer to maintain operating system configuration, security patching, and networking?

    A. Amazon RDS
    B. Amazon EC2
    C. Amazon ElastiCache
    D. AWS Fargate
    
  • Which service provides a hybrid storage service that enables on-premises applications to seamlessly use cloud storage?

    A. Amazon Glacier
    B. AWS Snowball
    C. AWS Storage Gateway
    D. Amazon Elastic Block Storage (Amazon EBS)
    
  • Which of the following security measures protect access to an AWS account? (Choose two.)

    A. Enable AWS CloudTrail.
    B. Grant least privilege access to IAM users.
    C. Create one IAM user and share with many developers and users.
    D. Enable Amazon CloudFront.
    E. Activate multi-factor authentication (MFA) for privileged users.
    
  • Which of the following is an AWS Cloud architecture design principle?

    A. Implement single points of failure.
    B. Implement loose coupling.
    C. Implement monolithic design.
    D. Implement vertical scaling
    
  • Which of the following can an AWS customer use to launch a new Amazon Relational Database Service (Amazon RDS) cluster? (Choose two.)

    A. AWS Concierge
    B. AWS CloudFormation
    C. Amazon Simple Storage Service (Amazon S3)
    D. Amazon EC2 Auto Scaling
    E. AWS Management Console
    
  • Which AWS Cost Management tool allows you to view the most granular data about your AWS bill?

    A. AWS Cost Explorer
    B. AWS Budgets
    C. AWS Cost and Usage report
    D. AWS Billing dashboard
    
  • The financial benefits of using AWS are: (Choose two.)

    A. reduced Total Cost of Ownership (TCO).
    B. increased capital expenditure (capex).
    C. reduced operational expenditure (opex).
    D. deferred payment plans for startups.
    E. business credit lines for startups.
    
  • A company is migrating an application that is running non-interruptible workloads for a three-year time frame. Which pricing construct would provide the MOST cost-effective solution?

    A. Amazon EC2 Spot Instances
    B. Amazon EC2 Dedicated Instances
    C. Amazon EC2 On-Demand Instances
    D. Amazon EC2 Reserved Instances
    
  • Which AWS service can be used to manually launch instances based on resource requirements?

    A. Amazon EBS
    B. Amazon S3
    C. Amazon EC2
    D. Amazon ECS
    
  • Under the shared responsibility model, which of the following tasks are the responsibility of the AWS customer? (Choose two.)

    A. Ensuring that application data is encrypted at rest
    B. Ensuring that AWS NTP servers are set to the correct time
    C. Ensuring that users have received security training in the use of AWS services
    D. Ensuring that access to data centers is restricted
    E. Ensuring that hardware is disposed of properly
    
  • Which AWS service would you use to obtain compliance reports and certificates?

    A. AWS Artifact
    B. AWS Lambda
    C. Amazon Inspector
    D. AWS Certificate Manager
    
  • Which AWS services are defined as global instead of regional? (Choose two.)

    A. Amazon Route 53
    B. Amazon EC2
    C. Amazon S3
    D. Amazon CloudFront
    E. Amazon DynamoDB
    
  • What technology enables compute capacity to adjust as loads change?

    A. Load balancing
    B. Automatic failover
    C. Round robin
    D. Auto Scaling
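
Auto Scaling is the option that adjusts compute capacity as load changes. A toy decision function captures the idea; the CPU thresholds and instance counts here are invented for illustration:

```python
# Toy sketch of the Auto Scaling idea: grow capacity when average CPU
# is high, shrink it when low, otherwise hold steady. Thresholds are
# arbitrary example values, not AWS defaults.
def desired_capacity(current, avg_cpu, scale_out_at=70, scale_in_at=30):
    if avg_cpu > scale_out_at:
        return current + 1
    if avg_cpu < scale_in_at and current > 1:
        return current - 1
    return current
```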
    
  • How would an AWS customer easily apply common access controls to a large set of users?

    A. Apply an IAM policy to an IAM group.
    B. Apply an IAM policy to an IAM role.
    C. Apply the same IAM policy to all IAM users with access to the same workload.
    D. Apply an IAM policy to an Amazon Cognito user pool.
    
  • Which of the following AWS features enables a user to launch a pre-configured Amazon Elastic Compute Cloud (Amazon EC2) instance?

    A. Amazon Elastic Block Store (Amazon EBS)
    B. Amazon Machine Image
    C. Amazon EC2 Systems Manager
    D. Amazon AppStream 2.0
    
  • Which of the following steps should be taken by a customer when conducting penetration testing on AWS?

    A. Conduct penetration testing using Amazon Inspector, and then notify AWS support.
    B. Request and wait for approval from the customer’s internal security team, and then conduct testing.
    C. Notify AWS support, and then conduct testing immediately.
    D. Request and wait for approval from AWS support, and then conduct testing.
    
  • Which of the following is an advantage of consolidated billing on AWS?

    A. Volume pricing qualification
    B. Shared access permissions
    C. Multiple bills per account
    D. Eliminates the need for tagging
    
  • Which AWS service provides a customized view of the health of specific AWS services that power a customer’s workloads running on AWS?

    A. AWS Service Health Dashboard
    B. AWS X-Ray
    C. AWS Personal Health Dashboard
    D. Amazon CloudWatch
    
  • Where can AWS compliance and certification reports be downloaded?

    A. AWS Artifact
    B. AWS Concierge
    C. AWS Certificate Manager
    D. AWS Trusted Advisor
    
  • Which is the MINIMUM AWS Support plan that allows for one-hour target response time for support cases?

    A. Enterprise
    B. Business
    C. Developer
    D. Basic
    
  • Which design principles for cloud architecture are recommended when re-architecting a large monolithic application? (Choose two.)

    A. Use manual monitoring.
    B. Use fixed servers.
    C. Implement loose coupling.
    D. Rely on individual components.
    E. Design for scalability.
    
  • Which Amazon EC2 pricing model adjusts based on supply and demand of EC2 instances?

    A. On-Demand Instances
    B. Reserved Instances
    C. Spot Instances
    D. Convertible Reserved Instances
    
  • Which of the following services could be used to deploy an application to servers running on-premises? (Choose two.)

    A. AWS Elastic Beanstalk
    B. AWS OpsWorks
    C. AWS CodeDeploy
    D. AWS Batch
    E. AWS X-Ray
    
  • Which service allows a company with multiple AWS accounts to combine its usage to obtain volume discounts?

    A. AWS Server Migration Service
    B. AWS Organizations
    C. AWS Budgets
    D. AWS Trusted Advisor
    
  • What is Amazon CloudWatch?

    A.  A code repository with customizable build and team commit features.
    B.  A metrics repository with customizable notification thresholds and channels.
    C.  A security configuration repository with threat analytics.
    D.  A rule repository of a web application firewall with automated vulnerability prevention features.
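
Option B is the accurate description: CloudWatch stores metric datapoints and raises alarms when a configured threshold is breached over consecutive evaluation periods. A toy sketch of that evaluation (the period count and threshold are arbitrary example values):

```python
# Toy sketch of threshold evaluation over consecutive periods,
# loosely mimicking how a CloudWatch alarm transitions to ALARM.
def alarm_state(datapoints, threshold, periods=3):
    recent = datapoints[-periods:]
    if len(recent) == periods and all(d > threshold for d in recent):
        return "ALARM"
    return "OK"
```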
    
  • If a customer needs to audit the change management of AWS resources, which of the following AWS services should the customer use?

    A.  AWS Config
    B.  AWS Trusted Advisor
    C.  Amazon CloudWatch
    D.  Amazon Inspector
    
  • Which AWS service provides the ability to manage infrastructure as code?

    A.  AWS CodePipeline
    B.  AWS CodeDeploy
    C.  AWS Direct Connect
    D.  AWS CloudFormation
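
AWS CloudFormation expresses infrastructure as declarative templates. A minimal template sketch, shown here as a Python dict; the logical resource name and AMI ID are hypothetical:

```python
# Minimal infrastructure-as-code sketch: a CloudFormation-style
# template declaring one EC2 instance. Resource name and ImageId
# are made-up placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"InstanceType": "t3.micro", "ImageId": "ami-12345678"},
        }
    },
}
```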
    
  • When performing a cost analysis that supports physical isolation of a customer workload, which compute hosting model should be accounted for in the Total Cost of Ownership (TCO)?

    A.  Dedicated Hosts
    B.  Reserved Instances
    C.  On-Demand Instances
    D.  No Upfront Reserved Instances
    
  • Which of the following is a benefit of using the AWS Cloud?

    A.  Permissive security removes the administrative burden.
    B.  Ability to focus on revenue-generating activities.
    C.  Control over cloud network hardware.
    D.  Choice of specific cloud hardware vendors.
    
  • Where should a company go to search software listings from independent software vendors to find, test, buy and deploy software that runs on AWS?

    A.  AWS Marketplace
    B.  Amazon Lumberyard
    C.  AWS Artifact
    D.  Amazon CloudSearch
    
  • Which task is AWS responsible for in the shared responsibility model for security and compliance?

    A.  Granting access to individuals and services
    B.  Encrypting data in transit
    C.  Updating Amazon EC2 host firmware
    D.  Updating operating systems
    
  • Which of the following are categories of AWS Trusted Advisor? (Choose two.)

    A.  Fault Tolerance
    B.  Instance Usage
    C.  Infrastructure
    D.  Performance
    E.  Storage Capacity
    
  • Which AWS service provides alerts when an AWS event may impact a company’s AWS resources?

    A.  AWS Personal Health Dashboard
    B.  AWS Service Health Dashboard
    C.  AWS Trusted Advisor
    D.  AWS Infrastructure Event Management
    
  • A company wants to reduce the physical compute footprint that developers use to run code. Which service would meet that need by enabling serverless architectures?

    A.  Amazon Elastic Compute Cloud (Amazon EC2)
    B.  AWS Lambda
    C.  Amazon DynamoDB
    D.  AWS CodeCommit
    
  • Which AWS service allows companies to connect an Amazon VPC to an on-premises data center?

    A.  AWS VPN
    B.  Amazon Redshift
    C.  API Gateway
    D.  AWS Direct Connect
    
  • Under the shared responsibility model, which of the following is a shared control between a customer and AWS?

    A.  Physical controls
    B.  Patch management
    C.  Zone security
    D.  Data center auditing
    
  • Which AWS service should be used for long-term, low-cost storage of data backups?

    A.  Amazon RDS
    B.  Amazon Glacier
    C.  AWS Snowball
    D.  AWS EBS
    
  • When architecting cloud applications, which of the following are a key design principle?

    A.  Use the largest instance possible
    B.  Provision capacity for peak load
    C.  Use the Scrum development process
    D.  Implement elasticity
    
  • Which AWS service provides a simple and scalable shared file storage solution for use with Linux-based AWS and on-premises servers?

    A.  Amazon S3
    B.  Amazon Glacier
    C.  Amazon EBS
    D.  Amazon EFS
    
  • Which AWS managed service is used to host databases?

    A.  AWS Batch
    B.  AWS Artifact
    C.  AWS Data Pipeline
    D.  Amazon RDS
    
  • Which of the following security-related services does AWS offer? (Choose two.)

    A.  Multi-factor authentication physical tokens
    B.  AWS Trusted Advisor security checks
    C.  Data encryption
    D.  Automated penetration testing
    E.  Amazon S3 copyrighted content detection
    
  • Which of the following Identity and Access Management (IAM) entities is associated with an access key ID and secret access key when using AWS Command Line Interface (AWS CLI)?

    A.  IAM group
    B.  IAM user
    C.  IAM role
    D.  IAM policy
    
  • Which service provides a virtually unlimited amount of online highly durable object storage?

    A.  Amazon Redshift
    B.  Amazon Elastic File System (Amazon EFS)
    C.  Amazon Elastic Container Service (Amazon ECS)
    D.  Amazon S3
    
  • What is the benefit of using AWS managed services, such as Amazon ElastiCache and Amazon Relational Database Service (Amazon RDS)?

    A.  They require the customer to monitor and replace failing instances.
    B.  They have better performance than customer-managed services.
    C.  They simplify patching and updating underlying OSs.
    D.  They do not require the customer to optimize instance type or size selections.
    
  • Web servers running on Amazon EC2 access a legacy application running in a corporate data center. What term would describe this model?

    A. Cloud-native
    B. Partner network
    C. Hybrid architecture
    D. Infrastructure as a service
    
  • Which of the following AWS services can be used to serve large amounts of online video content with the lowest possible latency? (Choose two.)

    A.  AWS Storage Gateway
    B.  Amazon S3
    C.  Amazon Elastic File System (EFS)
    D.  Amazon Glacier
    E.  Amazon CloudFront
    
  • The AWS Cloud’s multiple Regions are an example of

    A.  agility.
    B.  global infrastructure.
    C.  elasticity.
    D.  pay-as-you-go pricing.
    
  • Which of the following are valid ways for a customer to interact with AWS services? (Choose two.)

    A.  Command line interface
    B.  On-premises
    C.  Software Development Kits
    D.  Software-as-a-service
    E.  Hybrid
    
  • Which statement best describes Elastic Load Balancing?

    A.  It translates a domain name into an IP address using DNS.
    B.  It distributes incoming application traffic across one or more Amazon EC2 instances.
    C.  It collects metrics on connected Amazon EC2 instances.
    D.  It automatically adjusts the number of Amazon EC2 instances to support incoming traffic. 
    
  • A company is looking for a scalable data warehouse solution. Which of the following AWS solutions would meet the company’s needs?

    A. Amazon Simple Storage Service (Amazon S3)
    B. Amazon DynamoDB
    C. Amazon Kinesis
    D. Amazon Redshift
    
  • Which of the following AWS Cloud services can be used to run a customer-managed relational database?

    A. Amazon EC2
    B. Amazon Route 53
    C. Amazon ElastiCache
    D. Amazon DynamoDB
    
  • What is the AWS customer responsible for according to the AWS shared responsibility model?

    A.  Physical access controls
    B.  Data encryption
    C.  Secure disposal of storage devices
    D.  Environmental risk management
    
  • Which Amazon EC2 instance pricing model can provide discounts of up to 90%?

    A.  Reserved Instances
    B.  On-Demand
    C.  Dedicated Hosts
    D.  Spot Instances
    
  • Which storage service can be used as a low-cost option for hosting static websites?

    A.  Amazon Glacier
    B.  Amazon DynamoDB
    C.  Amazon Elastic File System (Amazon EFS)
    D.  Amazon Simple Storage Service (Amazon S3)
    
  • A customer is deploying a new application and needs to choose an AWS Region. Which of the following factors could influence the customer’s decision? (Choose two.)

    A.  Reduced latency to users
    B.  The application’s presentation in the local language
    C.  Data sovereignty compliance
    D.  Cooling costs in hotter climates
    E.  Proximity to the customer’s office for on-site visits
    
  • Which of the following is an AWS managed Domain Name System (DNS) web service?

    A.  Amazon Route 53
    B.  Amazon Neptune
    C.  Amazon SageMaker
    D.  Amazon Lightsail
    
  • Which of the following are features of Amazon CloudWatch Logs? (Choose two.)

    A.  Summaries by Amazon Simple Notification Service (Amazon SNS)
    B.  Free Amazon Elasticsearch Service analytics
    C.  Provided at no charge
    D.  Real-time monitoring
    E.  Adjustable retention
    
  • A customer is using multiple AWS accounts with separate billing. How can the customer take advantage of volume discounts with minimal impact to the AWS resources?

    A.  Create one global AWS account and move all AWS resources to the account.
    B.  Sign up for three years of Reserved Instance pricing up front.
    C.  Use the consolidated billing feature from AWS Organizations.
    D.  Sign up for the AWS Enterprise support plan to get volume discounts.
    
  • Which of the following is the customer’s responsibility under the AWS shared responsibility model?

    A.  Patching underlying infrastructure
    B.  Physical security
    C.  Patching Amazon EC2 instances
    D.  Patching network infrastructure
    
  • Which feature of the AWS Cloud will support an international company’s requirement for low latency to all of its customers?

    A.  Fault tolerance
    B.  Global reach
    C.  Pay-as-you-go pricing
    D.  High availability
    
  • For which auditing process does AWS have sole responsibility?

    A.  AWS IAM policies
    B.  Physical security
    C.  Amazon S3 bucket policies
    D.  AWS CloudTrail Logs
    
  • What approach to transcoding a large number of individual video files adheres to AWS architecture principles?

    A. Using many instances in parallel
    B. Using a single large instance during off-peak hours
    C. Using dedicated hardware
    D. Using a large GPU instance type
    
  • Which service should a customer use to consolidate and centrally manage multiple AWS accounts?

    A.  AWS IAM
    B.  AWS Organizations
    C.  AWS Schema Conversion Tool
    D.  AWS Config
    
  • What is an example of agility in the AWS Cloud?

    A.  Access to multiple instance types
    B.  Access to managed services
    C.  Using Consolidated Billing to produce one bill
    D.  Decreased acquisition time for new compute resources
    
  • Which of the following is a fast and reliable NoSQL database service?

    A.  Amazon Redshift
    B.  Amazon RDS
    C.  Amazon DynamoDB
    D.  Amazon S3
    
  • Which AWS IAM feature allows developers to access AWS services through the AWS CLI?

    A.  API keys
    B.  Access keys
    C.  User names/Passwords
    D.  SSH keys
    
  • What is the lowest-cost, durable storage option for retaining database backups for immediate retrieval?

    A.  Amazon S3
    B.  Amazon Glacier
    C.  Amazon EBS
    D.  Amazon EC2 Instance Store
    
  • One of the advantages to moving infrastructure from an on-premises data center to the AWS Cloud is:

    A.  it allows the business to eliminate IT bills.
    B.  it allows the business to put a server in each customer’s data center.
    C.  it allows the business to focus on business activities.
    D.  it allows the business to leave servers unpatched. 
    
  • How many Availability Zones should compute resources be provisioned across to achieve high availability?

    A.  A minimum of one
    B.  A minimum of two
    C.  A minimum of three
    D.  A minimum of four or more
    
  • Which of the following is a shared control between the customer and AWS?

    A.  Providing a key for Amazon S3 client-side encryption
    B.  Configuration of an Amazon EC2 instance
    C.  Environmental controls of physical AWS data centers
    D.  Awareness and training
    
  • Which of the following components of the AWS Global Infrastructure consists of one or more discrete data centers interconnected through low latency links?

    A. Availability Zone
    B. Edge location
    C. Region
    D. Private networking
    
  • A customer needs to run a MySQL database that easily scales. Which AWS service should they use?

    A. Amazon Aurora
    B. Amazon Redshift
    C. Amazon DynamoDB
    D. Amazon ElastiCache

  • What is one of the advantages of the Amazon Relational Database Service (Amazon RDS)?

    A. It simplifies relational database administration tasks.
    B. It provides 99.99999999999% reliability and durability.
    C. It automatically scales databases for loads.
    D. It enabled users to dynamically adjust CPU and RAM resources.

  • Which AWS services should be used for read/write of constantly changing data? (Choose two.)

    A. Amazon Glacier
    B. Amazon RDS
    C. AWS Snowball
    D. Amazon Redshift
    E. Amazon EFS

  • AWS supports which of the following methods to add security to Identity and Access Management (IAM) users? (Choose two.)

    A. Implementing Amazon Rekognition
    B. Using AWS Shield-protected resources
    C. Blocking access with Security Groups
    D. Using Multi-Factor Authentication (MFA)
    E. Enforcing password strength and expiration

  • According to best practices, how should an application be designed to run in the AWS Cloud?

    A. Use tightly coupled components.
    B. Use loosely coupled components.
    C. Use infrequently coupled components.
    D. Use frequently coupled components.

  • Which is a recommended pattern for designing a highly available architecture on AWS?

    A. Ensure that components have low-latency network connectivity.
    B. Run enough Amazon EC2 instances to operate at peak load.
    C. Ensure that the application is designed to accommodate failure of any single component.
    D. Use a monolithic application that handles all operations.

  • Under the AWS shared responsibility model, which of the following activities are the customer’s responsibility? (Choose two.)

    A. Patching operating system components for Amazon Relational Database Service (Amazon RDS)
    B. Encrypting data on the client-side
    C. Training the data center staff
    D. Configuring Network Access Control Lists (ACL)
    E. Maintaining environmental controls within a data center

  • Where are AWS compliance documents, such as an SOC 1 report, located?

    A. Amazon Inspector
    B. AWS CloudTrail
    C. AWS Artifact
    D. AWS Certificate Manager

  • Which of the following services will automatically scale with an expected increase in web traffic?

    A. AWS CodePipeline
    B. Elastic Load Balancing
    C. Amazon EBS
    D. AWS Direct Connect

  • Which AWS feature will reduce the customer’s total cost of ownership (TCO)?

    A. Shared responsibility security model
    B. Single tenancy
    C. Elastic computing
    D. Encryption

  • Which of the Reserved Instance (RI) pricing models can change the attributes of the RI as long as the exchange results in the creation of RIs of equal or greater value?

    A. Dedicated RIs
    B. Scheduled RIs
    C. Convertible RIs
    D. Standard RIs

  • Which of the following security-related actions are available at no cost?

    A. Calling AWS Support
    B. Contacting AWS Professional Services to request a workshop
    C. Accessing forums, blogs, and whitepapers
    D. Attending AWS classes at a local university

  • Which of the following can limit Amazon Simple Storage Service (Amazon S3) bucket access to specific users?

    A. A public and private key-pair
    B. Amazon Inspector
    C. AWS Identity and Access Management (IAM) policies
    D. Security Groups
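As a concrete illustration of the IAM-policy answer, a user can be scoped to a single bucket. A minimal sketch (the bucket name "example-bucket" and user "alice" are hypothetical; the final AWS CLI command is printed for review rather than executed):

```shell
# Write an IAM policy that allows object reads/writes in one bucket only
cat > /tmp/s3-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::example-bucket/*"
  }]
}
EOF
# Attach the policy to an IAM user (printed, not run, in this sketch):
echo "aws iam put-user-policy --user-name alice --policy-name s3-access --policy-document file:///tmp/s3-policy.json"
```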

  • A characteristic of edge locations is that they:

    A. host Amazon EC2 instances closer to users.
    B. help lower latency and improve performance for users.
    C. cache frequently changing data without reaching the origin server.
    D. refresh data changes daily.

  • Compared with costs in traditional and virtualized data centers, AWS has:

    A. greater variable costs and greater upfront costs.
    B. fixed usage costs and lower upfront costs.
    C. lower variable costs and greater upfront costs.
    D. lower variable costs and lower upfront costs.

  • Which of the following Reserved Instance (RI) pricing models provides the highest average savings compared to On-Demand pricing?

    A. One-year, No Upfront, Standard RI pricing
    B. One-year, All Upfront, Convertible RI pricing
    C. Three-year, All Upfront, Standard RI pricing
    D. Three-year, No Upfront, Convertible RI pricing

  • Which of the following are advantages of AWS consolidated billing? (Choose two.)

    A. The ability to receive one bill for multiple accounts
    B. Service limits increasing by default in all accounts
    C. A fixed discount on the monthly bill
    D. Potential volume discounts, as usage in all accounts is combined
    E. The automatic extension of the master account’s AWS support plan to all accounts

  • Which AWS tools assist with estimating costs? (Choose three.)

    A. Detailed billing report
    B. Cost allocation tags
    C. AWS Simple Monthly Calculator
    D. AWS Total Cost of Ownership (TCO) Calculator
    E. Cost Estimator

  • Which of the following is a correct relationship between regions, Availability Zones, and edge locations?

    A. Data centers contain regions.
    B. Regions contain Availability Zones.
    C. Availability Zones contain edge locations.
    D. Edge locations contain regions.

  • A company is considering using AWS for a self-hosted database that requires a nightly shutdown for maintenance and cost-saving purposes. Which service should the company use?

    A. Amazon Redshift
    B. Amazon DynamoDB
    C. Amazon Elastic Compute Cloud (Amazon EC2) with Amazon EC2 instance store
    D. Amazon EC2 with Amazon Elastic Block Store (Amazon EBS)

  • What costs are included when comparing AWS Total Cost of Ownership (TCO) with on-premises TCO?

    A. Project management
    B. Antivirus software licensing
    C. Data center security
    D. Software development

  • Which services can be used across hybrid AWS Cloud architectures? (Choose two.)

    A. Amazon Route 53
    B. Virtual Private Gateway
    C. Classic Load Balancer
    D. Auto Scaling
    E. Amazon CloudWatch default metrics

  • Which of the following are characteristics of Amazon S3? (Choose two.)

    A. A global file system
    B. An object store
    C. A local file store
    D. A network file system
    E. A durable storage system

  • Which service enables risk auditing by continuously monitoring and logging account activity, including user actions in the AWS Management Console and AWS SDKs?

    A. Amazon CloudWatch
    B. AWS CloudTrail
    C. AWS Config
    D. AWS Health

  • Which AWS characteristics make AWS cost effective for a workload with dynamic user demand? (Choose two.)

    A. High availability
    B. Shared security model
    C. Elasticity
    D. Pay-as-you-go pricing
    E. Reliability

  • Which of the following Amazon EC2 pricing models allow customers to use existing server-bound software licenses?

    A. Spot Instances
    B. Reserved Instances
    C. Dedicated Hosts
    D. On-Demand Instances

  • Which of the following inspects AWS environments to find opportunities that can save money for users and also improve system performance?

    A. AWS Cost Explorer
    B. AWS Trusted Advisor
    C. Consolidated billing
    D. Detailed billing

  • Which AWS services can host a Microsoft SQL Server database? (Choose two.)

    A. Amazon EC2
    B. Amazon Relational Database Service (Amazon RDS)
    C. Amazon Aurora
    D. Amazon Redshift
    E. Amazon S3

  • Distributing workloads across multiple Availability Zones supports which cloud architecture design principle?

    A. Implement automation.
    B. Design for agility.
    C. Design for failure.
    D. Implement elasticity.

  • A customer would like to design and build a new workload on AWS Cloud but does not have the AWS-related software technical expertise in-house. Which of the following AWS programs can a customer take advantage of to achieve that outcome?

    A. AWS Partner Network Technology Partners
    B. AWS Marketplace
    C. AWS Partner Network Consulting Partners
    D. AWS Service Catalog

  • What AWS team assists customers with accelerating cloud adoption through paid engagements in any of several specialty practice areas?

    A. AWS Enterprise Support
    B. AWS Solutions Architects
    C. AWS Professional Services
    D. AWS Account Managers

  • Which service stores objects, provides real-time access to those objects, and offers versioning and lifecycle capabilities?

    A. Amazon Glacier
    B. AWS Storage Gateway
    C. Amazon S3
    D. Amazon EBS

  • The use of what AWS feature or service allows companies to track and categorize spending on a detailed level?

    A. Cost allocation tags
    B. Consolidated billing
    C. AWS Budgets
    D. AWS Marketplace


    Google Associate Cloud Engineer Exam Questions Part 1

    Source: AWS: https://www.awslagi.com | GCP: https://www.gcp-examquestions.com

    1. You are a project owner and need your co-worker to deploy a new version of your application to App Engine. You want to follow Google’s recommended practices. Which IAM roles should you grant your co-worker?

      A. Project Editor
      B. App Engine Service Admin
      C. App Engine Deployer
      D. App Engine Code Viewer
      

    Hint Answer: C
    https://cloud.google.com/iam/docs/understanding-roles

    2. Your company has reserved a monthly budget for your project. You want to be informed automatically of your project spend so that you can take action when you approach the limit. What should you do?

      A. Link a credit card with a monthly limit equal to your budget.
      B. Create a budget alert for 50%, 90%, and 100% of your total monthly budget.
      C. In App Engine Settings, set a daily budget at the rate of 1/30 of your monthly budget.
      D. In the GCP Console, configure billing export to BigQuery. Create a saved view that queries your total spend.
      

    Hint Answer: B
    https://cloud.google.com/appengine/pricing#spending_limit
    https://cloud.google.com/billing/docs/how-to/budgets

    3. You have a project using BigQuery. You want to list all BigQuery jobs for that project. You want to set this project as the default for the bq command-line tool. What should you do?

      A. Use “gcloud config set project” to set the default project.
      B. Use “bq config set project” to set the default project.
      C. Use “gcloud generate config-url” to generate a URL to the Google Cloud Platform Console to set the default project.
      D. Use “bq generate config-url” to generate a URL to the Google Cloud Platform Console to set the default project.
      

    Hint Answer: A
    https://cloud.google.com/bigquery/docs/reference/bq-cli-reference
    https://cloud.google.com/sdk/gcloud/reference/config/set
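The answer above boils down to one command. A minimal sketch ("my-project" is a hypothetical project ID; the command is printed so it can be reviewed before running):

```shell
# Set the default project used by both gcloud and bq
PROJECT_ID="my-project"
CMD="gcloud config set project ${PROJECT_ID}"
echo "${CMD}"
# Afterwards, "bq ls -j" lists jobs for my-project without extra flags
```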

    4. Your project has all its Compute Engine resources in the europe-west1 region. You want to set europe-west1 as the default region for gcloud commands. What should you do?

      A. Use Cloud Shell instead of the command line interface of your device. Launch Cloud Shell after you navigate to a resource in the europe-west1 region. The europe-west1 region will automatically become the default region.
      B. Use “gcloud config set compute/region europe-west1” to set the default region for future gcloud commands.
      C. Use “gcloud config set compute/zone europe-west1” to set the default region for future gcloud commands.
      D. Create a VPN from on-premises to a subnet in europe-west1, and use that connection when executing gcloud commands.
      

    Hint Answer: B
    https://cloud.google.com/compute/docs/regions-zones/changing-default-zone-region
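The recommended command from option B, as a sketch (region value taken from the question; printed for review rather than executed):

```shell
# Set the default Compute Engine region for future gcloud commands
REGION="europe-west1"
CMD="gcloud config set compute/region ${REGION}"
echo "${CMD}"
```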

    5. You developed a new application for App Engine and are ready to deploy it to production. You need to estimate the costs of running your application on Google Cloud Platform as accurately as possible. What should you do?

      A. Create a YAML file with the expected usage. Pass this file to the “gcloud app estimate” command to get an accurate estimation.
      B. Multiply the costs of your application when it was in development by the number of expected users to get an accurate estimation.
      C. Use the pricing calculator for App Engine to get an accurate estimation of the expected charges.
      D. Create a ticket with Google Cloud Billing Support to get an accurate estimation.
      

    Hint Answer: C is correct because this is the proper way to estimate charges.

    6. Your company processes high volumes of IoT data that are time-stamped. The total data volume can be several petabytes. The data needs to be written and changed at a high speed. You want to use the most performant storage option for your data. Which product should you use?

      A. Cloud Datastore
      B. Cloud Storage
      C. Cloud Bigtable
      D. BigQuery
      

    Hint Answer: C is correct because Cloud Bigtable is the most performant storage option to work with IoT and time series data.
    https://cloud.google.com/bigtable/docs/schema-design-time-series

    7. Your application has a large international audience and runs stateless virtual machines within a managed instance group across multiple locations. One feature of the application lets users upload files and share them with other users. Files must be available for 30 days; after that, they are removed from the system entirely. Which storage solution should you choose?

      A. A Cloud Datastore database.
      B. A multi-regional Cloud Storage bucket.
      C. Persistent SSD on virtual machine instances.
      D. A managed instance group of Filestore servers.
      

    Hint Answer: B is correct because buckets can be multi-regional and have lifecycle management.

    8. You have a definition for an instance template that contains a web application. You are asked to deploy the application so that it can scale based on the HTTP traffic it receives. What should you do?

      A. Create a VM from the instance template. Create a custom image from the VM’s disk. Export the image to Cloud Storage. Create an HTTP load balancer and add the Cloud Storage bucket as its backend service.
      B. Create a VM from the instance template. Create an App Engine application in Automatic Scaling mode that forwards all traffic to the VM.
      C. Create a managed instance group based on the instance template. Configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer.
      D. Create the necessary amount of instances required for peak user traffic based on the instance template. Create an unmanaged instance group and add the instances to that instance group. Configure the instance group as the Backend Service of an HTTP load balancer.
      

    Hint Answer: C is correct because a managed instance group can use an instance template to scale based on HTTP traffic.
    https://cloud.google.com/compute/docs/instance-groups/#managed_instance_groups_and_autoscaling
    https://cloud.google.com/compute/docs/images/export-image
    https://cloud.google.com/compute/docs/load-balancing/http/adding-a-backend-bucket-to-content-based-load-balancing
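Answer C can be sketched with three gcloud calls (all resource names here are hypothetical; the commands are printed for review rather than executed):

```shell
TEMPLATE="web-template"; GROUP="web-mig"; ZONE="europe-west1-d"
# 1. Create a managed instance group from the template
echo "gcloud compute instance-groups managed create ${GROUP} --template=${TEMPLATE} --size=2 --zone=${ZONE}"
# 2. Autoscale on load-balancer serving capacity
echo "gcloud compute instance-groups managed set-autoscaling ${GROUP} --zone=${ZONE} --max-num-replicas=10 --scale-based-on-load-balancing"
# 3. Use the group as the backend of an HTTP load balancer's backend service
echo "gcloud compute backend-services add-backend web-backend --instance-group=${GROUP} --instance-group-zone=${ZONE} --global"
```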

    9. You are creating a Kubernetes Engine cluster to deploy multiple pods inside the cluster. All container logs must be stored in BigQuery for later analysis. You want to follow Google-recommended practices. Which two approaches can you take?

      A. Turn on Stackdriver Logging during the Kubernetes Engine cluster creation.
      B. Turn on Stackdriver Monitoring during the Kubernetes Engine cluster creation.
      C. Develop a custom add-on that uses Cloud Logging API and BigQuery API. Deploy the add-on to your Kubernetes Engine cluster.
      D. Use the Stackdriver Logging export feature to create a sink to Cloud Storage. Create a Cloud Dataflow job that imports log files from Cloud Storage to BigQuery.
      E. Use the Stackdriver Logging export feature to create a sink to BigQuery. Specify a filter expression to export log records related to your Kubernetes Engine cluster only.
      

    Hint Answer: A is correct because creating a cluster with the Stackdriver Logging option enables all container logs to be stored in Stackdriver Logging.
    E is correct because Stackdriver Logging supports exporting logs to BigQuery by creating sinks.
    https://cloud.google.com/kubernetes-engine/docs/how-to/logging
    https://cloud.google.com/logging/docs/export/configure_export_v2
    https://kubernetes.io/docs/reference/labels-annotations-taints/
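The export sink from answer E reduces to a single command. A sketch (project and dataset names are hypothetical; printed rather than executed):

```shell
PROJECT="my-project"; DATASET="gke_logs"
DEST="bigquery.googleapis.com/projects/${PROJECT}/datasets/${DATASET}"
FILTER='resource.type="k8s_container"'
# Create a sink that routes only GKE container logs to BigQuery
echo "gcloud logging sinks create gke-to-bq ${DEST} --log-filter='${FILTER}'"
```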

    10. You need to create a new Kubernetes cluster on Google Cloud Platform that can autoscale the number of worker nodes. What should you do?

      A. Create a cluster on Kubernetes Engine and enable autoscaling on Kubernetes Engine.
      B. Create a cluster on Kubernetes Engine and enable autoscaling on the instance group of the cluster.
      C. Configure a Compute Engine instance as a worker and add it to an unmanaged instance group. Add a load balancer to the instance group and rely on the load balancer to create additional Compute Engine instances when needed.
      D. Create Compute Engine instances for the workers and the master, and install Kubernetes. Rely on Kubernetes to create additional Compute Engine instances when needed.
      

    Hint Answer: A is correct because this is the way to set up an autoscaling Kubernetes cluster.
    https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler

    11. You have an application server running on Compute Engine in the europe-west1-d zone. You need to ensure high availability and replicate the server to the europe-west2-c zone using the fewest steps possible. What should you do?

      A. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west2-c zone. Create a new VM with that disk.
      B. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west1-d zone and then move the disk to europe-west2-c. Create a new VM with that disk.
      C. Use “gcloud” to copy the disk to the europe-west2-c zone. Create a new VM with that disk.
      D. Use “gcloud compute instances move” with parameter “--destination-zone europe-west2-c” to move the instance to the new zone.
      

    Hint Answer: A is correct because this makes sure the VM gets replicated in the new zone.
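The snapshot route in answer A takes three commands. A sketch (disk and instance names are hypothetical; printed for review rather than executed):

```shell
SRC_ZONE="europe-west1-d"; DST_ZONE="europe-west2-c"
# Snapshot the source disk, recreate it in the new zone, boot a VM from it
echo "gcloud compute disks snapshot app-disk --snapshot-names=app-snap --zone=${SRC_ZONE}"
echo "gcloud compute disks create app-disk-2 --source-snapshot=app-snap --zone=${DST_ZONE}"
echo "gcloud compute instances create app-server-2 --disk=name=app-disk-2,boot=yes --zone=${DST_ZONE}"
```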

    12. Your company has a mission-critical application that serves users globally. You need to select a transactional, relational data storage system for this application. Which two products should you consider?

      A. BigQuery
      B. Cloud SQL
      C. Cloud Spanner
      D. Cloud Bigtable
      E. Cloud Datastore
      

    Hint Answer: B is correct because Cloud SQL is a relational and transactional database in the list.
    C is correct because Spanner is a relational and transactional database in the list.

    13. You have a Kubernetes cluster with 1 node-pool. The cluster receives a lot of traffic and needs to grow. You decide to add a node. What should you do?

      A. Use “gcloud container clusters resize” with the desired number of nodes.
      B. Use “kubectl container clusters resize” with the desired number of nodes.
      C. Edit the managed instance group of the cluster and increase the number of VMs by 1.
      D. Edit the managed instance group of the cluster and enable autoscaling.
      

    Hint Answer: A is correct because this resizes the cluster to the desired number of nodes.
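Answer A in command form (cluster and pool names are hypothetical; printed for review rather than executed):

```shell
CLUSTER="prod-cluster"; POOL="default-pool"
# Grow the node pool to the desired node count
CMD="gcloud container clusters resize ${CLUSTER} --node-pool=${POOL} --num-nodes=4"
echo "${CMD}"
```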

    14. You created an update for your application on App Engine. You want to deploy the update without impacting your users. You want to be able to roll back as quickly as possible if it fails. What should you do?

      A. Delete the current version of your application. Deploy the update using the same version identifier as the deleted version.
      B. Notify your users of an upcoming maintenance window. Deploy the update in that maintenance window.
      C. Deploy the update as the same version that is currently running.
      D. Deploy the update as a new version. Migrate traffic from the current version to the new version.
      

    Hint Answer: D is correct because this makes sure there is no downtime and you can roll back the fastest.
    https://cloud.google.com/appengine/docs/admin-api/migrating-splitting-traffic

    15. You have created a Kubernetes deployment, called Deployment-A, with 3 replicas on your cluster. Another deployment, called Deployment-B, needs access to Deployment-A. You cannot expose Deployment-A outside of the cluster. What should you do?

      A. Create a Service of type NodePort for Deployment A and an Ingress Resource for that Service. Have Deployment B use the Ingress IP address.
      B. Create a Service of type LoadBalancer for Deployment A. Have Deployment B use the Service IP address.
      C. Create a Service of type LoadBalancer for Deployment A and an Ingress Resource for that Service. Have Deployment B use the Ingress IP address.
      D. Create a Service of type ClusterIP for Deployment A. Have Deployment B use the Service IP address.
      

    Hint Answer: D is correct because this exposes the service on a cluster-internal IP address. Choosing this method makes the service reachable only from within the cluster.
    https://kubernetes.io/docs/concepts/services-networking/service/
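A ClusterIP Service for the deployment can be created with `kubectl expose`. A sketch (the port numbers are hypothetical; ClusterIP is also the default Service type if `--type` is omitted):

```shell
# Expose deployment-a on a cluster-internal IP only
CMD="kubectl expose deployment deployment-a --port=80 --target-port=8080 --type=ClusterIP"
echo "${CMD}"
```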

    16. You need to estimate the annual cost of running a BigQuery query that is scheduled to run nightly. What should you do?

      A. Use “gcloud query --dry_run” to determine the number of bytes read by the query. Use this number in the Pricing Calculator.
      B. Use “bq query --dry_run” to determine the number of bytes read by the query. Use this number in the Pricing Calculator.
      C. Use “gcloud estimate” to determine the amount billed for a single query. Multiply this amount by 365.
      D. Use “bq estimate” to determine the amount billed for a single query. Multiply this amount by 365.
      

    Hint Answer: B is correct because this is the correct way to estimate the yearly BigQuery querying costs.
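Turning a dry-run byte count into an annual figure is simple arithmetic. A sketch assuming the on-demand rate of $5 per TB scanned and an example dry-run result of 2 TiB (both values are assumptions for illustration):

```shell
# bq query --dry_run --use_legacy_sql=false 'SELECT ...' reports bytes scanned
BYTES_SCANNED=2199023255552              # example: 2 TiB per nightly run
BYTES_PER_TB=1099511627776
PRICE_PER_TB=5                           # assumed on-demand $/TB
NIGHTLY=$(( BYTES_SCANNED / BYTES_PER_TB * PRICE_PER_TB ))
ANNUAL=$(( NIGHTLY * 365 ))
echo "estimated annual cost: \$${ANNUAL}"
```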

    17. You want to find out who in your organization has Owner access to a project called “my-project”. What should you do?

      A. In the Google Cloud Platform Console, go to the IAM page for your organization and apply the filter “Role:Owner”.
      B. In the Google Cloud Platform Console, go to the IAM page for your project and apply the filter “Role:Owner”.
      C. Use “gcloud iam list-grantable-role --project my-project” from your Terminal.
      D. Use “gcloud iam list-grantable-role” from Cloud Shell on the project page.
      

    Hint Answer: B is correct because this shows you the Owners of the project.

    18. You want to create a new role for your colleagues that will apply to all current and future projects created in your organization. The role should have the permissions of the BigQuery Job User and Cloud Bigtable User roles. You want to follow Google’s recommended practices. How should you create the new role?

      A. Use “gcloud iam combine-roles --global” to combine the 2 roles into a new custom role.
      B. For one of your projects, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role. Use “gcloud iam promote-role” to promote the role from a project role to an organization role.
      C. For all projects, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role.
      D. For your organization, in the Google Cloud Platform Console under Roles, select both roles and combine them into a new custom role.
      

    Hint Answer: D is correct because this creates a new role with the combined permissions on the organization level.

    19. You work in a small company where everyone should be able to view all resources of a specific project. You want to grant them access following Google’s recommended practices. What should you do?

      A. Create a script that uses “gcloud projects add-iam-policy-binding” for all users’ email addresses and the Project Viewer role.
      B. Create a script that uses “gcloud iam roles create” for all users’ email addresses and the Project Viewer role.
      C. Create a new Google Group and add all users to the group. Use “gcloud projects add-iam-policy-binding” with the Project Viewer role and Group email address.
      D. Create a new Google Group and add all members to the group. Use “gcloud iam roles create” with the Project Viewer role and Group email address.
      

    Hint Answer: C is correct because Google recommends using groups where possible.
    https://cloud.google.com/sdk/gcloud/reference/iam/
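The group-based grant from answer C is one binding command. A sketch (the group address and project ID are hypothetical; printed for review rather than executed):

```shell
GROUP="all-staff@example.com"
# Grant the viewer role to the whole group in one binding
CMD="gcloud projects add-iam-policy-binding my-project --member=group:${GROUP} --role=roles/viewer"
echo "${CMD}"
```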

    20. You need to verify the assigned permissions in a custom IAM role. What should you do?

      A. Use the GCP Console, IAM section to view the information.
      B. Use the “gcloud init” command to view the information.
      C. Use the GCP Console, Security section to view the information.
      D. Use the GCP Console, API section to view the information.
      

    Hint Answer: A is correct because this is the correct console area to view permissions assigned to a custom role in a particular project.
    https://cloud.google.com/iam/docs/understanding-roles
    https://cloud.google.com/iam/docs/creating-custom-roles

    21. Which of the following services provides real-time messaging?

      A. Cloud Pub/Sub
      B. Big Query
      C. App Engine
      D. Datastore
      

    Answer: A

    22. Which of the following tasks would Nearline Storage be well suited for?

      A. A mounted Linux file system
      B. Image assets for a high traffic website
      C. Frequently read files
      D. Infrequently read data backups
      

    Answer: D
    https://cloud.google.com/storage/docs/storage-classes#comparison_of_storage_classes

    23. Which of the following products will allow you to administer your projects through a browser-based command line?

      A. Cloud Datastore
      B. Cloud Command-line
      C. Cloud Terminal
      D. Cloud Shell
      

    Answer: D
    https://cloud.google.com/shell/

    24. Cloud SQL is based on which database engine?

      A. Microsoft SQL Server
      B. MySQL
      C. Oracle
      D. Informix
      

    Answer: B
    https://cloud.google.com/sql/docs/features#differences

    25. Which of the following products will allow you to perform live debugging without stopping your application?

      A. App Engine Active Debugger (AEAD)
      B. Stackdriver Debugger
      C. Code Inspector
      D. Pause IT
      

    Answer: B
    https://cloud.google.com/debugger/docs/

    26. Which of these options is not a valid Cloud Storage class?

      A. Glacier Storage
      B. Nearline Storage
      C. Coldline Storage
      D. Regional Storage
      

    Answer: A
    https://cloud.google.com/storage/docs/storage-classes

    27. Regarding Cloud Storage, which option allows any user to access a Cloud Storage resource for a limited time, using a specific URL?

      A. Open Buckets
      B. Temporary Resources
      C. Signed URLs
      D. Temporary URLs
      

    Answer: C
    https://cloud.google.com/storage/docs/access-control/signed-urls
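A signed URL is generated from a service-account key with gsutil. A sketch (the key path, bucket, and object names are hypothetical; the 10-minute expiry is an example; printed rather than executed):

```shell
# Generate a URL valid for 10 minutes for one object
CMD="gsutil signurl -d 10m /path/to/key.json gs://example-bucket/report.pdf"
echo "${CMD}"
```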

    28. Of the options given, which is a NoSQL database?

      A. Cloud Datastore
      B. Cloud SQL
      C. All of the given options
      D. Cloud Storage
      

    Answer: A
    https://cloud.google.com/appengine/docs/python/datastore/

    29. Container Engine allows orchestration of what type of containers?

      A. Blue Whale
      B. LXC
      C. BSD Jails
      D. Docker
      

    Answer: D

    30. Regarding Cloud IAM, what type of role(s) are available?

      A. Basic roles and Compiled roles
      B. Primitive roles and Predefined roles
      C. Simple roles
      D. Basic roles and Curated roles
      

    Answer: B
    https://cloud.google.com/iam/docs/overview

    31. Which of the following products will allow you to host a static website?

      A. Cloud SDK
      B. Cloud Endpoints
      C. Cloud Storage
      D. Cloud Datastore
      

    Answer: C

    32. Container Engine is built on which open source system?

      A. Swarm
      B. Kubernetes
      C. Docker Orchestrate
      D. Mesos
      

    Answer: B
    https://cloud.google.com/container-engine/

    33. Cloud Source Repositories provide a hosted version of which version control system?

      A. Git
      B. RCS
      C. SVN
      D. Mercurial
      

    Answer: A
    https://cloud.google.com/source-repositories/docs/

    34. Which of the following is an analytics data warehouse?

      A. Cloud SQL
      B. Big Query
      C. Datastore
      D. Cloud Storage
      

    Answer: B
    https://cloud.google.com/bigquery/

    35. Which service offers the ability to create and run virtual machines?

      A. Google Virtualization Engine
      B. Compute Containers
      C. VM Engine
      D. Compute Engine
      

    Answer: D
    https://cloud.google.com/compute/

    36. Which of the following is not helpful for mitigating the impact of an unexpected failure or reboot?

      A. Use persistent disks
      B. Configure tags and labels
      C. Use startup scripts to re-configure the system as needed
      D. Back up your data
      

    Answer: B
    https://cloud.google.com/compute/docs/tutorials/robustsystems

    37. Which tool allows you to sync data in your Google domain with Active Directory?

      A. Google Cloud Directory Sync (GCDS)
      B. Google Active Directory (GAD)
      C. Google Domain Sync Service
      D. Google LDAP Sync
      

    Answer: A
    https://support.google.com/a/answer/106368?hl=en

    38. Regarding Cloud Storage: which of the following allows for time-limited access to buckets and objects without a Google account?

      A. Signed URLs
      B. gsutil
      C. Single sign-on
      D. Temporary Storage Accounts
      

    Answer: A
    https://cloud.google.com/storage/docs/access-control/signed-urls

    39. Which of the following is a virtual machine instance that can be terminated by Compute Engine without warning?

      A. A preemptible VM
      B. A shared-core VM
      C. A high-cpu VM
      D. A standard VM
      

    Answer: A
    https://cloud.google.com/compute/docs/instances/preemptible
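Creating a preemptible VM only requires one extra flag. A sketch (the instance name, zone, and machine type are hypothetical; printed for review rather than executed):

```shell
# --preemptible marks the instance as interruptible (and cheaper)
CMD="gcloud compute instances create batch-worker-1 --preemptible --zone=us-central1-a --machine-type=e2-standard-4"
echo "${CMD}"
```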

    40. Regarding Compute Engine: What is a managed instance group?

      A. A managed instance group combines existing instances of different configurations into one manageable group
      B. A managed instance group uses an instance template to create identical instances
      C. A managed instance group creates a firewall around instances
      D. A managed instance group is a set of servers used exclusively for batch processing
      

    Answer: B
    https://cloud.google.com/compute/docs/instance-groups/

    41. What type of firewall rule(s) does Google Cloud’s networking support?

      A. deny
      B. allow, deny & filtered
      C. allow
      D. allow & deny
      

    Answer: C
    https://cloud.google.com/compute/docs/networking

    42. How are subnetworks different from legacy networks?

      A. They’re the same, only the branding is different
      B. Each subnetwork controls the IP address range used for instances that are allocated to that subnetwork
      C. With subnetworks IP address allocation occurs at the global network level
      D. Legacy networks are the preferred way to create networks
      

    Answer: B
    https://cloud.google.com/compute/docs/subnetworks

    43. Which of the following is not a valid metric for triggering autoscaling?

      A. Google Cloud Pub/Sub queuing
      B. Average CPU utilization
      C. Stackdriver Monitoring metrics
      D. App Engine Task Queues
      

    Answer: D
    https://cloud.google.com/compute/docs/autoscaler/

    44. Which of the following features makes applying firewall settings easier?

      A. Service accounts
      B. Tags
      C. Metadata
      D. Labels
      

    Answer: B
    https://cloud.google.com/compute/docs/label-or-tag-resources

    45. What option does Cloud SQL offer to help with high availability?

      A. Point-in-time recovery
      B. The AlwaysOn setting
      C. Snapshots
      D. Failover replicas
      

    Answer: D
    https://cloud.google.com/sql/docs/configure-ha#test

    46. Regarding Compute Engine: when executing a startup script on a Linux server, which user does the instance execute the script as?

      A. ubuntu
      B. The Google provided “gceinstance” user
      C. Whatever user you specify in the console
      D. root
      

    Answer: D
    https://cloud.google.com/compute/docs/startupscript

    47. Which of the following methods will not cause a shutdown script to be executed?

      A. When an instance shuts down through a request to the guest operating system
      B. A preemptible instance being terminated
      C. An instances.reset API call
      D. Shutting down via the cloud console
      

    Answer: C
    https://cloud.google.com/compute/docs/shutdownscript

    48. Which type of account would you use in code when you want to interact with Google Cloud services?

      A. Google group
      B. Service account
      C. Code account
      D. Google account
      

    Answer: B
    https://cloud.google.com/iam/docs/overview

    49. Which of the following is not an IAM best practice?

    A. Use primitive roles by default
    B. Treat each component of your application as a separate trust boundary
    C. Grant roles at the smallest scope needed
    D. Restrict who has access to create and manage service accounts in your project

    Answer: A
    https://cloud.google.com/iam/docs/using-iam-securely

    50. Which of the following would not reduce your recovery time in the event of a disaster?

      A. Make it as easy as possible to adjust the DNS record to cut over to your warm standby server.
      B. Replace your warm standby server with a hot standby server.
      C. Use a highly preconfigured machine image for deploying new instances.
      D. Replace your active/active hybrid production environment (on-premises and GCP) with a warm standby server.
      

    Answer: D
    https://cloud.google.com/solutions/disaster-recovery-cookbook

    51. Which of the following is not a best practice for mitigating Denial of Service attacks on your Google Cloud infrastructure?

      A. Block SYN floods using Cloud Router
      B. Isolate your internal traffic from the external world
      C. Scale to absorb the attack
      D. Reduce the attack surface for your GCE deployment
      

    Answer: A
    https://cloud.google.com/files/GCPDDoSprotection-04122016.pdf

    52. Which is the fastest instance storage option that will still be available when an instance is stopped?

      A. Local SSD
      B. Standard Persistent Disk
      C. SSD Persistent Disk
      D. RAM disk
      

    Answer: C
    https://cloud.google.com/compute/docs/disks/

    53. Which of these statements about Microsoft licenses is true?

      A. You can migrate your existing Microsoft application licenses to Compute Engine instances, but not your Microsoft Windows licenses.
      B. You can migrate your existing Microsoft Windows and Microsoft application licenses to Compute Engine instances.
      C. You cannot migrate your existing Microsoft Windows or Microsoft application licenses to Compute Engine instances.
      D. You can migrate your existing Microsoft Windows licenses to Compute Engine instances, but not your Microsoft application licenses.
      

    Answer: B
    https://cloud.google.com/compute/docs/instances/windows/bring-your-own-license/

    1. Which database services support standard SQL queries?

      A. Cloud Bigtable and Cloud SQL
      B. Cloud Spanner and Cloud SQL
      C. Cloud SQL and Cloud Datastore
      D. Cloud SQL
      

    Answer: B
    https://cloud.google.com/products/storage/

    1. Which statement about IP addresses is false?

      A. You are charged for a static external IP address for every hour it is in use.
      B. You are not charged for ephemeral IP addresses.
      C. Google Cloud Engine supports only IPv4 addresses, not IPv6.
      D. You are charged for a static external IP address when it is assigned but unused.
      

    Answer: A
    https://cloud.google.com/compute/all-pricing

    1. Which Google Cloud Platform service requires the least management because it takes care of the underlying infrastructure for you?

      A. Container Engine
      B. Compute Engine
      C. App Engine
      D. Docker containers running on Compute Engine
      

    Answer: C

    1. To ensure that your application will handle the load even if an entire zone fails, what should you do?

      A. Don’t select the “Multizone” option when creating your managed instance group.
      B. Spread your managed instance group over two zones and overprovision by 100%.
      C. Create a regional unmanaged instance group and spread your instances across multiple zones.
      D. Overprovision your regional managed instance group by at least 50%.
      

    Answer: D
    https://cloud.google.com/compute/docs/instance-groups/distributing-instances-with-regional-instance-groups

    1. If you do not grant a user named Bob permission to access a Cloud Storage bucket, but then use an ACL to grant access to an object inside that bucket to Bob, what will happen?

      A. Bob will be able to access all of the objects inside the bucket because he was granted access to at least one object in the bucket.
      B. Bob will be able to access the object because bucket and object ACLs are independent of each other.
      C. Bob will not be able to access the object because he does not have access to the bucket.
      D. It is not possible to grant access to an object when it is inside a bucket for which a user does not have access.
      

    Answer: B
    https://cloud.google.com/storage/docs/best-practices#security

    1. To set up a virtual private network between your office network and Google Cloud Platform and have the routes automatically updated when the network topology changes, what is the minimal number of each type of component you need to implement?

      A. 2 Cloud VPN Gateways and 1 Peer Gateway
      B. 1 Cloud VPN Gateway, 1 Peer Gateway, and 1 Cloud Router
      C. 2 Peer Gateways and 1 Cloud Router
      D. 2 Cloud VPN Gateways and 1 Cloud Router
      

    Answer: B
    https://cloud.google.com/compute/docs/cloudrouter#cloud_router_for_vpns_with_vpc_networks

    1. Which of the following statements about encryption on GCP is not true?

      A. Google Cloud Platform encrypts customer data stored at rest by default.
      B. Each encryption key is itself encrypted with a set of master keys.
      C. If you want to manage your own encryption keys for data on Google Cloud Storage, the only option is Customer-Managed Encryption Keys (CMEK) using Cloud KMS.
      D. Data in Google Cloud Platform is broken into subfile chunks for storage, and each chunk is encrypted at the storage level with an individual encryption key.
      

    Answer: C
    https://cloud.google.com/security/encryption-at-rest/

    1. Which database service requires that you configure a failover replica to make it highly available?

      A. Cloud Spanner
      B. Cloud SQL
      C. BigQuery
      D. Cloud Datastore
      

    Answer: B
    https://cloud.google.com/sql/docs/mysql/configure-ha

    1. Which of these is not a principle you should apply when setting roles and permissions?

      A. Whenever possible, assign roles to groups instead of to individuals.
      B. Grant users the appropriate permissions to facilitate least privilege
      C. Whenever possible, assign primitive roles rather than predefined roles.
      D. Audit all policy changes by checking the Cloud Audit Logs.
      

    Answer: C
    https://cloud.google.com/iam/docs/using-iam-securely

    1. Which of these is not a recommended method of authenticating an application with a Google Cloud service?

      A. Use the gcloud and/or gsutil commands.
      B. Request an OAuth2 access token and use it directly.
      C. Embed the service account’s credentials in the application’s source code.
      D. Use one of the Google Cloud Client Libraries.
      

    Answer: C
    https://cloud.google.com/docs/authentication#token_lifecycle_management

    1. What are two different features that fully isolate groups of VM instances?

      A. Firewall rules and subnetworks
      B. Networks and subnetworks
      C. Subnetworks and projects
      D. Projects and networks
      

    Answer: D
    https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#use_projects_to_fully_isolate_resources

    1. Suppose you have a web server that is working properly, but you can’t connect to its instance VM over SSH. Which of these troubleshooting methods can you use without disrupting production traffic? (Select 3 answers.)

      A. Create a snapshot of the disk and use it to create a new disk; then attach the new disk to a new instance
      B. Use netcat to try to connect to port 22
      C. Access the serial console output
      D. Create a startup script to collect information.
      

    Answer: ABC

    1. To configure Stackdriver to monitor a web server and let you know if it goes down, what steps do you need to take? (Select 2 answers.)

      A. Install the Stackdriver Logging Agent on the web server
      B. Create an alerting policy
      C. Install the Stackdriver Monitoring Agent on the web server
      D. Create an uptime check
      

    Answer: BD

    1. Which of these tools can you use to copy data from AWS S3 to Cloud Storage? (Select 2 answers.)

      A. Cloud Storage Transfer Service
      B. S3 Storage Transfer Service
      C. Cloud Storage Console
      D. gsutil
      

    Answer: AD
    https://cloud.google.com/storage/transfer/

    1. What are two of the actions you can take to troubleshoot a virtual machine instance that won’t start up at all? (Select 2 answers.)

      A. Increase the CPU and memory on the instance by changing the machine type.
      B. Validate that your disk has a valid file system.
      C. Examine your virtual machine instance’s serial port output.
      D. Connect to your virtual machine instance using SSH.
      

    Answer: BC
    https://cloud.google.com/compute/docs/troubleshooting#pdboot

    1. Which statements about application load testing are true? (Select 2 answers.)

      A. You should test at the maximum load that you expect to encounter.
      B. You should test at 50% more than the maximum load that you expect to encounter.
      C. It is not necessary to test sudden increases in traffic since GCP scales seamlessly.
      D. Your load tests should include testing sudden increases in traffic.
      

    Answer: AD
    https://cloud.google.com/appengine/articles/scalability#loadtesting

    1. Which of these statements about resilience testing are true? (Select 2 answers.)

      A. In a resilience test, your application should keep running with little or no downtime.
      B. To test the resilience of an autoscaling instance group, you can terminate a random instance within that group.
      C. In order for an application to survive instance failures, it should not be stateless.
      D. Resilience testing is the same as disaster recovery testing.
      

    Answer: AB
    https://cloudacademy.com/google/managing-your-google-cloud-infrastructure-course/testing.html

    1. Which combination of Stackdriver services will alert you about errors generated by your applications and help you locate the root cause in the code?

      A. Monitoring, Trace, and Debugger
      B. Monitoring and Error Reporting
      C. Debugger and Error Reporting
      D. Alerts and Debugger
      

    Answer: C
    https://cloud.google.com/products/

    1. If you have configured Stackdriver Logging to export logs to BigQuery, but logs entries are not getting exported to BigQuery, what is the most likely cause?

      A. The Cloud Data Transfer Service has not been enabled.
      B. There isn’t a firewall rule allowing traffic between Stackdriver and BigQuery.
      C. Stackdriver Logging does not have permission to write to the BigQuery dataset.
      D. The size of the Stackdriver log entries being exported exceeds the maximum capacity of the BigQuery dataset.
      

    Answer: C
    https://cloud.google.com/logging/docs/export/configure_export_v2#errors_exporting_to_bigquery

    1. You can use Stackdriver to monitor virtual machines on which cloud platforms?

      A. Google Cloud Platform, Microsoft Azure
      B. Google Cloud Platform
      C. Google Cloud Platform, Microsoft Azure, Amazon Web Services
      D. Google Cloud Platform, Amazon Web Services
      

    Answer: D
    https://cloud.google.com/stackdriver/

    1. To minimize the risk of someone changing your log files to hide their activities, which of the following principles would help? (Select 3 answers.)

      A. Restrict usage of the owner role for projects and log buckets.
      B. Require two people to inspect the logs.
      C. Implement object versioning on the log-buckets.
      D. Encrypt the logs using Cloud KMS.
      

    Answer: ABC
    https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#prevent_unwanted_changes_to_logs

    1. If network traffic between one Google Compute Engine instance and another instance is being dropped, what is the most likely cause?

      A. The instances are on a network with low bandwidth.
      B. The TCP keep-alive setting is too short.
      C. The instances are on a default network with no additional firewall rules.
      D. A firewall rule was deleted.
      

    Answer: D
    https://cloud.google.com/compute/docs/troubleshooting#networktraffic

    1. Which of the following practices can help you develop more secure software? (Select 3 answers.)

      A. Penetration tests
      B. Integrating static code analysis tools into your CI/CD pipeline
      C. Encrypting your source code
      D. Peer review of code
      

    Answer: ABD

    1. Which two places hold information you can use to monitor the effects of a Cloud Storage lifecycle policy on specific objects? (Select 2 answers.)

      A. Cloud Storage Lifecycle Monitoring
      B. Expiration time metadata
      C. Access logs
      D. Lifecycle config file
      

    Answer: BC
    https://cloud.google.com/storage/docs/lifecycle#expirationtime

    1. If you have object versioning enabled on a multi-regional bucket, what will the following lifecycle config file do?

      A. Archive objects older than 30 days (the second rule doesn’t do anything)
      B. Delete objects older than 30 days (the second rule doesn’t do anything)
      C. Archive objects older than 30 days and move objects to Coldline Storage after 365 days
      D. Delete objects older than 30 days and move objects to Coldline Storage after 365 days
      

    Answer: D
    https://cloud.google.com/storage/docs/managing-lifecycles#enable
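
    The lifecycle config file this question refers to is not reproduced here. Purely as a hypothetical illustration, a config consistent with answer D (assuming objects start in Multi-Regional storage) could look like:

    ```json
    {
      "lifecycle": {
        "rule": [
          {
            "action": {"type": "Delete"},
            "condition": {"age": 30, "isLive": false}
          },
          {
            "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
            "condition": {"age": 365, "matchesStorageClass": ["MULTI_REGIONAL", "STANDARD"]}
          }
        ]
      }
    }
    ```

    With object versioning enabled, a `Delete` action with `"isLive": false` permanently deletes archived (noncurrent) versions older than 30 days, while the second rule moves objects to Coldline Storage after 365 days.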

    1. Which of the following statements about Stackdriver Trace are true? (Select 2 answers.)

      A. Stackdriver Trace tracks the performance of the virtual machines running the application.
      B. Stackdriver Trace tracks the latency of incoming requests.
      C. Applications in App Engine automatically submit traces to Stackdriver Trace. Applications outside of App Engine need to use the Trace SDK or Trace API.
      D. To make an application work with Stackdriver Trace, you need to add instrumentation code using the Trace SDK or Trace API, even if the application is in App Engine.
      

    Answer: BC
    https://cloud.google.com/trace/docs/reference

    1. You have been asked to select the storage system for the click-data of your company’s large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?

      A. Google Cloud SQL
      B. Google Cloud Bigtable
      C. Google Cloud Storage
      D. Google Cloud Datastore
      

    Answer: B
    https://cloud.google.com/storage-options/
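
    A quick back-of-the-envelope calculation, using only the figures from the question, shows why a store built for high, spiky write throughput such as Bigtable fits this workload:

    ```python
    # Click rates taken from the question, normalized to a per-second basis.
    typical_per_min = 6_000
    burst_per_sec = 8_500

    typical_per_sec = typical_per_min / 60          # sustained rate in clicks/sec
    burst_ratio = burst_per_sec / typical_per_sec   # how much larger bursts are

    print(typical_per_sec)  # sustained clicks per second
    print(burst_ratio)      # burst-to-typical multiplier
    ```

    The bursts are nearly two orders of magnitude above the sustained rate, which rules out write-rate-limited relational options.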

    1. You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading. Where should you store the data?

      A. Google BigQuery
      B. Google Cloud SQL
      C. Google Cloud Bigtable
      D. Google Cloud Storage
      

    Answer: C
    https://cloud.google.com/storage-options/
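
    Again using only the numbers stated in the question, the aggregate write rate can be sketched as:

    ```python
    # Aggregate write rate for the weather-sensor workload described above.
    sensors = 50_000
    readings_per_sensor_per_sec = 10

    writes_per_sec = sensors * readings_per_sensor_per_sec
    print(writes_per_sec)  # total sensor readings written per second
    ```

    Half a million small, timestamped writes per second is the kind of wide-column, time-series workload Bigtable is designed for.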


