EKS Anywhere Curated Packages make it easy to install, configure, and maintain operational components in EKS Anywhere clusters. EKS Anywhere Curated Packages are built, tested, and distributed by AWS to use with EKS Anywhere clusters as part of EKS Anywhere Enterprise Subscriptions.
EKS Anywhere Curated Packages consist of three components: a package controller, a CLI, and curated package artifacts.
Package controller
The package controller is responsible for installing, upgrading, configuring, and removing packages from the cluster. It performs these actions by watching the package and packagebundle custom resources. Moreover, it uses the packagebundle to determine which packages to run and sets appropriate configuration values.
Package custom resources map to helm charts that the package controller uses to install package workloads (such as cluster-autoscaler or metrics-server) on your clusters. The packagebundle object is the mapping between the package name and the specific helm chart and images that will be installed.
The package controller only runs on the management cluster, including single-node clusters, to perform the above outlined responsibilities. However, packages may be installed on both management and workload clusters. For more information, see the guide on installing packages on workload clusters.
Package release information is stored in a package bundle manifest. The package controller will continually monitor and download new package bundles. When a new package bundle is downloaded, it will show up as “available” in the PackageBundleController resource’s status.detail field. A package bundle upgrade always requires manual intervention as outlined in the package bundles docs.
Any changes to a package custom resource will trigger an install, upgrade, configuration, or removal of that package. The package controller will use ECR or a private registry to get all resources, including the bundle, helm charts, and container images.
Packages CLI
The Curated Packages CLI provides the user experience required to manage curated packages. Through the CLI, a user is able to discover, create, delete, and upgrade curated packages in a cluster. These functionalities can be used during and after an EKS Anywhere cluster is created. The CLI provides both imperative and declarative mechanisms to manage curated packages. These packages will be included as part of a packagebundle that will be provided by the EKS Anywhere team.
Whenever a user requests a package creation through the CLI (eksctl anywhere create package), a custom resource is created on the cluster indicating the existence of a new package that needs to be installed. When a user executes a delete operation (eksctl anywhere delete package), the custom resource is removed from the cluster, indicating the need to uninstall the package.
An upgrade through the CLI (eksctl anywhere upgrade packages) upgrades all packages to the latest release.
Please check out Install EKS Anywhere to install the eksctl anywhere CLI on your machine.
The create cluster page for each EKS Anywhere provider describes how to configure and install curated packages at cluster creation time.
Curated packages artifacts
There are three types of build artifacts for packages: the container images, the helm charts, and the package bundle manifests. The container images, helm charts, and bundle manifests for all of the packages are built and stored in the EKS Anywhere ECR repository. Each package may have multiple versions specified in the packages bundle. The bundle references the helm chart tag in the ECR repository, and the helm chart references the container images for the package.
Installing packages on workload clusters
The package controller only runs on the management cluster. It determines which cluster to install your package on based on the namespace specified in the Package resource.
By specifying metadata.namespace: eksa-packages-wk0, the package controller will install the resource on workload cluster wk0.
The pattern for these namespaces is always eksa-packages-<cluster-name>.
By specifying spec.targetNamespace: default, the package controller will install the hello-eks-anywhere package helm chart in the default namespace in cluster wk0.
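For reference, a minimal Package manifest matching this example might look like the following (hello-eks-anywhere and wk0 are the package and cluster names used above; adjust both for your environment):

apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: hello-eks-anywhere
  namespace: eksa-packages-wk0
spec:
  packageName: hello-eks-anywhere
  targetNamespace: default

Applying this manifest to the management cluster causes the package controller to install the hello-eks-anywhere helm chart into the default namespace of workload cluster wk0.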
2 - Prerequisites
Prerequisites for using curated packages
Prerequisites
Before installing any curated packages for EKS Anywhere, do the following:
Check that the cluster Kubernetes version is v1.21 or above. For example, you could run kubectl get cluster <cluster-name> -o yaml | grep -i kubernetesVersion
Check that the version of eksctl anywhere is v0.11.0 or above with the eksctl anywhere version command.
It is recommended that the package controller is only installed on the management cluster.
Check the existence of package controller:
kubectl get pods -n eksa-packages | grep "eks-anywhere-packages"
If the returned result is empty, you need to install the package controller.
Install the package controller if it is not installed:
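For a cluster created without it, an invocation along these lines installs the controller (the cluster spec file path is illustrative):

eksctl anywhere install packagecontroller -f $CLUSTER_NAME.yaml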
Note: This command is temporarily provided to ease integration with curated packages. It will be deprecated in the future.
To request a free trial, talk to your Amazon representative or connect with one here.
Identify AWS account ID for ECR packages registry
The AWS account ID for ECR packages registry depends on the EKS Anywhere Enterprise Subscription.
For EKS Anywhere Enterprise Subscriptions purchased through the AWS console or APIs, the AWS account ID for the ECR packages registry varies depending on the AWS Region in which the Enterprise Subscription was purchased. Reference the table below for a mapping of AWS Regions to ECR packages registries.
AWS Region        Packages Registry Account
us-west-2         346438352937
us-west-1         440460740297
us-east-1         331113665574
us-east-2         297090588151
ap-east-1         804323328300
ap-northeast-1    143143237519
ap-northeast-2    447311122189
ap-south-1        357015164304
ap-south-2        388483641499
ap-southeast-1    654894141437
ap-southeast-2    299286866837
ap-southeast-3    703305448174
ap-southeast-4    106475008004
af-south-1        783635962247
ca-central-1      064352486547
eu-central-1      364992945014
eu-central-2      551422459769
eu-north-1        826441621985
eu-south-1        787863792200
eu-west-1         090204409458
eu-west-2         371148654473
eu-west-3         282646289008
il-central-1      131750224677
me-central-1      454241080883
me-south-1        158698011868
sa-east-1         517745584577
For EKS Anywhere Curated Packages trials or EKS Anywhere Enterprise Subscriptions purchased before October 2023, the AWS account ID for the ECR packages registry is 783794618700. This supports pulling images from the following regions.
Supported AWS Regions:
us-east-2
us-east-1
us-west-1
us-west-2
ap-northeast-3
ap-northeast-2
ap-southeast-1
ap-southeast-2
ap-northeast-1
ca-central-1
eu-central-1
eu-west-1
eu-west-2
eu-west-3
eu-north-1
sa-east-1
After identifying the AWS account ID, export it for further reference. Example:
export ECR_PACKAGES_ACCOUNT=346438352937
Set up authentication to use curated packages
When you have been notified that your account has been given access to curated packages, create an IAM user in your account with a policy that only allows ECR read access to the Curated Packages repository; similar to this:
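A sketch of such a policy is below. The resource scoping is illustrative (substitute the packages registry account for your region); note that ecr:GetAuthorizationToken only supports a wildcard resource:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "arn:aws:ecr:*:<ECR_PACKAGES_ACCOUNT>:repository/*"
    }
  ]
}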
Prepare for using curated packages for airgapped environments
If you are running in an airgapped environment and you set up a local registry mirror, you can copy curated packages from Amazon ECR to your local registry mirror with the following command.
The $BUNDLE_RELEASE_YAML_PATH should be set to the eks-anywhere-downloads/bundle-release.yaml location where you unpacked the tarball from the eksctl anywhere download artifacts command. The $REGISTRY_MIRROR_CERT_PATH and $REGISTRY_MIRROR_URL values must be the same as the registryMirrorConfiguration in your EKS Anywhere cluster specification.
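A sketch of the copy step using those variables (flag names can vary across CLI versions; verify with eksctl anywhere copy packages --help for your release):

eksctl anywhere copy packages \
  --bundle ${BUNDLE_RELEASE_YAML_PATH} \
  --dst-cert ${REGISTRY_MIRROR_CERT_PATH} \
  ${REGISTRY_MIRROR_URL}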
Once the curated packages images are in your local registry mirror, you must configure the curated packages controller to use your local registry mirror post-cluster creation. Configure the defaultImageRegistry and defaultRegistry settings for the PackageBundleController to point to your local registry mirror by applying a YAML definition similar to the one below to your standalone or management cluster. The existing PackageBundleController can be changed, and you do not need to deploy a new PackageBundleController. See the Packages configuration documentation for more information.
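A minimal sketch of such a definition, assuming a management cluster named mgmt and substituting your mirror endpoint (the registry values shown are placeholders):

apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: PackageBundleController
metadata:
  name: mgmt
  namespace: eksa-packages
spec:
  defaultImageRegistry: <REGISTRY_MIRROR_URL>
  defaultRegistry: <REGISTRY_MIRROR_URL>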
Full EKS Anywhere configuration reference for curated packages
This is a generic template with detailed descriptions below for reference. To generate your own package configuration, follow the instructions from the Package Management section and modify it using the descriptions below.
PackageAvailableUpgrade details the package’s available upgrade versions.
Name      Type     Description                                                                     Required
tag       string   Tag is a specific version number or sha256 checksum for the package upgrade.   true
version   string   Version is a human-friendly version name for the package upgrade.              true
4 - Managing the package controller
Installing the package controller
Important
The package controller installation creates a package bundle controller resource for each cluster, thus allowing each to activate a different package bundle version. Ideally, you should never delete this resource, because doing so would mean losing that information, and upon re-installing, the latest bundle would be selected. However, you can always go back to the previous bundle version. For more information, see Managing package bundles.
The package controller is typically installed during cluster creation, but may be disabled intentionally in your cluster.yaml by setting spec.packages.disable to true.
If you created a cluster without the package controller or if the package controller was not properly configured, you may need to manually install it.
Enable the package controller in your cluster.yaml, if it was previously disabled:
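A minimal sketch of the relevant fields:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: mgmt   # your cluster name
spec:
  packages:
    disable: false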
Authenticate docker to the private AWS ECR registry with your AWS credentials. Reference prerequisites to identify the AWS account that houses the EKS Anywhere packages artifacts. Authentication is required to pull images from it.
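For example, using the account ID exported in the prerequisites (us-west-2 shown; substitute the region of your subscription):

aws ecr get-login-password --region us-west-2 \
  | docker login --username AWS --password-stdin ${ECR_PACKAGES_ACCOUNT}.dkr.ecr.us-west-2.amazonaws.com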
Use kubectl to check the eks-anywhere-packages pod is running in your management cluster:
kubectl get pods -n eksa-packages
NAME READY STATUS RESTARTS AGE
eks-anywhere-packages-55bc54467c-jfhgp 1/1 Running 0 21s
Updating the package credentials
You may need to create or update your credentials which you can do with a command like this. Set the environment variables to the proper values before running the command.
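A sketch of that command, assuming the secret is named aws-secret in the eksa-packages namespace (as referenced elsewhere in these docs) and that the controller reads the key names shown; verify the key names against your release:

kubectl delete secret -n eksa-packages aws-secret
kubectl create secret -n eksa-packages generic aws-secret \
  --from-literal=AWS_ACCESS_KEY_ID=${EKSA_AWS_ACCESS_KEY_ID} \
  --from-literal=AWS_SECRET_ACCESS_KEY=${EKSA_AWS_SECRET_ACCESS_KEY} \
  --from-literal=REGION=${EKSA_AWS_REGION}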
EKS Anywhere v0.15.0 (packages controller v0.3.9+) and onwards includes support for the eks-anywhere-packages controller as a self-managed package feature. The package controller now upgrades automatically according to the version specified within the management cluster’s selected package bundle.
For any version prior to v0.3.X, manual steps must be executed to upgrade.
Important
This operation may change your cluster’s selected package bundle to the latest version. However, you can always go back to the previous bundle version. For more information, see Managing package bundles.
To manually upgrade the package controller, do the following:
Migrating EKS Anywhere Curated Packages to latest regional ECR repositories
When you purchase an EKS Anywhere Enterprise Subscription through the Amazon EKS console or APIs, the AWS account that purchased the subscription is automatically granted access to EKS Anywhere Curated Packages in the AWS Region where the subscription is created. If you received trial access to EKS Anywhere Curated Packages or if you have an EKS Anywhere Enterprise Subscription that was created before October 2023, then you need to migrate your EKS Anywhere Curated Packages configuration to use the latest ECR regional repositories. This process causes all the Curated Packages installed on the cluster to roll out and be deployed from the latest ECR regional repositories.
Packages registry account by AWS Region:

AWS Region        Packages Registry Account
us-west-2         346438352937
us-west-1         440460740297
us-east-1         331113665574
us-east-2         297090588151
ap-east-1         804323328300
ap-northeast-1    143143237519
ap-northeast-2    447311122189
ap-south-1        357015164304
ap-south-2        388483641499
ap-southeast-1    654894141437
ap-southeast-2    299286866837
ap-southeast-3    703305448174
ap-southeast-4    106475008004
af-south-1        783635962247
ca-central-1      064352486547
eu-central-1      364992945014
eu-central-2      551422459769
eu-north-1        826441621985
eu-south-1        787863792200
eu-west-1         090204409458
eu-west-2         371148654473
eu-west-3         282646289008
il-central-1      131750224677
me-central-1      454241080883
me-south-1        158698011868
sa-east-1         517745584577
Steps for Migration
Ensure you have an active EKS Anywhere Enterprise Subscription. For more information, refer to Purchase subscriptions.
If the AWS account that created the EKS Anywhere Enterprise Subscription through the Amazon EKS console or APIs and the AWS IAM user credentials for curated packages on your existing cluster are different, you need to update the aws-secret object on the cluster with new credentials. Refer to Updating the package credentials.
Edit the ecr-credential-provider-package package on the cluster and update matchImages with the correct ECR packages registry for the AWS Region where you created your subscription. For example, 346438352937.dkr.ecr.us-west-2.amazonaws.com for us-west-2. Reference the table at the top of this page for a mapping of AWS Regions to ECR packages registries.
This causes the ecr-credential-provider-package pods to roll out, and the kubelet is configured to use AWS credentials for pulling images from the new regional ECR packages registry.
Edit the PackageBundleController object on the cluster and set the defaultImageRegistry and defaultRegistry to point to the ECR package registry for the AWS Region where you created your subscription.
This step causes the package controller to pull down a new package bundle onto the cluster and marks the PackageBundleController as having an upgrade available. Example:
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages my-cluster-name v1-28-160 upgrade available v1-28-274 available
Edit the PackageBundleController object on the cluster and set the activeBundle field to the new bundle number that is available.
This step causes all the packages on the cluster to be reinstalled and pods rolled out from the new registry.
Edit the ecr-credential-provider-package package again and now set the sourceRegistry to point to the ECR package registry for the AWS Region where you created your subscription.
This causes ecr-credential-provider-package to be reinstalled from the new registry.
6 - Managing package bundles
Getting new package bundles
Package bundle resources are created and managed in the management cluster, so first set up the KUBECONFIG environment variable for the management cluster.
The EKS Anywhere package controller periodically checks upstream for the latest package bundle and applies it to your management cluster, except when in an airgapped environment. In that case, you would have to get the package bundle manually from outside of the airgapped environment and apply it to your management cluster.
To view the available packagebundles in your cluster, run the following:
kubectl get packagebundles -n eksa-packages
NAMESPACE NAME STATE
eksa-packages v1-27-125 available
Use kubectl to apply the new package bundle to your cluster to make it available for use.
kubectl apply -f bundle.yaml
The package bundle should now be available for use in the management cluster.
kubectl get packagebundles -n eksa-packages
NAMESPACE NAME STATE
eksa-packages v1-27-125 available
eksa-packages v1-27-126 available
Activating a package bundle
There are multiple packagebundlecontrollers resources in the management cluster, which allows each cluster to activate a different package bundle version. The active package bundle determines the versions of the packages that are installed on that cluster.
To view which package bundle is active for each cluster, use the kubectl command to list the packagebundlecontrollers objects in the management cluster.
kubectl get packagebundlecontrollers -A
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages mgmt v1-27-125 active
eksa-packages w01 v1-27-125 active
To upgrade the active package bundle for the target cluster, edit the packagebundlecontroller object on the cluster and set the activeBundle field to the new bundle number that is available.
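For example, to move cluster w01 to the newer bundle shown earlier:

kubectl edit packagebundlecontroller w01 -n eksa-packages

and set the field:

spec:
  activeBundle: v1-27-126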
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
ADOT Collector is an AWS distribution of the OpenTelemetry Collector, which provides a vendor-agnostic solution to receive, process and export telemetry data.
Cluster Autoscaler is a component that automatically adjusts the size of a Kubernetes Cluster so that all pods have a place to run and there are no unneeded nodes.
The major component of Curated Packages is the package controller. If the container is not running or not running correctly, packages will not be installed. Generally it should be debugged like any other Kubernetes application. The first step is to check that the pod is running.
kubectl get pods -n eksa-packages
You should see at least two pods: the controller pod with status Running and one or more refresher pods with status Completed.
NAME READY STATUS RESTARTS AGE
eks-anywhere-packages-69d7bb9dd9-9d47l 1/1 Running 0 14s
eksa-auth-refresher-w82nm 0/1 Completed 0 10s
The describe command might help to get more detail on why there is a problem:
kubectl describe pods -n eksa-packages
Logs of the controller can be seen in a normal Kubernetes fashion:
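For example (the deployment name matches the pod names shown above):

kubectl logs -n eksa-packages deployment/eks-anywhere-packages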
To get the general state of the package controller, run the following command:
kubectl get packages,packagebundles,packagebundlecontrollers -A
You should see an active packagebundlecontroller and an available bundle. The packagebundlecontroller should indicate the active bundle. It may take a few minutes to download and activate the latest bundle. In this example, both packages are installed at the latest version, and one packagebundlecontroller has an upgrade available.
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-sammy package.packages.eks.amazonaws.com/my-hello hello-eks-anywhere 42h installed 0.1.1-bc7dc6bb874632972cd92a2bca429a846f7aa785 0.1.1-bc7dc6bb874632972cd92a2bca429a846f7aa785 (latest)
eksa-packages-tlhowe package.packages.eks.amazonaws.com/my-hello hello-eks-anywhere 44h installed 0.1.1-083e68edbbc62ca0228a5669e89e4d3da99ff73b 0.1.1-083e68edbbc62ca0228a5669e89e4d3da99ff73b (latest)
NAMESPACE NAME STATE
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-21-83 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-70 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-81 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-82 available
eksa-packages packagebundle.packages.eks.amazonaws.com/v1-23-83 available
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/sammy v1-23-70 upgrade available v1-23-83 available
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/tlhowe v1-21-83 active active
Package controller not running
If you do not see a pod or various resources for the package controller, it may be that it is not installed.
No resources found in eksa-packages namespace.
Most likely the cluster was created with an older version of the EKS Anywhere CLI. Curated packages became generally available with v0.11.0. Use the eksctl anywhere version command to verify you are running a new enough release; on a cluster created with an older release, you can use the eksctl anywhere install packagecontroller command to install the package controller.
Error: this command is currently not supported
Error: this command is currently not supported
Curated packages became generally available with version v0.11.0. Use the version command to make sure you are running version v0.11.0 or later:
eksctl anywhere version
Error: cert-manager is not present in the cluster
Error: curated packages cannot be installed as cert-manager is not present in the cluster
This is most likely caused by an attempt to install curated packages on a workload cluster with an eksctl anywhere version older than v0.12.0. In order to use packages on workload clusters, please upgrade eksctl anywhere to v0.12 or later. The package manager will remotely manage packages on the workload cluster from the management cluster.
Package registry authentication
Error: ImagePullBackOff on Package
If a package fails to start with ImagePullBackOff:
NAME READY STATUS RESTARTS AGE
generated-harbor-jobservice-564d6fdc87 0/1 ImagePullBackOff 0 2d23h
If a package pod cannot pull images, you may not have your AWS credentials set up properly. Verify that your credentials are working properly.
Make sure you are authenticated with the AWS CLI. Use the credentials you set up for packages. These credentials should have limited capabilities:
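For example, a quick sanity check of which identity the CLI is using:

aws sts get-caller-identity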
You may need to create or update your credentials which you can do with a command like this. Set the environment variables to the proper values before running the command.
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster. While interacting with the package resources by the following commands for a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster.
Package manager is not managing packages on workload cluster
If the package manager is not managing packages on a workload cluster, make sure the management cluster has various resources for the workload cluster:
kubectl get packages,packagebundles,packagebundlecontrollers -A
You should see a PackageBundleController for the workload cluster named with the name of the workload cluster and the status should be set. There should be a namespace for the workload cluster as well:
kubectl get ns | grep eksa-packages
Create a PackageBundleController for the workload cluster if it does not exist (where billy here is the cluster name):
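A minimal sketch of that resource:

apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: PackageBundleController
metadata:
  name: billy
  namespace: eksa-packages

If the management cluster cannot find the workload cluster's kubeconfig secret, you may instead see a state like the following: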
NAMESPACE NAME ACTIVEBUNDLE STATE DETAIL
eksa-packages packagebundlecontroller.packages.eks.amazonaws.com/billy disconnected initializing target client: getting kubeconfig for cluster "billy": Secret "billy-kubeconfig" not found
In the example above, the secret does not exist, which may mean that the management cluster is not managing the cluster, the PackageBundleController name is wrong, or the secret was deleted.
This also may happen if the management cluster cannot communicate with the workload cluster or the workload cluster was deleted, although the detail would be different.
Error: the server doesn’t have a resource type “packages”
All packages are remotely managed by the management cluster, and packages, packagebundles, and packagebundlecontrollers resources are all deployed on the management cluster. Please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster while interacting with package-related resources.
Error: packagebundlecontrollers.packages.eks.amazonaws.com “clusterName” not found
This error appears when a package command is run against a cluster that is not managed by the management cluster. To get a list of the clusters managed by the management cluster, run the following command:
eksctl anywhere get packagebundlecontroller
NAME ACTIVEBUNDLE STATE DETAIL
billy v1-21-87 active
There will be one packagebundlecontroller for each cluster that is being managed. The only valid cluster name in the above example is billy.
10 - What's New
New EKS Anywhere Curated Packages releases, features, and fixes
10.1 - Changelog
Changelog for Curated packages release
Package Bundle Release (12-26-2024)
Changed
cert-manager 1.15.3 to 1.16.1
Updated helm patch to include properties for eksa-packages in values.schema.json#4171
cluster-autoscaler 9.43.0 to 9.43.2
credential-provider-package 0.4.4 to 0.4.5
Added support to update both legacy and default path for kubelet-extra-args for ubuntu#1177
As of this release, the cert-manager ctl is no longer part of the main repo; it has been broken out into its own project. As such, the startupapicheck job uses a new OCI image called startupapicheck. If you run in an environment in which images cannot be pulled, be sure to include the new image.
cluster-autoscaler 9.37.0 to 9.43.0
harbor 2.11.0 to 2.11.1
metrics-server 0.7.1 to 0.7.2
prometheus 2.54.0 to 2.54.1
10.2 - Release Alerts
SNS Alerts for EKS Anywhere Curated Packages releases
EKS Anywhere uses Amazon Simple Notification Service (SNS) to notify you of the availability of a new release of Curated Packages.
It is recommended that your clusters are kept up to date with the latest EKS Anywhere Curated Packages.
Please follow the instructions below to subscribe to SNS notifications.
Sign in to your AWS Account
Select us-east-1 region
Go to the SNS Console
In the left navigation pane, choose “Subscriptions”
On the Subscriptions page, choose “Create subscription”
On the Create subscription page, in the Details section enter the following information
In a few minutes, you will receive an email asking you to confirm the subscription
Click the confirmation link in the email
11 - ADOT Configuration
OpenTelemetry Collector provides a vendor-agnostic solution to receive, process and export telemetry data. It removes the need to run, operate, and maintain multiple agents/collectors. ADOT Collector is an AWS-supported distribution of the OpenTelemetry Collector.
Best Practice
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
We include Test sections below for critical steps to help users validate that they have completed each procedure properly. We recommend going through them in sequence as checkpoints of your progress.
We recommend creating all resources in the us-west-2 region.
Create an AMP workspace
An AMP workspace is created to receive metrics from the ADOT package and respond to query requests from AMG. Follow the steps below to complete the setup:
In the AMP console, choose region us-west-2 from the top right corner.
Click on Create to create a workspace.
Type a workspace alias (adot-amp-test as an example), and click on Create workspace.
Make notes of the URLs displayed for Endpoint - remote write URL and Endpoint - query URL. You’ll need them when you configure your ADOT package to remote write metrics to this workspace and when you query metrics from this workspace. Make sure the workspace’s Status shows Active before proceeding to the next step.
For additional options (e.g., through the CLI) and configurations (e.g., adding a tag) to create an AMP workspace, refer to the AWS AMP create a workspace guide.
Create a cluster with IRSA
To enable ADOT pods that run in EKS Anywhere clusters to authenticate with AWS services, a user needs to set up IRSA at cluster creation. EKS Anywhere cluster spec for Pod IAM gives step-by-step guidance on how to do so. There are a few things to keep in mind while working through the guide:
make sure the service account is created in the same namespace as the ADOT package (which is controlled by the package definition file with field spec.targetNamespace);
take a note of the service account that gets created in this step as it will be used in ADOT package installation;
add an annotation eks.amazonaws.com/role-arn: <role-arn> to the created service account.
By default, the service account is installed in the default namespace with name pod-identity-webhook, and the annotation eks.amazonaws.com/role-arn: <role-arn> is not added automatically.
IRSA Set Up Test
To ensure IRSA is set up properly in the cluster, a user can create an awscli pod for testing.
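A sketch of such a test, assuming the IRSA-enabled service account is pod-identity-webhook in the default namespace (adjust both to your setup):

# Start an interactive pod that runs under the IRSA-enabled service account
kubectl run awscli -it --rm --restart=Never \
  --image=amazon/aws-cli \
  --overrides='{"spec": {"serviceAccountName": "pod-identity-webhook"}}' \
  --command -- bash
# Inside the pod, verify the assumed role and AMP access
aws sts get-caller-identity
aws amp list-workspaces --region=us-west-2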
If the pod has issues listing AMP workspaces, revisit the IRSA setup guidance before proceeding to the next step.
Exit the pod:
exit
Install the ADOT package
The ADOT package will be created with three components:
the Prometheus Receiver, which is designed to be a drop-in replacement for a Prometheus server and is capable of scraping metrics from microservices instrumented with the Prometheus client library;
the Prometheus Remote Write Exporter, which employs the remote write feature and sends metrics to AMP for long-term storage;
the Sigv4 Authentication Extension, which enables ADOT pods to authenticate to AWS services.
Follow steps below to complete the ADOT package installation:
Update the following config file. Review comments carefully and replace everything that is wrapped in a <> tag. Note this configuration aims to mimic the Prometheus community helm chart. A user can tailor the scrape targets further by modifying the receiver section below. Refer to the ADOT package spec for additional explanations of each section.
Bind additional roles to the service account pod-identity-webhook (created at step Create a cluster with IRSA) by applying the following file in the cluster (using kubectl apply -f <file-name>). This is because pod-identity-webhook by design does not have sufficient permissions to scrape all Kubernetes targets listed in the ADOT config file above. If modifications are made to the Prometheus Receiver, update the file below to add or remove permissions before applying it.
Clusterrole and clusterrolebinding config:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-prometheus-role
rules:
  - apiGroups:
      - ""
    resources:
      - nodes
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - nonResourceURLs:
      - /metrics
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-prometheus-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-prometheus-role
subjects:
  - kind: ServiceAccount
    name: pod-identity-webhook # replace with name of the service account created at step Create a cluster with IRSA
    namespace: default # replace with namespace where the service account was created at step Create a cluster with IRSA
Use the ADOT package config file defined above to complete the ADOT installation. Refer to the ADOT installation guide for details.
ADOT Package Test
To ensure the ADOT package is installed correctly in the cluster, a user can perform the following tests.
Check pod logs
Check ADOT pod logs using kubectl logs <adot-pod-name> -n <namespace>. It should display logs similar to below.
...
2022-09-30T23:22:59.184Z info service/telemetry.go:103 Setting up own telemetry...
2022-09-30T23:22:59.184Z info service/telemetry.go:138 Serving Prometheus metrics {"address": "0.0.0.0:8888", "level": "basic"}
2022-09-30T23:22:59.185Z info components/components.go:30 In development component. May change in the future. {"kind": "exporter", "data_type": "metrics", "name": "logging", "stability": "in development"}
2022-09-30T23:22:59.186Z info extensions/extensions.go:42 Starting extensions...
2022-09-30T23:22:59.186Z info extensions/extensions.go:45 Extension is starting... {"kind": "extension", "name": "health_check"}
2022-09-30T23:22:59.186Z info healthcheckextension@v0.58.0/healthcheckextension.go:44 Starting health_check extension {"kind": "extension", "name": "health_check", "config": {"Endpoint":"0.0.0.0:13133","TLSSetting":null,"CORS":null,"Auth":null,"MaxRequestBodySize":0,"IncludeMetadata":false,"Path":"/","CheckCollectorPipeline":{"Enabled":false,"Interval":"5m","ExporterFailureThreshold":5}}}
2022-09-30T23:22:59.186Z info extensions/extensions.go:49 Extension started. {"kind": "extension", "name": "health_check"}
2022-09-30T23:22:59.186Z info extensions/extensions.go:45 Extension is starting... {"kind": "extension", "name": "memory_ballast"}
2022-09-30T23:22:59.187Z info ballastextension/memory_ballast.go:52 Setting memory ballast {"kind": "extension", "name": "memory_ballast", "MiBs": 0}
2022-09-30T23:22:59.187Z info extensions/extensions.go:49 Extension started. {"kind": "extension", "name": "memory_ballast"}
2022-09-30T23:22:59.187Z info extensions/extensions.go:45 Extension is starting... {"kind": "extension", "name": "sigv4auth"}
2022-09-30T23:22:59.187Z info extensions/extensions.go:49 Extension started. {"kind": "extension", "name": "sigv4auth"}
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:74 Starting exporters...
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "metrics", "name": "logging"}
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "metrics", "name": "logging"}
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite"}
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "metrics", "name": "prometheusremotewrite"}
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:86 Starting processors...
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:90 Processor is starting... {"kind": "processor", "name": "batch/metrics", "pipeline": "metrics"}
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:94 Processor started. {"kind": "processor", "name": "batch/metrics", "pipeline": "metrics"}
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:98 Starting receivers...
2022-09-30T23:22:59.187Z info pipelines/pipelines.go:102 Receiver is starting... {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2022-09-30T23:22:59.187Z info kubernetes/kubernetes.go:326 Using pod service account via in-cluster config {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "discovery": "kubernetes"}
2022-09-30T23:22:59.188Z info kubernetes/kubernetes.go:326 Using pod service account via in-cluster config {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "discovery": "kubernetes"}
2022-09-30T23:22:59.188Z info kubernetes/kubernetes.go:326 Using pod service account via in-cluster config {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "discovery": "kubernetes"}
2022-09-30T23:22:59.188Z info kubernetes/kubernetes.go:326 Using pod service account via in-cluster config {"kind": "receiver", "name": "prometheus", "pipeline": "metrics", "discovery": "kubernetes"}
2022-09-30T23:22:59.189Z info pipelines/pipelines.go:106 Receiver started. {"kind": "receiver", "name": "prometheus", "pipeline": "metrics"}
2022-09-30T23:22:59.189Z info healthcheck/handler.go:129 Health Check state change {"kind": "extension", "name": "health_check", "status": "ready"}
2022-09-30T23:22:59.189Z info service/collector.go:215 Starting aws-otel-collector... {"Version": "v0.21.1", "NumCPU": 2}
2022-09-30T23:22:59.189Z info service/collector.go:128 Everything is ready. Begin running and processing data.
...
Check AMP endpoint using awscurl
Use awscurl commands below to check if AMP received the metrics data sent by ADOT. The awscurl tool is a curl like tool with AWS Signature Version 4 request signing. The command below should return a status code success.
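A sketch of such a check, assuming AMP_QUERY_URL holds the query URL you noted when creating the workspace:

awscurl -X POST --region us-west-2 --service aps "${AMP_QUERY_URL}?query=up"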
Create an AMG workspace
In the Workspace details window, for Workspace name, enter a name for the workspace.
In the config settings window, choose Authentication access by AWS IAM Identity Center, and Permission type of Service managed.
In the IAM permission access setting window, choose Current account access, and Amazon Managed Service for Prometheus as data source.
Review all settings and click on Create workspace.
Once the workspace shows a Status of Active, you can access it by clicking the Grafana workspace URL. Click on Sign in with AWS IAM Identity Center to finish the authentication.
Follow steps below to add the AMP workspace to AMG.
Click on the config (gear) icon on the left navigation bar, select Data sources, then choose Prometheus as the Data source.
Configure Prometheus data source with the following details:
Name: AMPDataSource as an example.
URL: add the AMP workspace remote write URL without the api/v1/remote_write at the end.
SigV4 auth: enable.
Under the SigV4 Auth Details section:
Authentication Provider: choose Workspace IAM Role;
Default Region: choose us-west-2 (where you created the AMP workspace)
Select Save and test; a notification “Data source is working” should be displayed.
Import a dashboard template by clicking on the plus (+) sign on the left navigation bar. In the Import screen, type 3119 in the Import via grafana.com textbox and select Import.
From the dropdown at the bottom, select AMPDataSource and select Import.
A Kubernetes cluster monitoring (via Prometheus) dashboard will be displayed.
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Create the namespace
(If overriding targetNamespace, change observability to the value of targetNamespace)
kubectl create namespace observability
Install adot
eksctl anywhere create packages -f adot.yaml
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-adot adot 19h installed 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b 0.25.0-c26690f90d38811dbb0e3dad5aea77d1efa52c7b (latest)
Update
To update the package configuration, update the adot.yaml file and run the following command:
eksctl anywhere apply package -f adot.yaml
Upgrade
ADOT will automatically be upgraded when a new bundle is activated.
Specifies Collector receiver, processor, exporter, and extensions configurations. Refer to aws-otel-collector for full details. Note the EKS Anywhere ADOT package version matches the exact aws-otel-collector version.
Specifies tasks that do not involve processing telemetry data. Examples of extensions include health monitoring, service discovery, and data forwarding. Extensions are optional.
Specifies what components are enabled in the Collector based on the configuration found in the receivers, processors, exporters, and extensions sections. If a component is configured, but not defined within the service section, then it is not enabled.
The default config.config deploys an ADOT Collector with the metrics pipeline, which includes the otlp and prometheus receivers and the logging exporter. See the specification below for details.
The cert-manager package adds certificates and certificate issuers as resource types in Kubernetes clusters, and simplifies the process of obtaining, renewing and using those certificates.
Best Practice
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install on workload cluster
NOTE: The cert-manager package can only be installed on a workload cluster.
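A minimal cert-manager.yaml sketch, following the Package manifest conventions used elsewhere in these docs (w01 is an assumed workload cluster name):

apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-cert-manager
  namespace: eksa-packages-w01
spec:
  packageName: cert-manager
  targetNamespace: cert-manager

Install it with eksctl anywhere create packages -f cert-manager.yaml, then validate the installation: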
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-cert-manager cert-manager 15s installed 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f 1.9.1-dc0c845b5f71bea6869efccd3ca3f2dd11b5c95f (latest)
Update
To update the package configuration, update the cert-manager.yaml file and run the following command:
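Following the update pattern used for other packages in these docs:

eksctl anywhere apply package -f cert-manager.yaml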
The following table lists the configurable parameters of the cert-manager package spec and the default values.
Parameter                 Description                                                                         Default
General
namespace                 The namespace to use for installing cert-manager package                           cert-manager
imagePullPolicy           The image pull policy                                                               IfNotPresent
global
global.logLevel           The log level: integer from 0-6                                                     2
Webhook
webhook.timeoutSeconds    The time in seconds to wait for the webhook to connect with the kube-api server     0
13 - Cluster Autoscaler Configuration
Cluster Autoscaler is a component that automatically adjusts the size of a Kubernetes Cluster so that all pods have a place to run and there are no unneeded nodes.
Configuration options for Cluster Autoscaler
13.1 - Cluster Autoscaler
Install/upgrade/uninstall Cluster Autoscaler
If you have not already done so, make sure your EKS Anywhere cluster meets the package prerequisites.
Ensure you have configured at least one worker node group in your cluster specification to enable autoscaling as outlined in Autoscaling configuration.
Cluster Autoscaler only works on node groups with an autoscalingConfiguration set:
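A sketch of the relevant worker node group fields in the cluster specification (the node group name and counts are illustrative):

workerNodeGroupConfigurations:
  - name: md-0
    autoscalingConfiguration:
      minCount: 1
      maxCount: 5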
Add the desired configuration to cluster-autoscaler.yaml. See configuration options for all configuration options and their default values. See below for an example package file configuring a Cluster Autoscaler package.
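A sketch of such a package file, assuming the clusterapi cloud provider with auto-discovery by cluster name (the config values are illustrative; consult the configuration options for your version):

apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: cluster-autoscaler
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: cluster-autoscaler
  targetNamespace: kube-system
  config: |-
    cloudProvider: "clusterapi"
    autoDiscovery:
      clusterName: "<cluster-name>"

Install it with eksctl anywhere create packages -f cluster-autoscaler.yaml, then verify: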
eksctl anywhere get packages --cluster <cluster-name>
NAMESPACE NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
eksa-packages-mgmt-v-vmc cluster-autoscaler cluster-autoscaler 18h installed 9.21.0-1.21-147e2a701f6ab625452fe311d5c94a167270f365 9.21.0-1.21-147e2a701f6ab625452fe311d5c94a167270f365 (latest)
To verify that autoscaling works, apply the deployment below. You must continue scaling pods until the deployment has pods in a pending state.
This is when Cluster Autoscaler will begin to autoscale your machine deployment.
This process may take a few minutes.
Update Worker Node Group Autoscaling Configuration
It is possible to change the autoscaling configuration of a worker node group by updating the autoscalingConfiguration in your cluster specification and running a cluster upgrade.
Upgrade
The Cluster Autoscaler can be upgraded by updating the PackageBundleController's activeBundle field to a newer version.
The curated packages bundle contains the SHAs of the images and helm charts associated with a particular package. When a new version is activated, the Package Controller will reconcile all active packages to their newest versions as defined in the bundle.
The curated packages controller automatically polls the bundle repository for new bundle resources, but the activeBundle field on the PackageBundleController resource must be updated before a new bundle takes effect and upgrades the resources.
Uninstall
To uninstall Cluster Autoscaler, delete the package:
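Following the delete pattern used elsewhere in these docs:

eksctl anywhere delete package --cluster <cluster-name> cluster-autoscaler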
Path to kubeconfig for connecting to Cluster API Management Cluster, only used if clusterAPIMode=kubeconfig-kubeconfig or incluster-kubeconfig Example:
extraVolumeSecrets    Additional volumes to mount from Secrets. Example: extraVolumeSecrets: {}    {}
14 - Credential Provider Package Configuration
Credential provider package provides a solution to authenticate with private Amazon Elastic Container Registry by utilizing the kubelet image credential provider.
Best Practice
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
Configuration options for Credential-Provider-Package
14.1 - Credential Provider Package with IAM Roles Anywhere
This tutorial demonstrates how to configure the credential provider package to authenticate using IAM Roles Anywhere to pull from a private AWS Elastic Container Registry (ECR).
IAM Roles Anywhere enables workloads outside of AWS to access AWS resources by using X.509 digital certificates to obtain temporary AWS credentials. A trust anchor is used to reference a certificate authority with IAM Roles Anywhere. For this use case, the Kubernetes Cluster CA can be registered and each kubelet client’s x509 cert can be used to authenticate to get temporary AWS credentials.
For setting up the certificate authority later, you will need to obtain your cluster's CA. This can be obtained by:
# Assuming CLUSTER_NAME and KUBECONFIG are set:
kubectl get secret -n eksa-system ${CLUSTER_NAME}-ca -o yaml | yq '.data."tls.crt"' | base64 -d
A role should be created to allow read access for curated packages. This role can be extended to include private registries that you would also like to pull from into your cluster. A sample policy for curated packages would be:
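A sketch of such a policy (broad ECR read access; tighten the resource scoping to your repositories as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:GetDownloadUrlForLayer"
      ],
      "Resource": "*"
    }
  ]
}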
Next create a trust anchor and profile. The trust anchor will be a reference to the CA certificate from step 1, and the profile should point to the role created in step 2. See here for instructions on creating the trust anchor and profile.
Create a secret that will be referenced by the credential-provider-package to authenticate the kubelet with ECR.
# Set PROFILE_ARN, ROLE_ARN, and TRUST_ANCHOR_ARN obtained in the previous step
# Set AWS_REGION to the region to pull images from
# This will create a file credfile which will then be turned into a secret
cat << EOF >> credfile
[default]
region = $AWS_REGION
credential_process = aws_signing_helper credential-process --certificate /var/lib/kubelet/pki/kubelet-client-current.pem --private-key /var/lib/kubelet/pki/kubelet-client-current.pem --profile-arn $PROFILE_ARN --role-arn $ROLE_ARN --trust-anchor-arn $TRUST_ANCHOR_ARN
EOF

# Create the secret; for this example the secret name aws-config is used and the package will be installed in eksa-packages
kubectl create secret generic aws-config --from-file=config=credfile -n eksa-packages
Either edit the existing package or delete and create a new credential-provider-package that points towards the new secret. For more information on specific configuration options, refer to the installation guide for details. The example below changes the default secret name from aws-secret to the newly created aws-config. It also changes the match images to pull from multiple regions as well as across multiple accounts. Make sure to change cluster-name to match your CLUSTER_NAME.
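A sketch of such a package definition (the exact config schema may differ by package version; the account IDs and regions in matchImages are placeholders):

apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-credential-provider-package
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: credential-provider-package
  targetNamespace: eksa-packages
  config: |-
    credential:
      - secretName: aws-config
        matchImages:
          - <account-1>.dkr.ecr.us-west-2.amazonaws.com
          - <account-2>.dkr.ecr.us-east-1.amazonaws.com
        defaultCacheDuration: "5h"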
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
By default, an instance of this package is installed with the controller to help facilitate authentication for other packages. The following instructions apply in case you want to tweak the default values.
Add the desired configuration to credential-provider-package.yaml. Please see complete configuration options for all configuration options and their default values.
Example default package using IAM User Credentials installed with the controller
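A sketch of what that default installation looks like, assuming the aws-secret created at cluster creation (the config schema may differ by package version):

apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-credential-provider-package
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: credential-provider-package
  targetNamespace: eksa-packages
  config: |-
    credential:
      - secretName: aws-secret
        matchImages:
          - 783794618700.dkr.ecr.us-west-2.amazonaws.com
        defaultCacheDuration: "5h"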
Create the namespace (if not installing to eksa-packages).
If you are overriding targetNamespace, change eksa-packages to the value of targetNamespace.
In this example, the credential provider will use the secret provided in aws-secret (created automatically on cluster creation) to authenticate to the repository from which curated package images are pulled. Tolerations were also added so that the control plane nodes would also be configured with authentication.
The secret can exist in two forms: either a base64 encoding of a credential config or individual keys for fields.
matchImages             List of strings used to match against images. See here for more info. Example to match against any account across multiple regions for ECR: "*.dkr.ecr.*.amazonaws.com"    "[]"
defaultCacheDuration    Duration the kubelet will cache credentials in-memory. For ECR it is recommended to keep this value less than 12 hours.    "5h"
14.4 - v0.4.4
Configuring Credential Provider Package in EKS Anywhere package spec
In this example, the credential provider will use the secret provided in aws-secret (created automatically on cluster creation) to authenticate to the repository from which curated package images are pulled. Tolerations were also added so that the control plane nodes would also be configured with authentication.
The secret can exist in two forms: either a base64 encoding of a credential config or individual keys for fields.
matchImages             List of strings used to match against images. See here for more info. Example to match against any account across multiple regions for ECR: "*.dkr.ecr.*.amazonaws.com"    "[]"
defaultCacheDuration    Duration the kubelet will cache credentials in-memory. For ECR it is recommended to keep this value less than 12 hours.    "5h"
14.5 - v0.4.5
Configuring Credential Provider Package in EKS Anywhere package spec
In this example, the credential provider will use the secret provided in aws-secret (created automatically on cluster creation) to authenticate to the repository from which curated package images are pulled. Tolerations were also added so that the control plane nodes would also be configured with authentication.
The secret can exist in two forms: either a base64 encoding of a credential config or individual keys for fields.
matchImages             List of strings used to match against images. See here for more info. Example to match against any account across multiple regions for ECR: "*.dkr.ecr.*.amazonaws.com"    "[]"
defaultCacheDuration    Duration the kubelet will cache credentials in-memory. For ECR it is recommended to keep this value less than 12 hours.    "5h"
15 - Emissary Configuration
Emissary Ingress is an open-source Kubernetes-native API Gateway + Layer 7 load balancer + Kubernetes Ingress built on Envoy Proxy.
Best Practice
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
The emissary-apiext service has a known issue where its TLS certificate expires after one year and does not auto-renew. To resolve this, manually renew the certificate by running kubectl delete --all secrets --namespace=emissary-system followed by kubectl rollout restart deploy/emissary-apiext -n emissary-system prior to certificate expiry.
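For convenience, those renewal steps as commands:

kubectl delete --all secrets --namespace=emissary-system
kubectl rollout restart deploy/emissary-apiext -n emissary-system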
Configuring Emissary Ingress in EKS Anywhere package spec
Parameter                 Description                                                                                                                          Default
General
hostNetwork               Whether Emissary will use the host network, useful for on-premise setups. Example: hostNetwork: false                                false
createDefaultListeners    Whether Emissary should be created with default listeners, HTTP on port 8080 and HTTPS on port 8443. Example: createDefaultListeners: false    false
replicaCount              Replica count for Emissary to deploy. Example: replicaCount: 2                                                                      2
daemonSet                 Whether to create Emissary as a DaemonSet instead of a Deployment. Example: daemonSet: false                                        false
15.3 - v3.3.0
Emissary version 3.3.0 has decoupled the CRD portion of the package, and now supports installing multiple instances of the emissary package in the same cluster.
Configuring Emissary Ingress in EKS Anywhere package spec
Parameter                 Description                                                                                                                          Default
General
hostNetwork               Whether Emissary will use the host network, useful for on-premise setups. Example: hostNetwork: false                                false
createDefaultListeners    Whether Emissary should be created with default listeners, HTTP on port 8080 and HTTPS on port 8443. Example: createDefaultListeners: false    false
replicaCount              Replica count for Emissary to deploy. Example: replicaCount: 2                                                                      2
daemonSet                 Whether to create Emissary as a DaemonSet instead of a Deployment. Example: daemonSet: false                                        false
15.4 - v3.9.1
Emissary version 3.9.1 has decoupled the CRD portion of the package, and now supports installing multiple instances of the emissary package in the same cluster.
Configuring Emissary Ingress in EKS Anywhere package spec
Parameter
Description
Default
General
hostNetwork
Whether Emissary will use the host network, useful for on-premise setup . Example:
hostNetwork: false
false
createDefaultListeners
Whether Emissary should be created with default listeners, HTTP on port 8080 and HTTPS on port 8443. Example:
createDefaultListeners: false
false
replicaCount
Replica count for Emissary to deploy. Example:
replicaCount: 2
2
daemonSet
Whether to create Emissary as a DaemonSet instead of a Deployment. Example:
daemonSet: false
false
16 - Harbor Configuration
Harbor
is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Having a registry closer to the build and run environment can improve the image transfer efficiency. Harbor supports replication of images between registries, and also offers advanced security features such as user management, access control and activity auditing. For EKS Anywhere deployments, common use cases for Harbor include:
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Install
Set the KUBECONFIG environment variable to use the config of the management cluster
export KUBECONFIG=<path to management cluster kubeconfig>
All configuration options are listed in dot notation (e.g., expose.tls.enabled) in this doc, but they have to be transformed into hierarchical structures when specified in the config section of the YAML spec.
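For example, a dot-notation option such as expose.tls.enabled maps to nested keys in the config section:
config: |-
  expose:
    tls:
      enabled: true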
The Harbor web portal is exposed through NodePort by default; the default port number is 30003 with TLS enabled and 30002 with TLS disabled.
TLS is enabled by default for connections to the Harbor web portal, and a secret resource named harbor-tls-secret is required for that purpose. It can be provisioned through cert-manager or manually with the following command using a self-signed certificate:
kubectl create secret tls harbor-tls-secret --cert=[path to certificate file] --key=[path to key file] -n eksa-packages
secretKey has to be set as a string of 16 characters for encryption.
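Combining the notes above, a minimal harbor.yaml might look like the following sketch (the hostname is illustrative, and secretKey must be replaced with your own 16-character string):
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: my-harbor
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: harbor
  config: |-
    externalURL: https://harbor.eksa.demo:30003
    secretKey: "use-a-secret-key"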
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-harbor harbor 5m34s installed v2.5.1 v2.5.1 (latest)
The Harbor web portal is accessible at whatever externalURL is set to. See complete configuration options
for all default values.
Update
To update the package configuration, update the harbor.yaml file and run the following command:
eksctl anywhere apply package -f harbor.yaml
Upgrade
Note
New versions of software packages will be automatically downloaded but not automatically installed. You can always manually run eksctl to check and install updates.
Verify a new bundle is available
eksctl anywhere get packagebundle
Example command output
NAME VERSION STATE
v1.25-120 1.25 active (upgrade available)
v1.26-120 1.26 inactive
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
my-harbor Harbor 14m installed v2.7.1 v2.7.1 (latest)
Uninstall
Uninstall Harbor
Important
By default, PVCs created for jobservice and registry are not removed during a package delete operation. This behavior can be changed by leaving persistence.resourcePolicy empty.
Proxy a public Amazon Elastic Container Registry (ECR) repository
This use case uses Harbor to proxy and cache images from a public ECR repository, which helps limit the number of requests made to the public ECR repository, avoiding excessive bandwidth consumption or throttling by the registry server.
Login
Log in to the Harbor web portal with the default credentials shown below
admin
Harbor12345
Create a registry proxy
Navigate to Registries on the left panel, and then click on the NEW ENDPOINT button. Choose Docker Registry as the Provider, enter public-ecr as the Name, and enter https://public.ecr.aws/ as the Endpoint URL. Save it by clicking on OK.
Create a proxy project
Navigate to Projects on the left panel and click on the NEW PROJECT button. Enter proxy-project as the Project Name, check Public access level, and turn on Proxy Cache and choose public-ecr from the pull-down list. Save the configuration by clicking on OK.
Pull images
Note
harbor.eksa.demo:30003 should be replaced with whatever externalURL is set to in the Harbor package YAML file.
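As an illustration, an image from the public ECR gallery can then be pulled through the proxy project by prefixing the image path with the project name (the image path below is hypothetical):
docker pull harbor.eksa.demo:30003/proxy-project/cloudwatch-agent/cloudwatch-agent:latest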
Proxy a private Amazon Elastic Container Registry (ECR) repository
This use case uses Harbor to proxy and cache images from a private ECR repository, which helps limit the number of requests made to the private ECR repository, avoiding excessive bandwidth consumption or throttling by the registry server.
Login
Log in to the Harbor web portal with the default credentials shown below
admin
Harbor12345
Create a registry proxy
In order for Harbor to proxy a remote private ECR registry, an IAM credential with the necessary permissions needs to be created. This usually involves three steps:
Policy
For simplicity, the built-in policy AdministratorAccess is used here.
User group
This is an easy way to manage a pool of users who share the same set of permissions by attaching the policy to the group.
User
Create a user and add it to the user group. In addition, navigate to Security credentials to generate an access key. An access key consists of two parts: an access key ID and a secret access key. Save both, as they are used in the next step.
Navigate to Registries on the left panel, and then click on the NEW ENDPOINT button. Choose Aws ECR as the Provider, enter private-ecr as the Name and https://[ACCOUNT NUMBER].dkr.ecr.us-west-2.amazonaws.com/ as the Endpoint URL, use the access key ID part of the generated access key as the Access ID, and use the secret access key part as the Access Secret. Save it by clicking on OK.
Create a proxy project
Navigate to Projects on the left panel and click on the NEW PROJECT button. Enter proxy-private-project as the Project Name, check Public access level, turn on Proxy Cache, and choose private-ecr from the pull-down list. Save the configuration by clicking on OK.
Pull images
Create a repository in the target private ECR registry
Repository replication from Harbor to a private Amazon Elastic Container Registry (ECR) repository
This use case uses Harbor to replicate local images and charts to a private ECR repository in push mode. When a replication rule is set, all resources that match the defined filter patterns are replicated to the destination registry when the triggering condition is met.
Login
Log in to the Harbor web portal with the default credentials shown below
admin
Harbor12345
Create a nonproxy project
Create a registry proxy
In order for Harbor to proxy a remote private ECR registry, an IAM credential with the necessary permissions needs to be created. This usually involves three steps:
Policy
For simplicity, the built-in policy AdministratorAccess is used here.
User group
This is an easy way to manage a pool of users who share the same set of permissions by attaching the policy to the group.
User
Create a user and add it to the user group. In addition, navigate to Security credentials to generate an access key. An access key consists of two parts: an access key ID and a secret access key. Save both, as they are used in the next step.
Navigate to Registries on the left panel, and then click on the NEW ENDPOINT button. Choose Aws ECR as the Provider, and enter private-ecr as the Name, https://[ACCOUNT NUMBER].dkr.ecr.us-west-2.amazonaws.com/ as the Endpoint URL, use the access key ID part of the generated access key as Access ID, and use the secret access key part of the generated access key as Access Secret. Save it by clicking on OK.
Create a replication rule
Prepare an image
Note
harbor.eksa.demo:30003 should be replaced with whatever externalURL is set to in the Harbor package YAML file.
docker pull alpine
docker tag alpine:latest harbor.eksa.demo:30003/nonproxy-project/alpine:latest
Authenticate with Harbor using the default credentials shown below
admin
Harbor12345
Note
harbor.eksa.demo:30003 should be replaced with whatever externalURL is set to in the Harbor package YAML file.
docker logout
docker login harbor.eksa.demo:30003
Push images
Create a repository in the target private ECR registry
Note
harbor.eksa.demo:30003 should be replaced with whatever externalURL is set to in the Harbor package YAML file.
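Then push the image tagged in the previous step to the nonproxy project, for example:
docker push harbor.eksa.demo:30003/nonproxy-project/alpine:latest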
The image should appear in the target ECR repository shortly.
Set up trivy image scanner in an air-gapped environment
This use case manually imports the vulnerability database to Harbor Trivy when Harbor is running in an air-gapped environment. All the following commands assume Harbor is running in the default namespace.
kubectl exec -it harbor-helm-trivy-0 -c trivy bash
cd /home/scanner/.cache/trivy
mkdir db
mv db.tar.gz db
cd db
tar zxvf db.tar.gz
16.3 - v2.5.0
Trivy, Notary and Chartmuseum are not supported at this moment.
Configuring Harbor in EKS Anywhere package spec
The following table lists the configurable parameters of the Harbor package spec and the default values.
Parameter
Description
Default
General
externalURL
The external URL for Harbor core service
https://127.0.0.1:30003
imagePullPolicy
The image pull policy
IfNotPresent
logLevel
The log level: debug, info, warning, error or fatal
info
harborAdminPassword
The initial password of the Harbor admin account. Change it from the portal after launching Harbor
Harbor12345
secretKey
The key used for encryption. Must be a string of 16 chars
""
Expose
expose.type
How to expose the service: nodePort or loadBalancer. Other values will be ignored and the creation of the service will be skipped.
nodePort
expose.tls.enabled
Enable TLS or not.
true
expose.tls.certSource
The source of the TLS certificate. Set to auto, secret or none and fill in the corresponding section: 1) auto: generate the TLS certificate automatically; 2) secret: read the TLS certificate from the specified secret (the TLS certificate can be generated manually or by cert-manager); 3) none: configure no TLS certificate.
secret
expose.tls.auto.commonName
The common name used to generate the certificate. It’s necessary when expose.tls.certSource is set to auto
expose.tls.secret.secretName
The name of the secret which contains keys named: tls.crt - the certificate; tls.key - the private key
harbor-tls-secret
expose.nodePort.name
The name of the NodePort service
harbor
expose.nodePort.ports.http.port
The service port Harbor listens on when serving HTTP
80
expose.nodePort.ports.http.nodePort
The node port Harbor listens on when serving HTTP
30002
expose.nodePort.ports.https.port
The service port Harbor listens on when serving HTTPS
443
expose.nodePort.ports.https.nodePort
The node port Harbor listens on when serving HTTPS
30003
expose.loadBalancer.name
The name of the service
harbor
expose.loadBalancer.IP
The IP address of the loadBalancer. It only works when the loadBalancer supports assigning an IP address
""
expose.loadBalancer.ports.httpPort
The service port Harbor listens on when serving HTTP
80
expose.loadBalancer.ports.httpsPort
The service port Harbor listens on when serving HTTPS
30002
expose.loadBalancer.annotations
The annotations attached to the loadBalancer service
{}
expose.loadBalancer.sourceRanges
List of IP address ranges to assign to loadBalancerSourceRanges
[]
Internal TLS
internalTLS.enabled
Enable TLS for the components (core, jobservice, portal, and registry)
true
Persistence
persistence.resourcePolicy
Set it to keep to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted. This does not affect PVCs created for the internal database and Redis components.
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
persistence.persistentVolumeClaim.database.size
The size of the volume. If an external database is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external database is used, the setting will be ignored
""
persistence.persistentVolumeClaim.redis.size
The size of the volume. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external Redis is used, the setting will be ignored
""
Registry
registry.relativeurls
If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL. Needed if Harbor is behind a reverse proxy
false
16.4 - v2.5.1
Notary and Chartmuseum are not supported at this moment.
Configuring Harbor in EKS Anywhere package spec
The following table lists the configurable parameters of the Harbor package spec and the default values.
Parameter
Description
Default
General
externalURL
The external URL for Harbor core service
https://127.0.0.1:30003
imagePullPolicy
The image pull policy
IfNotPresent
logLevel
The log level: debug, info, warning, error or fatal
info
harborAdminPassword
The initial password of the Harbor admin account. Change it from the portal after launching Harbor
Harbor12345
secretKey
The key used for encryption. Must be a string of 16 chars
""
Expose
expose.type
How to expose the service: nodePort or loadBalancer. Other values will be ignored and the creation of the service will be skipped.
nodePort
expose.tls.enabled
Enable TLS or not.
true
expose.tls.certSource
The source of the TLS certificate. Set to auto, secret or none and fill in the corresponding section: 1) auto: generate the TLS certificate automatically; 2) secret: read the TLS certificate from the specified secret (the TLS certificate can be generated manually or by cert-manager); 3) none: configure no TLS certificate.
secret
expose.tls.auto.commonName
The common name used to generate the certificate. It’s necessary when expose.tls.certSource is set to auto
expose.tls.secret.secretName
The name of the secret which contains keys named: tls.crt - the certificate; tls.key - the private key
harbor-tls-secret
expose.nodePort.name
The name of the NodePort service
harbor
expose.nodePort.ports.http.port
The service port Harbor listens on when serving HTTP
80
expose.nodePort.ports.http.nodePort
The node port Harbor listens on when serving HTTP
30002
expose.nodePort.ports.https.port
The service port Harbor listens on when serving HTTPS
443
expose.nodePort.ports.https.nodePort
The node port Harbor listens on when serving HTTPS
30003
expose.loadBalancer.name
The name of the service
harbor
expose.loadBalancer.IP
The IP address of the loadBalancer. It only works when loadBalancer supports assigning an IP address
""
expose.loadBalancer.ports.httpPort
The service port Harbor listens on when serving HTTP
80
expose.loadBalancer.ports.httpsPort
The service port Harbor listens on when serving HTTPS
30002
expose.loadBalancer.annotations
The annotations attached to the loadBalancer service
{}
expose.loadBalancer.sourceRanges
List of IP address ranges to assign to loadBalancerSourceRanges
[]
Internal TLS
internalTLS.enabled
Enable TLS for the components (core, jobservice, portal, and registry)
true
Persistence
persistence.resourcePolicy
Set it to keep to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted. This does not affect PVCs created for the internal database and Redis components.
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
persistence.persistentVolumeClaim.database.size
The size of the volume. If an external database is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external database is used, the setting will be ignored
""
persistence.persistentVolumeClaim.redis.size
The size of the volume. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
Trivy
trivy.enabled
The flag to enable Trivy scanner
true
trivy.vulnType
Comma-separated list of vulnerability types. Possible values are os and library.
os,library
trivy.severity
Comma-separated list of severities to be checked
UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
trivy.skipUpdate
The flag to disable Trivy DB downloads from GitHub
false
trivy.offlineScan
The flag prevents Trivy from sending API requests to identify dependencies.
false
Registry
registry.relativeurls
If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL. Needed if Harbor is behind a reverse proxy
false
16.5 - v2.7.1
Notary and Chartmuseum are not supported at this moment.
Configuring Harbor in EKS Anywhere package spec
The following table lists the configurable parameters of the Harbor package spec and the default values.
Parameter
Description
Default
General
externalURL
The external URL for Harbor core service
https://127.0.0.1:30003
imagePullPolicy
The image pull policy
IfNotPresent
logLevel
The log level: debug, info, warning, error or fatal
info
harborAdminPassword
The initial password of the Harbor admin account. Change it from the portal after launching Harbor
Harbor12345
secretKey
The key used for encryption. Must be a string of 16 chars
""
Expose
expose.type
How to expose the service: nodePort or loadBalancer. Other values will be ignored and the creation of the service will be skipped.
nodePort
expose.tls.enabled
Enable TLS or not.
true
expose.tls.certSource
The source of the TLS certificate. Set to auto, secret or none and fill in the corresponding section: 1) auto: generate the TLS certificate automatically; 2) secret: read the TLS certificate from the specified secret (the TLS certificate can be generated manually or by cert-manager); 3) none: configure no TLS certificate.
secret
expose.tls.auto.commonName
The common name used to generate the certificate. It’s necessary when expose.tls.certSource is set to auto
expose.tls.secret.secretName
The name of the secret which contains keys named: tls.crt - the certificate; tls.key - the private key
harbor-tls-secret
expose.nodePort.name
The name of the NodePort service
harbor
expose.nodePort.ports.http.port
The service port Harbor listens on when serving HTTP
80
expose.nodePort.ports.http.nodePort
The node port Harbor listens on when serving HTTP
30002
expose.nodePort.ports.https.port
The service port Harbor listens on when serving HTTPS
443
expose.nodePort.ports.https.nodePort
The node port Harbor listens on when serving HTTPS
30003
expose.loadBalancer.name
The name of the service
harbor
expose.loadBalancer.IP
The IP address of the loadBalancer. It only works when loadBalancer supports assigning an IP address
""
expose.loadBalancer.ports.httpPort
The service port Harbor listens on when serving HTTP
80
expose.loadBalancer.ports.httpsPort
The service port Harbor listens on when serving HTTPS
30002
expose.loadBalancer.annotations
The annotations attached to the loadBalancer service
{}
expose.loadBalancer.sourceRanges
List of IP address ranges to assign to loadBalancerSourceRanges
[]
Internal TLS
internalTLS.enabled
Enable TLS for the components (core, jobservice, portal, and registry)
true
Persistence
persistence.resourcePolicy
Set it to keep to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted. This does not affect PVCs created for the internal database and Redis components.
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
persistence.persistentVolumeClaim.database.size
The size of the volume. If an external database is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external database is used, the setting will be ignored
""
persistence.persistentVolumeClaim.redis.size
The size of the volume. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
Trivy
trivy.enabled
The flag to enable Trivy scanner
true
trivy.vulnType
Comma-separated list of vulnerability types. Possible values are os and library.
os,library
trivy.severity
Comma-separated list of severities to be checked
UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
trivy.skipUpdate
The flag to disable Trivy DB downloads from GitHub
false
trivy.offlineScan
The flag prevents Trivy from sending API requests to identify dependencies.
false
Registry
registry.relativeurls
If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL. Needed if Harbor is behind a reverse proxy
false
16.6 - v2.10.2
Configuring Harbor in EKS Anywhere package spec
The following table lists the configurable parameters of the Harbor package spec and the default values.
Parameter
Description
Default
General
externalURL
The external URL for Harbor core service
https://127.0.0.1:30003
imagePullPolicy
The image pull policy
IfNotPresent
logLevel
The log level: debug, info, warning, error or fatal
info
harborAdminPassword
The initial password of the Harbor admin account. Change it from the portal after launching Harbor
Harbor12345
secretKey
The key used for encryption. Must be a string of 16 chars
""
Expose
expose.type
How to expose the service: nodePort or loadBalancer. Other values will be ignored and the creation of the service will be skipped.
nodePort
expose.tls.enabled
Enable TLS or not.
true
expose.tls.certSource
The source of the TLS certificate. Set to auto, secret or none and fill in the corresponding section: 1) auto: generate the TLS certificate automatically; 2) secret: read the TLS certificate from the specified secret (the TLS certificate can be generated manually or by cert-manager); 3) none: configure no TLS certificate.
secret
expose.tls.auto.commonName
The common name used to generate the certificate. It’s necessary when expose.tls.certSource is set to auto
expose.tls.secret.secretName
The name of the secret which contains keys named: tls.crt - the certificate; tls.key - the private key
harbor-tls-secret
expose.nodePort.name
The name of the NodePort service
harbor
expose.nodePort.ports.http.port
The service port Harbor listens on when serving HTTP
80
expose.nodePort.ports.http.nodePort
The node port Harbor listens on when serving HTTP
30002
expose.nodePort.ports.https.port
The service port Harbor listens on when serving HTTPS
443
expose.nodePort.ports.https.nodePort
The node port Harbor listens on when serving HTTPS
30003
expose.loadBalancer.name
The name of the service
harbor
expose.loadBalancer.IP
The IP address of the loadBalancer. It only works when loadBalancer supports assigning an IP address
""
expose.loadBalancer.ports.httpPort
The service port Harbor listens on when serving HTTP
80
expose.loadBalancer.ports.httpsPort
The service port Harbor listens on when serving HTTPS
30002
expose.loadBalancer.annotations
The annotations attached to the loadBalancer service
{}
expose.loadBalancer.sourceRanges
List of IP address ranges to assign to loadBalancerSourceRanges
[]
Internal TLS
internalTLS.enabled
Enable TLS for the components (core, jobservice, portal, and registry)
true
Persistence
persistence.resourcePolicy
Set it to keep to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted. This does not affect PVCs created for the internal database and Redis components.
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
persistence.persistentVolumeClaim.database.size
The size of the volume. If an external database is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external database is used, the setting will be ignored
""
persistence.persistentVolumeClaim.redis.size
The size of the volume. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
Trivy
trivy.enabled
The flag to enable Trivy scanner
true
trivy.vulnType
Comma-separated list of vulnerability types. Possible values are os and library.
os,library
trivy.severity
Comma-separated list of severities to be checked
UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
trivy.skipUpdate
The flag to disable Trivy DB downloads from GitHub
false
trivy.offlineScan
The flag prevents Trivy from sending API requests to identify dependencies.
false
Registry
registry.relativeurls
If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL. Needed if Harbor is behind a reverse proxy
false
16.7 - v2.11.1
Configuring Harbor in EKS Anywhere package spec
The following table lists the configurable parameters of the Harbor package spec and the default values.
Parameter
Description
Default
General
externalURL
The external URL for Harbor core service
https://127.0.0.1:30003
imagePullPolicy
The image pull policy
IfNotPresent
logLevel
The log level: debug, info, warning, error or fatal
info
harborAdminPassword
The initial password of the Harbor admin account. Change it from the portal after launching Harbor
Harbor12345
secretKey
The key used for encryption. Must be a string of 16 chars
""
Expose
expose.type
How to expose the service: nodePort or loadBalancer. Other values will be ignored and the creation of the service will be skipped.
nodePort
expose.tls.enabled
Enable TLS or not.
true
expose.tls.certSource
The source of the TLS certificate. Set to auto, secret or none and fill in the corresponding section: 1) auto: generate the TLS certificate automatically; 2) secret: read the TLS certificate from the specified secret (the TLS certificate can be generated manually or by cert-manager); 3) none: configure no TLS certificate.
secret
expose.tls.auto.commonName
The common name used to generate the certificate. It’s necessary when expose.tls.certSource is set to auto
expose.tls.secret.secretName
The name of the secret which contains keys named: tls.crt - the certificate; tls.key - the private key
harbor-tls-secret
expose.nodePort.name
The name of the NodePort service
harbor
expose.nodePort.ports.http.port
The service port Harbor listens on when serving HTTP
80
expose.nodePort.ports.http.nodePort
The node port Harbor listens on when serving HTTP
30002
expose.nodePort.ports.https.port
The service port Harbor listens on when serving HTTPS
443
expose.nodePort.ports.https.nodePort
The node port Harbor listens on when serving HTTPS
30003
expose.loadBalancer.name
The name of the service
harbor
expose.loadBalancer.IP
The IP address of the loadBalancer. It only works when loadBalancer supports assigning an IP address
""
expose.loadBalancer.ports.httpPort
The service port Harbor listens on when serving HTTP
80
expose.loadBalancer.ports.httpsPort
The service port Harbor listens on when serving HTTPS
30002
expose.loadBalancer.annotations
The annotations attached to the loadBalancer service
{}
expose.loadBalancer.sourceRanges
List of IP address ranges to assign to loadBalancerSourceRanges
[]
Internal TLS
internalTLS.enabled
Enable TLS for the components (core, jobservice, portal, and registry)
true
Persistence
persistence.resourcePolicy
Set it to keep to avoid removing PVCs during a helm delete operation. Leaving it empty will delete PVCs after the chart is deleted. This does not affect PVCs created for the internal database and Redis components.
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
persistence.persistentVolumeClaim.database.size
The size of the volume. If an external database is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external database is used, the setting will be ignored
""
persistence.persistentVolumeClaim.redis.size
The size of the volume. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning. If an external Redis is used, the setting will be ignored
Specify the storageClass used to provision the volume, or the default StorageClass will be used (the default). Set it to - to disable dynamic provisioning
""
Trivy
trivy.enabled
The flag to enable Trivy scanner
true
trivy.vulnType
Comma-separated list of vulnerability types. Possible values are os and library.
os,library
trivy.severity
Comma-separated list of severities to be checked
UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
trivy.skipUpdate
The flag to disable Trivy DB downloads from GitHub
false
trivy.offlineScan
The flag prevents Trivy from sending API requests to identify dependencies.
false
Registry
registry.relativeurls
If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL. Needed if Harbor is behind a reverse proxy
false
17 - MetalLB Configuration
MetalLB is a load-balancer implementation for on-premises Kubernetes clusters, using standard routing protocols.
Best Practice
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Create the namespace
(If overriding targetNamespace, change metallb-system to the value of targetNamespace)
kubectl create namespace metallb-system
Install MetalLB
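A minimal metallb.yaml with one address pool and an L2 advertisement might look like the following sketch (the address range is a placeholder for a free range on your network):
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: mylb
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: metallb
  targetNamespace: metallb-system
  config: |-
    IPAddressPools:
      - name: default
        addresses:
          - 10.220.0.90-10.220.0.99
    L2Advertisements:
      - ipAddressPools:
          - default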
eksctl anywhere create packages -f metallb.yaml
Validate the installation
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
mylb metallb 22h installed 0.13.5-ce5b5de19014202cebd4ab4c091830a3b6dfea06 0.13.5-ce5b5de19014202cebd4ab4c091830a3b6dfea06 (latest)
Update
To update the package configuration, update the metallb.yaml file and run the following command:
eksctl anywhere apply package -f metallb.yaml
Upgrade
MetalLB will automatically be upgraded when a new bundle is activated.
IPAddressPool
A list of IP address ranges over which MetalLB has authority. You can list multiple ranges in a single pool and they will all share the same settings. Each range can be either a CIDR prefix, or an explicit start-end range of IPs.
name
Name for the address pool.
None
addresses[]
A list of strings representing CIDR or IP ranges.
None
autoAssign
AutoAssign flag used to prevent MetalLB from automatic allocation for a pool.
true
—
—
—
L2Advertisement
L2Advertisement allows MetalLB to advertise the LoadBalancer IPs provided by the selected pools via L2.
IPAddressPools[]
The list of IPAddressPools to advertise via this advertisement, selected by name.
None
—
—
—
BGPAdvertisement
BGPAdvertisement allows MetalLB to advertise the IPs coming from the selected IPAddressPools via BGP, setting the parameters of the BGP Advertisement.
aggregationLength
The aggregation-length advertisement option lets you “roll up” the /32s into a larger prefix. Defaults to 32. Works for IPv4 addresses.
32
aggregationLengthV6
The aggregation-length advertisement option lets you “roll up” the /128s into a larger prefix. Defaults to 128. Works for IPv6 addresses.
128
communities[]
The BGP communities to be associated with the announcement. Each item can be a community of the form 1234:1234 or the name of an alias defined in the Community CRD.
None
IPAddressPools[]
The list of IPAddressPools to advertise via this advertisement, selected by name.
None
localPref
The BGP LOCAL_PREF attribute, which is used by the BGP best path algorithm. A path with a higher localpref is preferred over one with a lower localpref.
None
—
—
—
BGPPeer
Peers for the BGP protocol.
bfdProfile
The name of the BFD Profile to be used for the BFD session associated to the BGP session. If not set, the BFD session won’t be set up.
None
holdTime
Requested BGP hold time, per RFC4271.
None
keepaliveTime
Requested BGP keepalive time, per RFC4271.
None
myASN
AS number to use for the local end of the session.
None
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
peerASN
AS number to expect from the remote end of the session.
None
peerAddress
Address to dial when establishing the session.
None
peerPort
Port to dial when establishing the session.
179
routerID
BGP router ID to advertise to the peer.
None
sourceAddress
Source address to use when establishing the session.
None
L2Advertisements[]
A list of Layer 2 advertisements. See L2Advertisement.
None
False
BGPAdvertisements[]
A list of BGP advertisements. See BGPAdvertisement.
None
False
BGPPeers[]
A list of BGP peers. See BGPPeer.
None
False
—
—
—
—
IPAddressPool
A list of IP address ranges over which MetalLB has authority. You can list multiple ranges in a single pool and they will all share the same settings. Each range can be either a CIDR prefix, or an explicit start-end range of IPs.
name
Name for the address pool.
None
True
addresses[]
A list of strings representing CIDR or IP ranges.
None
True
autoAssign
AutoAssign flag used to prevent MetalLB from automatic allocation for a pool.
true
False
—
—
—
—
L2Advertisement
L2Advertisement allows MetalLB to advertise the LoadBalancer IPs provided by the selected pools via L2.
ipAddressPools[]
The list of IPAddressPool names to advertise.
None
True
name
Name for the L2Advertisement.
None
False
—
—
—
—
BGPAdvertisement
BGPAdvertisement allows MetalLB to advertise the IPs coming from the selected ipAddressPools via BGP, setting the parameters of the BGP Advertisement.
aggregationLength
The aggregation-length advertisement option lets you “roll up” the /32s into a larger prefix. Defaults to 32. Works for IPv4 addresses.
32
False
aggregationLengthV6
The aggregation-length advertisement option lets you “roll up” the /128s into a larger prefix. Defaults to 128. Works for IPv6 addresses.
128
False
communities[]
The BGP communities to be associated with the announcement. Each item can be a community of the form 1234:1234 or the name of an alias defined in the Community CRD.
None
False
ipAddressPools[]
The list of IPAddressPool names to be advertised via BGP.
None
True
localPref
The BGP LOCAL_PREF attribute, which is used by the BGP best path algorithm. A path with a higher localpref is preferred over one with a lower localpref.
None
False
peers[]
List of peer names that limits which BGPPeers the IPs of the selected pools are advertised to. When empty, the load balancer IP is announced to all configured BGPPeers.
None
False
—
—
—
—
BGPPeer
Peers for the BGP protocol.
holdTime
Requested BGP hold time, per RFC4271.
None
False
keepaliveTime
Requested BGP keepalive time, per RFC4271.
None
False
myASN
AS number to use for the local end of the session.
None
True
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
peerASN
AS number to expect from the remote end of the session.
None
True
peerAddress
Address to dial when establishing the session.
None
True
peerPort
Port to dial when establishing the session.
179
False
routerID
BGP router ID to advertise to the peer.
None
False
sourceAddress
Source address to use when establishing the session.
None
False
L2Advertisements[]
A list of Layer 2 advertisements. See L2Advertisement.
None
False
BGPAdvertisements[]
A list of BGP advertisements. See BGPAdvertisement.
None
False
BGPPeers[]
A list of BGP peers. See BGPPeer.
None
False
—
—
—
—
IPAddressPool
A list of IP address ranges over which MetalLB has authority. You can list multiple ranges in a single pool and they will all share the same settings. Each range can be either a CIDR prefix, or an explicit start-end range of IPs.
name
Name for the address pool.
None
True
addresses[]
A list of strings representing CIDR or IP ranges.
None
True
autoAssign
AutoAssign flag used to prevent MetalLB from automatic allocation for a pool.
true
False
—
—
—
—
L2Advertisement
L2Advertisement allows MetalLB to advertise the LoadBalancer IPs provided by the selected pools via L2.
ipAddressPools[]
The list of IPAddressPool names to advertise.
None
True
name
Name for the L2Advertisement.
None
False
—
—
—
—
BGPAdvertisement
BGPAdvertisement allows MetalLB to advertise the IPs coming from the selected ipAddressPools via BGP, setting the parameters of the BGP Advertisement.
aggregationLength
The aggregation-length advertisement option lets you “roll up” the /32s into a larger prefix. Defaults to 32. Works for IPv4 addresses.
32
False
aggregationLengthV6
The aggregation-length advertisement option lets you “roll up” the /128s into a larger prefix. Defaults to 128. Works for IPv6 addresses.
128
False
communities[]
The BGP communities to be associated with the announcement. Each item can be a community of the form 1234:1234 or the name of an alias defined in the Community CRD.
None
False
ipAddressPools[]
The list of IPAddressPool names to be advertised via BGP.
None
True
localPref
The BGP LOCAL_PREF attribute, which is used by the BGP best path algorithm. A path with a higher localpref is preferred over one with a lower localpref.
None
False
peers[]
List of peer names that limits which BGPPeers the IPs of the selected pools are advertised to. When empty, the load balancer IP is announced to all configured BGPPeers.
None
False
—
—
—
—
BGPPeer
Peers for the BGP protocol.
holdTime
Requested BGP hold time, per RFC4271.
None
False
keepaliveTime
Requested BGP keepalive time, per RFC4271.
None
False
myASN
AS number to use for the local end of the session.
None
True
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
peerASN
AS number to expect from the remote end of the session.
None
True
peerAddress
Address to dial when establishing the session.
None
True
peerPort
Port to dial when establishing the session.
179
False
routerID
BGP router ID to advertise to the peer.
None
False
sourceAddress
Source address to use when establishing the session.
None
False
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
passwordSecret
passwordSecret is a reference to the authentication secret for the BGP peer. The secret must be of type ‘kubernetes.io/basic-auth’ with the password stored under the “password” key.
L2Advertisements[]
A list of Layer 2 advertisements. See L2Advertisement.
None
False
BGPAdvertisements[]
A list of BGP advertisements. See BGPAdvertisement.
None
False
BGPPeers[]
A list of BGP peers. See BGPPeer.
None
False
—
—
—
—
IPAddressPool
A list of IP address ranges over which MetalLB has authority. You can list multiple ranges in a single pool and they will all share the same settings. Each range can be either a CIDR prefix, or an explicit start-end range of IPs.
name
Name for the address pool.
None
True
addresses[]
A list of strings representing CIDR or IP ranges.
None
True
autoAssign
AutoAssign flag used to prevent MetalLB from automatic allocation for a pool.
true
False
—
—
—
—
L2Advertisement
L2Advertisement allows MetalLB to advertise the LoadBalancer IPs provided by the selected pools via L2.
ipAddressPools[]
The list of IPAddressPool names to advertise.
None
True
name
Name for the L2Advertisement.
None
False
—
—
—
—
BGPAdvertisement
BGPAdvertisement allows MetalLB to advertise the IPs coming from the selected ipAddressPools via BGP, setting the parameters of the BGP Advertisement.
aggregationLength
The aggregation-length advertisement option lets you “roll up” the /32s into a larger prefix. Defaults to 32. Works for IPv4 addresses.
32
False
aggregationLengthV6
The aggregation-length advertisement option lets you “roll up” the /128s into a larger prefix. Defaults to 128. Works for IPv6 addresses.
128
False
communities[]
The BGP communities to be associated with the announcement. Each item can be a community of the form 1234:1234 or the name of an alias defined in the Community CRD.
None
False
ipAddressPools[]
The list of IPAddressPool names to be advertised via BGP.
None
True
localPref
The BGP LOCAL_PREF attribute, which is used by the BGP best path algorithm. A path with a higher localpref is preferred over one with a lower localpref.
None
False
peers[]
List of peer names that limits which BGPPeers the IPs of the selected pools are advertised to. When empty, the load balancer IP is announced to all configured BGPPeers.
None
False
—
—
—
—
BGPPeer
Peers for the BGP protocol.
holdTime
Requested BGP hold time, per RFC4271.
None
False
keepaliveTime
Requested BGP keepalive time, per RFC4271.
None
False
myASN
AS number to use for the local end of the session.
None
True
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
peerASN
AS number to expect from the remote end of the session.
None
True
peerAddress
Address to dial when establishing the session.
None
True
peerPort
Port to dial when establishing the session.
179
False
routerID
BGP router ID to advertise to the peer.
None
False
sourceAddress
Source address to use when establishing the session.
None
False
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
passwordSecret
passwordSecret is a reference to the authentication secret for the BGP peer. The secret must be of type ‘kubernetes.io/basic-auth’ with the password stored under the “password” key.
L2Advertisements[]
A list of Layer 2 advertisements. See L2Advertisement.
None
False
BGPAdvertisements[]
A list of BGP advertisements. See BGPAdvertisement.
None
False
BGPPeers[]
A list of BGP peers. See BGPPeer.
None
False
—
—
—
—
IPAddressPool
A list of IP address ranges over which MetalLB has authority. You can list multiple ranges in a single pool and they will all share the same settings. Each range can be either a CIDR prefix, or an explicit start-end range of IPs.
name
Name for the address pool.
None
True
addresses[]
A list of strings representing CIDR or IP ranges.
None
True
autoAssign
AutoAssign flag used to prevent MetalLB from automatic allocation for a pool.
true
False
—
—
—
—
L2Advertisement
L2Advertisement allows MetalLB to advertise the LoadBalancer IPs provided by the selected pools via L2.
ipAddressPools[]
The list of IPAddressPool names to advertise.
None
True
name
Name for the L2Advertisement.
None
False
—
—
—
—
BGPAdvertisement
BGPAdvertisement allows MetalLB to advertise the IPs coming from the selected ipAddressPools via BGP, setting the parameters of the BGP Advertisement.
aggregationLength
The aggregation-length advertisement option lets you “roll up” the /32s into a larger prefix. Defaults to 32. Works for IPv4 addresses.
32
False
aggregationLengthV6
The aggregation-length advertisement option lets you “roll up” the /128s into a larger prefix. Defaults to 128. Works for IPv6 addresses.
128
False
communities[]
The BGP communities to be associated with the announcement. Each item can be a community of the form 1234:1234 or the name of an alias defined in the Community CRD.
None
False
ipAddressPools[]
The list of IPAddressPool names to be advertised via BGP.
None
True
localPref
The BGP LOCAL_PREF attribute, which is used by the BGP best path algorithm. A path with a higher localpref is preferred over one with a lower localpref.
None
False
peers[]
List of peer names that limits which BGPPeers the IPs of the selected pools are advertised to. When empty, the load balancer IP is announced to all configured BGPPeers.
None
False
—
—
—
—
BGPPeer
Peers for the BGP protocol.
holdTime
Requested BGP hold time, per RFC4271.
None
False
keepaliveTime
Requested BGP keepalive time, per RFC4271.
None
False
myASN
AS number to use for the local end of the session.
None
True
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
peerASN
AS number to expect from the remote end of the session.
None
True
peerAddress
Address to dial when establishing the session.
None
True
peerPort
Port to dial when establishing the session.
179
False
routerID
BGP router ID to advertise to the peer.
None
False
sourceAddress
Source address to use when establishing the session.
None
False
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
passwordSecret
passwordSecret is a reference to the authentication secret for the BGP peer. The secret must be of type ‘kubernetes.io/basic-auth’ with the password stored under the “password” key.
L2Advertisements[]
A list of Layer 2 advertisements. See L2Advertisement.
None
False
BGPAdvertisements[]
A list of BGP advertisements. See BGPAdvertisement.
None
False
BGPPeers[]
A list of BGP peers. See BGPPeer.
None
False
—
—
—
—
IPAddressPool
A list of IP address ranges over which MetalLB has authority. You can list multiple ranges in a single pool and they will all share the same settings. Each range can be either a CIDR prefix, or an explicit start-end range of IPs.
name
Name for the address pool.
None
True
addresses[]
A list of strings representing CIDR or IP ranges.
None
True
autoAssign
AutoAssign flag used to prevent MetalLB from automatic allocation for a pool.
true
False
—
—
—
—
L2Advertisement
L2Advertisement allows MetalLB to advertise the LoadBalancer IPs provided by the selected pools via L2.
ipAddressPools[]
The list of IPAddressPool names to advertise.
None
True
name
Name for the L2Advertisement.
None
False
—
—
—
—
BGPAdvertisement
BGPAdvertisement allows MetalLB to advertise the IPs coming from the selected ipAddressPools via BGP, setting the parameters of the BGP Advertisement.
aggregationLength
The aggregation-length advertisement option lets you “roll up” the /32s into a larger prefix. Defaults to 32. Works for IPv4 addresses.
32
False
aggregationLengthV6
The aggregation-length advertisement option lets you “roll up” the /128s into a larger prefix. Defaults to 128. Works for IPv6 addresses.
128
False
communities[]
The BGP communities to be associated with the announcement. Each item can be a community of the form 1234:1234 or the name of an alias defined in the Community CRD.
None
False
ipAddressPools[]
The list of IPAddressPool names to be advertised via BGP.
None
True
localPref
The BGP LOCAL_PREF attribute, which is used by the BGP best path algorithm. A path with a higher localpref is preferred over one with a lower localpref.
None
False
peers[]
List of peer names that limits which BGPPeers the IPs of the selected pools are advertised to. When empty, the load balancer IP is announced to all configured BGPPeers.
None
False
—
—
—
—
BGPPeer
Peers for the BGP protocol.
holdTime
Requested BGP hold time, per RFC4271.
None
False
keepaliveTime
Requested BGP keepalive time, per RFC4271.
None
False
myASN
AS number to use for the local end of the session.
None
True
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
peerASN
AS number to expect from the remote end of the session.
None
True
peerAddress
Address to dial when establishing the session.
None
True
peerPort
Port to dial when establishing the session.
179
False
routerID
BGP router ID to advertise to the peer.
None
False
sourceAddress
Source address to use when establishing the session.
None
False
password
Authentication password for routers enforcing TCP MD5 authenticated sessions.
None
False
passwordSecret
passwordSecret is a reference to the authentication secret for the BGP peer. The secret must be of type ‘kubernetes.io/basic-auth’ with the password stored under the “password” key.
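A sketch of such a passwordSecret reference in a BGPPeer entry (the secret name and namespace are hypothetical):
passwordSecret:
  name: bgp-peer-password    # secret of type kubernetes.io/basic-auth
  namespace: metallb-system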
18 - Prometheus Configuration
Prometheus is an open-source systems monitoring and alerting toolkit. It collects and stores metrics as time series data.
Best Practice
Any supported EKS Anywhere curated package should be modified through package yaml files (with kind: Package) and applied through the command kubectl apply -f packageFileName. Modifying objects outside of package yaml files may lead to unpredictable behaviors.
The Prometheus package creates two components by default:
Prometheus-server,
which collects metrics from configured targets, and stores the metrics as time series data;
Node-exporter,
which exposes a wide variety of hardware- and kernel-related metrics for prometheus-server (or an equivalent metrics collector, e.g., the ADOT collector) to scrape.
The prometheus-server is pre-configured to scrape a default set of targets at a 1m interval.
In this section, we cover a few frequently-asked config customizations. After determining the appropriate customization, proceed to the Prometheus installation guide
to complete the package installation. Also refer to Prometheus package spec
for additional config options.
Change prometheus-server global configs
By default, prometheus-server is configured with evaluation_interval: 1m, scrape_interval: 1m, and scrape_timeout: 10s. Those values can be overridden if needed.
The following config allows the user to do such customization:
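A sketch of such an override under the package's config section (the values shown are illustrative):
config: |-
  server:
    global:
      evaluation_interval: "30s"
      scrape_interval: "30s"
      scrape_timeout: "15s"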
By default, prometheus-server is created as a deployment with replicaCount equal to 1. If there is a need to increase the replicaCount to greater than 1, deploy prometheus-server as a statefulSet instead. This allows multiple prometheus-server pods to share the same data storage.
The following config allows the user to do such customization:
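A sketch of such a customization (the replica count is illustrative):
config: |-
  server:
    statefulSet:
      enabled: true
    replicaCount: 2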
Run sample queries in the Prometheus web UI to confirm the targets have been configured properly. For example, a user can run the following query to obtain the CPU utilization rate by node.
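One such query is the standard node-exporter CPU utilization expression (adjust the time window as needed):
100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)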
A user can install Grafana in the cluster to visualize the Prometheus metrics. We used the Grafana helm chart as an example below, though other deployment methods are also possible.
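A possible sequence, assuming the upstream Grafana helm chart with its default values:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install grafana grafana/grafana --namespace grafana --create-namespace
# Obtain Grafana login password:
kubectl get secret --namespace grafana grafana -o jsonpath="{.data.admin-password}" | base64 --decode
# Forward the Grafana service to localhost:
kubectl port-forward --namespace grafana svc/grafana 3000:80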
Go to http://localhost:3000
to access the web UI.
Log in with the username admin and the password obtained in the Obtain Grafana login password step above.
Add Prometheus data source
Click on the Configuration icon on the left navigation bar, select Data sources, then choose Prometheus as the Data source.
Configure Prometheus data source with the following details:
Name: Prometheus as an example.
URL: http://<prometheus-server-end-point-name>.<namespace>:9090. If the package default values are used, this will be http://generated-prometheus-server.observability:9090.
Scrape interval: 1m or the value specified by the user in the package config.
Select Save and test. A notification data source is working should be displayed.
Import dashboard templates
Import a dashboard template by hovering over the Dashboards icon on the left navigation bar and clicking Import. Type 315 in the Import via grafana.com textbox and select Import.
From the dropdown at the bottom, select Prometheus and select Import.
A Kubernetes cluster monitoring (via Prometheus) dashboard will be displayed.
Perform the same procedure for template 1860. A Node Exporter Full dashboard will be displayed.
Starting at eksctl anywhere version v0.12.0, packages on workload clusters are remotely managed by the management cluster.
While following this guide to install packages on a workload cluster, please make sure the kubeconfig is pointing to the management cluster that was used to create the workload cluster. The only exception is the kubectl create namespace command below, which should be run with kubeconfig pointing to the workload cluster.
Example package file with prometheus-server deployed as a statefulSet with replicaCount 2, and the scrape config set to collect prometheus-server's own metrics only:
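A sketch of such a package file (metadata names and namespaces are illustrative, and the serverFiles layout follows the upstream prometheus chart):
apiVersion: packages.eks.amazonaws.com/v1alpha1
kind: Package
metadata:
  name: generated-prometheus
  namespace: eksa-packages-<cluster-name>
spec:
  packageName: prometheus
  targetNamespace: observability
  config: |-
    server:
      statefulSet:
        enabled: true
      replicaCount: 2
    serverFiles:
      prometheus.yml:
        scrape_configs:
          - job_name: prometheus
            static_configs:
              - targets:
                  - localhost:9090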
Configurable parameters and default values under spec.config
Parameter
Description
Default
General
rbac.create
Specifies if clusterRole / role and clusterRoleBinding / roleBinding will be created for prometheus-server and node-exporter
true
sourceRegistry
Specifies image source registry for prometheus-server and node-exporter
"783794618700.dkr.ecr.us-west-2.amazonaws.com"
Node-Exporter
nodeExporter.enabled
Indicates if node-exporter is enabled
true
nodeExporter.hostNetwork
Indicates if node-exporter shares the host network namespace
true
nodeExporter.hostPID
Indicates if node-exporter shares the host process ID namespace
true
nodeExporter.image.pullPolicy
Specifies node-exporter image pull policy: IfNotPresent, Always, Never
"IfNotPresent"
nodeExporter.image.repository
Specifies node-exporter image repository
"prometheus/node-exporter"
nodeExporter.resources
Specifies resource requests and limits of the node-exporter container. Refer to the Kubernetes API documentation ResourceRequirements
field for more details
{}
nodeExporter.service
Specifies how to expose node-exporter as a network service
nodeExporter.tolerations
Specifies node tolerations for node-exporter scheduling to nodes with taints. Refer to the Kubernetes API documentation toleration
field for more details.
serviceAccounts.nodeExporter.annotations
Specifies node-exporter service account annotations
{}
serviceAccounts.nodeExporter.create
Indicates if node-exporter service account will be created
true
serviceAccounts.nodeExporter.name
Specifies node-exporter service account name
""
Prometheus-Server
server.enabled
Indicates if prometheus-server is enabled
true
server.global.evaluation_interval
Specifies how frequently the prometheus-server rules are evaluated
"1m"
server.global.scrape_interval
Specifies how frequently prometheus-server will scrape targets
"1m"
server.global.scrape_timeout
Specifies how long until a prometheus-server scrape request times out
"10s"
server.image.pullPolicy
Specifies prometheus-server image pull policy: IfNotPresent, Always, Never
"IfNotPresent"
server.image.repository
Specifies prometheus-server image repository
"prometheus/prometheus"
server.name
Specifies prometheus-server container name
"server"
server.persistentVolume.accessModes
Specifies prometheus-server data Persistent Volume access modes
"ReadWriteOnce"
server.persistentVolume.enabled
Indicates if prometheus-server will create/use a Persistent Volume Claim
true
server.persistentVolume.existingClaim
Specifies prometheus-server data Persistent Volume existing claim name. It requires server.persistentVolume.enabled: true. If defined, PVC must be created manually before volume will be bound
""
server.persistentVolume.size
Specifies prometheus-server data Persistent Volume size
"8Gi"
server.remoteRead
Specifies prometheus-server remote read configs. Refer to Prometheus docs remote_read
for more details
[]
server.remoteWrite
Specifies prometheus-server remote write configs. Refer to Prometheus docs remote_write
for more details
[]
server.replicaCount
Specifies the replicaCount for prometheus-server deployment / statefulSet. Note:server.statefulSet.enabled should be set to true if server.replicaCount is greater than 1
1
server.resources
Specifies resource requests and limits of the prometheus-server container. Refer to the Kubernetes API documentation ResourceRequirements
field for more details
{}
server.retention
Specifies prometheus-server data retention period
"15d"
server.service
Specifies how to expose prometheus-server as a network service
Specifies prometheus-server service account annotations
{}
serviceAccounts.server.create
Indicates if prometheus-server service account will be created
true
serviceAccounts.server.name
Specifies prometheus-server service account name
""
By default, the node-exporter service is exposed as a ClusterIP with port: 9100 (controlled by nodeExporter.service.servicePort) and targetPort: 9100 (controlled by nodeExporter.service.hostPort). Note that the annotation prometheus.io/scrape: "true" is mandatory for node-exporter to be discovered by prometheus-server as a scrape target. See the specification below for details.
By default, the prometheus-server service is exposed as a ClusterIP with port: 9090 (controlled by server.service.servicePort) and targetPort: 9090 (not overridable). See the specification below for details.
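The defaults above correspond to service blocks along the following lines. This is a sketch assembled from the parameters named in these notes; the exact field layout follows the upstream prometheus Helm chart and is an assumption, not a verbatim copy of the package defaults:

```yaml
nodeExporter:
  service:
    annotations:
      prometheus.io/scrape: "true"  # mandatory so prometheus-server discovers node-exporter
    type: ClusterIP
    hostPort: 9100        # maps to the service targetPort
    servicePort: 9100     # the service port

server:
  service:
    annotations: {}
    type: ClusterIP
    servicePort: 9090     # the service port; targetPort is fixed at 9090
```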
To list the installed packages and their current state, run:
eksctl anywhere get packages --cluster <cluster-name>
Example command output
NAME PACKAGE AGE STATE CURRENTVERSION TARGETVERSION DETAIL
metrics-server metrics-server 8h installed 0.6.1-eks-1-23-6-b4c2524fabb3dd4c5f9b9070a418d740d3e1a8a2 0.6.1-eks-1-23-6-b4c2524fabb3dd4c5f9b9070a418d740d3e1a8a2 (latest)
Update
To update the package configuration, edit the metrics-server.yaml file and run the following command.
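Assuming metrics-server.yaml holds the updated Package resource, the apply step would typically be the standard kubectl one:

```bash
kubectl apply -f metrics-server.yaml
```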