Secure your Microservices Ingress in Istio with Let’s Encrypt


Let’s say you have a Microservice that you would like to expose to the Internet. One of the key requirements before you expose the service is to make it available at a secure endpoint (HTTPS).

Even though this is a simple requirement, it is often addressed only towards the end of the deployment (typically when you are about to make DNS changes). This is primarily because procuring and configuring certificates has traditionally been hard. And once the certificates are deployed, someone has to make sure they are renewed on time. These challenges are amplified when you operate a number of Microservices managed by many disparate teams.

What if there is a simple and automated way to take care of this?

In this article, we will look at how to automate this entire process so that whenever you deploy a Microservice, a TLS certificate is automatically provisioned and the Microservice is mapped to a DNS entry.


We will be using the following components to automatically provision a TLS certificate for our Microservice and map the ingress to a DNS endpoint.


cert-manager

cert-manager is a native Kubernetes certificate management controller. It can issue certificates from a variety of sources, such as Let’s Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self-signed certificates.


ExternalDNS

external-dns sets up DNS records in DNS providers that are external to Kubernetes, so that Kubernetes services are discoverable via those providers, and allows DNS records to be controlled dynamically in a provider-agnostic way.

external-dns will be used to create Amazon Route53 entries for our Microservice.

Istio Service Mesh

Istio is an open-source service mesh that layers transparently onto existing distributed applications.

Step By Step Guide

The below setup is performed on an EKS Cluster running Kubernetes version 1.21 with DNS hosted on Route53. IRSA (IAM Roles for Service Accounts) is used for accessing AWS services from EKS.


You need to have the following installed on your laptop:

  • eksctl – We will be creating an Amazon EKS Cluster to deploy our Microservice
  • Helm – To deploy all required components
  • kubectl – To connect to our Kubernetes cluster
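A quick way to confirm all three tools are installed and on your PATH (exact output formats vary by version):

```shell
# Confirm the required CLIs are available before starting
eksctl version
helm version --short
kubectl version --client
```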

Create an EKS Cluster

Note: You can skip Step 1 and Step 2 if you already have a Kubernetes cluster.

Step 1: Create a file called cluster.yaml with the following content.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: istio-demo
  region: us-east-1
  version: '1.21'
vpc:
  subnets:
    private:
      us-east-1a: { id: subnet-xxxxxx }
      us-east-1e: { id: subnet-xxxxxx }
  clusterEndpoints:
    publicAccess:  true
    privateAccess: true
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: sa-external-dns
      namespace: external-dns
    attachPolicyARNs:
      - "arn:aws:iam::aws:policy/AmazonRoute53FullAccess"
  - metadata:
      name: sa-cert-manager
      namespace: cert-manager
    attachPolicyARNs:
      - "arn:aws:iam::aws:policy/AmazonRoute53FullAccess"
managedNodeGroups:
  - name: istio-test-nodegroup
    labels: { role: workers }
    instanceType: t3a.medium
    desiredCapacity: 3
    volumeSize: 50
    privateNetworking: true

This YAML file creates an EKS cluster in existing subnets, with both public and private cluster endpoint access, nodes in private subnets, and OIDC enabled.

While creating the EKS cluster, service accounts for cert-manager (sa-cert-manager) and external-dns (sa-external-dns) are created with Route53 access. While creating each service account, eksctl automatically creates its Namespace.

Step 2: Create an Amazon EKS cluster using eksctl after updating the Subnet values

$ eksctl create cluster -f cluster.yaml

This will create the following:

  • EKS Cluster – An EKS Cluster with Kubernetes version 1.21, respective tags and permissions needed for the cluster.
  • ManagedNodeGroups – Creates a Node Group with three nodes of the type t3a.medium with an EBS Volume of 50GB each.
  • IAM – Creates an IAM role with Route53 full access permission and attaches it to the service account and creates a service account in the respective namespace.
  • Namespaces – Creates two namespaces (external-dns, cert-manager) in the cluster.

Once the cluster creation is complete, eksctl downloads and exports the kubeconfig. Verify you can connect to the cluster by running the following command:

$ kubectl get nodes
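You can also confirm that eksctl wired up IRSA for the two service accounts. The commands below (names assume the cluster.yaml above) should each print the ARN of the IAM role eksctl created; an empty result means the role was not attached:

```shell
# IRSA works by annotating each service account with an IAM role ARN
kubectl -n external-dns get sa sa-external-dns \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
kubectl -n cert-manager get sa sa-cert-manager \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```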

Install Istio Service Mesh using Istioctl

We will install the Istio service mesh with the demo configuration profile for this exercise. The demo profile deploys the Istiod, Istio Ingress Gateway, and Egress Gateway components. A Load Balancer is created and attached to the Ingress Gateway.

Step 3: Istioctl is used to install and configure the Istio service mesh. Use the below command to download the latest version of Istio and Istioctl.

curl -L https://istio.io/downloadIstio | sh -
cd istio-<version>
export PATH=$PWD/bin:$PATH
cp -v bin/istioctl /usr/local/bin/

Step 4: Install Istio by selecting the demo configuration profile.

istioctl install --set profile=demo -y

Verify the installation by running:

kubectl -n istio-system get svc
kubectl -n istio-system get pods

Install cert-manager

cert-manager is used to request certificates from Let’s Encrypt. Certificates are issued and renewed automatically. The DNS-01 challenge is used to verify ownership of the domain hosted in Route53.

Helm is used to install and configure cert-manager. Since the service account (sa-cert-manager) was created during cluster creation, we will use it and disable automatic service account creation.

Step 5: Install cert-manager through Helm by running the following commands

helm repo add jetstack https://charts.jetstack.io

helm install cert-manager --namespace cert-manager \
--version v1.5.4  jetstack/cert-manager \
--set serviceAccount.create=false \
--set serviceAccount.name=sa-cert-manager \
--set prometheus.enabled=false \
--set webhook.timeoutSeconds=4  \
--set installCRDs=true \
--set securityContext.fsGroup=1001 \
--set securityContext.runAsUser=1001

Step 6: Create a file called “cluster-issuer.yaml” with the following content. Make sure to update the email address field.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email-address>
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - dns01:
        route53:
          region: us-east-1

Run the below command to deploy the cluster issuer

kubectl apply -f cluster-issuer.yaml

Use the below command to verify that the cluster issuer is installed successfully.

kubectl get ClusterIssuer -n cert-manager letsencrypt 

NAME          READY   AGE
letsencrypt   True    4h32m 
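If READY stays False, describing the issuer shows the ACME account registration status and any error conditions:

```shell
# The Status and Events sections report ACME registration problems,
# such as an invalid email address or an unreachable ACME server
kubectl describe clusterissuer letsencrypt
```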

Install ExternalDNS

ExternalDNS is used to create DNS entries automatically in Amazon Route53 based on the hostname provided in the configuration. 

Helm is used to deploy and configure ExternalDNS. Required values are provided as input during the Helm installation. Istio Gateway and VirtualService are set as the sources to watch for new DNS entries. The “txtOwnerId” value identifies the entries in Route53 that this ExternalDNS instance manages.

Here too, we disable automatic service account creation, as sa-external-dns was already created during cluster creation.

Step 7: Install external-dns through Helm by running the following commands.

helm repo add bitnami https://charts.bitnami.com/bitnami

helm install external-dns bitnami/external-dns -n external-dns \
--set serviceAccount.create=false \
--set serviceAccount.name=sa-external-dns \
--set provider=aws \
--set txtOwnerId=istio-demo \
--set aws.zoneType=public \
--set "sources[0]=istio-gateway" \
--set "sources[1]=istio-virtualservice"

Note: the square brackets in the sources values must be quoted in some shells (such as zsh), otherwise they are treated as glob patterns.

Check the pod logs for errors. If the IRSA role is not assigned properly, external-dns won’t work as expected and you will see access denied errors for Route53.
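For example, a quick way to inspect the logs (the deployment name assumes the Helm release name used above):

```shell
# Healthy logs show records being synced; AccessDenied errors from
# Route53 usually mean the IRSA role annotation is missing or wrong
kubectl -n external-dns logs deploy/external-dns --tail=50
```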

Deploy Bookinfo sample application

Istio provides a sample application with four separate Microservices. We will use the same for our demo. To ease the deployment, we have created a simple helm chart that you can use to deploy the application.

Step 8: Replace <FQDN> in the below commands with the full DNS hostname that you own (for example, bookinfo.example.com) and run the commands.

$ git clone gravity-bookinfo-demo
$ kubectl create ns bookinfo
$ kubectl label namespace/bookinfo istio-injection=enabled
$ helm install -n bookinfo bookinfo gravity-bookinfo-demo/ --set gateway.hostname=<FQDN>
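Because the bookinfo namespace is labeled for istio-injection, each application pod should come up with an Envoy sidecar alongside the app container. A quick check:

```shell
# READY should show 2/2 for each pod: the app container plus the sidecar
kubectl -n bookinfo get pods
```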

If you look at the “gravity-bookinfo-demo” directory (created when you cloned the repo), you will find a structure similar to this:

├── ./gravity-bookinfo-demo/Chart.yaml
├── ./gravity-bookinfo-demo/templates
│   ├── ./gravity-bookinfo-demo/templates/bookinfo.yaml
│   ├── ./gravity-bookinfo-demo/templates/certificate.yaml
│   ├── ./gravity-bookinfo-demo/templates/gateway.yaml
│   ├── ./gravity-bookinfo-demo/templates/_helpers.tpl
│   └── ./gravity-bookinfo-demo/templates/virtualservice.yaml
└── ./gravity-bookinfo-demo/values.yaml

Here are the key YAML manifests that Helm generated when you ran the above helm install command.


bookinfo.yaml

The sample application provided by Istio. It can be found under the samples folder of the Istio release (e.g. istio-1.11.3/samples/bookinfo/platform/kube/bookinfo.yaml).


certificate.yaml

This is the YAML that requests a TLS certificate for the FQDN that you provided. The issued TLS certificate is stored as a Secret in the istio-system namespace.

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: bookinfo
  namespace: istio-system
spec:
  commonName: <FQDN DNS Name that you provided>
  dnsNames:
  - <FQDN DNS Name that you provided>
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt
  renewBefore: 720h0m0s
  secretName: bookinfo-tls

Run the following command to check whether the TLS Certificate is issued successfully.

kubectl get certificate -n istio-system bookinfo
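If the certificate does not reach the Ready state within a few minutes, the intermediate resources that cert-manager creates usually explain why:

```shell
# Events on the Certificate, and the CertificateRequest/Order chain,
# show where the DNS-01 challenge is stuck
kubectl -n istio-system describe certificate bookinfo
kubectl -n istio-system get certificaterequest,order
```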


gateway.yaml

Creates an Istio Gateway for incoming requests. HTTP to HTTPS redirection is enabled, and TLS is configured via credentialName. The Secret name created by the Certificate (the last line of certificate.yaml above) and credentialName must match for TLS to work.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - <FQDN>
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - <FQDN>
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: bookinfo-tls
      mode: SIMPLE


virtualservice.yaml

This creates the Istio VirtualService for the bookinfo Microservices with the appropriate ingress routes.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  gateways:
  - bookinfo
  hosts:
  - <FQDN>
  http:
  - match:
    - uri:
        exact: /
    redirect:
      uri: /productpage
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080

Test the DNS endpoint

Hit the DNS endpoint in your browser and you should see the sample Bookinfo application rendered successfully. When the same URL is requested over HTTP, it should redirect to HTTPS automatically.
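You can verify the same behavior from the command line (replace <FQDN> with your hostname):

```shell
# Expect a 301 redirect on plain HTTP and a 200 over HTTPS
curl -sI "http://<FQDN>/productpage" | head -n 1
curl -sI "https://<FQDN>/productpage" | head -n 1
```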


Conclusion

cert-manager, external-dns, and Let’s Encrypt have become essential components of Kubernetes deployments. With this automation in place, whenever your Microservices are deployed you can rest assured that TLS is automatically provisioned and DNS entries are automatically created at the DNS provider of your choice.

You can have this as part of your GitOps pipeline so that whenever new Kubernetes clusters are created, these components are automatically installed and configured.

Gravity, the Kubernetes platform offered by Invisibl Cloud, automatically takes care of this for you. Clusters are bootstrapped with the above components, and when you deploy your Microservices through Gravity, TLS and DNS are automatically managed. If you are interested in learning more about Gravity, please get in touch with us.
