Sunday, January 4, 2026

Istio Ingress Gateway for direct node ingress with Kubernetes applications

Recently I migrated a few market data applications from Nomad to EKS.

These services are optimised to run within a small, simple footprint where SSL is offloaded to other components. Before the migration, that offloading was handled by an nginx companion container running as a sidecar. On the new setup, Istio is the ingress tool of choice, so there is no need to keep a sidecar.

With a traditional Istio setup, we soon realised that we were introducing additional network hops and increasing service latency:




The network circuit was:
  • Consumers connect to the NLB created for the Istio ingress Service
  • The NLB sends the requests to the worker nodes running Istio Ingress
  • Istio Ingress looks up the application's Service definition and routes the traffic to one of the instances
  • The market data (MD) instance receives the request and processes it
  • The HTTP response is returned to the client

The new platform would be slower, since requests now traverse a newly introduced NLB and an intermediate worker node running Istio Ingress.

Looking around at what was possible with Istio, there is a feature that allows you to configure edge ingress with the Istio Ingress Gateway. Summing up the additional configuration:
  • Make use of external-dns for direct service resolution (see the sketch after this list)
  • Make use of a NodePort service to proxy into this service type with automatic pod resolution
  • Make use of the Ingress Gateway Helm deployment to have Istio running on the application nodes
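
This post assumes external-dns is already deployed and watching Service resources. As a rough sketch of what that looks like (the provider, domain filter and owner id below are assumptions for this kind of setup, not taken from my actual deployment), the relevant container arguments would be along these lines:

# Hypothetical excerpt of an external-dns Deployment; adjust provider/zone to your environment
args:
  - --source=service                               # watch Service annotations
  - --provider=aws                                 # e.g. a Route53 private zone on EKS
  - --domain-filter=localdomain.service.internal   # only manage records in this zone
  - --registry=txt                                 # track record ownership with TXT records
  - --txt-owner-id=my-eks-cluster                  # hypothetical owner id
  - --policy=upsert-only                           # create/update records, never delete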

The resulting picture is:



The resulting network circuit is now:
  • Consumers resolve the endpoint via the DNS records that external-dns populates
  • The edge Istio container receives the request and either routes it locally or forwards it to another MD application node
  • The MD instance receives the request and processes it
  • The HTTP response is returned to the client

The implementation is relatively simple. Since I already had an Istio deployment, I created a new Helm chart deploying the ingressgateway component of Istio:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  labels:
    app: istio-ingressgateway
    istio: ingressgateway
spec:
  selector:
    matchLabels:
      app: istio-ingressgateway
      istio: ingressgateway
  template:
    metadata:
      labels:
        app: istio-ingressgateway
        istio: ingressgateway
      annotations:
        inject.istio.io/templates: gateway
    spec:
      serviceAccountName: istio-ingress # shared account with Istio
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet # This will be needed for edge access

      nodeSelector:
        mylabels/ingressgateway: "true" # only nodes with this label run the Istio Ingress Gateway
      tolerations:
        # Any tolerations? Add them here if the application nodes are tainted

      containers:
  [... from here pretty much same as the original Istio ...]
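
The nodeSelector in the DaemonSet above assumes the application nodes carry the mylabels/ingressgateway label. As a quick sketch (node names here are made up; in practice the label would more likely be set on the EKS node group), labelling the nodes and checking where the gateway landed looks like:

# Label the market data application nodes (example node names)
kubectl label node ip-10-0-1-23.eu-west-1.compute.internal mylabels/ingressgateway=true
kubectl label node ip-10-0-1-24.eu-west-1.compute.internal mylabels/ingressgateway=true

# Confirm the gateway DaemonSet pods are running on those nodes
kubectl -n istio-system get pods -l istio=ingressgateway -o wide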


After getting the Istio Ingress Gateway running on the desired nodes, we need to add the necessary settings on the Market Data application deployment side. The application previously had these relevant components:

  • Service definition (ClusterIP)
  • Gateway
  • VirtualService

First, I modified the Service to add the external-dns annotation with the additional internal DNS name. For this setup the application will have two endpoints:

  • mymarketdata.localdomain.service.internal - the original DNS endpoint
  • igw-mymarketdata.localdomain.service.internal - the Istio Ingress Gateway DNS endpoint

Note: This is an addition to an existing service, so I will only highlight the modifications to a traditional service setup and not the entire setup.

On the existing Service definition, we need to add an annotation for external-dns:

apiVersion: v1
kind: Service
metadata:
  name: market-data-v1
  namespace: marketdata
  annotations:
    external-dns.alpha.kubernetes.io/hostname: igw-mymarketdata.localdomain.service.internal

 [...]

Since this is a ClusterIP service type, we will use a NodePort service to automatically map the gateway pod IPs (which, with hostNetwork, are also the node IPs) onto the target DNS record. Sample NodePort configuration:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-market-data-v1
  namespace: istio-system
  annotations:
    external-dns.alpha.kubernetes.io/hostname: igw-mymarketdata.localdomain.service.internal
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    istio: ingressgateway
  ports:
  - name: https
    port: 443 
    targetPort: 443 # 8443 is the default unless we bind envoy to 443
    nodePort: 30443   # must fall within the NodePort range (default 30000-32767)

Note: NodePort values must fall within the range 30000-32767 by default. Separately, the gateway's Envoy listener binds to 8443 by default; to expose 443 directly on the node, Envoy needs to run as root or otherwise be allowed to bind to low ports such as 443.
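
To cover that second point, one option (a sketch only, assuming your security policy allows it and that the gateway injection template preserves these fields) is to relax the securityContext of the gateway container in the DaemonSet above so Envoy can bind to 443 on the host:

      containers:
        - name: istio-proxy
          image: auto # resolved by the gateway injection template
          securityContext:
            runAsUser: 0
            runAsGroup: 0
            runAsNonRoot: false
            capabilities:
              add: ["NET_BIND_SERVICE"] # allow binding to ports below 1024

The alternative is to leave Envoy on its default 8443 listener and point the NodePort service's targetPort at 8443 instead.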

Now that we have the service modification and nodeport, we need to create the new Gateway:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway-market-data-v1
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - igw-mymarketdata.localdomain.service.internal
      tls:
        mode: SIMPLE
        credentialName: [your cert config]

And the VirtualService:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-market-data-v1
  namespace: marketdata
spec:
  hosts:
    - igw-mymarketdata.localdomain.service.internal
  gateways:
    - istio-system/gateway-market-data-v1
  http:
    - route:
        - destination:
            host: market-data-v1.marketdata.svc.cluster.local
            port:
              number: [your target Service port number] # recent Istio releases only accept a port number here, not a name


Now external-dns should start to publish the IPs of the nodes running the gateway pods under the new DNS name. By default the gateway load balances across the application pods in a round-robin fashion, but we can change this to least connections, random, etc. with a DestinationRule, as shown below.
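
For example, a DestinationRule switching the gateway-to-pod load balancing to least connections could look like this (the resource name is illustrative):

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-market-data-v1
  namespace: marketdata
spec:
  host: market-data-v1.marketdata.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN # or RANDOM / ROUND_ROBIN (LEAST_REQUEST on newer Istio releases)

Once external-dns has synced, the new record can be checked with a plain dig igw-mymarketdata.localdomain.service.internal, which should return the IPs of the nodes running the gateway pods.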



