The Ultimate Guide to Kubernetes Services, LoadBalancers, and Ingress

Exposing Ports in Kubernetes Applications

In this article, we will show how to expose applications running in Kubernetes Pods to other applications inside the cluster and to the external world. We will discuss the three most common ways of doing so: Kubernetes Services, LoadBalancers, and Ingress. We will show how they differ from one another and which one to choose according to your application’s requirements.

Note: LoadBalancers are technically a subtype of Service, but they have unique characteristics and deserve special attention.

Don't have time to read the whole post? Jump to the bottom for a quick table comparison!

Why Services, LoadBalancers and Ingress?

On traditional servers without Kubernetes, applications communicate with each other using DNS names that map directly to IP addresses. In Kubernetes, however, it's not so simple! Applications no longer have a single permanent IP address.

Normally, pods are deployed using higher-level constructs such as Deployments, StatefulSets and DaemonSets. These objects can create and destroy pods dynamically, which makes pods an ephemeral resource. Each pod in the cluster gets its own unique IP address.

Because every new pod gets a new IP address, communicating with individual pods via their IP addresses or DNS names is impractical.

Kubernetes provides a solution to this through Service and Ingress resources. As mentioned, LoadBalancers are a subtype of Service. This post will clarify once and for all the differences between Service vs. LoadBalancer vs. Ingress.

Kubernetes Services

A Kubernetes Service is a logical abstraction that makes it possible to reach applications running inside ephemeral pods. Services create a single, constant point of entry to a group of pods. Each Service has an IP address and port that never change while the Service exists. Clients can open connections to that IP and port, and those connections are then routed to one of the pods backing that Service. Behind the scenes, pods are added to and removed from this mapping as they are created and deleted.

There are three primary types of Kubernetes services:

  • ClusterIP
  • NodePort
  • LoadBalancer

The Ingress resource is not a type of Service, although it serves a somewhat similar purpose. It will be described later.

Kubernetes Service vs Deployment

This is a trick question. You will never need to choose between a Kubernetes Service and a Kubernetes Deployment.

A Deployment makes the application run in your cluster. You still need a Service resource to make it accessible over the network.

The Simplest Possible Example of a Kubernetes Service

Let’s look at how Services are defined before we go into details about each type of Service:
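Here is a minimal example. (This manifest is a sketch: the Service name, label and ports are assumptions, chosen to match an nginx Deployment whose pods carry the label app: nginx and listen on port 80.)

apiVersion: v1
kind: Service
metadata:
  name: nginx            # hypothetical Service name
spec:
  selector:
    app: nginx           # traffic is routed to pods carrying this label
  ports:
    - port: 80           # the port the Service itself exposes
      targetPort: 80     # the port the nginx pods are listening on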

The above Service will provide access to an application running inside pods labeled nginx. The selector defines which backend pods requests will be sent to. The pods that become part of the Service are called its Endpoints.

The parameter spec.ports.targetPort defines which port those pods are listening on. Of course, you need to actually have pods listening on that port which match the Service’s selector! Otherwise you’re knocking on the door of a house where no one is home.

The parameter spec.type defines which type of service should be deployed. If not defined, ClusterIP is used by default. The other options are NodePort and LoadBalancer.

Service Types - ClusterIP vs NodePort vs LoadBalancer

The three Kubernetes Service types and their behaviors are:

  • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.
  • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
  • LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

In Kubernetes you can expose a TCP Service or a UDP Service in any of the above ways. Not all LoadBalancers support UDP though, as it depends on the cloud provider!

ClusterIP Services

This type of Service is the default and exists on an IP that is only reachable from within the cluster. All other pods running in the cluster can access the app through the ClusterIP Service.


NodePort Services

A NodePort Service defines a port (spec.ports.nodePort) which is exposed on all nodes in the cluster. The combination of a node's IP address and that port can then be used to access the application from outside the cluster.
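As a sketch (the names are placeholders, and nodePort must fall inside the cluster's node port range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80           # cluster-internal port of the Service
      targetPort: 80     # port the pods listen on
      nodePort: 30080    # opened on every node; reachable at <NodeIP>:30080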


LoadBalancer Services

A LoadBalancer is just a type of Service with some external dependencies. Kubernetes clusters running on cloud providers support automatic provisioning of a load balancer from the cloud infrastructure. All you need to do is set the Service’s type to LoadBalancer instead of NodePort. The cloud controller manager then handles this and provisions a cloud load balancer whenever such a Service is created on the Kubernetes cluster.

The load balancer will have its own unique, publicly accessible IP address and will redirect all connections to the Service. The Service can therefore be accessed through the load balancer’s IP address, which functions as an entry point into the Kubernetes cluster.

Additional load balancer features can be enabled using annotations made available by the cloud provider.
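For illustration, a LoadBalancer Service might look like this (the name is a placeholder, and the annotation is just one AWS-specific example; the annotations you can use depend entirely on your cloud provider):

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
  annotations:
    # provider-specific feature flag, e.g. requesting an AWS Network Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80           # port exposed by the cloud load balancer
      targetPort: 80     # port the pods listen on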


Ingress

Each time a LoadBalancer type Service is created in a cloud environment, a new cloud load balancer is provisioned. Paying for a separate load balancer per Service quickly adds up, which may not be feasible.

An Ingress resource is a standalone construct in Kubernetes that requires only one load balancer, even when providing access to dozens of Services. When a client sends an HTTP request to the Ingress, the host and path in the request determine which Service the request is forwarded to.

Ingresses are used together with Services to expose applications running in Pods. An Ingress alone cannot route traffic to a Pod! It must route traffic to a Service which points to the Pod.

Ingresses operate at the application layer of the network stack (HTTP or HTTPS) and can provide features such as host- and path-based routing, TLS termination, cookie-based session affinity and others, which Services can’t. For example, imagine that Kubernetes.io ran on Kubernetes and you wanted to route HTTP requests for the URL https://kubernetes.io/service/?param=true to one pod and requests to https://kubernetes.io/service/?param=false to another pod. This would be impossible with a Service, because the two HTTP requests both target the same IP address and DNS name. They are identical at layer 4 and only differ once you parse the actual HTTP request after terminating TLS.

To make Ingress resources work, an Ingress controller needs to be running in the cluster. Different Kubernetes environments use different implementations of the controller, but several don’t provide a default controller at all.

Often, Ingress controllers are not started automatically with a cluster. There are various options to choose from, which can be found in the official Kubernetes documentation.
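For example, one popular option is ingress-nginx; a typical Helm installation looks roughly like this (the release name and namespace are up to you):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace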

Let’s have a look at a simple Ingress definition:
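Something along these lines (the Ingress name and the Service port are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - http:
        paths:
          - path: /example          # any request starting with /example...
            pathType: Prefix
            backend:
              service:
                name: nginx-example # ...is forwarded to this Service
                port:
                  number: 80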

This definition routes any request whose path starts with /example to a Service named nginx-example. We can also perform virtual-host-based routing using an Ingress definition. For example:
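A sketch with made-up hostnames and Service names:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-host-ingress
spec:
  rules:
    - host: app.example.com         # requests for this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service   # ...go to this Service
                port:
                  number: 80
    - host: api.example.com         # while requests for this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service   # ...go to another one
                port:
                  number: 80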


Common Questions About K8s Services

On Kubernetes, Can You Have Multiple Versions of the Same Service?

Sort of. You can create multiple Deployments to run different versions of the same application. Then you can create a Service for each version of the application, or a single Ingress that routes to all of them based on the URL path.

If you do this, make sure each Service defines a selector that uniquely matches the Deployment running the version you want.
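For instance, with hypothetical app and version labels, each Service's selector pins it to the pods of exactly one Deployment:

apiVersion: v1
kind: Service
metadata:
  name: myapp-v1
spec:
  selector:
    app: myapp
    version: v1          # only pods carrying both labels become endpoints
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-v2
spec:
  selector:
    app: myapp
    version: v2          # same app label, different version label
  ports:
    - port: 80
      targetPort: 8080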

In Kubernetes Can You Reach a Service in Another Namespace?

Yes! We take advantage of this at Robusta.dev all the time when forwarding Prometheus alerts from an external Prometheus to Robusta. You can see an example in our docs for configuring Prometheus runbooks.

When accessing a service in another namespace, use the following format: http://<service-name>.<namespace-name>.svc.cluster.local
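For example, assuming a Service named prometheus listening on port 9090 in a namespace called monitoring, any pod in the cluster could reach it with:

curl http://prometheus.monitoring.svc.cluster.local:9090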

In Kubernetes Can You Ping a Service?

No, at least not for all Service types. Services can pass TCP and UDP traffic from the Service to the relevant pods, but they don't pass ICMP traffic, which is what ping requires.

If you are trying to debug an HTTP(S) service, you can use wget or curl instead.
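For example, to check connectivity from inside the cluster (the pod and Service names here are placeholders, and the pod's image is assumed to contain curl):

kubectl exec my-debug-pod -- curl -v http://my-service.default.svc.cluster.local:80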

What is a Kubernetes Headless Service?

A Kubernetes headless Service lets you track which pods are logically part of a Service without creating a single entry point for that Service in the form of one IP address and port.

Headless Services are useful for edge cases where you need Kubernetes to interoperate with external systems, and for other advanced use cases.
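A headless Service is created by setting clusterIP to None; a minimal sketch (names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
spec:
  clusterIP: None        # no single virtual IP; DNS returns the individual pod IPs
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080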

Which Kubernetes Service Type or Ingress Should You Use?

A ClusterIP service is a good default choice for internal Services that don't need to be accessed outside the cluster.

For Services that you'd like to expose externally, a LoadBalancer is a good choice unless you specifically need layer 7 functionality that only an Ingress allows.

Each case is unique though, so we've put together a convenient table summarizing the differences between all the Service types and Ingress features.

Kubernetes Services Comparison - ClusterIP vs NodePort vs LoadBalancer vs Ingress
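|                         | ClusterIP                  | NodePort                     | LoadBalancer                            | Ingress                                                          |
|-------------------------|----------------------------|------------------------------|-----------------------------------------|------------------------------------------------------------------|
| Reachable from outside  | No (cluster-internal only) | Yes, at <NodeIP>:<NodePort>  | Yes, via the load balancer's public IP  | Yes, via the Ingress controller                                  |
| Network layer           | Layer 4 (TCP/UDP)          | Layer 4 (TCP/UDP)            | Layer 4 (TCP/UDP, provider permitting)  | Layer 7 (HTTP/HTTPS)                                             |
| Host/path routing & TLS | No                         | No                           | No                                      | Yes                                                              |
| External dependencies   | None                       | None                         | One cloud load balancer per Service     | An Ingress controller; one load balancer can serve many Services |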

Next Steps

Now the fun part begins - serving traffic from your application! However, there is still some work to do.


Monitoring Kubernetes Services

The industry standard for monitoring Kubernetes is Prometheus and Alertmanager, often in combination with a service mesh.

To get started, you will need to install Prometheus on Kubernetes and define your alerts.

To help you get started, we've created an all-in-one bundle for Kubernetes monitoring with Prometheus, AlertManager, Grafana, and Robusta with good default alerts that don't require fine-tuning. Have a question about one of the alerts, what it means, or how to fix it? Just open an issue on GitHub to ask or reach out on LinkedIn/Twitter. We also have a community Slack channel to answer all your Kubernetes monitoring questions!


Credits

Thank you to Avinash Nadendla and Daniel Finneran for providing feedback on this post! If you're looking for a Kubernetes Load Balancer, be sure to check out kube-vip which Daniel works on!
