How to Expose Deployed Applications in Kubernetes?

By Rohit Ghumare · 7 min read

Kubernetes has emerged as the definitive container orchestration platform. Its ability to manage and scale containerized applications has changed how software is developed, deployed, and managed. According to the CNCF Annual Survey 2021, 96% of organizations are either using or evaluating Kubernetes, the highest share recorded since the survey began in 2016.

However, one fundamental challenge that many organizations face when working with Kubernetes is how to effectively expose their deployed applications to the outside world. (Learn Kubernetes in Production: Tips and Tricks for Managing High Traffic Loads)

In this article, we will delve into the various methods for exposing applications in Kubernetes, as well as challenges faced by organizations, equipping you with the knowledge to securely and efficiently make your applications accessible to end users. 

Challenges of Exposing Deployed Applications in Kubernetes

When working with Kubernetes, organizations face several challenges when effectively exposing their deployed applications. These challenges include:

  • Connectivity: Ensuring that the application is accessible to users outside the Kubernetes cluster and establishing connectivity between the application and external networks.
  • Security: Implementing secure access mechanisms and protecting the application from unauthorized access or potential vulnerabilities.
  • Routing and Load Balancing: Efficiently routing incoming requests to the appropriate application instances and distributing traffic evenly across multiple replicas or nodes.
  • Configuration Complexity: Dealing with the complexity of configuring and managing the various components involved in exposing applications, such as services, load balancers, and ingress controllers.

How Does Kubernetes Help Overcome These Challenges?

By offering scalability, reliability, security, load balancing, and automation, Kubernetes addresses the challenges organizations face when exposing applications. It offers numerous benefits, which are given below. (Also, learn Kubernetes Deployment: How to Run a Containerized Workload on a Cluster.)

  • Scalability: By leveraging Kubernetes, applications can dynamically scale in response to demand, effectively managing hosting expenses.
  • Reliability: Kubernetes automatically restarts failed Pods, helping maintain availability even when individual Pods encounter problems.
  • Security: Kubernetes offers robust security measures such as role-based access control (RBAC) and network policies, shielding applications against unauthorized entry.
  • Load Balancing and Traffic Management: Kubernetes simplifies load balancing and traffic management for exposed applications. Kubernetes evenly distributes incoming traffic across multiple application replicas or nodes by utilizing services and load balancers. 
  • Automation: Kubernetes embraces the principles of automation, enabling organizations to define and manage the entire application deployment and exposure process declaratively.

Methods for Exposing Applications in Kubernetes

As a container orchestration system, Kubernetes provides a range of options for deploying and managing containerized applications. However, deploying an application does not by itself make it accessible to end users. To reach users, the application must be exposed appropriately.

Kubernetes offers different Service types for exposing applications, each with its own purpose and use cases. The three primary methods for exposing applications are NodePort, LoadBalancer, and Ingress.

The choice of exposure method depends on your requirements. When access from the internet is necessary, NodePort Services, LoadBalancers, or Ingress controllers are viable options. If the application only needs to be reachable by other workloads inside the cluster, a standard ClusterIP Service typically suffices.

Let’s explore each method in detail.

Method 1: NodePort

NodePort is a Service type that exposes an application on a static port on every node in the cluster. This allows the application to be reached from outside the cluster by combining any node's IP address with the port number on which the Service is listening.

To utilize NodePort for application exposure, the following steps should be followed:

  • Establish a Deployment: Deploy the application as a Kubernetes Deployment, which effectively manages the life-cycle of the application’s Pods, ensuring their continuous operation.
  • Define a Service: Create a Service YAML file and designate its type as `NodePort`. Within the Service definition, establish a connection to the Deployment through the use of labels and selectors.

Below is an example of a Service YAML file named `myapp-service.yaml` that exposes an application on port `8080`:
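A minimal manifest could look as follows. The label `app: myapp` and the container port `8080` are assumptions for illustration; match them to your own Deployment:

```yaml
# myapp-service.yaml -- illustrative example; adjust names, labels,
# and ports to match your Deployment.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  selector:
    app: myapp          # must match the Pod labels of your Deployment
  ports:
    - port: 8080        # port the Service listens on inside the cluster
      targetPort: 8080  # port the application container listens on
      # nodePort: 30080 # optional; omit to let Kubernetes pick one
```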

  • Utilize the Service: Apply the Service YAML by executing the `kubectl apply` command.

Check the Service to make sure it is of type `NodePort` and associated with your application's Deployment. You can use the `kubectl get services` command to list the Services and verify the configuration.
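Assuming the Service manifest was saved as `myapp-service.yaml`, applying and verifying it could look like this (the output is illustrative; your cluster IP and node port will differ):

```shell
# Apply the Service, then verify its type and port mapping.
kubectl apply -f myapp-service.yaml
kubectl get services

# Illustrative output:
# NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
# myapp-service   NodePort   10.96.120.15   <none>        8080:32400/TCP   5s
```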

After the Service is created, Kubernetes automatically allocates a port from the node port range (30000-32767 by default) on each node and forwards traffic arriving on that port to the Service.

To reach the application, utilize the node’s IP address and the assigned node port. In the case of a cloud provider, you can obtain the external IP address of any node and access the application using http://<node-ip>:<node-port>.

For instance, if the node IP is 192.168.1.100 and the assigned node port is 32400, you can access the application at http://192.168.1.100:32400.

Although NodePort offers a straightforward method to expose applications, it may not be optimal for production scenarios as it necessitates direct access to specific nodes.

Method 2: LoadBalancer

A LoadBalancer is a specialized service that expands the functionality of NodePort by providing a means to expose an application on a specific IP address and port. This enables access to the application from any location on the internet using the load balancer’s unique IP address and port number.

To use LoadBalancer to expose an application, follow these steps:

  • Create a Deployment: Deploy the application as a Kubernetes Deployment. This manages the Pods that run the application and ensures their continuous operation.
  • Define a Service: Create a Service YAML file and designate its type as `LoadBalancer`. Within the Service definition, establish a connection to the Deployment through the use of labels and selectors.

Below is an example of a Service YAML file named `myapp-service.yaml` that exposes an application using the LoadBalancer method on port `80`:
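A minimal manifest could look as follows. As before, the label `app: myapp` and the container port `8080` are assumptions for illustration:

```yaml
# myapp-service.yaml -- illustrative example; adjust names, labels,
# and ports to match your Deployment.
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  selector:
    app: myapp         # must match the Pod labels of your Deployment
  ports:
    - port: 80         # port exposed by the cloud load balancer
      targetPort: 8080 # port the application container listens on
```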

  • Apply the Service: Apply the Service YAML using the `kubectl apply` command:
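Using the filename from earlier, this step could look like:

```shell
kubectl apply -f myapp-service.yaml

# Watch until the cloud provider assigns an external IP
# (EXTERNAL-IP shows <pending> until provisioning completes).
kubectl get services --watch
```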

Once you have created the Service, Kubernetes establishes communication with the cloud provider in order to set up a load balancer. This load balancer is responsible for receiving external traffic and evenly distributing it among the nodes that are running your application.

To access the application, you can make use of the external IP address or DNS name associated with the load balancer. Obtaining the external IP address can be done by using the command `kubectl get services` or by referring to the load balancer configuration provided by the cloud provider.

For instance, if the external IP address is `203.0.113.10`, you can access the application by visiting `http://203.0.113.10`.

Using a LoadBalancer is an efficient, user-friendly way to expose applications in cloud environments. It eliminates manual load-balancing configuration, since traffic is automatically distributed across the nodes running your application, improving resource utilization and overall performance.

It’s important to note that LoadBalancer functionality relies on support from the underlying cloud provider. On clusters without such support (for example, bare-metal environments), alternative approaches may be needed to achieve similar load-balancing capabilities. Where it is available, however, LoadBalancer remains one of the most convenient ways to keep applications available and responsive.

Method 3: Ingress

In Kubernetes, Ingress is a powerful API object that facilitates external access to services within the cluster. It functions as an intelligent router, enabling the establishment of traffic routing rules based on the requested host or path. For Ingress to work effectively, an Ingress controller must be deployed in the cluster to handle incoming traffic and direct it to the appropriate services.

To expose an application using Ingress, adhere to the following steps:

  • Deploy an Ingress Controller: Begin by deploying an Ingress controller in your cluster. Several options are available, including the NGINX Ingress Controller, Traefik, and HAProxy. The installation process varies depending on the chosen controller; consult the documentation provided by the controller’s maintainers for precise installation instructions.
  • Create an Ingress Resource: Formulate an Ingress resource YAML file to configure the routing rules for your application. The Ingress resource defines the desired host, paths, and the corresponding service responsible for handling the incoming traffic. 
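As one illustration of the first step, the NGINX Ingress Controller can be installed with Helm (assuming Helm is available; check the project’s documentation for the currently recommended procedure):

```shell
# Install or upgrade the NGINX Ingress Controller in its own namespace.
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```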

Here’s an example of an Ingress resource YAML file named `myapp-ingress.yaml`:
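A minimal manifest could look as follows. The `ingressClassName: nginx` setting assumes the NGINX Ingress Controller; adjust it, the host, and the Service name to your environment:

```yaml
# myapp-ingress.yaml -- illustrative example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx     # assumes the NGINX Ingress Controller
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service  # Service created earlier
                port:
                  number: 80
```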

In this example, the Ingress resource routes traffic for `myapp.example.com` to the Service named `myapp-service` on port `80`.

  • Apply the Ingress Resource: Apply the Ingress resource YAML using the `kubectl apply` command:
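Using the filename above, this step could look like:

```shell
kubectl apply -f myapp-ingress.yaml

# Verify the rule was registered and note the assigned address.
kubectl get ingress
```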

The Ingress controller continuously watches for new Ingress resources, so once the resource is created, the controller automatically adjusts its routing rules accordingly.

To reach the application, proper configuration of DNS or host file entries is necessary, directing the specified host (`myapp.example.com` in this case) to the IP address of the cluster or load balancer.
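For quick local testing without public DNS, a hosts-file entry can map the hostname to the Ingress controller's address (the IP below is an example; substitute your own):

```
# /etc/hosts (example entry)
203.0.113.10  myapp.example.com
```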

Ingress offers advanced routing capabilities such as SSL termination, path-based routing, and load balancing, enabling dynamic rules for exposing applications within Kubernetes.

Overall, Ingress is a versatile and robust approach to expose applications effectively in Kubernetes.

Expose and Scale Applications: the Taikun Way

Deploying a Kubernetes application and exposing it to end users can be a tedious task in itself, since it demands considerable precision from developers. Commonly observed challenges include security, network configuration, scaling and support, debugging, and the sheer time involved. The issues can also be more scenario-specific, depending on different functional patterns.

To help avoid such issues, Taikun provides you with an effortless and effective way to deploy applications to the end user. It helps you optimize your cloud operations and streamline the process of exposing deployed applications. With its real-time monitoring and cloud automation capabilities, Taikun empowers businesses to deliver an exceptional user experience while enhancing security and data protection.

By leveraging Taikun’s comprehensive suite of cloud management services, users can effortlessly orchestrate applications and Kubernetes, provision and manage virtual machines, optimize costs, and gain observability into their cloud infrastructure. Start your journey with Taikun today and unlock the full potential of your cloud infrastructure. Try Taikun for free, or schedule a call with us today.