Guides

2023 Guide to Load Balancers: A Comprehensive Overview by NexGen Architects

NEXGEN ARCHITECTS' 2023 GUIDE TO LOAD BALANCERS 101: covers the types of load balancers available in MuleSoft, how to create and configure a dedicated load balancer, and how URL and host mapping rules work.

Posted on
June 28, 2023

In the realm of MuleSoft, load balancing serves as a critical mechanism for optimizing performance and ensuring scalability. By evenly distributing incoming network traffic across multiple servers or instances, load balancers enable efficient resource utilization and enhance the overall reliability of integration deployments.

In this blog post, we will explore the concept of load balancing in MuleSoft, its significance in achieving optimal performance, the different types of load balancers available, and the process of creating load balancers. Let's dive in!
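
To make the idea of distributing traffic concrete, here is a minimal sketch of a round-robin strategy, one of the simplest ways a load balancer can spread requests across backends. The backend URLs and the rotation logic are purely illustrative, not part of any MuleSoft API.

```python
from itertools import cycle

# Hypothetical backend instances sitting behind the load balancer.
BACKENDS = [
    "http://backend-1.internal:8081",
    "http://backend-2.internal:8081",
    "http://backend-3.internal:8081",
]

# Round-robin: each incoming request is handed to the next backend in turn,
# so no single instance absorbs all of the traffic.
_rotation = cycle(BACKENDS)

def pick_backend() -> str:
    """Return the backend that should handle the next request."""
    return next(_rotation)

if __name__ == "__main__":
    for i in range(6):
        print(f"request {i} -> {pick_backend()}")
```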

Image source: MuleSoft

Types of Load Balancers

There are primarily two types of load balancers available in MuleSoft:

Dedicated Load Balancer

A dedicated load balancer is provisioned exclusively for a specific application or a group of closely related applications. It is deployed and configured solely to balance traffic for those applications, and its resources are not shared with any other applications or services. This ensures consistent performance and isolation for the applications it serves.

A VPC (Virtual Private Cloud) is a virtual network infrastructure within a cloud computing environment that provides isolated and customizable network settings for the resources deployed in it, and it ensures secure communication and control over networking aspects. A DLB (Dedicated Load Balancer) is deployed inside an Anypoint VPC and evenly distributes incoming network traffic across multiple instances, improving the availability and scalability of applications while maintaining high performance and fault tolerance.

Shared Load Balancer

A shared load balancer, on the other hand, is a load balancer that is shared among multiple applications or services. It is a centralized load-balancing solution that serves multiple applications simultaneously. In a shared load balancer setup, multiple applications or services share the same load balancer's resources, such as network capacity, processing power, and configurations. This approach allows for efficient resource utilization, as multiple applications can benefit from a single load-balancing infrastructure.

The shared load balancer does come with limitations. First, it enforces a maximum number of requests it can process per second, which can impact overall performance during peak traffic. Second, port availability is limited, so potential conflicts need to be considered when configuring services. Lastly, the shared load balancer may use self-signed certificates, which can cause issues with client trust and security. Addressing these limitations is crucial for smooth operation and a reliable user experience.
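
As a rough illustration of coping with the request-per-second limit, the sketch below retries a call through the shared load balancer with exponential backoff whenever the response suggests throttling or overload. The application host is a made-up example following the usual {app}.{region}.cloudhub.io pattern, and the status codes chosen for retry are assumptions rather than documented behaviour.

```python
import time
import urllib.error
import urllib.request

# Hypothetical application exposed through the CloudHub shared load balancer;
# the host name is only an example of the {app}.{region}.cloudhub.io pattern.
APP_URL = "https://my-demo-app.us-e1.cloudhub.io/api/ping"

def call_with_backoff(url: str, attempts: int = 5) -> str:
    """Call the app, backing off when the response suggests throttling."""
    delay = 1.0
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as err:
            # 429/503 are treated here as signs of throttling or overload (an assumption).
            if err.code in (429, 503) and attempt < attempts - 1:
                time.sleep(delay)
                delay *= 2
                continue
            raise
    raise RuntimeError("all retry attempts exhausted")

if __name__ == "__main__":
    print(call_with_backoff(APP_URL))
```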

Other limitations of a shared load balancer include limited control over load-balancing settings, potential performance and scalability issues caused by the shared resources, dependency on the provider's infrastructure, and contention among the users or applications sharing it, which restricts customization options and fine-grained traffic management.

What is the other kind of Load Balancer?

External Load Balancer

NGINX is an external load balancer commonly used for distributing incoming network traffic across multiple servers. It acts as a reverse proxy, optimizing performance and improving scalability. NGINX offers advanced load balancing algorithms, caching capabilities, and high availability features, making it a popular choice for handling web traffic efficiently.  

Image Source: MuleSoft

By distributing incoming traffic across multiple servers or data centers, external load balancers provide high availability and disaster recovery capabilities. They ensure continuous service availability, mitigate server failures, and enable quick recovery from disasters, which improves overall system reliability and minimizes downtime.
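
To illustrate the high-availability behaviour described above, here is a small sketch of how a load balancer can skip backends that fail a health check and route only to the healthy ones. This is not how NGINX itself is implemented; the backend addresses and the /health endpoint are assumptions used purely for illustration.

```python
import urllib.error
import urllib.request

# Illustrative backend pool; the hosts and the /health endpoint are assumptions.
BACKENDS = [
    "http://10.0.1.10:8081",
    "http://10.0.1.11:8081",
    "http://10.0.1.12:8081",
]

def is_healthy(backend: str, timeout: float = 2.0) -> bool:
    """A backend counts as healthy if its health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{backend}/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def pick_backend(request_count: int) -> str:
    """Round-robin over healthy backends only, so failed servers are skipped."""
    pool = [b for b in BACKENDS if is_healthy(b)] or BACKENDS
    return pool[request_count % len(pool)]
```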

Difference between a Dedicated and Shared Load Balancer

The choice between a dedicated load balancer and a shared load balancer depends on the specific requirements and characteristics of the applications or services being deployed. Dedicated load balancers are often used when applications have unique requirements, need maximum performance, or require strict isolation (VPC) from other applications. On the other hand, shared load balancers are suitable when multiple applications can coexist and share resources efficiently, reducing operational costs and simplifying infrastructure management.  

It's important to carefully consider factors such as performance requirements, scalability, isolation needs, and cost-effectiveness when deciding between a dedicated or shared load balancer. The choice should align with the specific needs of the applications and the overall architecture and goals of the deployment.

Creating Load Balancers in MuleSoft

Creating load balancers in MuleSoft involves several steps to ensure optimal performance and functionality. Here's a high-level overview of the process:

  • First, define the backend servers: identify and configure the backend servers or instances that will handle the incoming requests. These servers can be physical machines, virtual instances, or containers.
  • Then set up the load balancer: configure the load balancer in MuleSoft, specifying the type of load balancer and the mapping rules that will determine how requests are distributed.

Setting up a Dedicated Load Balancer

Setting up a dedicated load balancer for your web applications can seem a bit complex, but it doesn't have to be. Allow me to walk you through the process of setting up a dedicated load balancer; it is as simple as following these 12 steps.

1. Access the Runtime Manager in Anypoint Platform.

2. Navigate to Load Balancers and select "Create Load Balancer."

3. Provide a unique name for your load balancer, optionally including your organization's name.

4. Choose an Anypoint VPC as the target for your load balancer.

5. Set the timeout value for Mule application responses (default is 300 seconds).

6. Add any necessary allow listed CIDR IP addresses for access control (default is 0.0.0.0/0).

7. Select the inbound HTTP mode: "Off," "On" (using HTTP), or "Redirect" (redirecting to HTTPS).

8. Specify additional options, such as enabling static IPs, URL encoding, TLS versions, and client certificate forwarding.

9. Add a certificate, either by uploading a file or configuring client certificate verification.

10. If desired, add URL mapping rules for routing requests, ordering them by priority.

11. Save the certificate.

12. Finally, click "Create Load Balancer" to complete the process.
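
The same setup can also be automated. The sketch below shows how the dedicated load balancer configured in the steps above might be created through the Anypoint Platform's CloudHub REST API with a bearer token. The endpoint path, payload field names, and placeholder IDs are assumptions based on the general shape of that API, not a verified contract, so check the official Anypoint Platform documentation before relying on them.

```python
import json
import urllib.request

# Placeholder credentials and IDs; replace with real values from Anypoint Platform.
ANYPOINT_TOKEN = "<bearer-token>"
ORG_ID = "<organization-id>"
VPC_ID = "<anypoint-vpc-id>"

# Assumed endpoint for creating a DLB inside a VPC; verify against the API docs.
DLB_ENDPOINT = (
    "https://anypoint.mulesoft.com/cloudhub/api/organizations/"
    f"{ORG_ID}/vpcs/{VPC_ID}/loadbalancers"
)

# Assumed payload field names mirroring the wizard steps above.
payload = {
    "name": "nexgen-dlb",          # step 3: unique load balancer name
    "httpMode": "redirect",        # step 7: "Off", "On", or "Redirect" to HTTPS
    "ipAllowlist": ["0.0.0.0/0"],  # step 6: allow-listed CIDR ranges
    "responseTimeout": 300,        # step 5: Mule application response timeout in seconds
}

request = urllib.request.Request(
    DLB_ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {ANYPOINT_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment once real IDs, a token, and the certificate details are in place:
# with urllib.request.urlopen(request) as resp:
#     print(resp.status, resp.read().decode())
```

In practice you would also attach the certificate from step 9 and the URL mapping rules from step 10 to the request, in whatever shape the current API expects.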

Mapping Rules in Dedicated Load Balancers:

Mapping rules in dedicated load balancers enable you to define criteria such as URL paths, request methods, or headers to determine the appropriate routing. This allows for efficient load balancing, improved scalability, and optimized resource allocation. Mapping rules provide you with the flexibility needed to handle different routing scenarios, ensuring that requests are appropriately distributed and processed for a seamless user experience.

Firewall Mapping in MuleSoft VPC:

Firewall mapping in MuleSoft's VPC (Virtual Private Cloud) allows you to define rules that control network traffic, ensuring secure and efficient communication within your MuleSoft applications. By configuring firewall rules, you can permit or restrict access based on IP addresses, ports, or protocols.  

This helps protect your applications from unauthorized access and potential security threats. Firewall mapping provides granular control over incoming and outgoing traffic, allowing you to establish secure connections and enforce communication policies. With proper firewall mapping in MuleSoft's VPC, you can safeguard your applications, maintain data integrity, and ensure seamless and secure interactions between your systems and external entities.
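
As a purely illustrative example, firewall rules of this kind are often expressed as allow entries keyed by CIDR block, protocol, and port range. The rule structure below is an assumption made for illustration and is not the exact schema Anypoint VPC firewall rules use.

```python
from ipaddress import ip_address, ip_network

# Illustrative inbound allow rules; the field layout is an assumption.
FIREWALL_RULES = [
    {"cidr": "10.0.0.0/16", "protocol": "tcp", "from_port": 8081, "to_port": 8082},
    {"cidr": "192.168.1.0/24", "protocol": "tcp", "from_port": 8091, "to_port": 8092},
]

def is_allowed(source_ip: str, protocol: str, port: int) -> bool:
    """Return True if any rule permits this source IP, protocol, and port."""
    addr = ip_address(source_ip)
    return any(
        addr in ip_network(rule["cidr"])
        and protocol == rule["protocol"]
        and rule["from_port"] <= port <= rule["to_port"]
        for rule in FIREWALL_RULES
    )

print(is_allowed("10.0.4.7", "tcp", 8081))    # True: inside 10.0.0.0/16 on an allowed port
print(is_allowed("172.16.0.9", "tcp", 8081))  # False: no rule covers this source range
```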

URL Mapping

This is the process of associating specific URLs with corresponding resources or actions on a web server. It ensures that incoming requests to specific URLs are routed correctly, allowing efficient handling of web page requests, API calls, or content delivery based on predefined rules and configurations.  

URL mapping simplifies website organization, aiding user navigation and improving search engine optimization. It connects each URL to the right content, making it easy for visitors to find information.

Example URL mapping rule:

Rule # | Input Path | Target App | Output Path | Protocol
1 | /{app}/ | {app} | / | http

With this rule, a request to the load balancer at a path such as /orders/ is routed to the application named orders, and the path is rewritten to / before it reaches the application.
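
To show how a rule like the one above behaves, here is a small, hypothetical sketch that matches the /{app}/ input path against an incoming request, extracts the target application name, and rewrites the path. The dedicated load balancer performs this routing internally; the code only illustrates the mapping logic.

```python
def apply_url_mapping(request_path: str):
    """Apply the rule: Input Path /{app}/ -> Target App {app}, Output Path /."""
    segments = [s for s in request_path.split("/") if s]
    if not segments:
        return None  # nothing to match against the {app} placeholder
    app = segments[0]                            # first path segment becomes the target app
    output_path = "/" + "/".join(segments[1:])   # the remainder is forwarded to the app
    return {"target_app": app, "output_path": output_path}

print(apply_url_mapping("/orders/"))          # {'target_app': 'orders', 'output_path': '/'}
print(apply_url_mapping("/orders/v1/items"))  # {'target_app': 'orders', 'output_path': '/v1/items'}
```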

Host Mapping

Host mapping involves associating domain names with specific IP addresses or server locations. It enables proper resolution of domain names to their corresponding servers, ensuring that incoming requests reach the correct destination. Host mapping plays a crucial role in website hosting, load balancing, and directing traffic to the appropriate server based on the requested domain.

It also enables load balancing across servers, enhancing performance and delivering an optimal user experience, and it simplifies website access by associating domain names with IP addresses.

Example host mapping rule:

Rule # | Input Path | Target App | Output Path | Protocol
1 | / | {subdomain} | / | https

With this rule, a request to the root path on a given subdomain is routed to the application that matches that subdomain, with the path passed through unchanged.
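
Similarly, the sketch below illustrates the host mapping idea from the rule above: the subdomain is taken from the request's Host header and used as the target application, with the path passed through unchanged. It is a hypothetical illustration of the {subdomain} rule, not the dedicated load balancer's actual implementation.

```python
def apply_host_mapping(host_header: str, request_path: str):
    """Apply the rule: host {subdomain}.<lb-domain> -> Target App {subdomain}."""
    subdomain = host_header.split(".")[0]  # e.g. "orders" from "orders.lb.example.com"
    return {"target_app": subdomain, "output_path": request_path, "protocol": "https"}

print(apply_host_mapping("orders.lb.example.com", "/"))
# {'target_app': 'orders', 'output_path': '/', 'protocol': 'https'}
```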

Stay tuned for more such insights on MuleSoft components and Architecture as a Service (AaaS). If you are stuck or would like to know more about our MuleSoft Architect services, we are just a click away. Reach out to our NexGen’s Chief Architect at hello@nexgenarchitects.com.