Key Points You Should Know to Better Understand the App Service Plan

Introduction

A correct understanding of the App Service Plan is essential to using App Service correctly in production. However, we often see cases where its detailed behavior is misunderstood and causes trouble, probably because many rough explanations describe the App Service Plan as just "the specifications of the machine." I will cover the key points here.

App Service Plan

Let’s consider: is an App Service Plan == a box for apps?

Whenever you create a new web app, you need to select an existing App Service Plan or create a new one. This tells Azure what kind of hardware you want to use. Yes, that’s the rough explanation. Let’s take a closer look from here.

What exactly is an instance?

To put it simply, it is a machine. In other words, it’s a VM.

What is the difference between the Free plan and the Basic plan and above?

With the Free plan, your app runs on the same instance as other users’ apps. In other words, it is influenced by other apps. With Basic and above, a dedicated instance is assigned, so other users will not use it. The disk is reimaged before the instance is allocated, so no other user’s data is visible to you, and your data is not visible to other users.

What happens if you put multiple apps in one App Service Plan?

If you have multiple apps in one App Service Plan and the number of instances is 1, all apps run on the same machine. In other words, all apps share one machine’s CPU, memory, and network bandwidth (hereafter simply referred to as computational resources). So if one app uses a lot of memory, the amount of memory available to the other apps is reduced accordingly.
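As a rough illustration of this contention, consider a simple budget model (the numbers below are hypothetical, chosen only for the example):

```python
# Hypothetical model: apps in one App Service Plan share one instance's
# memory. The instance size below is made up for illustration.
INSTANCE_MEMORY_MB = 3584  # e.g. roughly an S2-class instance

def memory_left_for_others(used_by_one_app_mb, total_mb=INSTANCE_MEMORY_MB):
    """Memory remaining for every other app on the same instance."""
    return max(total_mb - used_by_one_app_mb, 0)

# If one app balloons to 3 GB, the rest of the apps (and Kudu) must
# fit into what is left.
print(memory_left_for_others(3072))  # 512
```

The point is not the exact numbers but the shape of the problem: the instance’s memory is a single shared pool, and one greedy process shrinks it for everyone else in the plan.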

What is Kudu ?

Kudu (Advanced Tools in the portal) is a useful feature for deploying to App Service and for quick diagnostics. Kudu is itself an independent web app, so when you deploy or open a Kudu session, your app and the Kudu app share the same computational resources.

Case: what if you run three apps in one App Service Plan and deploy to all of them at the same time?

Kudu runs individually for each app, which means that a total of 6 apps (3 apps plus 3 Kudu apps) share the computational resources. Simply think of it as having twice as many apps running.

Slots in App Service

One of the useful features of App Service is the slot function. In short, a slot is just another app with the same name. So, for example, if you have 3 slots (DEV, STAGE, PRD), 3 apps share the computational resources. Kudu also starts separately for each slot.

Simple Example

Environment:

  • I created three apps (app1, app2, app3) in one App Service Plan (each runs as a w3wp.exe process)
  • app1 and app2 have two slots (PRD, STAGE)
  • app3 has three slots (DEV, STAGE, PRD)
  • We keep a Kudu session open for every slot.

Explanation:

  • In this case, (2 + 2 + 3) × 2 = 14 w3wp processes share the computational resources.
  • If any one of them issues an extremely large I/O request or consumes a lot of SNAT ports, the other 13 are affected all at once.
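The arithmetic above can be sketched as a small helper (the app/slot layout is the one from the example):

```python
def w3wp_count(slots_per_app, kudu_open=True):
    """Count w3wp.exe processes sharing one App Service Plan's resources.

    Each slot is an independent app, and each app/slot gets its own
    Kudu (SCM) site, which doubles the process count when Kudu is in use.
    """
    site_count = sum(slots_per_app)
    return site_count * 2 if kudu_open else site_count

# app1 and app2 have 2 slots each, app3 has 3 slots:
print(w3wp_count([2, 2, 3]))  # 14
```

It is easy to underestimate this number when looking at the portal, because you see "three apps" but the plan is actually hosting fourteen worker processes.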

Automatic restart of w3wp in App Service

The app, or to be exact the w3wp.exe process, is automatically restarted for a variety of reasons. When multiple apps are running, these processes are restarted at about the same time. If Kudu was running, it is also restarted. The computational resources required for all of these simultaneous restarts depend on the number of apps and the resources each app uses. To put it plainly, if you have many heavy apps running, the restart will be slow. Written like this it may sound obvious, but in practice there are many cases where many apps are packed into one App Service Plan and the complaint is "startup is slow!"

✏️ You might think it would be reasonable for the platform not to start everything at once, but from the platform’s side, that is the only way. If it tried to start apps one by one, you would get unpredictable results depending on the startup order. For example, if the first app takes a long time to launch, the other apps would have to wait indefinitely.
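A toy model (with entirely hypothetical numbers) shows why simultaneous restarts stretch startup time: if the instance can do a fixed amount of startup work per second and all apps restart together, the finish time grows with the total work of everything starting at once:

```python
def startup_seconds(app_startup_work, cpu_capacity_per_sec):
    """Rough finish time when all apps restart simultaneously and
    share one instance's CPU fairly (a hypothetical model, not a
    measurement of App Service itself)."""
    total_work = sum(app_startup_work)
    return total_work / cpu_capacity_per_sec

# One app needing 10 units of work on a 10-units/sec instance: 1 s.
print(startup_seconds([10], 10))        # 1.0
# Fourteen such w3wp processes restarting together:
print(startup_seconds([10] * 14, 10))   # 14.0
```

The model ignores I/O and memory pressure, which usually make the real situation worse, not better.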

Multiple instances of Apps

Below is a list of behaviors when multiple instances are launched.

  • Instance allocation
  • File system
  • Health Check

Instance allocation

All apps are launched on all instances (the default behavior). However, they are not all started at the same time; the start sequence is triggered when an HTTP request arrives. This is a so-called cold start. For example, suppose you scale out from one instance to two. It is not deterministic when traffic will start flowing to the new instance: it may be soon after booting, or long after. Traffic distribution is determined by the Front End (the reverse proxy within App Service) based on the load on each instance, so requests are not distributed exactly 1/n to each instance.
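Cold start is essentially lazy initialization: the first request an instance receives pays the startup cost. A minimal sketch (the timing value is hypothetical):

```python
import time

class ColdStartApp:
    """Sketch of an app that initializes on its first request, the way
    an App Service instance starts w3wp only when traffic arrives."""

    def __init__(self, init_seconds=0.2):  # hypothetical startup cost
        self.init_seconds = init_seconds
        self.started = False

    def handle_request(self):
        if not self.started:            # first request triggers startup
            time.sleep(self.init_seconds)
            self.started = True
            return "cold response"
        return "warm response"

app = ColdStartApp()
print(app.handle_request())  # cold response  (slow: pays init cost)
print(app.handle_request())  # warm response  (fast)
```

This is why a freshly scaled-out instance can serve its first requests noticeably more slowly than the existing ones.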

File system

All instances refer to the same directory. If Local Cache is not enabled, writes from one instance are immediately visible to the other instances.
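This shared-directory behavior can be imitated locally: two "instances" pointing at the same path see each other’s writes immediately (a local sketch, not App Service itself):

```python
import os
import tempfile

# Two workers sharing one directory, like two App Service instances
# sharing the plan's content share when Local Cache is disabled.
shared_dir = tempfile.mkdtemp()

def instance_write(name, data):
    """Instance 1 writes a file into the shared directory."""
    with open(os.path.join(shared_dir, name), "w") as f:
        f.write(data)

def instance_read(name):
    """Instance 2 reads the same path and sees the write immediately."""
    with open(os.path.join(shared_dir, name)) as f:
        return f.read()

instance_write("state.txt", "written by instance 1")
print(instance_read("state.txt"))  # written by instance 1
```

The convenience cuts both ways, as the next paragraph explains: a single shared file system is also a single shared point of disruption.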

However, when the file server that provides that directory is switched, all apps are affected at once. Although the platform is designed to keep the downtime as short as possible, w3wp is often restarted; from the user’s point of view this is an unexpected restart. When it occurs, all apps cold start at once, as described above. Demand for computational resources tends to be concentrated at this moment, which may lead to long startup times and 502 or 503 errors.

To completely avoid this effect, you need to create another app in another region, which gives it a separate file system. You would then use Traffic Manager, Azure Front Door, or another service to control the traffic.

Health Check

The Health Check feature has been available for some time.

When enabled, no requests are sent to an instance that returns consecutive 5xx errors. Once external requests are no longer sent, HTTP requests continue to be sent periodically only from the internal Front End, and when 200 is returned, external requests are sent again.

This is a convenient feature, but there is one caveat. For example, say you have two instances and one of them is returning 5xx errors. No traffic is sent to that instance, so the remaining instance must handle all the traffic instead. Keep in mind that the App Service Plan itself has not scaled; traffic is only being redirected.
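The caveat is easy to see in a small routing sketch: removing the unhealthy instance adds no capacity, it just concentrates all traffic on the survivors (my own simplified model, not the actual Front End logic):

```python
def route(requests, instances):
    """Round-robin requests across healthy instances only
    (simplified model of the Health Check behavior)."""
    healthy = [name for name, ok in instances.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy instances")
    load = {name: 0 for name in instances}
    for i in range(requests):
        load[healthy[i % len(healthy)]] += 1
    return load

# Two instances, one failing its health check:
print(route(100, {"instance-0": True, "instance-1": False}))
# {'instance-0': 100, 'instance-1': 0}
```

So if a single instance cannot carry the full load on its own, Health Check alone will not save you; you also need enough headroom or autoscale rules.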

Use the App Service Plan wisely

Based on the features described above, the following usage is recommended because it provides high stability. Of course, there is sometimes a trade-off with budget, but I think these are the three key points:

  • Business-critical apps that require high SLAs should be given their own App Service Plan, or a plan with only a few apps. Do not mix them with test apps.
  • Health Check should be enabled.
  • Deploy the same app in each region and distribute the risk with Traffic Manager or Azure Front Door.


Maciej

DevOps Consultant. I’m strongly focused on automation, security, and reliability.