4 – Running Oqtane as a Ubuntu Linux service

In this post we will learn how to publish Oqtane as a Linux service, but before we continue let’s do a quick recap of what we have learned so far:

  1. How to compile Oqtane for an OS other than Windows: https://www.jocheojeda.com/2023/03/20/1-compiling-oqtane-3-4-to-target-ubuntu-linux/
  2. The process of publishing the Oqtane installation files: https://www.jocheojeda.com/2023/03/20/2-publishing-oqtane-3-4-to-target-ubuntu-linux/
  3. How to change the binding URLs for Kestrel, so we can run multiple instances of Oqtane: https://www.jocheojeda.com/2023/03/21/3-running-multiple-instances-of-oqtane-for-virtual-hosting-environments-in-apache-webserver/

At this point you might be asking yourself: why should we run Oqtane as a service? The full answer could get really long, but I’ll try to keep it as short as possible.

The first thing we need to understand is the difference between IIS (Internet Information Services, for Windows) and Apache or NGINX.

In an IIS web service, the activation process refers to the series of steps that occur when a client request is received by the web server, and the corresponding web service code is executed to process the request and generate a response. The activation process in IIS typically involves the following steps:

  1. Client request: A client, such as a web browser or another application, sends an HTTP request to the IIS web server, targeting a specific web service endpoint (e.g., a URL).
  2. Routing: The IIS server routes the request to the appropriate web service application based on the requested URL and other configuration settings.
  3. Application pool: The request is handled by an application pool, which is a group of worker processes (w3wp.exe) that manage the execution of one or more web applications. Application pools provide isolation and resource management for web applications, helping to improve the overall performance and stability of the IIS server.
  4. Worker process: A worker process within the application pool receives the request and begins processing it. If there is no available worker process, the application pool may create a new one, or queue the request until a worker process is available.
  5. HTTP pipeline: The worker process processes the request through the IIS HTTP pipeline, which is a series of events and modules that handle various aspects of the request, such as authentication, caching, and logging.
  6. Handler mapping: Based on the request’s file extension or URL pattern, IIS maps the request to a specific handler, which is a component responsible for processing the request and generating a response. In the case of a web service, this handler is typically an ASP.NET handler or another custom handler.
  7. Service activation: The handler activates the web service, instantiating the required objects and executing the service’s code to process the client request. This may involve parsing input data, performing calculations, accessing databases, or interacting with other services.
  8. Response generation: The web service generates an appropriate response, such as an XML or JSON document, and returns it to the handler.
  9. HTTP pipeline (response): The response travels back through the HTTP pipeline, where additional processing, such as caching or compression, may occur.
  10. Client response: The IIS server sends the generated response back to the client, completing the activation process.

The activation process in IIS is designed to provide a robust and efficient way to handle client requests, offering features like application isolation, resource management, and extensibility through custom handlers and modules.

+-------------+       +-------------+       +-----------------+       +-------------+       +------------------+
|             |       |             |       |                 |       |             |       |                  |
|  Client     +------->    IIS      +-------> Application Pool +-------> Worker     +------->  Web Service     |
|(Web browser,|       |  Web Server |       |                 |       |  Process    |       |                  |
|  app, etc.) |       |             |       | (w3wp.exe)      |       |             |       |                  |
+------+------+       +-------+------+       +--------+--------+       +------+------+       +------+-----------+
       ^                      |                       |                         |                       |
       |                      |                       |                         |                       |
       |                      v                       v                         v                       v
       |               +-------+------+       +--------+--------+       +------+------+       +------+-----------+
       |               |             |       |                 |       |             |       |                  |
       +---------------+   HTTP      |       |  Handler        |       |  HTTP       |       |  Response        |
                       |  Pipeline   |       |  Mapping        |       |  Pipeline   |       | (XML, JSON, etc.)|
                       | (Request)   |       |                 |       |  (Response) |       |                  |
                       |             |       |                 |       |             |       |                  |
                       +-------------+       +-----------------+       +-------------+       +------------------+

Now, let’s discuss the Apache web server. Unlike IIS, Apache does not have an activation process specifically designed for .NET applications. This implies that the server is unable to initiate a new process to handle incoming requests or restart the process in the event of a crash.

According to Microsoft documentation, hosting an ASP.NET Core application on servers other than IIS involves using a reverse proxy server. In this setup, the ASP.NET Core app runs on the built-in web server, Kestrel, which is only accessible via localhost. An external web server, such as Apache or NGINX, acts as a reverse proxy, forwarding requests between the client and the ASP.NET Core app seamlessly.

+-----------+     +-----------+     +-----------+     +---------------------+
|           |     |           |     |           |     |                     |
|  Client   +----->  Reverse  +----->  Kestrel  +----->  ASP.NET Core       |
| (Browser, |     |  Proxy    |     | (Built-in |     |  Application        |
|   App)    |     | (Apache,  |     |  Server)  |     |                     |
|           |     |  NGINX)   |     |           |     |                     |
+-----+-----+     +-----+-----+     +-----+-----+     +---------------------+
      ^                 ^                 ^                 ^
      |                 |                 |                 |
      +-----------------+-----------------+-----------------+

 

  1. The client (browser, app, etc.) sends a request to the reverse proxy server (Apache, NGINX, etc.).
  2. The reverse proxy server forwards the request to the Kestrel server, which is the built-in web server for the ASP.NET Core application.
  3. The Kestrel server processes the request and passes it to the ASP.NET Core application.
  4. The ASP.NET Core application processes the request and generates a response.
  5. The response is sent back to the Kestrel server.
  6. The Kestrel server forwards the response to the reverse proxy server.
  7. The reverse proxy server sends the response back to the client.
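
Before wiring up Apache or NGINX it is worth confirming that the Kestrel-hosted app actually answers on localhost. A minimal check from the server’s shell, assuming the app is listening on port 5000 (substitute whatever port your instance is bound to):

# Query Kestrel directly, bypassing the reverse proxy; -I requests only the response headers
# (curl can be installed with: sudo apt-get install -y curl)
curl -I http://localhost:5000

If the application is running you should see an HTTP status line such as "HTTP/1.1 200 OK"; if the connection is refused, Kestrel is not listening on that port.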

As demonstrated, handling requests and maintaining the application’s availability are two distinct processes in non-IIS servers, such as our scenario with an Ubuntu 22.04 server and Apache. Consequently, we must explore strategies for keeping our application process continuously running on an Ubuntu server.

In Microsoft’s official documentation on how to publish ASP.NET Core apps on Linux, there is a section called “Monitor the app” which describes how to create a Linux service that restarts automatically if the application crashes. Here is the link to the official documentation: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-apache?view=aspnetcore-7.0#monitor-the-app

In Ubuntu Linux, services (also known as daemons) are background processes that run continuously and perform specific tasks, such as listening for incoming connections, managing system resources, or running scheduled tasks. They are designed to start automatically during system boot, run in the background, and stop during system shutdown.

Ubuntu uses the systemd system and service manager as its default init system, which is responsible for bootstrapping the user space and managing system services. Here’s how services work in Ubuntu Linux using systemd:

  1. Service unit files: Each service has a corresponding unit file with a .service extension, typically located in /lib/systemd/system/ or /etc/systemd/system/. These files contain the configuration and instructions for starting, stopping, and managing the service.
  2. Service management: You can manage services using the systemctl command. Some common tasks include starting, stopping, enabling, disabling, and checking the status of services. For example:
    • Start a service: sudo systemctl start service-name
    • Stop a service: sudo systemctl stop service-name
    • Enable a service to start at boot: sudo systemctl enable service-name
    • Disable a service from starting at boot: sudo systemctl disable service-name
    • Check the status of a service: systemctl status service-name
  3. Logging: systemd services use the journalctl command for logging, which allows you to view and manage logs for services and the entire system. You can access logs for a specific service by running journalctl -u service-name.
  4. Custom services: You can create custom services by writing your own service unit files and placing them in the /etc/systemd/system/ directory. This is useful when you want to run your own applications or scripts as services.

To maintain the availability of an application process on an Ubuntu server, you can create a custom service using systemd. This will enable the application to start automatically during system boot, restart if it crashes, and be managed using standard systemctl commands.

Most Linux users will create the unit (service) file using a command-line text editor; this can be a bit challenging for a .NET programmer, since we are used to user interfaces. In our company (Xari) we often deploy ASP.NET Core applications to Linux servers; we do it so often that I had to create a tool to generate the unit files. The tool is publicly available at https://linux4dotnet.jocheojeda.com/

Using the tool, you just need to fill in the blanks and it will generate the text of each file you need. Here are the files that I generated for my test of running Oqtane on an Ubuntu server.

InstallService.sh

# Install the libgdiplus dependency required by System.Drawing on Linux
sudo apt-get update -y
sudo apt-get install -y libgdiplus
# Make the server binary and the helper scripts executable
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Oqtane.Server
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/UninstallService.sh
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Start.sh
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Stop.sh
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Status.sh
# Copy the unit file into place (overwriting any previous version), then enable and start the service
sudo cp -f /home/oqtane/Oqtane.Framework.3.4.0.Install/Oqtane.Server.service /etc/systemd/system/
sudo systemctl enable Oqtane.Server.service
sudo systemctl start Oqtane.Server.service
sudo systemctl status Oqtane.Server.service
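
One detail the generated script does not cover: if you later edit Oqtane.Server.service (for example to change the user or the working directory), systemd has to reload its unit definitions before the change takes effect. A short sketch of that, together with the quickest way to watch the service log while troubleshooting:

# Reload systemd after editing /etc/systemd/system/Oqtane.Server.service
sudo systemctl daemon-reload
sudo systemctl restart Oqtane.Server.service

# Follow the service log in real time (Ctrl+C to stop)
journalctl -u Oqtane.Server.service -f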

Oqtane.Server.service

[Unit]
Description=Oqtane.Server

[Service]
WorkingDirectory=/home/oqtane/Oqtane.Framework.3.4.0.Install
ExecStart=/home/oqtane/Oqtane.Framework.3.4.0.Install/Oqtane.Server
Restart=always
#Restart service after 10 seconds if the dotnet service crashes
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=Oqtane.Server
User=oqtane
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target

 

Start.sh

sudo systemctl start Oqtane.Server.service

Stop.sh

sudo systemctl stop Oqtane.Server.service

 

Status.sh

sudo systemctl status Oqtane.Server.service

UninstallService.sh

sudo systemctl stop Oqtane.Server.service
sudo systemctl disable Oqtane.Server.service
sudo rm /etc/systemd/system/Oqtane.Server.service
# Kill any Oqtane.Server process that is still running (column 2 of ps -ef is the PID)
ps -ef | grep Oqtane.Server | grep -v grep | awk '{print $2}' | xargs kill

 

And last but not least, the installation instructions.

These are the commands you need to run to install the app as a Linux service:
1) Go to the app directory:
cd /home/oqtane/Oqtane.Framework.3.4.0.Install
2) Make InstallService.sh executable:
chmod +x InstallService.sh
3) Run the installation script:
./InstallService.sh



If you are using Virtualmin (Apache) and you want to force HTTPS redirection, here are the instructions:
Virtualmin --> Services --> Configure Website (the one on port 80) --> Edit Directives
Under 'RewriteEngine On' add the following line:
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [R]
Then restart Apache.

Now we only need to copy the generated files to the Oqtane folder and run “InstallService.sh”, and voilà, your Oqtane app now runs as a Linux service. You can check the results here: https://oqtane.jocheojeda.com/

3 – Running multiple instances of Oqtane for virtual hosting environments in Apache web server

Virtual hosting is a technique used by Apache (and other web servers) to host multiple websites on a single server. With virtual hosting, a single physical server can host multiple virtual servers, each with its own domain name, IP address, and content.

Virtual hosting can be implemented in two ways:

  1. Name-based virtual hosting: In this approach, the server uses the domain name provided in the HTTP request to determine which virtual host should serve the request. For example, if a user requests a page from “example.com”, the server will use the virtual host configured for that domain and serve the appropriate content.
  2. IP-based virtual hosting: In this approach, each virtual host is assigned a separate IP address, and the server uses the IP address in the HTTP request to determine which virtual host should serve the request. For example, if a user requests a page from the IP address assigned to “example.com”, the server will use the virtual host configured for that IP address and serve the appropriate content.

Virtual hosting allows a server to serve multiple websites, each with its own domain name and content, using a single physical server. This makes hosting more efficient and cost-effective, especially for smaller websites that don’t require dedicated servers.

The following diagram represents the most common virtual hosting setup:

+-----------------------+
|   Apache Web Server   |
+-----------------------+
         |
         |
         |      +---------------------+
         |      |   Virtual Host A     |
         +------|   (example.com)     |
                |                     |
                |  Document Root:     |
                |    /var/www/A/      |
                |                     |
                +---------------------+
         |
         |
         |
         |      +---------------------+
         |      |   Virtual Host B     |
         +------|   (example.net)     |
                |                     |
                |  Document Root:     |
                |    /var/www/B/      |
                |                     |
                +---------------------+

 

ASP.NET Core and Blazor applications have the capability to run their own in-process web server, Kestrel. Kestrel can be bound to a specific IP address or port number, enabling the applications to be hosted in virtual environments. To accomplish this, each application can be bound to a unique port number.

 

+-----------------------+
|   Apache Web Server   |
+-----------------------+
         |
         |
         |      +---------------------+
         |      |   Virtual Host A    |
         +------|   (example.com)     |
                |                     |
                |  Proxy to:          |
                |http://localhost:8016|
                |                     |
                +---------------------+
         |
         |
         |      +---------------------+
         |      |   Virtual Host B    |
         +------|   (example.net)     |
                |                     |
                |  Proxy to:          |
                |http://localhost:8017|
                |                     |
                +---------------------+

As shown in the diagram, physical folders for the document root are no longer used. Instead, a proxy is created to the Kestrel web server, which runs our ASP.NET Core applications.
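
In Apache terms, each virtual host trades its DocumentRoot for a pair of proxy directives pointing at the local Kestrel port. A rough sketch for Host A, assuming the mod_proxy and mod_proxy_http modules are enabled (Host B would be identical except for the server name and port 8017):

<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    ProxyPass / http://localhost:8016/
    ProxyPassReverse / http://localhost:8016/
</VirtualHost>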

To bind our ASP.NET Core applications to a specific IP address or port number, there are multiple methods available. Detailed documentation on this subject can be found at the following link: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel/endpoints?view=aspnetcore-7.0#configureiconfiguration

There are various approaches that can be used based on the specific use case. For the sake of simplicity in this example, we will be utilizing the configuration method. This involves appending the configuration JSON for the Kestrel web server, as shown in the following example.

 

{
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:8016"
      }
    }
  }
}
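
For completeness, the same binding can also be supplied from outside the application instead of through appsettings.json. Both of the following are standard ASP.NET Core mechanisms; the executable name is just the Oqtane server binary used elsewhere in this document and serves only as an example:

# Option 1: environment variable (in a systemd unit this becomes Environment=ASPNETCORE_URLS=...)
export ASPNETCORE_URLS=http://localhost:8016

# Option 2: command-line switch when launching the server
./Oqtane.Server --urls "http://localhost:8016"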

 

So here is what our configuration files should look like:

Example.com (Host A)

{
  "Runtime": "Server",
  "RenderMode": "ServerPrerendered",
  "Database": {
    "DefaultDBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
  },
  "ConnectionStrings": {
    "DefaultConnection": "Server=127.0.0.1;Port=5432;Database=example.com;User ID=example.com;Password=1234567890;"
  },
  "Kestrel": {
  "Endpoints": {
    "Http": {
      "Url": "http://localhost:8016"
    }
   }
  },
  "Installation": {
    "DefaultAlias": "",
    "HostPassword": "",
    "HostEmail": "",
    "SiteTemplate": "",
    "DefaultTheme": "",
    "DefaultContainer": ""
  },
  "Localization": {
    "DefaultCulture": "en"
  },
  "AvailableDatabases": [
    {
      "Name": "LocalDB",
      "ControlType": "Oqtane.Installer.Controls.LocalDBConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
    },
    {
      "Name": "SQL Server",
      "ControlType": "Oqtane.Installer.Controls.SqlServerConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
    },
    {
      "Name": "SQLite",
      "ControlType": "Oqtane.Installer.Controls.SqliteConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.Sqlite.SqliteDatabase, Oqtane.Database.Sqlite"
    },
    {
      "Name": "MySQL",
      "ControlType": "Oqtane.Installer.Controls.MySQLConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.MySQL.MySQLDatabase, Oqtane.Database.MySQL"
    },
    {
      "Name": "PostgreSQL",
      "ControlType": "Oqtane.Installer.Controls.PostgreSQLConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
    }
  ],
  "Logging": {
    "FileLogger": {
      "LogLevel": {
        "Default": "Error"
      }
    },
    "LogLevel": {
      "Default": "Information"
    }
  },
  "InstallationId": "f5789fa4-895c-45d7-bc26-03eb166e008a"
}

 

Example.net (Host B)

{
  "Runtime": "Server",
  "RenderMode": "ServerPrerendered",
  "Database": {
    "DefaultDBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
  },
  "ConnectionStrings": {
    "DefaultConnection": "Server=127.0.0.1;Port=5432;Database=example.net;User ID=example.net;Password=1234567890;"
  },
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://localhost:8017"
      }
    }
  },
  "Installation": {
    "DefaultAlias": "",
    "HostPassword": "",
    "HostEmail": "",
    "SiteTemplate": "",
    "DefaultTheme": "",
    "DefaultContainer": ""
  },
  "Localization": {
    "DefaultCulture": "en"
  },
  "AvailableDatabases": [
    {
      "Name": "LocalDB",
      "ControlType": "Oqtane.Installer.Controls.LocalDBConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
    },
    {
      "Name": "SQL Server",
      "ControlType": "Oqtane.Installer.Controls.SqlServerConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
    },
    {
      "Name": "SQLite",
      "ControlType": "Oqtane.Installer.Controls.SqliteConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.Sqlite.SqliteDatabase, Oqtane.Database.Sqlite"
    },
    {
      "Name": "MySQL",
      "ControlType": "Oqtane.Installer.Controls.MySQLConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.MySQL.MySQLDatabase, Oqtane.Database.MySQL"
    },
    {
      "Name": "PostgreSQL",
      "ControlType": "Oqtane.Installer.Controls.PostgreSQLConfig, Oqtane.Client",
      "DBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
    }
  ],
  "Logging": {
    "FileLogger": {
      "LogLevel": {
        "Default": "Error"
      }
    },
    "LogLevel": {
      "Default": "Information"
    }
  },
  "InstallationId": "f5789fa4-895c-45d7-bc26-03eb166e008a"
}

 

As demonstrated, utilizing Oqtane in virtual hosting environments is a straightforward process. There is no need to recompile the source code, as configuring the application for virtual hosting can be easily accomplished through a single configuration section in the appsettings.json file.

How to monitor your ASP.NET Core application on Ubuntu Linux

Here are some recommendations for hosting your new shiny ASP.NET Core app on Linux, in this case Ubuntu 18.04.

First, create a user with the name myaspnetapp:

sudo adduser myaspnetapp

 

After executing the command, you will have a new folder under /home; the folder has the same name as the username, so in this case “myaspnetapp”.

Now let’s SSH in with the new user you just created. You can do that using your favorite SSH client; for example, if you are using Windows you can use PuTTY.

When you log in with the new user you will be in its home folder. Now we can create a folder called “app” with the following command:

 

mkdir app

Your folder structure should now look like this:

/home/myaspnetapp/app

Now we are ready to upload the files. By now you should have already compiled and published your application to run on Linux; if you have not done that yet, take a look at this article: https://www.jocheojeda.com/2019/06/10/how-to-create-a-self-contained-netcore-3-console-application/
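
As a rough sketch, a typical self-contained publish for 64-bit Linux, run from the project folder on your development machine, looks like this (the runtime identifier, output folder, and archive name are just examples; adjust them to your project):

# Publish a self-contained build for 64-bit Linux
dotnet publish -c Release -r linux-x64 --self-contained true -o ./publish

# Pack the output into a single archive for the upload
cd publish
zip -r ../publish.zip .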

There are many options to upload a zip file, but I think the best way is to use the Linux secure copy command, “scp”. I won’t explain every detail of how to call scp; if you are using Windows you can run the command from the WSL console, and if you are using Linux the command is already built in. Here is an article about it: https://linuxize.com/post/how-to-use-scp-command-to-securely-transfer-files/

Here is an example of what the scp command should look like; adjust it to your needs:

scp publish.zip myaspnetapp@200.35.15.25:/home/myaspnetapp/app

The command above copies the file publish.zip from the local folder to the server with IP 200.35.15.25, into the folder “/home/myaspnetapp/app”.

Now let’s unzip the contents of the zip file with the following command:

unzip publish.zip
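
If the unzip utility is not present on a fresh Ubuntu installation, it can be installed first:

sudo apt-get install -y unzip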

 

What we have done so far:

  1. We have created a user in the OS
  2. We have created a folder to host our application within the user home folder
  3. We have uploaded a zip file containing our application to the folder “/home/myaspnetapp/app”

 

Now that the app is on the server, we need to change the permissions of the folder where the app lives to 0777. You can learn more about Linux file system permissions here: https://www.guru99.com/file-permissions.html
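
A minimal sketch of that permission change, assuming the folder structure created above:

# 0777 grants read, write, and execute to everyone; -R applies it recursively
chmod -R 0777 /home/myaspnetapp/app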

Creating a service to monitor the app

The next step to monitor our app is to use systemd, an init system that provides many powerful features for starting, stopping, and managing processes.

Let’s start by creating a service file in the following path “/etc/systemd/system/”

You can do that with the following command:

sudo nano /etc/systemd/system/MyExecutableFile.service

 

Here is what the content of the file should look like:

 

[Unit]
Description=A description of your app

[Service]
WorkingDirectory=/home/myaspnetapp/app
ExecStart=/home/myaspnetapp/app/MyExecutableFile
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=MyExecutableFile
User=apache
Environment=ASPNETCORE_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target

 

Here is a little explanation of what you might need to change in the file above:

WorkingDirectory: this is the working directory, usually the same folder where the app lives.

ExecStart: this is the command that starts the app. What you write here depends on your application: if it is self-contained, you just need to write the full path of the executable file; otherwise, you need to write the path to the dotnet runtime followed by the path to your DLL, as shown below:

/usr/local/bin/dotnet /var/www/helloapp/helloapp.dll

RestartSec: this is the time to wait before trying to restart the app after the process crashes.

SyslogIdentifier: the app identifier used in the system logs.

User: this is really important, since the app will run under this user’s privileges, so you need to make sure that the user exists and that it is able to access the files needed to start the app.

That is all we need for the service file. Now we need to go back to the console and enable our new service; you can do that with the following command:

sudo systemctl enable MyExecutableFile.service

To start and stop the service you can use the following commands:

# To start the service
sudo systemctl start MyExecutableFile.service

# To stop the service
sudo systemctl stop MyExecutableFile.service
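
Once the service is running, checking its status and tailing its log are the quickest ways to confirm that everything is healthy:

# Show the current state of the service
sudo systemctl status MyExecutableFile.service

# Follow the service log in real time (Ctrl+C to stop)
journalctl -u MyExecutableFile.service -f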