by Joche Ojeda | Mar 24, 2023 | Linux, netcore, Oqtane, Ubuntu
In this post, we will learn how to publish Oqtane as a Linux service. Before we continue, let's recap what we have learned so far:
- How to compile Oqtane for an OS other than Windows: https://www.jocheojeda.com/2023/03/20/1-compiling-oqtane-3-4-to-target-ubuntu-linux/
- The process of publishing the Oqtane installation files: https://www.jocheojeda.com/2023/03/20/2-publishing-oqtane-3-4-to-target-ubuntu-linux/
- How to change the binding URLs for Kestrel, so we can run multiple instances of Oqtane: https://www.jocheojeda.com/2023/03/21/3-running-multiple-instances-of-oqtane-for-virtual-hosting-environments-in-apache-webserver/
At this moment, you might be asking yourself why we should run Oqtane as a service at all. The full answer could be quite long, but I'll try to make it as short as possible.
The first thing we need to understand is the difference between IIS (Internet Information Services, for Windows) and Apache or NGINX.
In an IIS web service, the activation process refers to the series of steps that occur when a client request is received by the web server, and the corresponding web service code is executed to process the request and generate a response. The activation process in IIS typically involves the following steps:
- Client request: A client, such as a web browser or another application, sends an HTTP request to the IIS web server, targeting a specific web service endpoint (e.g., a URL).
- Routing: The IIS server routes the request to the appropriate web service application based on the requested URL and other configuration settings.
- Application pool: The request is handled by an application pool, which is a group of worker processes (w3wp.exe) that manage the execution of one or more web applications. Application pools provide isolation and resource management for web applications, helping to improve the overall performance and stability of the IIS server.
- Worker process: A worker process within the application pool receives the request and begins processing it. If there is no available worker process, the application pool may create a new one, or queue the request until a worker process is available.
- HTTP pipeline: The worker process processes the request through the IIS HTTP pipeline, which is a series of events and modules that handle various aspects of the request, such as authentication, caching, and logging.
- Handler mapping: Based on the request’s file extension or URL pattern, IIS maps the request to a specific handler, which is a component responsible for processing the request and generating a response. In the case of a web service, this handler is typically an ASP.NET handler or another custom handler.
- Service activation: The handler activates the web service, instantiating the required objects and executing the service’s code to process the client request. This may involve parsing input data, performing calculations, accessing databases, or interacting with other services.
- Response generation: The web service generates an appropriate response, such as an XML or JSON document, and returns it to the handler.
- HTTP pipeline (response): The response travels back through the HTTP pipeline, where additional processing, such as caching or compression, may occur.
- Client response: The IIS server sends the generated response back to the client, completing the activation process.
The activation process in IIS is designed to provide a robust and efficient way to handle client requests, offering features like application isolation, resource management, and extensibility through custom handlers and modules.
+-------------+ +-------------+ +-----------------+ +-------------+ +------------------+
| | | | | | | | | |
| Client +-------> IIS +-------> Application Pool +-------> Worker +-------> Web Service |
|(Web browser,| | Web Server | | | | Process | | |
| app, etc.) | | | | (w3wp.exe) | | | | |
+------+------+ +-------+------+ +--------+--------+ +------+------+ +------+-----------+
^ | | | |
| | | | |
| v v v v
| +-------+------+ +--------+--------+ +------+------+ +------+-----------+
| | | | | | | | |
+---------------+ HTTP | | Handler | | HTTP | | Response |
| Pipeline | | Mapping | | Pipeline | | (XML, JSON, etc.)|
| (Request) | | | | (Response) | | |
| | | | | | | |
+-------------+ +-----------------+ +-------------+ +------------------+
Now, let’s discuss the Apache web server. Unlike IIS, Apache does not have an activation process specifically designed for .NET applications. This implies that the server is unable to initiate a new process to handle incoming requests or restart the process in the event of a crash.
According to Microsoft documentation, hosting an ASP.NET Core application on servers other than IIS involves using a reverse proxy server. In this setup, the ASP.NET Core app runs on the built-in web server, Kestrel, which is only accessible via localhost. An external web server, such as Apache or NGINX, acts as a reverse proxy, forwarding requests between the client and the ASP.NET Core app seamlessly.
+-----------+ +-----------+ +-----------+ +---------------------+
| | | | | | | |
| Client +-----> Reverse +-----> Kestrel +-----> ASP.NET Core |
| (Browser, | | Proxy | | (Built-in | | Application |
| App) | | (Apache, | | Server) | | |
| | | NGINX) | | | | |
+-----+-----+ +-----+-----+ +-----+-----+ +---------------------+
^ ^ ^ ^
| | | |
+-----------------+-----------------+-----------------+
- The client (browser, app, etc.) sends a request to the reverse proxy server (Apache, NGINX, etc.).
- The reverse proxy server forwards the request to the Kestrel server, which is the built-in web server for the ASP.NET Core application.
- The Kestrel server processes the request and passes it to the ASP.NET Core application.
- The ASP.NET Core application processes the request and generates a response.
- The response is sent back to the Kestrel server.
- The Kestrel server forwards the response to the reverse proxy server.
- The reverse proxy server sends the response back to the client.
As demonstrated, handling requests and maintaining the application’s availability are two distinct processes in non-IIS servers, such as our scenario with an Ubuntu 22.04 server and Apache. Consequently, we must explore strategies for keeping our application process continuously running on an Ubuntu server.
Microsoft's official documentation on publishing ASP.NET Core apps on Linux includes a section called "Monitor the app", which describes how to create a Linux service that restarts automatically if the application crashes. Here is the link to the official documentation: https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-apache?view=aspnetcore-7.0#monitor-the-app
In Ubuntu Linux, services (also known as daemons) are background processes that run continuously and perform specific tasks, such as listening for incoming connections, managing system resources, or running scheduled tasks. They are designed to start automatically during system boot, run in the background, and stop during system shutdown.
Ubuntu uses the systemd system and service manager as its default init system, which is responsible for bootstrapping the user space and managing system services. Here's how services work in Ubuntu Linux using systemd:
- Service unit files: Each service has a corresponding unit file with a .service extension, typically located in /lib/systemd/system/ or /etc/systemd/system/. These files contain the configuration and instructions for starting, stopping, and managing the service.
- Service management: You can manage services using the systemctl command. Some common tasks include starting, stopping, enabling, disabling, and checking the status of services. For example:
- Start a service: sudo systemctl start service-name
- Stop a service: sudo systemctl stop service-name
- Enable a service to start at boot: sudo systemctl enable service-name
- Disable a service from starting at boot: sudo systemctl disable service-name
- Check the status of a service: systemctl status service-name
- Logging: systemd services log to the journal, which you can inspect with the journalctl command; it allows you to view and manage logs for individual services and for the entire system. You can access the logs for a specific service by running journalctl -u service-name.
- Custom services: You can create custom services by writing your own service unit files and placing them in the /etc/systemd/system/ directory. This is useful when you want to run your own applications or scripts as services.
To maintain the availability of an application process on an Ubuntu server, you can create a custom service using systemd. This enables the application to start automatically during system boot, restart if it crashes, and be managed using standard systemctl commands.
Most Linux users create the unit (service) file with a command-line text editor, which can be a bit challenging for a .NET programmer, since we are used to user interfaces. In our company (Xari), we deploy ASP.NET Core applications to Linux servers so often that I created a tool to generate the unit files. The tool is publicly available at https://linux4dotnet.jocheojeda.com/

Using the tool, you just need to fill in the blanks and it will generate the text of each file you need. Here are the files I generated for my test of running Oqtane on an Ubuntu server.
InstallService.sh
sudo apt-get update -y
sudo apt-get install -y libgdiplus
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Oqtane.Server
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/UninstallService.sh
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Start.sh
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Stop.sh
chmod +x /home/oqtane/Oqtane.Framework.3.4.0.Install/Status.sh
sudo cp -f /home/oqtane/Oqtane.Framework.3.4.0.Install/Oqtane.Server.service /etc/systemd/system/
sudo systemctl enable Oqtane.Server.service
sudo systemctl start Oqtane.Server.service
sudo systemctl status Oqtane.Server.service
Oqtane.Server.service
[Unit]
Description=Oqtane.Server
[Service]
WorkingDirectory=/home/oqtane/Oqtane.Framework.3.4.0.Install
ExecStart=/home/oqtane/Oqtane.Framework.3.4.0.Install/Oqtane.Server
Restart=always
#Restart service after 10 seconds if the dotnet service crashes
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=Oqtane.Server
User=oqtane
Environment=ASPNETCORE_ENVIRONMENT=Production
[Install]
WantedBy=multi-user.target
Start.sh
sudo systemctl start Oqtane.Server.service
Stop.sh
sudo systemctl stop Oqtane.Server.service
Status.sh
sudo systemctl status Oqtane.Server.service
UninstallService.sh
sudo systemctl stop Oqtane.Server.service
sudo systemctl disable Oqtane.Server.service
sudo rm /etc/systemd/system/Oqtane.Server.service
ps -ef | grep Oqtane.Server | grep -v grep | awk '{print $2}' | xargs kill
And last but not least, the installation instructions.
These are the commands you need to run to install the app as a Linux service:
1) Go to the app directory
cd /home/oqtane/Oqtane.Framework.3.4.0.Install
2) First change the permissions of the InstallService.sh
chmod +x InstallService.sh
3) Run the installation file
./InstallService.sh
If you are using Virtualmin (Apache) and you want to force HTTPS redirection, here are the instructions:
Virtualmin --> Services --> Configure Website (the one on port 80) --> Edit Directives
Under 'RewriteEngine On' add the following line
RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [R]
Restart Apache
Now we only need to copy the generated files to the Oqtane folder and run "InstallService.sh", and voilà, your Oqtane app is now a Linux service. You can check the results here: https://oqtane.jocheojeda.com/
by Joche Ojeda | Mar 21, 2023 | Linux, netcore, Oqtane, Ubuntu, Uncategorized
Virtual hosting is a technique used by Apache (and other web servers) to host multiple websites on a single server. With virtual hosting, a single physical server can host multiple virtual servers, each with its own domain name, IP address, and content.
Virtual hosting can be implemented in two ways:
- Name-based virtual hosting: In this approach, the server uses the domain name provided in the HTTP request to determine which virtual host should serve the request. For example, if a user requests a page from “example.com”, the server will use the virtual host configured for that domain and serve the appropriate content.
- IP-based virtual hosting: In this approach, each virtual host is assigned a separate IP address, and the server uses the IP address in the HTTP request to determine which virtual host should serve the request. For example, if a user requests a page from the IP address assigned to “example.com”, the server will use the virtual host configured for that IP address and serve the appropriate content.
Virtual hosting allows a server to serve multiple websites, each with its own domain name and content, using a single physical server. This makes hosting more efficient and cost-effective, especially for smaller websites that don’t require dedicated servers.
The following diagram represents the most common virtual hosting setup
+-----------------------+
| Apache Web Server |
+-----------------------+
|
|
| +---------------------+
| | Virtual Host A |
+------| (example.com) |
| |
| Document Root: |
| /var/www/A/ |
| |
+---------------------+
|
|
|
| +---------------------+
| | Virtual Host B |
+------| (example.net) |
| |
| Document Root: |
| /var/www/B/ |
| |
+---------------------+
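The document-root setup in the diagram above can be expressed as two minimal name-based virtual hosts. This is only a sketch: the domains and document-root paths are the placeholders from the diagram.

```apache
# /etc/apache2/sites-available/example.com.conf
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/A/
</VirtualHost>

# /etc/apache2/sites-available/example.net.conf
<VirtualHost *:80>
    ServerName example.net
    DocumentRoot /var/www/B/
</VirtualHost>
```

On Debian/Ubuntu you would enable each site with sudo a2ensite and then reload Apache.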
ASP.NET Core and Blazor applications have the capability to run their own in-process web server, Kestrel. Kestrel can be bound to a specific IP address or port number, enabling the applications to be hosted in virtual environments. To accomplish this, each application can be bound to a unique port number.
+-----------------------+
| Apache Web Server |
+-----------------------+
|
|
| +---------------------+
| | Virtual Host A |
+------| (example.com) |
| |
| Proxy to: |
|http://localhost:8016|
| |
+---------------------+
|
|
| +---------------------+
| | Virtual Host B |
+------| (example.net) |
| |
| Proxy to: |
|http://localhost:8017|
| |
+---------------------+
As shown in the diagram, physical folders for the document root are no longer used. Instead, a proxy is created to the Kestrel web server, which runs our ASP.NET Core applications.
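To mirror the diagram, each virtual host proxies to its Kestrel port instead of pointing at a document root. Here is a sketch for Host A, assuming the mod_proxy and mod_proxy_http modules are enabled:

```apache
<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    # Forward all requests to the Kestrel instance listening on port 8016
    ProxyPass / http://localhost:8016/
    ProxyPassReverse / http://localhost:8016/
</VirtualHost>
```

Host B would be identical except for ServerName example.net and port 8017. If the proxy modules are not enabled yet, run sudo a2enmod proxy proxy_http and restart Apache.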
To bind our ASP.NET Core applications to a specific IP address or port number, there are multiple methods available. Detailed documentation on this subject can be found at the following link: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel/endpoints?view=aspnetcore-7.0#configureiconfiguration
There are various approaches that can be used based on the specific use case. For the sake of simplicity in this example, we will be utilizing the configuration method. This involves appending the configuration JSON for the Kestrel web server, as shown in the following example.
{
"Kestrel": {
"Endpoints": {
"Http": {
"Url": "http://localhost:8016"
}
}
}
}
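For completeness, the same binding can be supplied without editing appsettings.json, either with the --urls command-line switch or the ASPNETCORE_URLS environment variable; both are standard ASP.NET Core mechanisms, and the file name below assumes the published Oqtane server binary.

```bash
# Option 1: the --urls switch
dotnet Oqtane.Server.dll --urls "http://localhost:8016"

# Option 2: an environment variable (this could also go on the
# Environment= line of a systemd unit)
export ASPNETCORE_URLS="http://localhost:8016"
dotnet Oqtane.Server.dll
```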
So here is what our configuration files should look like:
Example.com (Host A)
{
"Runtime": "Server",
"RenderMode": "ServerPrerendered",
"Database": {
"DefaultDBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
},
"ConnectionStrings": {
"DefaultConnection": "Server=127.0.0.1;Port=5432;Database=example.com;User ID=example.com;Password=1234567890;"
},
"Kestrel": {
"Endpoints": {
"Http": {
"Url": "http://localhost:8016"
}
}
},
"Installation": {
"DefaultAlias": "",
"HostPassword": "",
"HostEmail": "",
"SiteTemplate": "",
"DefaultTheme": "",
"DefaultContainer": ""
},
"Localization": {
"DefaultCulture": "en"
},
"AvailableDatabases": [
{
"Name": "LocalDB",
"ControlType": "Oqtane.Installer.Controls.LocalDBConfig, Oqtane.Client",
"DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
},
{
"Name": "SQL Server",
"ControlType": "Oqtane.Installer.Controls.SqlServerConfig, Oqtane.Client",
"DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
},
{
"Name": "SQLite",
"ControlType": "Oqtane.Installer.Controls.SqliteConfig, Oqtane.Client",
"DBType": "Oqtane.Database.Sqlite.SqliteDatabase, Oqtane.Database.Sqlite"
},
{
"Name": "MySQL",
"ControlType": "Oqtane.Installer.Controls.MySQLConfig, Oqtane.Client",
"DBType": "Oqtane.Database.MySQL.MySQLDatabase, Oqtane.Database.MySQL"
},
{
"Name": "PostgreSQL",
"ControlType": "Oqtane.Installer.Controls.PostgreSQLConfig, Oqtane.Client",
"DBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
}
],
"Logging": {
"FileLogger": {
"LogLevel": {
"Default": "Error"
}
},
"LogLevel": {
"Default": "Information"
}
},
"InstallationId": "f5789fa4-895c-45d7-bc26-03eb166e008a"
}
Example.net (Host B)
{
"Runtime": "Server",
"RenderMode": "ServerPrerendered",
"Database": {
"DefaultDBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
},
"ConnectionStrings": {
"DefaultConnection": "Server=127.0.0.1;Port=5432;Database=example.net;User ID=example.net;Password=1234567890;"
},
"Kestrel": {
"Endpoints": {
"Http": {
"Url": "http://localhost:8017"
}
}
},
"Installation": {
"DefaultAlias": "",
"HostPassword": "",
"HostEmail": "",
"SiteTemplate": "",
"DefaultTheme": "",
"DefaultContainer": ""
},
"Localization": {
"DefaultCulture": "en"
},
"AvailableDatabases": [
{
"Name": "LocalDB",
"ControlType": "Oqtane.Installer.Controls.LocalDBConfig, Oqtane.Client",
"DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
},
{
"Name": "SQL Server",
"ControlType": "Oqtane.Installer.Controls.SqlServerConfig, Oqtane.Client",
"DBType": "Oqtane.Database.SqlServer.SqlServerDatabase, Oqtane.Database.SqlServer"
},
{
"Name": "SQLite",
"ControlType": "Oqtane.Installer.Controls.SqliteConfig, Oqtane.Client",
"DBType": "Oqtane.Database.Sqlite.SqliteDatabase, Oqtane.Database.Sqlite"
},
{
"Name": "MySQL",
"ControlType": "Oqtane.Installer.Controls.MySQLConfig, Oqtane.Client",
"DBType": "Oqtane.Database.MySQL.MySQLDatabase, Oqtane.Database.MySQL"
},
{
"Name": "PostgreSQL",
"ControlType": "Oqtane.Installer.Controls.PostgreSQLConfig, Oqtane.Client",
"DBType": "Oqtane.Database.PostgreSQL.PostgreSQLDatabase, Oqtane.Database.PostgreSQL"
}
],
"Logging": {
"FileLogger": {
"LogLevel": {
"Default": "Error"
}
},
"LogLevel": {
"Default": "Information"
}
},
"InstallationId": "f5789fa4-895c-45d7-bc26-03eb166e008a"
}
As demonstrated, utilizing Oqtane in virtual hosting environments is a straightforward process. There is no need to recompile the source code, as configuring the application for virtual hosting can be easily accomplished through a single configuration section in the appsettings.json file.
by Joche Ojeda | Mar 20, 2023 | Linux, netcore, Oqtane
Oqtane is an open-source, modular application framework built on top of ASP.NET Core, a popular web development platform created by Microsoft. Oqtane is inspired by DotNetNuke (DNN), another content management system and web application framework, but it is designed specifically to take advantage of the benefits of ASP.NET Core, such as cross-platform compatibility, improved performance, and modern architectural patterns.
Since Oqtane is built on ASP.NET Core, it leverages the underlying features of the platform, such as support for C# and Razor syntax, dependency injection, and Model-View-Controller (MVC) architecture. As a result, developers familiar with ASP.NET Core will find it easier to work with Oqtane.
Oqtane allows developers to build customizable, extensible, and scalable web applications by providing a modular infrastructure that supports the development of plug-and-play components, such as themes, modules, and extensions. It offers a range of features, including user authentication and authorization, multi-tenancy, a content management system, and a built-in administration dashboard.
Currently, the Oqtane documentation primarily outlines the installation process on an IIS server, which is exclusive to Windows operating systems. However, as previously mentioned, Oqtane is built upon the versatile .NET Core framework, which boasts compatibility with a variety of operating systems, including Linux.
Embracing .NET Core on Linux has been a passion of mine ever since its inception. I have diligently sought to acquire the knowledge necessary to effectively run .NET applications on Linux, immersing myself in every aspect of this cross-platform journey.
Motivated to explore the potential of running Oqtane on Ubuntu 22.04 with PostgreSQL (a database system previously unsupported by Oqtane), I set forth with two primary objectives. The first is to determine the feasibility of compiling the code and executing it in alignment with the guidelines provided in Oqtane's documentation. The second is to generate Linux-compatible binaries, enabling deployment on a Linux server.
In accordance with the “Getting Started” section of Oqtane’s GitHub repository, three prerequisites must be met. The first requirement, installing the .NET 6 SDK, is effortlessly accomplished on a Linux machine by executing a mere two commands, thus equipping the system with both the SDK and runtime.
To install the SDK, execute the following command
sudo apt-get update && \
sudo apt-get install -y dotnet-sdk-6.0
To install the runtime, execute the following command
sudo apt-get install -y dotnet-runtime-6.0
You can check the official documentation here
The second requirement is to "install the latest edition (v17.0 or higher) of Visual Studio 2022 with the ASP.NET and web development workload enabled". That is not possible because we are using Linux. We could use Visual Studio Code, but for the sake of simplicity we will just use the dotnet CLI.
The third and last step is to clone or download the development branch of Oqtane; to keep it simple, we will just download the source.
After we have downloaded the source, we should navigate to the folder where the Oqtane server project lives, usually "Oqtane.Server" inside the solution folder. Once there, start a terminal and run the following command
dotnet run --project Oqtane.Server.csproj
Then you will see something like this

After that you can navigate to http://localhost:44357 and you will see this page

Congratulations, you have successfully compiled and run Oqtane on Ubuntu Linux!
In the next post I will cover the details of generating Oqtane release binaries for Linux.
by Joche Ojeda | Mar 12, 2023 | Postgres
An activity stream is a data format used to represent a list of recent activities performed by an individual or group on a social network, web application, or other platform. It typically includes information such as the type of activity (e.g., posting a status update, commenting on a post), the person or entity performing the activity, and any associated objects or targets (e.g., a photo or link). Activity streams can be used to track user behavior, personalize content recommendations, and facilitate social interactions between users.
An activity stream typically consists of the following parts:
- Actor: The person or entity that initiates the action.
- Verb: The action being taken.
- Object: The thing on which the action is taken.
- Target: The thing to which the action is directed.
- Time: The time at which the action occurred.
- Context: Any additional information about the action, such as the location or device used to perform it.
- Metadata: Additional information about the action, such as the user’s preferences or the permissions required to perform it.
Activity streams can be used to represent data from any system, and there is no direct relationship between the stream of activities and the associated objects.
With a basic understanding of what an activity stream is, we can leverage PostgreSQL as a database storage to implement one. PostgreSQL is particularly suitable for activity streams due to its built-in support for JSON columns, which can store data with flexible schemas, and its GIS functionality, which makes it easy to filter activities based on location.
For this project, I have chosen to use Postgres 15 with GIS extensions, as well as the DBeaver Community Edition for managing the database. The GIS extensions are especially useful for this project since we want to display only activities that occurred around specific geographical points.
Let’s begin our coding journey with the creation of an object storage in PostgreSQL. The object storage will have a column to store the object type and a JSON column to store the complete data of the object being stored.
CREATE DATABASE ActivityStream;
After creating the database, the next step is to install the PostGIS extension using the following query.
CREATE EXTENSION IF NOT EXISTS postgis; -- Enable PostGIS extension
CREATE TABLE objectstorage (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
latitude DECIMAL(9,6) NOT NULL,
longitude DECIMAL(9,6) NOT NULL,
location GEOMETRY(Point, 4326), -- 4326 is the SRID for WGS 84, a common coordinate system for GPS data
object_type TEXT NOT NULL,
object_data JSONB NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE OR REPLACE FUNCTION update_location() RETURNS TRIGGER AS $$
BEGIN
NEW.location := ST_SetSRID(ST_MakePoint(NEW.longitude, NEW.latitude), 4326);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER set_location
BEFORE INSERT OR UPDATE
ON objectstorage
FOR EACH ROW
EXECUTE FUNCTION update_location();
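As a quick sanity check, assuming the table and trigger above are in place, an insert that supplies only the coordinates should come back with the location column already populated. The sample object below is made up for illustration:

```sql
INSERT INTO objectstorage (latitude, longitude, object_type, object_data)
VALUES (40.416775, -3.703790, 'person', '{"name": "Alice"}'::jsonb);

-- The trigger filled in "location"; extracting the point coordinates
-- round-trips the values we inserted (points are built as (longitude, latitude))
SELECT object_type, ST_Y(location) AS latitude, ST_X(location) AS longitude
FROM objectstorage;
```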
This query creates a table named objectstorage with columns for id, latitude, longitude, location, object_type, object_data, created_at, and updated_at. The id column is a primary key and generates a random UUID as its default value. The latitude and longitude columns store decimal values for geographic coordinates. The location column stores a geometry object of type Point using the WGS 84 coordinate system with SRID 4326. The object_type column stores the type of the object being stored, and the object_data column stores the complete data for the object in JSONB format. The created_at and updated_at columns store timestamps for when the row was created and last updated, respectively.
Additionally, this query creates a trigger function named update_location() that runs when a row is inserted or updated in the objectstorage table. The function updates the location column based on the values in the latitude and longitude columns using the ST_SetSRID() and ST_MakePoint() functions from PostGIS. The ST_SetSRID() function sets the coordinate system for the point, and the ST_MakePoint() function creates a point geometry object from the longitude and latitude values. The function returns the updated row.
To simplify our database interactions, we’ll create UPSERT functions as needed. Here’s an example of an UPSERT function we can use for the objectstorage table.
CREATE OR REPLACE FUNCTION upsert_objectstorage(
p_id UUID,
p_latitude DECIMAL(9,6),
p_longitude DECIMAL(9,6),
p_object_type TEXT,
p_object_data JSONB
) RETURNS VOID AS $$
BEGIN
-- Try to update the existing row
UPDATE objectstorage SET
latitude = p_latitude,
longitude = p_longitude,
location = ST_SetSRID(ST_MakePoint(p_longitude, p_latitude), 4326),
object_type = p_object_type,
object_data = p_object_data,
updated_at = CURRENT_TIMESTAMP
WHERE id = p_id;
-- If no row was updated, insert a new one
IF NOT FOUND THEN
INSERT INTO objectstorage (id, latitude, longitude, location, object_type, object_data)
VALUES (p_id, p_latitude, p_longitude, ST_SetSRID(ST_MakePoint(p_longitude, p_latitude), 4326), p_object_type, p_object_data);
END IF;
END;
$$ LANGUAGE plpgsql;
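A quick usage sketch (the UUID and payloads are invented for illustration): calling the function twice with the same id performs an insert the first time and an update the second.

```sql
-- First call: no row with this id exists yet, so it inserts
SELECT upsert_objectstorage(
    'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid,
    40.416775, -3.703790,
    'person',
    '{"name": "Alice"}'::jsonb
);

-- Second call: same id, so the existing row is updated
-- and updated_at is refreshed
SELECT upsert_objectstorage(
    'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid,
    40.416775, -3.703790,
    'person',
    '{"name": "Alice Smith"}'::jsonb
);
```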
Below is the code for the “activity” table, which is the central piece of an activity stream system. It includes a trigger function that updates the “location” column using PostGIS.
CREATE TABLE activity (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
verb TEXT NOT NULL,
actor_id UUID NOT NULL REFERENCES objectstorage(id),
object_id UUID NOT NULL REFERENCES objectstorage(id),
target_id UUID REFERENCES objectstorage(id),
latitude DECIMAL(9,6) NOT NULL,
longitude DECIMAL(9,6) NOT NULL,
location GEOMETRY(Point, 4326) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP
);
CREATE OR REPLACE FUNCTION update_activity_location() RETURNS TRIGGER AS $$
BEGIN
NEW.location := ST_SetSRID(ST_MakePoint(NEW.longitude, NEW.latitude), 4326);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER set_activity_location
BEFORE INSERT OR UPDATE
ON activity
FOR EACH ROW
EXECUTE FUNCTION update_activity_location();
Now the UPSERT function for the activity table
CREATE OR REPLACE FUNCTION upsert_activity(
p_id UUID,
p_verb TEXT,
p_actor_id UUID,
p_object_id UUID,
p_target_id UUID,
p_latitude DECIMAL(9,6),
p_longitude DECIMAL(9,6)
) RETURNS VOID AS $$
BEGIN
-- Try to update the existing row
UPDATE activity SET
verb = p_verb,
actor_id = p_actor_id,
object_id = p_object_id,
target_id = p_target_id,
latitude = p_latitude,
longitude = p_longitude,
location = ST_SetSRID(ST_MakePoint(p_longitude, p_latitude), 4326),
updated_at = CURRENT_TIMESTAMP
WHERE id = p_id;
-- If no row was updated, insert a new one
IF NOT FOUND THEN
INSERT INTO activity (id, verb, actor_id, object_id, target_id, latitude, longitude, location)
VALUES (p_id, p_verb, p_actor_id, p_object_id, p_target_id, p_latitude, p_longitude, ST_SetSRID(ST_MakePoint(p_longitude, p_latitude), 4326));
END IF;
END;
$$ LANGUAGE plpgsql;
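Usage follows the same pattern; note that the actor and object must already exist in objectstorage because of the foreign keys. All ids below are invented for illustration:

```sql
SELECT upsert_activity(
    'b1ffcd88-1c0b-4ef8-bb6d-6bb9bd380a22'::uuid, -- activity id
    'posted',                                     -- verb
    'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, -- actor_id (row in objectstorage)
    'c2aabb77-2c0b-4ef8-bb6d-6bb9bd380a33'::uuid, -- object_id (row in objectstorage)
    NULL,                                         -- target_id is optional
    40.416775, -3.703790
);
```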
To avoid serialization issues and redundant code, we’ll modify our queries to return JSON arrays. We’ll add a new column named “self” to the activity table and create a trigger that saves the current activity values in JSON format.
ALTER TABLE activity ADD COLUMN self JSON;
CREATE OR REPLACE FUNCTION update_activity_self() RETURNS TRIGGER AS $$
BEGIN
NEW.self = json_build_object(
'id', NEW.id,
'verb', NEW.verb,
'actor_id',NEW.actor_id,
'actor', (SELECT object_data FROM objectstorage WHERE id = NEW.actor_id),
'object_id',NEW.object_id,
'object', (SELECT object_data FROM objectstorage WHERE id = NEW.object_id),
'target_id',NEW.target_id,
'target', (SELECT object_data FROM objectstorage WHERE id = NEW.target_id),
'latitude', NEW.latitude,
'longitude', NEW.longitude,
'created_at', NEW.created_at,
'updated_at', NEW.updated_at
)::jsonb;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER activity_self_trigger
BEFORE INSERT OR UPDATE ON activity
FOR EACH ROW
EXECUTE FUNCTION update_activity_self();
CREATE OR REPLACE FUNCTION get_activities_by_distance_as_json(
p_lat NUMERIC,
p_long NUMERIC,
p_distance INTEGER,
p_page_num INTEGER,
p_page_size INTEGER
)
RETURNS JSON
AS $$
DECLARE
activities_json JSON;
BEGIN
SELECT json_agg(a.self) INTO activities_json
FROM (
SELECT a.self
FROM activity a
WHERE ST_DWithin(location::geography, ST_SetSRID(ST_Point(p_long, p_lat), 4326)::geography, p_distance)
ORDER BY created_at DESC
LIMIT p_page_size
OFFSET (p_page_num - 1) * p_page_size
) a;
RETURN activities_json;
END;
$$ LANGUAGE plpgsql;
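For example, to fetch the first page of 20 activities within 5 km of a point (the coordinates are illustrative; ST_DWithin on geography values measures distance in meters):

```sql
SELECT get_activities_by_distance_as_json(
    40.416775, -- latitude
    -3.703790, -- longitude
    5000,      -- distance in meters
    1,         -- page number (1-based)
    20         -- page size
);
```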
An activity stream without a follow functionality would defeat the main purpose of an activity stream, which is to keep track of the activities of other actors without the need to constantly visit their profile page.
So here is the code for the follow functionality
CREATE TABLE follow (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
follower_id UUID NOT NULL REFERENCES objectstorage(id),
followee_id UUID NOT NULL REFERENCES objectstorage(id),
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
UNIQUE (follower_id, followee_id) -- needed so duplicate follows raise unique_violation
);
CREATE OR REPLACE FUNCTION follow_user(
p_follower_id UUID,
p_followee_id UUID
) RETURNS VOID AS $$
BEGIN
-- Try to insert a new row into the follow table
-- If the row already exists, do nothing
BEGIN
INSERT INTO follow (follower_id, followee_id)
VALUES (p_follower_id, p_followee_id);
EXCEPTION WHEN unique_violation THEN
RETURN;
END;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION unfollow_user(
p_follower_id UUID,
p_followee_id UUID
) RETURNS VOID AS $$
BEGIN
-- Delete the row from the follow table where the follower_id and followee_id match
DELETE FROM follow
WHERE follower_id = p_follower_id AND followee_id = p_followee_id;
END;
$$ LANGUAGE plpgsql;
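Both functions are idempotent: following someone you already follow is a no-op (the unique_violation is swallowed), and unfollowing someone you don't follow simply deletes nothing. A minimal Python sketch of these semantics:

```python
# Follow/unfollow as idempotent set operations -- the same semantics the
# two SQL functions implement over the follow table.
follows: set[tuple[str, str]] = set()

def follow_user(follower: str, followee: str) -> None:
    follows.add((follower, followee))      # duplicate follow is a no-op

def unfollow_user(follower: str, followee: str) -> None:
    follows.discard((follower, followee))  # unknown pair is a no-op

follow_user("alice", "bob")
follow_user("alice", "bob")  # second call changes nothing
```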
To create an activity stream, we need to first identify the actors that we are following. To accomplish this, we can define a function that takes an ID of an object from our object storage and retrieves the IDs of all the actors that are being followed by that object.
Here’s the function code:
CREATE OR REPLACE FUNCTION get_following_ids(p_user_id UUID)
RETURNS UUID[] AS $$
DECLARE
following_ids UUID[];
BEGIN
SELECT ARRAY_AGG(followee_id) INTO following_ids
FROM follow
WHERE follower_id = p_user_id;
RETURN following_ids;
END;
$$ LANGUAGE plpgsql;
Now that we have the list of actors we are following, the next step is to retrieve their activities. This is challenging for two reasons: first, a purely relational design would require complex joins that slow down retrieval; second, the actors we follow may have produced a large number of activities, and fetching them all at once could overload the server. To address both issues, we add pagination to our queries for efficient, scalable retrieval.
CREATE OR REPLACE FUNCTION get_activities_by_following(p_page_num INTEGER, p_page_size INTEGER, p_following_ids UUID[])
RETURNS TABLE (
id UUID,
verb TEXT,
actor_id UUID,
object_id UUID,
target_id UUID,
latitude DECIMAL(9,6),
longitude DECIMAL(9,6),
location GEOMETRY(Point, 4326),
self_data JSON,
created_at TIMESTAMP WITH TIME ZONE,
updated_at TIMESTAMP WITH TIME ZONE
) AS $$
BEGIN
RETURN QUERY
SELECT a.id, a.verb, a.actor_id, a.object_id, a.target_id, a.latitude, a.longitude, a.location, a."self" , a.created_at, a.updated_at
FROM activity a
WHERE a.actor_id = ANY(p_following_ids)
ORDER BY a.created_at DESC
LIMIT p_page_size
OFFSET (p_page_num - 1) * p_page_size;
END;
$$ LANGUAGE plpgsql;
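The LIMIT/OFFSET arithmetic assumes 1-based page numbers; a quick sketch of the mapping (the function name is illustrative, not part of the schema):

```python
def page_window(page_num: int, page_size: int) -> tuple[int, int]:
    """Return the (LIMIT, OFFSET) pair used by get_activities_by_following."""
    if page_num < 1 or page_size < 1:
        raise ValueError("page_num and page_size must be >= 1")
    # Page 1 starts at offset 0, page 2 at offset page_size, and so on.
    return page_size, (page_num - 1) * page_size
```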
We need a function that takes the result produced by the get_activities_by_following function and converts it into a JSON array.
CREATE OR REPLACE FUNCTION get_activities_by_following_as_json(p_page_num INTEGER, p_page_size INTEGER, p_user_id UUID)
RETURNS JSON AS $$
DECLARE
following_ids UUID[] := get_following_ids(p_user_id); -- the function already returns UUID[], so no ARRAY(SELECT ...) wrapper is needed
BEGIN
RETURN (SELECT json_agg(self_data) FROM get_activities_by_following(p_page_num, p_page_size, following_ids));
END;
$$ LANGUAGE plpgsql;
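On the application side, this function returns a single JSON value that any client can decode directly. A sketch of consuming one page (the payload below is abbreviated from the output shown later in this article; a real client would receive the string from its database driver):

```python
import json

# Abbreviated sample of one page returned by
# get_activities_by_following_as_json.
page_json = """
[
  {"id": "43a92964-5bcd-4096-93bc-e5e87c76455e",
   "verb": "post",
   "actor": {"name": "Charlie"},
   "object": {"description": "Smartphone, unlocked"}}
]
"""

activities = json.loads(page_json)
summaries = [
    f'{a["actor"]["name"]} {a["verb"]}ed: {a["object"]["description"]}'
    for a in activities
]
```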
To demonstrate our activity stream system, we need to create sample data. Let’s create 5 users and have them post ads on our objectstorage table.
--create users and activities
SELECT upsert_objectstorage(
'b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01', -- object ID 1
59.9311, -- latitude
30.3609, -- longitude
'user', -- object type
'{"name": "Alice", "age": 27, "email": "alice@example.com", "picture_url": "https://example.com/pictures/alice.jpg"}' -- object data in JSON format
);
SELECT upsert_objectstorage(
'cc7ebda2-019c-4387-925c-352f7e1f0b10', -- object ID 2
59.9428, -- latitude
30.3071, -- longitude
'user', -- object type
'{"name": "Bob", "age": 33, "email": "bob@example.com", "picture_url": "https://example.com/pictures/bob.jpg"}' -- object data in JSON format
);
SELECT upsert_objectstorage(
'99875f15-49ee-4e6d-b356-cbab4f4e4a4c', -- object ID 3
59.9375, -- latitude
30.3086, -- longitude
'user', -- object type
'{"name": "Charlie", "age": 42, "email": "charlie@example.com", "picture_url": "https://example.com/pictures/charlie.jpg"}' -- object data in JSON format
);
SELECT upsert_objectstorage(
'34f6c0a5-5d5e-463f-a2cf-11b7529a92a1', -- object ID 4
59.9167, -- latitude
30.25, -- longitude
'user', -- object type
'{"name": "Dave", "age": 29, "email": "dave@example.com", "picture_url": "https://example.com/pictures/dave.jpg"}' -- object data in JSON format
);
SELECT upsert_objectstorage(
'8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54', -- object ID 5
59.9391, -- latitude
30.3158, -- longitude
'user', -- object type
'{"name": "Eve", "age": 25, "email": "eve@example.com", "picture_url": "https://example.com/pictures/eve.jpg"}' -- object data in JSON format
);
--create ads
-- Bob's ad
SELECT upsert_objectstorage(
'f6c7599e-8161-4d54-82ec-faa13bb8cbf7', -- object ID
59.9428, -- latitude (near Saint Petersburg)
30.3071, -- longitude (near Saint Petersburg)
'ad', -- object type
'{"description": "Vintage bicycle, good condition", "ad_type": "sale", "picture_url": "https://example.com/pictures/bicycle.jpg"}' -- object data in JSON format
);
SELECT upsert_activity(
gen_random_uuid(), -- activity ID
'post', -- verb
'cc7ebda2-019c-4387-925c-352f7e1f0b10', -- actor ID (Bob)
'f6c7599e-8161-4d54-82ec-faa13bb8cbf7', -- object ID (Bob's ad)
NULL, -- target ID (no target)
59.9428, -- latitude (near Saint Petersburg)
30.3071 -- longitude (near Saint Petersburg)
);
-- Charlie's ad
SELECT upsert_objectstorage(
'41f7c558-1cf8-4f2b-b4b4-4d4e4df50843', -- object ID
59.9375, -- latitude (near Saint Petersburg)
30.3086, -- longitude (near Saint Petersburg)
'ad', -- object type
'{"description": "Smartphone, unlocked", "ad_type": "sale", "picture_url": "https://example.com/pictures/smartphone.jpg"}' -- object data in JSON format
);
SELECT upsert_activity(
gen_random_uuid(), -- activity ID
'post', -- verb
'99875f15-49ee-4e6d-b356-cbab4f4e4a4c', -- actor ID (Charlie)
'41f7c558-1cf8-4f2b-b4b4-4d4e4df50843', -- object ID (Charlie's ad)
NULL, -- target ID (no target)
59.9375, -- latitude (near Saint Petersburg)
30.3086 -- longitude (near Saint Petersburg)
);
-- Dave's ad
SELECT upsert_objectstorage(
'c3dd7b47-1bba-4916-8a6a-8b5f2b50ba88', -- object ID
59.9139, -- latitude (near Saint Petersburg)
30.3341, -- longitude (near Saint Petersburg)
'ad', -- object type
'{"description": "Vintage camera, working condition", "ad_type": "exchange", "picture_url": "https://example.com/pictures/camera.jpg"}' -- object data in JSON format
);
SELECT upsert_activity(
gen_random_uuid(), -- activity ID
'post', -- verb
'34f6c0a5-5d5e-463f-a2cf-11b7529a92a1', -- actor ID (Dave)
'c3dd7b47-1bba-4916-8a6a-8b5f2b50ba88', -- object ID (Dave's ad)
NULL, -- target ID (no target)
59.9139, -- latitude (near Saint Petersburg)
30.3341 -- longitude (near Saint Petersburg)
);
-- Eve's ad
SELECT upsert_objectstorage(
'3453f3c1-296d-47a5-a5a5-f5db5ed3f3b3', -- object ID
59.9375, -- latitude (near Saint Petersburg)
30.3141, -- longitude (near Saint Petersburg)
'ad', -- object type
'{"description": "Plants, various types", "ad_type": "want", "picture_url": "https://example.com/pictures/plants.jpg"}' -- object data in JSON format
);
SELECT upsert_activity(
gen_random_uuid(), -- activity ID
'post', -- verb
'8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54', -- actor ID (Eve)
'3453f3c1-296d-47a5-a5a5-f5db5ed3f3b3', -- object ID (Eve's ad)
NULL, -- target ID (no target)
59.9375, -- latitude (near Saint Petersburg)
30.3141 -- longitude (near Saint Petersburg)
);
-- Alice's ad
SELECT upsert_objectstorage(
'b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c02', -- new object ID for Alice's ad
59.9311, -- latitude
30.3609, -- longitude
'ad', -- object type
'{"description": "Used bicycle, good condition", "ad_type": "sell", "picture_url": "https://example.com/pictures/bicycle.jpg"}' -- ad data in JSON format
);
SELECT upsert_activity(
gen_random_uuid(), -- activity ID
'post', -- verb
'b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01', -- actor ID (Alice)
'b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c02', -- object ID (Alice's ad)
NULL, -- target ID (no target)
59.9311, -- latitude
30.3609 -- longitude
);
-- Charlie's ad
SELECT upsert_objectstorage(
'99875f15-49ee-4e6d-b356-cbab4f4e4a4d', -- new object ID for Charlie's ad
59.9375, -- latitude
30.3086, -- longitude
'ad', -- object type
'{"description": "Books, various genres", "ad_type": "sell", "picture_url": "https://example.com/pictures/books.jpg"}' -- ad data in JSON format
);
SELECT upsert_activity(
gen_random_uuid(), -- activity ID
'post', -- verb
'99875f15-49ee-4e6d-b356-cbab4f4e4a4c', -- actor ID (Charlie)
'99875f15-49ee-4e6d-b356-cbab4f4e4a4d', -- object ID (Charlie's ad)
NULL, -- target ID (no target)
59.9428, -- latitude
30.3071 -- longitude
);
-- Bob's ad
SELECT upsert_objectstorage(
'cc7ebda2-019c-4387-925c-352f7e1f0b12', -- new object ID for Bob's ad
59.9428, -- latitude
30.3071, -- longitude
'ad', -- object type
'{"description": "Vintage record player, needs repair", "ad_type": "exchange", "picture_url": "https://example.com/pictures/record_player.jpg"}' -- ad data in JSON format
);
SELECT upsert_activity(
gen_random_uuid(), -- activity ID
'post', -- verb
'cc7ebda2-019c-4387-925c-352f7e1f0b10', -- actor ID (Bob)
'cc7ebda2-019c-4387-925c-352f7e1f0b12', -- object ID (Bob's ad)
NULL, -- target ID (no target)
59.9428, -- latitude
30.3071 -- longitude
);
Now that we have created objects and activities, the activity stream will still be empty because actors need to follow each other to generate activity. Therefore, we need to establish follow relationships between users to create a stream showing their activities.
-- Follow data
-- Have Eve and Alice follow themselves (so their own posts appear in their streams)
SELECT follow_user('8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54', '8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54');
SELECT follow_user('b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01', 'b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01');
-- Have Eve and Alice follow Bob, Charlie, and Dave
SELECT follow_user('8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54', 'cc7ebda2-019c-4387-925c-352f7e1f0b10');
SELECT follow_user('b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01', 'cc7ebda2-019c-4387-925c-352f7e1f0b10');
SELECT follow_user('8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54', '99875f15-49ee-4e6d-b356-cbab4f4e4a4c');
SELECT follow_user('b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01', '99875f15-49ee-4e6d-b356-cbab4f4e4a4c');
SELECT follow_user('8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54', '34f6c0a5-5d5e-463f-a2cf-11b7529a92a1');
SELECT follow_user('b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01', '34f6c0a5-5d5e-463f-a2cf-11b7529a92a1');
It’s time to test our activity stream. First, let’s get the list of users that Alice is following:
SELECT get_following_ids('b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01'); -- get the objects that Alice is following
Here is the result:
get_following_ids
-----------------------------------------------------------------------------------------------------------------------------------------------------+
{
b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01,
cc7ebda2-019c-4387-925c-352f7e1f0b10,
99875f15-49ee-4e6d-b356-cbab4f4e4a4c,
34f6c0a5-5d5e-463f-a2cf-11b7529a92a1
}
Now let’s get the activities of the users that Alice is following; we will request page 1 with 10 records per page:
SELECT * FROM get_activities_by_following(1, 10, get_following_ids('b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01'));
Here is the result (only the first few columns are shown):
id |verb|actor_id |object_id |target_id|
------------------------------------+----+------------------------------------+------------------------------------+---------+
f905356f-2fe3-4f51-b6de-d2cd107f46b8|post|cc7ebda2-019c-4387-925c-352f7e1f0b10|f6c7599e-8161-4d54-82ec-faa13bb8cbf7| |
43a92964-5bcd-4096-93bc-e5e87c76455e|post|99875f15-49ee-4e6d-b356-cbab4f4e4a4c|41f7c558-1cf8-4f2b-b4b4-4d4e4df50843| |
69ec53ac-bbaa-4c36-81ef-8764647d7914|post|34f6c0a5-5d5e-463f-a2cf-11b7529a92a1|c3dd7b47-1bba-4916-8a6a-8b5f2b50ba88| |
de6b052f-8a84-4b37-9920-9f76cbb539d4|post|b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01|b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c02| |
3c35544a-3ee0-47ae-bddc-1017127ff4d6|post|99875f15-49ee-4e6d-b356-cbab4f4e4a4c|99875f15-49ee-4e6d-b356-cbab4f4e4a4d| |
e76dcbb9-56c4-46d8-bb42-2f67dec4c5aa|post|cc7ebda2-019c-4387-925c-352f7e1f0b10|cc7ebda2-019c-4387-925c-352f7e1f0b12| |
Now let’s make this better and get the activities in JSON format:
SELECT * FROM get_activities_by_following_as_json(1, 2, 'b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01');
And here is the result:
[
{
"id":"43a92964-5bcd-4096-93bc-e5e87c76455e",
"verb":"post",
"actor":{
"age":42,
"name":"Charlie",
"email":"charlie@example.com",
"picture_url":"https://example.com/pictures/charlie.jpg"
},
"object":{
"ad_type":"sale",
"description":"Smartphone, unlocked",
"picture_url":"https://example.com/pictures/smartphone.jpg"
},
"target":null,
"actor_id":"99875f15-49ee-4e6d-b356-cbab4f4e4a4c",
"latitude":59.937500,
"longitude":30.308600,
"object_id":"41f7c558-1cf8-4f2b-b4b4-4d4e4df50843",
"target_id":null,
"created_at":"2023-03-12T17:54:11.636928+03:00",
"updated_at":"2023-03-12T17:54:11.636928+03:00"
},
{
"id":"f905356f-2fe3-4f51-b6de-d2cd107f46b8",
"verb":"post",
"actor":{
"age":33,
"name":"Bob",
"email":"bob@example.com",
"picture_url":"https://example.com/pictures/bob.jpg"
},
"object":{
"ad_type":"sale",
"description":"Vintage bicycle, good condition",
"picture_url":"https://example.com/pictures/bicycle.jpg"
},
"target":null,
"actor_id":"cc7ebda2-019c-4387-925c-352f7e1f0b10",
"latitude":59.942800,
"longitude":30.307100,
"object_id":"f6c7599e-8161-4d54-82ec-faa13bb8cbf7",
"target_id":null,
"created_at":"2023-03-12T17:54:11.636928+03:00",
"updated_at":"2023-03-12T17:54:11.636928+03:00"
}
]
And before I go, here is a bonus: this query returns all the activities that happened within a given distance (in meters) of a specific geolocation:
SELECT get_activities_by_distance_as_json(59.9343, 30.3351, 1600, 1, 10);
Here are the results; all those places are near my home.
[
{
"id":"43a92964-5bcd-4096-93bc-e5e87c76455e",
"verb":"post",
"actor":{
"age":42,
"name":"Charlie",
"email":"charlie@example.com",
"picture_url":"https://example.com/pictures/charlie.jpg"
},
"object":{
"ad_type":"sale",
"description":"Smartphone, unlocked",
"picture_url":"https://example.com/pictures/smartphone.jpg"
},
"target":null,
"actor_id":"99875f15-49ee-4e6d-b356-cbab4f4e4a4c",
"latitude":59.937500,
"longitude":30.308600,
"object_id":"41f7c558-1cf8-4f2b-b4b4-4d4e4df50843",
"target_id":null,
"created_at":"2023-03-12T17:54:11.636928+03:00",
"updated_at":"2023-03-12T17:54:11.636928+03:00"
},
{
"id":"e5e26df0-e58f-4b25-96c1-5b3460beb0c8",
"verb":"post",
"actor":{
"age":25,
"name":"Eve",
"email":"eve@example.com",
"picture_url":"https://example.com/pictures/eve.jpg"
},
"object":{
"ad_type":"want",
"description":"Plants, various types",
"picture_url":"https://example.com/pictures/plants.jpg"
},
"target":null,
"actor_id":"8d7685d5-5b1f-4a7a-835e-b89e7d3a3b54",
"latitude":59.937500,
"longitude":30.314100,
"object_id":"3453f3c1-296d-47a5-a5a5-f5db5ed3f3b3",
"target_id":null,
"created_at":"2023-03-12T17:54:11.636928+03:00",
"updated_at":"2023-03-12T17:54:11.636928+03:00"
},
{
"id":"de6b052f-8a84-4b37-9920-9f76cbb539d4",
"verb":"post",
"actor":{
"age":27,
"name":"Alice",
"email":"alice@example.com",
"picture_url":"https://example.com/pictures/alice.jpg"
},
"object":{
"ad_type":"sell",
"description":"Used bicycle, good condition",
"picture_url":"https://example.com/pictures/bicycle.jpg"
},
"target":null,
"actor_id":"b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c01",
"latitude":59.931100,
"longitude":30.360900,
"object_id":"b8dcbf13-cb01-4a35-93d5-5a5f5a2f6c02",
"target_id":null,
"created_at":"2023-03-12T17:54:11.636928+03:00",
"updated_at":"2023-03-12T17:54:11.636928+03:00"
}
]
In conclusion, this article provided a step-by-step guide to building an activity stream system with PostgreSQL as the storage backend. It covered the object storage table, the activity table, follow functionality, and pagination for handling the large volume of data users generate, as well as the PostGIS extension for geographical search and the use of JSON columns to store complex data structures. By following this guide, developers can build their own efficient, scalable activity stream systems on PostgreSQL and integrate them into their applications.
You can find the complete code for this tutorial here:
https://github.com/egarim/PostgresActivityStream