PostgreSQL Full-text search using “text search vectors”

Postgres Vector Search

Full-text search in PostgreSQL is implemented using a concept called “text search vectors” (or just “tsvector”). Let’s dive into how it works:

  1. Text Search Vectors (tsvector):
    • A tsvector is a sorted list of distinct lexemes, which are words that have been normalized to merge different forms of the same word (e.g., “run” and “running”).
    • PostgreSQL provides functions to convert plain text into tsvector format, which typically involves:
      • Parsing the text into tokens.
      • Converting tokens to lexemes.
      • Removing stop words (common words like “and” or “the” that are typically ignored in searches).
    • Example: The text “The quick brown fox” might be represented in tsvector as 'brown':3 'fox':4 'quick':2.
  2. Text Search Queries (tsquery):
    • A tsquery represents a text search query, which includes lexemes and optional operators.
    • Operators can be used to combine lexemes in different ways (e.g., AND, OR, NOT).
    • Example: The query “quick & fox” would match any tsvector containing both “quick” and “fox”.
  3. Searching:
    • PostgreSQL provides the @@ operator to search a tsvector column with a tsquery.
    • Example: WHERE column @@ to_tsquery('english', 'quick & fox').
  4. Ranking:
    • Once you’ve found matches using the @@ operator, you often want to rank them by relevance.
    • PostgreSQL provides the ts_rank function to rank results. It returns a number indicating how relevant a tsvector is to a tsquery.
    • The ranking is based on various factors, including the frequency of lexemes and their proximity to each other in the text.
  5. Indexes:
    • One of the significant advantages of tsvector is that you can create a GiST or GIN index on it.
    • These indexes significantly speed up full-text search queries.
    • GIN indexes, in particular, are optimized for tsvector and provide very fast lookups.
  6. Normalization and Configuration:
    • PostgreSQL supports multiple configurations (e.g., “english”, “french”) that determine how text is tokenized and which stop words are used.
    • This allows you to tailor your full-text search to specific languages or requirements.
  7. Highlighting and Snippets:
    • In addition to just searching, PostgreSQL provides functions like ts_headline to return snippets of the original text with search terms highlighted.

In summary, PostgreSQL’s full-text search works by converting regular text into a normalized format (tsvector) that is optimized for searching. This, combined with powerful query capabilities (tsquery) and indexing options, makes it a robust solution for many full-text search needs. The sketch below ties these pieces together.
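Here is a minimal SQL sketch of the whole flow; the documents table and its columns are invented for the example, and the generated-column syntax assumes PostgreSQL 12 or later:

-- Sample table with a stored, automatically maintained tsvector column
CREATE TABLE documents (
    id    serial PRIMARY KEY,
    title text,
    body  text,
    search_vector tsvector GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
    ) STORED
);

-- GIN index to speed up full-text queries
CREATE INDEX idx_documents_search ON documents USING GIN (search_vector);

-- Search, rank, and highlight in one query
SELECT id,
       title,
       ts_rank(search_vector, query) AS rank,
       ts_headline('english', body, query) AS snippet
FROM documents,
     to_tsquery('english', 'quick & fox') AS query
WHERE search_vector @@ query
ORDER BY rank DESC;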

Implementing vector search using EF Core and PostgreSQL

Here are the steps to implement vector search in your .NET project:

Step 1: Add the required NuGet packages

<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL" Version="7.0.11" />
<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.Design" Version="1.1.0" />
<PackageReference Include="Npgsql.EntityFrameworkCore.PostgreSQL.NetTopologySuite" Version="7.0.11" />

Step 2: Add the vector to your entities by declaring a property of type NpgsqlTsVector, as shown below

using NpgsqlTypes; // NpgsqlTsVector lives in the NpgsqlTypes namespace

public class Blog
{
    public int Id { get; set; }
    public string Title { get; set; }

    // Holds the precomputed tsvector used for full-text search
    public NpgsqlTsVector SearchVector { get; set; }
}

Step 3: Add a computed column in your DbContext

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>()
        .Property(b => b.SearchVector)
        // Stored computed column: PostgreSQL keeps the tsvector in sync with the Title column
        .HasComputedColumnSql("to_tsvector('english', \"Blogs\".\"Title\")", stored: true);
}

In this case the vector is computed from the value of the Title column of the Blogs table. You can build the vector from a single column or from a combination of columns, as sketched below.
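For example, assuming the Blogs table also had a Body column (hypothetical here), the SQL expression passed to HasComputedColumnSql could combine both columns, and a GIN index on the computed column keeps lookups fast at the database level:

-- Computed-column expression combining two columns ("Body" is hypothetical here)
to_tsvector('english', coalesce("Title", '') || ' ' || coalesce("Body", ''))

-- GIN index on the computed column (the index name is just an example)
CREATE INDEX "IX_Blogs_SearchVector" ON "Blogs" USING GIN ("SearchVector");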

Now you are ready to use vector search in your queries; see the example below.

var searchTerm = "Jungle"; // Example search term
var searchVector = NpgsqlTsVector.Parse(searchTerm);

var blogs = context.Blogs
    .Where(p => p.SearchVector.Matches(searchTerm))
    .OrderByDescending(td => td.SearchVector.Rank(EF.Functions.ToTsQuery(searchTerm))).ToList();

In real-world scenarios it is better to build the vector from the values of several columns and weight them according to their relevance for your business case, as sketched below. You can check the test project I created here: https://github.com/egarim/PostgresVectorSearch
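As a sketch of that idea (again assuming a hypothetical Body column), the computed-column expression could give the title a higher weight than the body; ts_rank takes these weights into account when scoring matches:

-- 'A' is the highest weight and 'D' the lowest
setweight(to_tsvector('english', coalesce("Title", '')), 'A') ||
setweight(to_tsvector('english', coalesce("Body", '')), 'B')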

And that’s it for this post. Until next time, happy coding!

Elastic Search

Elasticsearch is a distributed, RESTful search and analytics engine capable of addressing a growing number of use cases. It is part of the Elastic Stack, along with Logstash and Kibana, collectively known as the ELK Stack. Elasticsearch is built on top of the open-source Apache Lucene library and provides a multi-tenant capable full-text search engine. It’s designed to be scalable, resilient, and very fast, which makes it a popular choice for many different types of applications, including:

1. Search Engines: Full-text search, partial text search, faceted search, and more.

2. Log and Event Data Analysis: Often used with Logstash and Kibana for searching, analyzing, and visualizing log data in real time.

3. Real-time Analytics: Can be used for analyzing large volumes of real-time data efficiently.

4. Data Visualization: Often used with Kibana to visualize the data stored in Elasticsearch.

5. Autocomplete Features: Quick search suggestions.

6. Geospatial Search: Searching based on geographic location.

Key Features:

  • Distributed and Scalable: Built to scale horizontally with easy distribution across multiple nodes.
  • Schema-free JSON Documents: Stores data as JSON documents, which makes it flexible and easy to use.
  • RESTful API: Exposes REST APIs for CRUD operations, allowing interaction via standard HTTP methods.
  • Real-time Indexing: As soon as a document is stored, it is available for search.
  • Multi-tenancy: Supports multiple indices, and the indices can be divided into shards for better performance.

Basic Concepts:

  • Node: A single running instance of Elasticsearch.
  • Cluster: A collection of one or more nodes.
  • Index: A collection of documents having somewhat similar characteristics.
  • Shard: A subset of an index. Each shard is a self-contained index.
  • Replica: A copy of a shard for failover and increased performance.

Elasticsearch is widely used in a variety of applications that require complex search features, large-scale logging, or real-time analytics. It’s often compared to other search and NoSQL technologies such as Apache Solr and MongoDB.

Choosing to use Elasticsearch depends on your specific needs, but there are several compelling reasons why it might be a good fit for your project:

Speed

  • Fast Search: Built on top of Apache Lucene, Elasticsearch is designed for fast, real-time search operations.
  • Real-time Indexing: New data is searchable almost immediately after it’s added.

Scalability

  • Horizontal Scaling: You can easily add more nodes to your Elasticsearch cluster as your data and query volume grow.
  • Distributed Nature: Automatically distributes data and query load across all the available nodes in the cluster.

Flexibility

  • Schema-less: You can index JSON documents without a predefined schema.
  • RESTful API: Easily interact with the search engine through RESTful APIs, using JSON over HTTP.
  • Multiple Data Types: Supports text, numbers, dates, geospatial data, and more.

Robustness

  • High Availability: Multiple copies of data (replicas) can be maintained to provide failover.
  • Built-in Cluster Health and Monitoring: Tools like Kibana can provide insights into the operations and health of your Elasticsearch cluster.

Rich Query DSL

  • Powerful Query Language: Elasticsearch provides a rich, flexible query language (the Query DSL) that can express complex queries, filters, and aggregations.
  • Relevancy Scoring: Sophisticated algorithms score each document for its relevance to a given search query.

Integration and Extensibility

  • Part of the Elastic Stack: Integrates seamlessly with other components like Logstash for data ingestion and Kibana for data visualization.
  • Extensible: Supports plugins to add additional features and capabilities.

Multi-Tenancy

  • Support for Multiple Indices: You can have multiple indices (databases) and query them all at once if needed.

Use Cases

  • Full-text Search: For applications like e-commerce product search, media catalog search, etc.
  • Logging and Log Analysis: When combined with Logstash and Kibana, it’s a powerful tool for logging debug information, monitoring, and real-time analytics.
  • Real-time Analytics: For business intelligence, performance metrics, and other real-time analytics needs.
  • Data Visualization: Can be used with Kibana or other visualization tools to graphically represent your data.

Community and Ecosystem

  • Strong Community: A large, active community contributes to its robust set of features.
  • Comprehensive Documentation: Extensive online resources are available to help you get the most out of Elasticsearch.

However, it’s important to note that Elasticsearch may not be suitable for all types of projects. It can be resource-intensive, and the learning curve can be steep if you’re new to search and analytics engines. It might also be overkill for simple search needs or small datasets. Always consider your specific requirements and constraints when deciding whether to use Elasticsearch.