Building AI-Powered Semantic Search

In today’s world, vast amounts of data are generated and consumed online, and smart search functionality is a must. Traditional search techniques are keyword-based and have limitations. Specialized search databases like Elasticsearch offer fuzzy search, which allows some flexibility for typing mistakes. But is that good enough?

Not for Jam, a decentralized social media platform built on the Farcaster protocol. Jam was one of our clients, and while working on it in December 2022 we quickly realized that smart search was going to be the key differentiator for this product.

Keyword Search and its Limitations

Keyword search, or lexical search, looks for exact matches of the search term (perhaps allowing for typos or plural forms). What keyword search doesn’t do is understand the overall meaning of the query.

If you search for “famous landmarks of Paris”, a result about the Eiffel Tower won’t come up unless it contains those exact keywords.

That’s where semantic search or vector search comes in. It understands that the Eiffel Tower is in fact Paris’ most famous landmark.

Image Credit: Inspiration from OpenAI

Vector search is an approach to information retrieval that uses numeric representations of content. These representations, also called vectors, are N-dimensional arrays created by AI models that capture the meaning and semantics of the whole content. The AI model, also referred to as an embedding model, can transform complex unstructured data, including text and images, into vector representations.

Vector search uses approximate nearest neighbour (ANN) algorithms to find similar data and yields more relevant and precise results than traditional search.
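To make this concrete, here is a minimal TypeScript sketch of how the similarity between two embedding vectors is typically measured. The vectors below are tiny placeholders; in practice they come from an embedding model and have hundreds or thousands of dimensions.

```typescript
// Cosine similarity: how closely two embedding vectors point in the same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vector databases commonly report distance = 1 - cosine similarity,
// so a smaller distance means more semantically similar content.
function cosineDistance(a: number[], b: number[]): number {
  return 1 - cosineSimilarity(a, b);
}

// Tiny made-up vectors for illustration:
cosineDistance([0.1, 0.9], [0.12, 0.88]); // close to 0 → very similar
cosineDistance([0.1, 0.9], [0.9, 0.1]);   // much larger → less similar
```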

Image Credit: Inspiration from OpenAI

Let’s look in detail at how a search and recommendation solution was implemented in Jam using vector search.

Selecting a Database

While there are many more vector databases to choose from now (including Redis), when we started on Jam in December 2022 the choice came down to Weaviate and Pinecone. We settled on Weaviate for our use case: it is an open-source platform with pre-built modules for popular AI models and frameworks, enabling faster development.
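To illustrate how those pre-built modules cut down setup work, here is a rough sketch of a schema class wired to Weaviate’s text2vec-openai module using the weaviate-ts-client library. The class name Cast, its properties, and the cluster URL are hypothetical stand-ins, not the actual Jam schema.

```typescript
import weaviate from 'weaviate-ts-client';

// Hypothetical cluster URL; the OpenAI key is passed so Weaviate can call the embedding API.
const client = weaviate.client({
  scheme: 'https',
  host: 'my-cluster.weaviate.network',
  headers: { 'X-OpenAI-Api-Key': process.env.OPENAI_API_KEY ?? '' },
});

// A class vectorized by the pre-built OpenAI module, so Weaviate creates
// embeddings automatically when objects are imported or queried.
const castClass = {
  class: 'Cast', // hypothetical name for a Farcaster post
  description: 'A post on the Farcaster network',
  vectorizer: 'text2vec-openai',
  moduleConfig: {
    'text2vec-openai': { model: 'ada', modelVersion: '002', type: 'text' },
  },
  properties: [
    { name: 'text', dataType: ['text'] },
    { name: 'engagementScore', dataType: ['number'] },
    { name: 'publishedAt', dataType: ['date'] },
  ],
};

async function createSchema() {
  await client.schema.classCreator().withClass(castClass).do();
}
```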

💡
Btw, Weaviate has an absolutely stellar support team. Sila Uygun, one of their account managers who was always very prompt in answering our queries, deserves a shout-out.

Choosing an AI Model

We conducted a POC to evaluate various AI models that generate embeddings, taking factors such as speed, cost, and accuracy into consideration. Ultimately, OpenAI's text-embedding-ada-002 model emerged as the most suitable choice for our use case.
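For reference, generating an embedding with text-embedding-ada-002 directly looks roughly like the sketch below, written against the current openai Node SDK; the exact client used during the POC isn’t covered in this post.

```typescript
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Returns a 1536-dimensional embedding vector for the given text.
async function embed(text: string): Promise<number[]> {
  const response = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: text,
  });
  return response.data[0].embedding;
}
```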

Integrating into the Application

We used Weaviate's nearText method for vector search, which internally uses OpenAI to create the vectors. You only need to provide the text, and Weaviate converts the input text into a vector. After analyzing the results for different use cases, a distance threshold between 0.25 and 0.3 was set to filter out less relevant content.
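A nearText query with a distance threshold looks roughly like this sketch, reusing the hypothetical Cast class and client configuration from the schema sketch above; the field names are illustrative.

```typescript
import weaviate from 'weaviate-ts-client';

// Client configured as in the schema sketch above (including the OpenAI API key header).
const client = weaviate.client({ scheme: 'https', host: 'my-cluster.weaviate.network' });

async function semanticSearch(query: string) {
  const result = await client.graphql
    .get()
    .withClassName('Cast')
    .withFields('text engagementScore _additional { distance }')
    // Weaviate vectorizes the query text through its OpenAI module and returns
    // only objects whose vector distance stays under the threshold.
    .withNearText({ concepts: [query], distance: 0.3 })
    .withLimit(20)
    .do();

  return result.data.Get.Cast;
}
```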

Advanced Search Features

We gave users precise control over search results with advanced sorting and filtering options. Users could sort content by relevance or by an engagement score that took factors such as replies and likes into account. Filtering content by published time allowed users to search within a specific timeframe.
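One way to express this on top of nearText is a where filter on the published date, with the engagement sort applied to the returned hits in application code; a rough sketch under the same hypothetical schema:

```typescript
// Reuses the weaviate-ts-client `client` and the hypothetical Cast class from the sketches above.
async function searchWithFilters(query: string, publishedAfter: string) {
  const result = await client.graphql
    .get()
    .withClassName('Cast')
    .withFields('text engagementScore publishedAt _additional { distance }')
    .withNearText({ concepts: [query], distance: 0.3 })
    // Restrict results to a specific timeframe, e.g. '2023-01-01T00:00:00Z'.
    .withWhere({
      path: ['publishedAt'],
      operator: 'GreaterThan',
      valueDate: publishedAfter,
    })
    .withLimit(50)
    .do();

  const hits = result.data.Get.Cast ?? [];
  // When the user sorts by engagement instead of relevance, re-order the hits here.
  return hits.sort((a: any, b: any) => b.engagementScore - a.engagementScore);
}
```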


Custom Relevance Filtering

We built custom logic on top of the vector search to further refine our results. Factors such as text length, internal engagement score, and the presence of the search term were taken into consideration. This additional layer of refinement ensured that users received the most relevant and meaningful results.
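The weights below are purely illustrative; the exact heuristics we tuned for Jam aren’t published. The sketch shows the general shape of such a re-ranking pass:

```typescript
interface SearchHit {
  text: string;
  engagementScore: number;
  distance: number; // vector distance reported by Weaviate (_additional.distance)
}

// Re-rank vector search hits with simple, illustrative heuristics.
function rerank(hits: SearchHit[], query: string): SearchHit[] {
  const q = query.toLowerCase();
  return hits
    .map((hit) => {
      let score = 1 - hit.distance;                          // semantic relevance
      score += Math.min(hit.engagementScore, 100) / 100;     // engagement, capped
      if (hit.text.toLowerCase().includes(q)) score += 0.2;  // boost exact matches of the search term
      if (hit.text.length < 20) score -= 0.3;                // penalize very short posts
      return { hit, score };
    })
    .sort((a, b) => b.score - a.score)
    .map((entry) => entry.hit);
}
```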

Recommendations with Twitcaster

We extended the search functionality to another feature called Twitcaster, which intelligently suggested related users and content within the Farcaster network based on a user's recent tweets. This recommendation system enhanced the user experience by facilitating discovery within the platform. For each tweet, we used the nearText method to fetch the top 3 pieces of content within a vector distance of 0.3, sorted by internal engagement score. The fetched results were then re-ranked and filtered based on the internal engagement score and their vector distance from the tweet.
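In rough TypeScript terms, the flow could look like the sketch below, again using the hypothetical Cast class and client from earlier; the weighting in the final sort is illustrative.

```typescript
import weaviate from 'weaviate-ts-client';

// Client configured as in the schema sketch above.
const client = weaviate.client({ scheme: 'https', host: 'my-cluster.weaviate.network' });

interface Candidate {
  text: string;
  engagementScore: number;
  distance: number;
}

// For each recent tweet, fetch the top 3 nearby casts (distance < 0.3),
// then re-rank the combined pool by engagement and distance.
async function recommend(recentTweets: string[]): Promise<Candidate[]> {
  const pool: Candidate[] = [];

  for (const tweet of recentTweets) {
    const res = await client.graphql
      .get()
      .withClassName('Cast')
      .withFields('text engagementScore _additional { distance }')
      .withNearText({ concepts: [tweet], distance: 0.3 })
      .withLimit(3)
      .do();

    for (const hit of res.data.Get.Cast ?? []) {
      pool.push({
        text: hit.text,
        engagementScore: hit.engagementScore,
        distance: hit._additional.distance,
      });
    }
  }

  // Illustrative weighting: closer and more engaging content ranks higher.
  return pool.sort(
    (a, b) =>
      b.engagementScore * (1 - b.distance) - a.engagementScore * (1 - a.distance),
  );
}
```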

Accounting for Downtimes

We faced occasional downtime with OpenAI's services due to increased load on their systems, especially in the early days. To mitigate such disruptions, we employed a fallback strategy: when the embedding service was unavailable, we fell back to our primary database with limited search capabilities. This approach ensured that users could still access content even during periods of reduced AI service availability.
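The fallback can be structured as a simple try/catch around the vector search path; semanticSearch and keywordSearch below are placeholders for the Weaviate query shown earlier and a basic lookup against the primary database.

```typescript
// Placeholders: semanticSearch wraps the Weaviate nearText query shown earlier,
// keywordSearch is a basic lookup against the primary database.
declare function semanticSearch(query: string): Promise<unknown[]>;
declare function keywordSearch(query: string): Promise<unknown[]>;

async function search(query: string): Promise<unknown[]> {
  try {
    // Preferred path: semantic search backed by OpenAI embeddings.
    return await semanticSearch(query);
  } catch (error) {
    // If the embedding service is down, degrade gracefully to keyword search
    // so users can still find and browse content.
    console.warn('Vector search unavailable, falling back to keyword search', error);
    return keywordSearch(query);
  }
}
```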

Game Changer

In conclusion, using vector search at Jam proved to be a game-changer and helped Jam become the most widely used third-party app on Farcaster. It gave Jam a significant edge over other Farcaster apps that relied on traditional search methods.

It is important to note that vector search is not limited to these use cases; it can also be used for classification, clustering, anomaly detection, and more. It holds interesting opportunities that can be explored in the future. With rapid advancements happening in the field of AI, these solutions are only going to get better. It's crucial to embrace such innovations and stay relevant in the changing tech landscape.

Aman Barbaria

Pune