# Laravel Search in 2026: Full-Text, Semantic, and Vector Search Explained
2026-02-11
**In this post:**

- The Search Ladder: Start Simple, Scale When You Need To
- Full-Text Search: Your First Real Upgrade
  - Adding Full-Text Indexes
  - Running Full-Text Queries
  - When Full-Text Search Falls Short
- Laravel Scout: The Middle Ground
  - The Database Driver (No External Service)
  - External Search Engines
- Semantic and Vector Search: When Keywords Aren't Enough
  - What's Actually Happening Here
  - Generating Embeddings
  - Storing and Indexing Vectors
  - Querying by Similarity
- Reranking: The Best of Both Worlds
- Combining Techniques: The Real-World Approach
- The Decision Framework
- Common Mistakes to Avoid
- Do I need Laravel Scout for full-text search?
- Can I use vector search with MySQL?
- How much does vector search cost in production?
- Should I switch from Algolia to Meilisearch?
- What about Elasticsearch?
- Wrapping Up

*Originally published at hafiz.dev*

Laravel just got a brand-new documentation page dedicated entirely to search. Taylor Otwell shipped it today, and it quietly changes how we should think about search in Laravel applications.

Why does this matter? Because Laravel now treats full-text search, semantic search, vector search, and reranking as first-class features. Not as afterthoughts. Not as "install this third-party package and figure it out." They're all documented in one place, with clear guidance on when to use what.

I've been building search functionality in production Laravel apps for years. One of my SaaS projects handles thousands of search queries daily from users across multiple countries. And honestly? I wish this docs page had existed two years ago. It would have saved me from overengineering my first search implementation.

Let me walk you through everything this new page covers, what it actually means for your projects, and, most importantly, which approach you should pick.

Here's something Taylor himself said in the announcement: "I think most applications can get pretty far with the built-in database stuff." He's right.
And this is the most important takeaway from the entire docs page.

## The Search Ladder: Start Simple, Scale When You Need To

Think of Laravel's search options as a ladder. You start at the bottom and only climb when your current step isn't enough.

- **Step 1:** `WHERE LIKE` queries (you're already here)
- **Step 2:** Full-text search with `whereFullText` (built in, no packages)
- **Step 3:** Laravel Scout with the database driver (still no external services)
- **Step 4:** Scout with Meilisearch, Algolia, or Typesense (external service)
- **Step 5:** Semantic/vector search with embeddings (AI-powered)

Most Laravel apps never need to go past Step 2 or 3. Seriously.

## Full-Text Search: Your First Real Upgrade

If you're still doing `WHERE title LIKE '%search term%'`, you're leaving performance on the table. Full-text search is built right into MariaDB, MySQL, and PostgreSQL, and Laravel makes it dead simple.

## Adding Full-Text Indexes

First, add a full-text index in your migration (the code samples for this post are collected at the end). That's it. No packages. No external services. Just a migration.

## Running Full-Text Queries

Now you can search using `whereFullText`. This is significantly faster than `LIKE` queries on large datasets. The database engine handles word boundaries, stemming, and relevance ranking for you. A search for "running" will match records containing "run" too. No external service required.

One thing to note: on MariaDB and MySQL, results are automatically ordered by relevance score. On PostgreSQL, `whereFullText` filters matching records but doesn't order them by relevance. If you need automatic relevance ordering on PostgreSQL, consider using Scout's database engine, which handles this for you.

If you need more control over the search mode, you can pass options, such as boolean mode.

I used this exact approach in a production app with around 50,000 records. It handled everything we threw at it. Search results came back in under 100ms. Not once did we need to bring in an external search engine for that project.

## When Full-Text Search Falls Short

There are real limitations, though. Full-text search matches keywords, not meaning. If your user searches for "how to fix my website" and your content talks about "debugging web applications," full-text search won't make that connection. It doesn't understand that "fix" and "debug" mean similar things in this context.

Also, `LIKE` queries with a leading wildcard (`%term`) can't use indexes at all. Full-text search solves that, but it still won't give you typo tolerance, faceted filtering, or synonym matching out of the box. That's when you move up the ladder. Scout sits between raw database queries and full AI-powered search.
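A quick aside before moving on: the PostgreSQL relevance caveat above can also be worked around without Scout by ordering on a rank expression yourself. This is a hedged sketch, not an official Laravel API — the `to_tsvector` expression and the `'english'` text search configuration are assumptions you would match to your own schema:

```php
// Filter with whereFullText, then order by PostgreSQL's ts_rank score.
$articles = Article::whereFullText(['title', 'body'], $query)
    ->orderByRaw(
        "ts_rank(
            to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, '')),
            plainto_tsquery('english', ?)
        ) DESC",
        [$query]
    )
    ->get();
```

The tradeoff: you get relevance ordering on plain PostgreSQL, at the cost of raw SQL in your query. If that bothers you, Scout's database engine (Step 3) hides it for you.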
## Laravel Scout: The Middle Ground

It's a package (not built into the framework core), but it's maintained by the Laravel team and deeply integrated. Install it, publish its config, and add the `Searchable` trait to your model.

## The Database Driver (No External Service)

Scout ships with a database driver that works with MySQL and PostgreSQL. No Algolia account needed. No Docker containers to manage. This gives you a cleaner API than raw `whereFullText` calls, automatic index syncing via model observers, and pagination support. For many applications, this is the sweet spot.

## External Search Engines

When the database driver isn't cutting it, Scout supports three external engines out of the box: Algolia (hosted, paid), Meilisearch (open source, self-hostable), and Typesense (open source, self-hostable).

I'll be honest: if you need an external engine, I'd go with Meilisearch for most Laravel projects. It ships with Laravel Sail, it's open source, and the developer experience is excellent. Typesense is a strong alternative if you need built-in vector search capabilities. Algolia is solid but gets expensive fast once you scale past the free tier.

Switching between engines is trivial since Scout abstracts the driver: change one environment variable and your application code stays exactly the same. That's the beauty of Scout's driver-based architecture.

## Semantic and Vector Search: When Keywords Aren't Enough

This is where it gets interesting. And this is the part of the new docs page that surprised me most. Laravel 12 now documents how to implement semantic search using vector embeddings. Not as a "maybe someday" feature, but as a real, documented approach with code examples.

## What's Actually Happening Here

Traditional search matches words. Semantic search matches meaning. When a user searches for "affordable accommodation near the beach," semantic search understands that "budget hotel oceanfront" is a relevant result even though none of the original keywords match.

This works through embeddings. You convert your text into numerical vectors (arrays of floating-point numbers) using an AI model. Similar concepts end up as similar vectors. Then you search by calculating the distance between the query vector and your stored vectors.
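To make the "distance between vectors" idea concrete, here is a framework-free sketch of cosine similarity, the measure a pgvector HNSW index approximates. This is illustrative only — in production the database does this for you:

```php
/**
 * Cosine similarity between two equal-length vectors.
 * 1.0 = same direction (very similar), 0.0 = orthogonal (unrelated).
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;

    foreach ($a as $i => $value) {
        $dot   += $value * $b[$i];
        $normA += $value ** 2;
        $normB += $b[$i] ** 2;
    }

    return $dot / (sqrt($normA) * sqrt($normB));
}

cosineSimilarity([0.1, 0.9], [0.2, 0.8]);  // ≈ 0.99 — nearly the same direction
cosineSimilarity([0.1, 0.9], [0.9, -0.1]); // 0.0 — orthogonal, unrelated
```

Real embeddings have hundreds or thousands of dimensions rather than two, but the math is the same: nearby points mean similar meaning.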
## Generating Embeddings

This is where the Laravel AI SDK comes in. If you haven't set it up yet, my step-by-step tutorial gets you running in 30 minutes.

The simplest way to generate an embedding is through Laravel's `Stringable` class. If you need to embed multiple inputs at once (which is more efficient, since it makes a single API call), use the `Embeddings` class. Each piece of text becomes a point in high-dimensional space, and similar text ends up nearby. That's the core idea.

## Storing and Indexing Vectors

PostgreSQL with the pgvector extension is the most practical option for Laravel developers. Laravel 12 has built-in support for vector columns and indexes right in the migration builder. No extra PHP packages needed.

The `ensureVectorExtensionExists()` call enables pgvector on your database. The `->index()` chain creates an HNSW (Hierarchical Navigable Small World) index automatically, which dramatically speeds up similarity searches on large datasets. That's two lines doing what used to take a separate package and raw SQL statements.

On your Eloquent model, cast the vector column to an array.

## Querying by Similarity

Once your embeddings are stored, you can find semantically similar content using Laravel's built-in `whereVectorSimilarTo` query builder method.

This is powerful stuff, but it comes with real costs. Every piece of content needs an embedding generated (API calls cost money). Every search query also needs an embedding (more API calls, added latency). And you need PostgreSQL with pgvector, which not every hosting provider supports yet. All Laravel Cloud Serverless Postgres databases already include pgvector, though, so if you're on Cloud you're good to go.

## Reranking: The Best of Both Worlds

Here's something most Laravel developers haven't heard about yet: reranking. The idea is simple: use a fast, cheap search method first (full-text or keyword search) to get a broad set of candidates. Then use a more expensive AI model to re-order those results by semantic relevance.
The `$candidates->rerank('body', $query)` call uses Scout's collection macro to rerank all 50 candidates by how semantically relevant their body text is to the query. You can also rerank by multiple fields, or use a closure to build a custom document string.

This gives you semantic understanding without embedding every single record in your database. You only run the expensive AI operation on 20-50 candidates, not your entire dataset. Smart tradeoff.

I haven't used reranking in production yet, but I'm excited about it. It solves the biggest problem with full vector search, which is the cost and complexity of maintaining embeddings for every record. With reranking, you get 80% of the benefit at maybe 20% of the cost.

## Combining Techniques: The Real-World Approach

The new docs page also covers combining these approaches, and this is where the practical value really shines.

In a production SaaS app I worked on, search evolved over time. We started with basic `WHERE LIKE` queries. Within a month, we migrated to `whereFullText` because the dataset grew to 30,000+ records and the `LIKE` queries were getting slow. Six months in, we added Scout with the database driver for a cleaner API and automatic index syncing. We never needed vector search for that project. The content was structured (titles, categories, tags) and users searched with specific terms. Full-text search handled it perfectly.

But in another project, with user-generated content in multiple languages, I could see vector search being the right call from day one. When your users describe the same thing in completely different ways, keyword matching falls apart fast.

The point is: don't pick your search strategy based on what's technically impressive. Pick it based on what your data and users actually need.

## The Decision Framework

Here's my quick guide for choosing.

**Use `whereFullText`** when your data is structured, searches use specific keywords, and you have under 100K records. This covers most admin panels, internal tools, and early-stage SaaS apps. Zero additional cost.
**Use Scout with the database driver** when you want a cleaner search API, automatic index syncing, and pagination. Still no external services. Good for any app that's outgrowing raw queries.

**Use Scout with Meilisearch/Typesense** when you need typo tolerance, faceted search, filtering, or instant search-as-you-type. You'll need to run an additional service, but Sail makes this painless.

**Use vector search** when your users search by concept rather than keywords, you have multilingual content, or you're building recommendation systems. This requires PostgreSQL + pgvector and an AI embedding provider. Real cost implications here.

**Use reranking** when you want semantic relevance without the overhead of embedding your entire dataset. Great middle ground between full-text and full vector search.

## Common Mistakes to Avoid

**Starting with vector search.** I've seen developers reach for pgvector and embeddings for a blog with 200 posts. `whereFullText` would handle that in microseconds with zero ongoing costs. Don't overengineer.

**Ignoring the database driver.** Scout's database driver is surprisingly capable. Before spinning up Meilisearch, try the database driver first. You might not need anything else.

**Not indexing properly.** Running a `whereFullText` query without a full-text index is actually slower than a regular `LIKE` query. Always add the index in your migration. I covered database indexing strategies in detail if you want to go deeper.

**Embedding everything upfront.** If you're adding vector search, embed on write (when content is created or updated), not in bulk. Use queue jobs to handle embedding generation asynchronously.

**Forgetting about cost.** OpenAI's embedding API is cheap per request, but it adds up. 100,000 records at 1,536 dimensions each, plus every search query needing its own embedding: that's real money on your monthly bill.

## Do I need Laravel Scout for full-text search?

No. `whereFullText` is built into Laravel's query builder and works without Scout.
Scout adds a nicer API, automatic syncing, and support for external engines, but it's not required for basic full-text search.

## Can I use vector search with MySQL?

Not natively. MySQL doesn't have a vector extension like PostgreSQL's pgvector. If you need vector search, you'll need PostgreSQL, or you can use an external service like Typesense or Meilisearch (which now supports vector search too).

## How much does vector search cost in production?

It depends on your embedding provider and dataset size. With OpenAI's text-embedding-3-small model, embedding 100,000 articles costs roughly $2-3. But every search query also needs an embedding, so factor in ongoing API costs. Reranking can reduce this significantly.

## Should I switch from Algolia to Meilisearch?

If cost is a concern, yes. Meilisearch is free to self-host and gives you similar features. The migration is simple since Scout abstracts the driver: change your `.env` and you're mostly done.

## What about Elasticsearch?

Laravel doesn't include an Elasticsearch driver for Scout out of the box. There are community packages, but if you're starting fresh, Meilisearch or Typesense are easier to integrate and maintain. Elasticsearch makes more sense for very large-scale applications with complex search requirements.

## Wrapping Up

The new /docs/search page is a small addition to Laravel's documentation, but it signals something bigger. Search in Laravel has evolved from "use Scout" to a complete toolkit covering everything from simple database queries to AI-powered semantic understanding.

My recommendation? Start with `whereFullText`. It's free, it's fast, and it's built in. Move to Scout when you need a better API. Add an external engine when you need typo tolerance or faceted search. And only reach for vector search when keywords really can't solve your problem.

Need help implementing search in your Laravel app? Let's talk.
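One recommendation from the mistakes section deserves a sketch: embedding on write, asynchronously. This is a hypothetical job class — the class name and the `toEmbeddings()` helper (from the Laravel AI SDK discussed earlier) are assumptions, and I'm assuming the helper returns an array matching the model's `embedding` cast:

```php
use App\Models\Article;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;
use Illuminate\Support\Str;

// Hypothetical job: generates an embedding whenever an article is saved,
// keeping the paid API call off the web request cycle.
class GenerateArticleEmbedding implements ShouldQueue
{
    use Queueable;

    public function __construct(public Article $article) {}

    public function handle(): void
    {
        $embedding = Str::of($this->article->title.' '.$this->article->body)
            ->toEmbeddings();

        // updateQuietly avoids re-firing model events (and re-dispatching this job).
        $this->article->updateQuietly(['embedding' => $embedding]);
    }
}

// Dispatch from a model observer's saved() hook:
// GenerateArticleEmbedding::dispatch($article);
```

Dispatching from an observer keeps the embedding fresh on every edit, and a failed API call only fails the queued job, not the user's save.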
---

**Code samples** (referenced in order above):

Adding a full-text index in a migration:

```php
Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('body');
    $table->timestamps();

    $table->fullText(['title', 'body']);
});
```

Running a full-text query:

```php
$articles = Article::whereFullText(['title', 'body'], 'laravel search')->get();
```

Boolean mode for more precise matching:

```php
$articles = Article::whereFullText(
    ['title', 'body'],
    '+laravel -wordpress',
    ['mode' => 'boolean']
)->get();
```

Installing Scout:

```shell
composer require laravel/scout
php artisan vendor:publish --provider="Laravel\Scout\ScoutServiceProvider"
```

Making a model searchable:

```php
use Laravel\Scout\Searchable;

class Article extends Model
{
    use Searchable;
}
```

Using the database driver (`.env`):

```env
SCOUT_DRIVER=database
```

Searching with Scout:

```php
$articles = Article::search('laravel search')->get();
```

Switching engines (`.env`):

```env
# Just change this line
SCOUT_DRIVER=meilisearch
```

Generating a single embedding:

```php
use Illuminate\Support\Str;

$embedding = Str::of('affordable accommodation near the beach')
    ->toEmbeddings();
```

Embedding multiple inputs in one API call:

```php
use Laravel\Ai\Embeddings;

$response = Embeddings::for([
    'affordable accommodation near the beach',
    'luxury resort with ocean view',
])->generate();

$response->embeddings;
// [[0.123, 0.456, ...], [0.789, 0.012, ...]]
```

Creating a vector column with an HNSW index:

```php
Schema::ensureVectorExtensionExists();

Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('body');
    $table->vector('embedding', dimensions: 1536)->index();
    $table->timestamps();
});
```

Casting the vector column on the model:

```php
protected function casts(): array
{
    return [
        'embedding' => 'array',
    ];
}
```

Querying by similarity:

```php
use Illuminate\Support\Str;

// First, embed the search query
$queryEmbedding = Str::of($searchQuery)->toEmbeddings();

// Then find nearest neighbors by cosine similarity
$results = Article::whereVectorSimilarTo('embedding', $queryEmbedding)
    ->take(10)
    ->get();
```

Reranking full-text candidates:

```php
use Laravel\Ai\Reranking;

// Step 1: Get candidates with fast full-text search
$candidates = Article::whereFullText(['title', 'body'], $query)
    ->take(50)
    ->get();

// Step 2: Rerank with AI
$reranked = $candidates->rerank('body', $query);
```

Rerank variations:

```php
// Rerank by multiple fields
$reranked = $candidates->rerank(['title', 'body'], $query);

// Rerank with a custom document builder
$reranked = $candidates->rerank(
    fn ($article) => $article->title.': '.$article->body,
    $query
);
```