Deadlock-free product indexing in Shopware 6 with a dedicated messenger transport
When running php bin/console dal:refresh:index on a Shopware 6 installation with multiple message workers, you may encounter the following error:
Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction
Often followed by:
SAVEPOINT DOCTRINE_2 does not exist
This guide explains why these deadlocks occur, how Shopware’s message routing works under the hood, and how to configure a dedicated indexer transport that eliminates the multi-worker indexer deadlock class and dramatically reduces deadlocks overall.
Prerequisites & Tested Versions
This guide was written and tested against the following stack. Class names, middleware behavior, and Doctrine transport internals can change across versions. Verify that these match your setup before applying the configuration.
| Component | Version |
|---|---|
| Shopware | 6.6.x |
| Symfony Messenger | 7.x (ships with Shopware 6.6) |
| Database | MariaDB 10.11+ / MySQL 8.0+ |
| Transport | Doctrine (default) |
If you are running Shopware 6.5 or earlier, the routing_overwrite config key and some internal class paths may differ. Check your Shopware version’s documentation.
Understanding the Problem
A typical production Shopware 6 setup runs multiple message workers to process asynchronous tasks like email sending, import/export, and indexing. The default configuration from the Shopware Docker documentation suggests running multiple worker replicas:
worker:
  entrypoint:
    [
      'php',
      'bin/console',
      'messenger:consume',
      'async',
      'low_priority',
      '--limit=300',
      '--memory-limit=512M',
    ]
  deploy:
    replicas: 3
This works well for most async tasks. However, when the product indexer runs, either via dal:refresh:index or dal:refresh:index --use-queue, multiple workers end up executing concurrent UPDATE statements on the product table. With tens of thousands of products and parent-child variant relationships, MariaDB/MySQL detects row-level lock conflicts and raises deadlock errors.
The SAVEPOINT does not exist error is a secondary symptom: when MariaDB rolls back a transaction due to a deadlock, Doctrine tries to return to a savepoint that no longer exists.
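To confirm that you are hitting this deadlock pattern (and see exactly which statements collide), MariaDB/MySQL keeps the most recent deadlock in the InnoDB monitor output. This is a standard diagnostic query, not Shopware-specific; it requires the PROCESS privilege:

```sql
-- The "LATEST DETECTED DEADLOCK" section of the output lists both
-- conflicting statements, the locks they held/waited for, and which
-- transaction InnoDB chose to roll back.
SHOW ENGINE INNODB STATUS;
```

If the rolled-back statements are UPDATEs on the product table issued by messenger workers, the configuration in this guide applies.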
Why Deadlocks Occur During Indexing
The ProductIndexer runs ten updaters sequentially for each batch of products:
- InheritanceUpdater : resolves parent/child field inheritance
- StockStorage : recalculates stock values
- VariantListingUpdater : updates variant listing configuration
- ChildCountUpdater : counts child products per parent
- CategoryDenormalizer : flattens category assignments
- CheapestPriceUpdater : determines cheapest variant prices
- RatingAverageUpdater : aggregates product ratings
- SearchKeywordUpdater : rebuilds search keywords
- StreamUpdater : evaluates dynamic product groups
- ManyToManyIdFieldUpdater : denormalizes many-to-many IDs
Each updater issues UPDATE statements (often with JOINs) on the product table. When multiple workers process different product batches simultaneously, their lock patterns overlap, especially on parent products that are referenced by multiple variants. This is the classic recipe for database deadlocks.
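As an illustration (these are not Shopware's literal queries, and the column names are simplified), two workers indexing disjoint variant batches can still collide on the shared parent row:

```sql
-- Worker A, batch 1: recounts children, taking a row lock on the parent
UPDATE product SET child_count = 2 WHERE id = :parent_id;

-- Worker B, batch 2: a JOIN-style update that also needs the parent row lock
UPDATE product parent
INNER JOIN product variant ON variant.parent_id = parent.id
SET parent.available = variant.available
WHERE variant.id IN (:variant_ids);
```

If each worker already holds some of the row locks the other needs, neither can proceed, and InnoDB resolves the cycle by rolling one transaction back.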
The Solution: A Dedicated Indexer Transport
The idea is simple: route all indexer messages to a separate transport that is consumed by only one worker. This eliminates concurrent writes to the product table from multiple indexer processes, the primary source of deadlocks, while keeping multiple workers available for all other async messages. Note that occasional deadlocks can still occur from other writers (see How Shopware Handles Deadlocks Internally), but the high-frequency multi-worker deadlock pattern is resolved.
Symfony Messenger supports multiple transports, each backed by its own queue. Messages can be routed to specific transports, and workers can be configured to consume from specific transports.
However, Shopware adds its own routing layer on top of Symfony’s, and the Doctrine transport has a database-level detail that must be accounted for. The next two sections explain the pitfalls you need to avoid.
Pitfall 1: Using the Wrong Routing Mechanism
Your first instinct might be to add routing entries under framework.messenger.routing :
# DO NOT do this -- it causes double-routing
framework:
  messenger:
    routing:
      'Shopware\Core\Framework\MessageQueue\AsyncMessageInterface': async
      'Shopware\Core\Framework\MessageQueue\LowPriorityMessageInterface': low_priority
      'Shopware\Core\Framework\DataAbstractionLayer\Indexing\EntityIndexingMessage': indexer
This does not work as expected. Here’s why:
Both EntityIndexingMessage and IterateEntityIndexerMessage implement AsyncMessageInterface :
class EntityIndexingMessage implements AsyncMessageInterface, DeduplicatableMessageInterface
Symfony’s SendersLocator iterates over all matching routing entries: both the class name and all implemented interfaces. Since EntityIndexingMessage matches both AsyncMessageInterface: async and EntityIndexingMessage: indexer, the message is sent to both transports. Your async workers still process it.
From vendor/symfony/messenger/Transport/Sender/SendersLocator.php :
public function getSenders(Envelope $envelope): iterable
{
// If a TransportNamesStamp exists, use ONLY those transports
if ($envelope->all(TransportNamesStamp::class)) {
foreach ($envelope->last(TransportNamesStamp::class)->getTransportNames() as $senderAlias) {
yield from $this->getSenderFromAlias($senderAlias);
}
return; // <-- skips all routing below
}
// Otherwise, check ALL matching types (class, parents, interfaces)
foreach (HandlersLocator::listTypes($envelope) as $type) {
foreach ($this->sendersMap[$type] ?? [] as $senderAlias) {
// yields ALL matching transports
}
}
}
Notice the key detail: if a TransportNamesStamp is present on the envelope, SendersLocator skips the routing map entirely and only uses the transports specified in the stamp.
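The double-routing effect is easy to reproduce outside PHP. Here is a minimal Python sketch; the class and transport names mirror the Shopware types, but the matching logic is paraphrased from SendersLocator, not ported verbatim:

```python
# Sketch of Symfony's routing-map matching: a message matches routing entries
# for its own class, its parent classes, and every interface it implements --
# so a single message can be routed to several transports at once.

class AsyncMessageInterface:  # stand-in for the Shopware marker interface
    pass

class EntityIndexingMessage(AsyncMessageInterface):  # stand-in for the DAL message
    pass

# framework.messenger.routing as in the "DO NOT do this" example above
senders_map = {
    "EntityIndexingMessage": ["indexer"],
    "AsyncMessageInterface": ["async"],
}

def list_types(cls):
    """Yield the class name, then all ancestors (mirrors HandlersLocator::listTypes)."""
    for base in cls.__mro__[:-1]:  # walk the inheritance chain, skipping `object`
        yield base.__name__

def get_senders(cls):
    """Collect every transport whose routing entry matches the message type."""
    transports = []
    for type_name in list_types(cls):
        transports.extend(senders_map.get(type_name, []))
    return transports

print(get_senders(EntityIndexingMessage))  # ['indexer', 'async'] -- routed to BOTH
```

The message matches two entries, so both transports receive it, which is exactly why the async workers keep processing indexer messages under this configuration.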
The correct approach: shopware.messenger.routing_overwrite
Shopware provides a RoutingOverwriteMiddleware that runs before Symfony’s SendMessageMiddleware . It checks the shopware.messenger.routing_overwrite configuration, and if a match is found, adds a TransportNamesStamp to the envelope. This stamp then causes SendersLocator to bypass the default routing entirely.
// Shopware's RoutingOverwriteMiddleware (simplified)
public function handle(Envelope $envelope, StackInterface $stack): Envelope
{
// Skip if already stamped
if ($this->hasTransportStamp($envelope)) {
return $stack->next()->handle($envelope, $stack);
}
// Check routing_overwrite config
$transports = $this->getTransports($envelope, $this->routing, inherited: true);
if (empty($transports)) {
return $stack->next()->handle($envelope, $stack); // fall through to Symfony routing
}
// Add TransportNamesStamp -- this overrides ALL Symfony routing
return $stack->next()->handle(
$envelope->with(new TransportNamesStamp($transports)),
$stack
);
}
Use routing_overwrite like this:
shopware:
  messenger:
    routing_overwrite:
      'Shopware\Core\Framework\DataAbstractionLayer\Indexing\EntityIndexingMessage': indexer
      'Shopware\Core\Framework\DataAbstractionLayer\Indexing\MessageQueue\IterateEntityIndexerMessage': indexer
This ensures the message is sent exclusively to the indexer transport.
Pitfall 2: Shared queue_name in the Database
Even with correct routing, there’s a second problem. Shopware uses the Doctrine transport by default, which stores all messages in a single messenger_messages database table. Transports are differentiated by the queue_name column.
The default DSN (Data Source Name, the connection string that tells Symfony which transport backend to use) is:
doctrine://default?auto_setup=false
With no queue_name parameter, the Doctrine transport defaults to queue_name=default (defined in Symfony’s Connection class):
// vendor/symfony/doctrine-messenger/Transport/Connection.php
protected const DEFAULT_OPTIONS = [
'table_name' => 'messenger_messages',
'queue_name' => 'default', // <-- this is the default
'redeliver_timeout' => 3600,
'auto_setup' => true,
];
If you define your indexer transport using the same DSN without specifying a queue_name:
# This is WRONG -- both transports use queue_name=default
framework:
  messenger:
    transports:
      async:
        dsn: '%env(MESSENGER_TRANSPORT_DSN)%' # queue_name=default
      indexer:
        dsn: '%env(MESSENGER_TRANSPORT_DSN)%' # queue_name=default (!)
Both transports read from and write to the same queue_name=default rows. When the async worker polls for messages:
SELECT * FROM messenger_messages
WHERE queue_name = 'default'
AND (delivered_at IS NULL OR delivered_at < ...)
AND available_at <= NOW()It finds the indexer messages because they share the same queue_name . The routing was correct, but the database doesn’t distinguish between the transports.
The fix is to give the indexer transport its own queue_name:
framework:
  messenger:
    transports:
      indexer:
        dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
        options:
          queue_name: indexer
Why options instead of DSN concatenation? You may see examples that append &queue_name=indexer directly to the DSN. This only works if the base DSN already contains a ? query separator; otherwise the resulting URL is invalid. Using options.queue_name is portable, explicit, and works regardless of the DSN format. Shopware’s default low_priority transport sets the value directly in the DSN because its base DSN is hardcoded with the ? separator already present. For your own transports, stick with the options approach shown above.
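You can see why naive concatenation fails with any URL parser. This Python sketch mimics what happens when &queue_name=indexer is appended to a DSN that has no ? separator: the parameter never reaches the query string, so the transport silently falls back to queue_name=default:

```python
from urllib.parse import urlparse, parse_qs

base = "doctrine://default"  # a DSN without any '?' separator

broken = base + "&queue_name=indexer"  # '&' without '?': lands in the host part
fixed = base + "?queue_name=indexer"   # proper query separator

print(parse_qs(urlparse(broken).query))  # {} -- queue_name was silently lost
print(parse_qs(urlparse(fixed).query))   # {'queue_name': ['indexer']}
```

The options.queue_name approach sidesteps this entire class of string-building mistakes.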
The Complete Configuration
Create the file config/packages/messenger.yaml :
framework:
  messenger:
    transports:
      indexer:
        dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
        options:
          queue_name: indexer
        serializer: messenger.transport.symfony_serializer
        retry_strategy:
          max_retries: 3
          delay: 1000
          multiplier: 2
          max_delay: 0
shopware:
  messenger:
    routing_overwrite:
      'Shopware\Core\Framework\DataAbstractionLayer\Indexing\EntityIndexingMessage': indexer
      'Shopware\Core\Framework\DataAbstractionLayer\Indexing\MessageQueue\IterateEntityIndexerMessage': indexer
This configuration does three things:
- Defines an indexer transport with its own queue_name=indexer in the database (set via options , not DSN concatenation)
- Routes EntityIndexingMessage (the message for processing product batches) to the indexer transport exclusively
- Routes IterateEntityIndexerMessage (the message that triggers batch iteration) to the indexer transport exclusively
After adding this file, clear the cache:
php bin/console cache:clear
Docker / Deployment Setup
With the configuration in place, set up your workers so that only one worker consumes from the indexer transport:
services:
  worker:
    # ... your base shopware service config
    entrypoint:
      - php
      - bin/console
      - messenger:consume
      - async
      - low_priority
      - --limit=300
      - --memory-limit=512M
    deploy:
      replicas: 3 # multiple replicas are fine for async + low_priority
  indexer-worker:
    # ... your base shopware service config
    entrypoint:
      - php
      - bin/console
      - messenger:consume
      - indexer
      - --limit=1000
      - --memory-limit=512M
    deploy:
      replicas: 1 # IMPORTANT: only ONE replica
Key points:
- The async workers consume from async and low_priority. They never see indexer messages.
- The indexer worker consumes only from indexer. Only one instance runs at a time.
- In a multi-node setup, the indexer worker should run on one node only.
To trigger queue-based indexing:
php bin/console dal:refresh:index --use-queue
Verifying the Configuration
After applying the configuration, verify that messages are routed correctly:
1. Stop all workers (async and indexer).
2. Trigger the indexer:
php bin/console dal:refresh:index --use-queue
3. Check the database:
SELECT queue_name, COUNT(*) AS count FROM messenger_messages GROUP BY queue_name;
Expected result:
| queue_name | count |
|---|---|
| indexer | > 0 |
If you see messages with queue_name=default, the routing is not working correctly. Double-check that you used shopware.messenger.routing_overwrite (not framework.messenger.routing) and that your cache is cleared.
4. Start only the async worker in verbose mode and confirm it never picks up indexer messages:
php bin/console messenger:consume async low_priority -vvWatch the output for a minute. You should see the worker polling but never receiving an EntityIndexingMessage or IterateEntityIndexerMessage . Meanwhile, verify the indexer queue remains untouched:
SELECT COUNT(*) FROM messenger_messages WHERE queue_name = 'indexer';The count should remain the same as in step 3. If the async worker consumes indexer messages, the queue_name is not set correctly on the transport. Revisit the options.queue_name setting.
5. Start the indexer worker. The messages should now be consumed:
php bin/console messenger:consume indexer -vv
You should see it processing EntityIndexingMessage batches. The indexer queue count should drop to zero.
You can also use the built-in stats command:
php bin/console messenger:stats
This shows the message count per transport, including your new indexer transport.
Production Operations: Monitoring the Indexer Worker
Running a single indexer worker means it can become a bottleneck or single point of failure. Add the following operational guardrails to your production setup:
Monitor queue depth and age
Poll the indexer queue regularly and alert when it grows beyond acceptable thresholds:
-- Current queue depth
SELECT COUNT(*) AS pending
FROM messenger_messages
WHERE queue_name = 'indexer'
AND delivered_at IS NULL;
-- Oldest pending message (queue age)
SELECT TIMESTAMPDIFF(SECOND, MIN(available_at), NOW()) AS oldest_message_age_seconds
FROM messenger_messages
WHERE queue_name = 'indexer'
AND delivered_at IS NULL;
Suggested alert thresholds (adjust to your catalog size):
| Metric | Warning | Critical |
|---|---|---|
| Queue depth | > 500 messages | > 2000 messages |
| Oldest message age | > 10 minutes | > 30 minutes |
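Wired into whatever monitoring stack you use, the thresholds above reduce to a small mapping function. This sketch assumes you already fetch the two metrics via the SQL queries shown earlier; the function name and return values are my own convention, not part of Shopware:

```python
# Alert thresholds from the table above
WARN_DEPTH, CRIT_DEPTH = 500, 2000     # messages
WARN_AGE, CRIT_AGE = 10 * 60, 30 * 60  # seconds

def indexer_alert_level(queue_depth: int, oldest_age_seconds: int) -> str:
    """Return 'ok', 'warning', or 'critical' for the indexer queue metrics."""
    if queue_depth > CRIT_DEPTH or oldest_age_seconds > CRIT_AGE:
        return "critical"
    if queue_depth > WARN_DEPTH or oldest_age_seconds > WARN_AGE:
        return "warning"
    return "ok"

print(indexer_alert_level(120, 45))       # ok
print(indexer_alert_level(800, 0))        # warning
print(indexer_alert_level(100, 35 * 60))  # critical
```

Either metric crossing its critical threshold escalates the alert; tune the constants to your catalog size and indexing cadence.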
Restart policy and healthcheck
Ensure the indexer worker container restarts automatically on failure:
indexer-worker:
  # ... service config
  restart: unless-stopped
  healthcheck:
    test: ['CMD', 'php', 'bin/console', 'messenger:stats']
    interval: 60s
    timeout: 10s
    retries: 3
Handling backlog during large imports
During bulk imports or large catalog updates, the indexer queue can grow significantly. Strategies to manage this:
- Temporarily increase --limit on the indexer worker so it processes more messages per lifecycle
- Increase batch_size in shopware.dal.batch_size to reduce the total number of messages generated (at the cost of longer transactions)
- Do not add more indexer worker replicas. This reintroduces the deadlock problem. If throughput is insufficient, address the bottleneck at the batch/query level instead
- Monitor the failed transport after large imports: php bin/console messenger:failed:show
How Shopware Handles Deadlocks Internally
Even with a single indexer worker, occasional deadlocks can still occur if other processes (scheduled tasks, API requests, imports) write to the product table simultaneously. Shopware handles this gracefully at multiple levels:
Level 1: RetryableTransaction (up to 10 retries)
The individual updaters inside the ProductIndexer wrap their database operations in RetryableTransaction or RetryableQuery . These catch deadlock exceptions and retry the operation up to 10 times with short delays:
// Simplified from Shopware's RetryableTransaction
for ($counter = 0; $counter < self::MAX_RETRIES; ++$counter) {
try {
return $closure($connection);
} catch (\Throwable $e) {
if (!self::deadlockRelatedException($e)) {
throw $e;
}
usleep(random_int(10, 20));
}
}
Deadlock-related exceptions include DeadlockException, LockWaitTimeoutException, TransactionRolledBack, and the SAVEPOINT does not exist error.
Level 2: Symfony Messenger retry (up to 3 retries)
If the entire message handler fails (after exhausting the 10 internal retries), Symfony Messenger’s retry strategy kicks in. The default configuration retries up to 3 times with exponential backoff:
| Retry | Delay |
|---|---|
| 1st | 1s |
| 2nd | 2s |
| 3rd | 4s |
A deadlock exception does not implement UnrecoverableExceptionInterface , so it is always eligible for retry. See the Retries & Failures section of the Symfony Messenger documentation for details on configuring retry behavior.
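The delays in the table follow directly from the retry_strategy values configured earlier (delay: 1000, multiplier: 2): attempt n waits delay * multiplier^(n-1) milliseconds, optionally capped by max_delay. A quick sketch of the base formula (note that recent Symfony versions may also apply random jitter on top of it; this models only the deterministic part):

```python
def retry_delay_ms(attempt: int, delay: int = 1000,
                   multiplier: float = 2.0, max_delay: int = 0) -> float:
    """Delay before retry `attempt` (1-based), mirroring the multiplier strategy."""
    d = delay * multiplier ** (attempt - 1)
    # max_delay: 0 means "no cap" in the messenger configuration
    return min(d, max_delay) if max_delay > 0 else d

print([retry_delay_ms(n) for n in (1, 2, 3)])  # [1000.0, 2000.0, 4000.0]
```

With the defaults from this guide, a poisoned batch is retried after roughly 1s, 2s, and 4s before landing in the failed transport.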
Level 3: Doctrine transport resilience
The Doctrine transport’s DoctrineReceiver also handles deadlocks that occur while fetching messages (not during processing). If the SELECT … FOR UPDATE query deadlocks, the receiver silently returns an empty result up to 3 times before escalating. This prevents the worker from crashing due to transport-level contention.
Level 4: Failed transport
If all retries are exhausted, the message is moved to the failed transport rather than being discarded. You can inspect and retry failed messages:
# Show failed messages
php bin/console messenger:failed:show
# Retry all failed messages
php bin/console messenger:failed:retry
# Retry a specific message
php bin/console messenger:failed:retry 42
Synchronous vs. Queue-Based Indexing
With the dedicated indexer transport in place, you have two options for running the indexer:
| Aspect | Synchronous (dal:refresh:index) | Queue-based (--use-queue) |
|---|---|---|
| Execution | Single CLI process, blocks until done | Messages dispatched to queue, processed by worker |
| Deadlock behavior | Entire process aborts | Only the affected batch fails, rest continues |
| Automatic retry | None, must restart manually | Up to 3 retries with exponential backoff |
| Message safety | Progress is lost on failure | Messages persist in DB until acknowledged |
| Parallelism | N/A | Controlled via worker replicas |
Recommendation: Use --use-queue for production environments. It provides better resilience and doesn’t block your deployment pipeline.
FAQ
Why not just reduce the number of workers?
Reducing async workers affects all message processing: emails, imports, webhooks, etc. The dedicated transport approach lets you keep high throughput for general messages while constraining only the indexer to a single worker.
What about the batch_size setting?
Shopware’s shopware.dal.batch_size (default: 50) controls how many entities are processed per indexer message. Reducing this value (e.g., to 25) decreases the lock footprint per transaction and can help with deadlocks. Increasing it (e.g., to 100) makes deadlocks more likely. However, with a single indexer worker, the default of 50 usually works fine.
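If you do want to tune it, the setting lives under the shopware.dal key. A sketch, assuming you keep overrides in config/packages/ (the file name is your choice; verify the key against your Shopware version's default configuration):

```yaml
# config/packages/shopware.yaml
shopware:
  dal:
    batch_size: 25 # smaller batches -> smaller lock footprint per transaction
```

Remember to clear the cache after changing it, as with any packaged configuration.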
What if I see a message with delivered_at stuck in the table?
When a worker picks up a message, the Doctrine transport sets delivered_at to the current timestamp. This acts as a lock, so other workers won’t pick up this message. If the worker is killed or crashes while processing, the message remains with delivered_at set.
After the redeliver_timeout (default: 3600 seconds / 1 hour), the transport considers the message stale and makes it available again:
WHERE delivered_at IS NULL OR delivered_at < NOW() - INTERVAL 3600 SECOND
This is normal behavior. The message will be automatically redelivered when the indexer worker restarts. You can reduce the timeout if needed:
framework:
  messenger:
    transports:
      indexer:
        dsn: '%env(MESSENGER_TRANSPORT_DSN)%'
        options:
          queue_name: indexer
          redeliver_timeout: 1800 # 30 minutes
Can I use Redis instead of Doctrine for the indexer transport?
Yes. If you’re already using Redis for other transports, you can use it for the indexer as well. The queue_name pitfall is specific to the Doctrine transport. With Redis, each transport uses its own stream, so there’s no risk of cross-consumption. See the Redis Transport section in the Symfony docs for DSN configuration.
Should I also configure transaction_isolation = READ-COMMITTED?
This is a good general practice for Shopware installations. The default REPEATABLE-READ isolation level in MariaDB/MySQL can increase lock contention. However, it is not a replacement for the dedicated transport solution. It only reduces the probability of deadlocks, not the root cause.
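If you adopt it, set the isolation level at the database server rather than per connection. A minimal server-config sketch (the file path varies by distribution, and MariaDB versions before 11.1 use the variable name tx_isolation instead; restart the server and verify the effective value afterwards):

```ini
# e.g. /etc/mysql/conf.d/isolation.cnf
[mysqld]
transaction_isolation = READ-COMMITTED
```

Test this in staging first: a lower isolation level changes locking behavior for every query on the server, not just the indexer's.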
For more information on Symfony Messenger transports, routing, and worker configuration, see the official Symfony Messenger documentation.