RabbitMQ


What are considered best-practice methods to drain a queue, or all queues, on a node?

  1. Load balance the AMQP endpoint and use a different endpoint for sending and receiving messages?
  2. Set the memory high watermark to 0 on the node?
  3. Something else?

HN Operations posted Aug 28, 2019 04:53 PM

 

Luke Bakken

Hello,

 

By "drain" do you mean you wish to consume and process messages, or just empty the queues?

 

Thanks,

Luke

HN Operations

Consume all messages in the queue(s), not empty/purge them.

HN Operations

No best practice for this scenario?

Luke Bakken

Hello,

 

I didn't see that you had followed up to my response.

 

I don't really understand the scenario. If you want to consume all messages, then consume them normally with an application. If you are discarding the messages, be sure to use automatic acknowledgement.
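
For example, a throwaway drain-and-discard consumer could look something like this (a minimal sketch in Python with pika; the host and queue name are placeholders for your environment):

# Minimal sketch: consume and discard with automatic acknowledgement.
# "rabbit-node-1" and "my-queue" are placeholders.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbit-node-1"))
channel = connection.channel()

def handle(ch, method, properties, body):
    # auto_ack=True means the broker already considers this message delivered,
    # so we just drop it here.
    print("discarded %d bytes" % len(body))

channel.basic_consume(queue="my-queue", on_message_callback=handle, auto_ack=True)
channel.start_consuming()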

 

Thanks,

Luke

HN Operations

Hi, Luke!

 

Say that you have:

  • A network partition with manual resolution (2 nodes).
  • A cluster with mirrored queues, publisher confirms, and consumer acknowledgements.
  • On the node considered the losing party there are still messages queued. These cannot be lost (publisher confirms and acknowledgements are not enough in this scenario).
  • The clients only connect to one node at a time because of a load balancer, and currently that is the node that will win when the network partition is resolved.

In this scenario I want to consume the remaining messages on the losing node before it rejoins the cluster, ideally with as little disruption to message flow (availability) as possible. What is considered best practice here?

 

Currently the solution for us is:

  • Stop publishing to the cluster (set the memory high watermark to 0 on both nodes)
  • Fail over to the losing node in the load balancer
  • Consume all messages and fail back (see the sketch below)
  • Set the memory high watermark back above 0
  • Restart the losing node
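
The "consume all messages" step is currently little more than a loop like this (a rough sketch in Python with pika; the node address, queue name and processing function are placeholders, and we use manual acks because the messages cannot be lost):

# Rough sketch of the drain step: pull messages off the losing node until the
# queue is empty, acknowledging each one only after it has been handed off.
# "losing-node", "my-queue" and process() are placeholders.
import pika

def process(body):
    # Placeholder for whatever the real consumer does with the message.
    pass

connection = pika.BlockingConnection(pika.ConnectionParameters(host="losing-node"))
channel = connection.channel()

while True:
    method, properties, body = channel.basic_get(queue="my-queue", auto_ack=False)
    if method is None:
        break  # queue is empty
    process(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection.close()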

This disrupts availability while the memory high watermark is 0. Might there be a better way of doing this? A reference design or something?

HN Operations

Bump

Luke Bakken

You don't mention how your queues are distributed or mirrored. I still don't really understand why you need to stop publishing. Just connect to the losing node and consume messages, or use the shovel to move them to another node.
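
For what it's worth, a dynamic shovel can be declared over the HTTP management API, roughly like this (a sketch in Python; host names, credentials and the queue name are placeholders, and it assumes the shovel and management plugins are enabled — double-check the parameter names against your RabbitMQ version):

# Sketch: declare a dynamic shovel that moves the backlog of one queue from the
# losing node to the winning node and then removes itself.
# Host names, credentials and "my-queue" are placeholders.
import requests

shovel = {
    "value": {
        "src-protocol": "amqp091",
        "src-uri": "amqp://user:pass@losing-node",
        "src-queue": "my-queue",
        "dest-protocol": "amqp091",
        "dest-uri": "amqp://user:pass@winning-node",
        "dest-queue": "my-queue",
        # Delete the shovel once the initial backlog has been transferred.
        "src-delete-after": "queue-length",
    }
}

resp = requests.put(
    "http://losing-node:15672/api/parameters/shovel/%2F/drain-my-queue",
    auth=("user", "pass"),
    json=shovel,
)
resp.raise_for_status()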

HN Operations

I don't exactly know what you mean by how they are distributed.

We use clustering, all queues are mirrored to all nodes in the cluster, and clients declare queues on demand.
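
Concretely, the mirroring is the usual "mirror everything to all nodes" setup, roughly equivalent to a policy like this (sketched in Python against the management API; host and credentials are placeholders):

# Sketch: a policy that mirrors every queue to all nodes in the cluster.
# Host and credentials are placeholders.
import requests

policy = {
    "pattern": ".*",
    "definition": {"ha-mode": "all"},
    "apply-to": "queues",
}

resp = requests.put(
    "http://rabbit-node-1:15672/api/policies/%2F/ha-all",
    auth=("user", "pass"),
    json=policy,
)
resp.raise_for_status()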

 

We only use one endpoint to both consume and publish. Connecting to the other node via the load balancer would result in new messages being published to the losing side. The dynamic shovel seems to be the right approach.

 

Does the shovel support selecting queues by matching their names against a regex?
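
If it doesn't, I assume we could approximate that ourselves by listing the queues over the management API and creating one shovel per matching name, along these lines (rough sketch in Python; host, credentials, pattern and URIs are placeholders):

# Rough sketch: list queues on the losing node and create one dynamic shovel
# per queue whose name matches a regex. Host, credentials, pattern and the
# destination URI are placeholders.
import re
import requests
from urllib.parse import quote

API = "http://losing-node:15672/api"
AUTH = ("user", "pass")
PATTERN = re.compile(r"^orders\..*")  # placeholder pattern

for queue in requests.get(API + "/queues/%2F", auth=AUTH).json():
    name = queue["name"]
    if not PATTERN.match(name):
        continue
    shovel = {
        "value": {
            "src-uri": "amqp://user:pass@losing-node",
            "src-queue": name,
            "dest-uri": "amqp://user:pass@winning-node",
            "dest-queue": name,
            "src-delete-after": "queue-length",
        }
    }
    resp = requests.put(
        API + "/parameters/shovel/%2F/drain-" + quote(name, safe=""),
        auth=AUTH,
        json=shovel,
    )
    resp.raise_for_status()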