[OpenSIPS-Users] dlg_sharing_tag behaviour in an active/backup setup

solarmon solarmon at one-n.co.uk
Wed Jul 14 11:22:26 EST 2021


Hi Kingsley,

I use corosync/pacemaker to manage the virtual IP and I already have a
resource script to run the "opensipsctl fifo dlg_set_sharing_tag_active
vip" command when the virtual IP resource moves to another node.
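For reference, a minimal sketch of what such a pacemaker-triggered resource script could look like (the tag name and the script structure here are illustrative, not my actual resource agent):

```shell
#!/bin/sh
# Illustrative wrapper: pacemaker invokes this with "start" when the
# virtual IP resource lands on this node.
TAG="vip"   # the dlg_sharing_tag name used in the thread

handle_action() {
    case "$1" in
        start)
            # Mark this node active for the sharing tag.
            # (OpenSIPS 2.4 fifo command from the thread; the 3.x
            # equivalent would be:
            #   opensips-cli -x mi clusterer_shtag_set_active vip)
            if command -v opensipsctl >/dev/null; then
                opensipsctl fifo dlg_set_sharing_tag_active "$TAG"
            fi
            ;;
        *)
            # No tag change on stop/monitor: the node that picks up the
            # VIP promotes itself, and clusterer demotes the others.
            :
            ;;
    esac
}

handle_action "$1"
```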

However, the issue here is not about the virtual IP resource moving
between nodes. The issue is that node 01 is configured as 'active' and
node 02 as 'backup' for the vip tag by default. Suppose node 02 currently
holds the virtual IP (and is therefore the active node for the vip tag,
thanks to the script). If you then restart the opensips service on node
01, node 01 becomes active for the vip tag again - because by design its
config declares it 'active' for that tag. So calls are still going to
node 02, because it has the virtual IP, but in terms of vip tag status,
node 01 is now the active one for that tag.

I suppose the question, in a two-node cluster, is: what are the
consequences of this vip tag status mismatch?

Now that I'm aware of this issue, we need to make sure we check and set
the vip tag status accordingly whenever we restart the opensips service
on either of the two nodes.
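Here is a rough sketch of the kind of check-and-set logic I mean (the VIP address, tag name and VIP-detection method are illustrative assumptions, not our production script):

```shell
#!/bin/sh
# Illustrative check script: set the sharing tag active only if this
# box currently owns the virtual IP.
VIP="192.0.2.10"   # assumed virtual IP, replace with the real one
TAG="vip"          # the dlg_sharing_tag name used in the thread

# Returns "active" if the VIP ($1) appears in the space-separated
# list of local addresses ($2), "backup" otherwise.
desired_state() {
    case " $2 " in
        *" $1 "*) echo active ;;
        *)        echo backup ;;
    esac
}

# Collect the addresses configured on this box (one flat list).
addrs=$(ip -o addr show 2>/dev/null | awk '{print $4}' | cut -d/ -f1 | tr '\n' ' ')
state=$(desired_state "$VIP" "$addrs")

if [ "$state" = "active" ] && command -v opensipsctl >/dev/null; then
    # OpenSIPS 2.4 fifo command as used earlier in this thread;
    # on 3.x: opensips-cli -x mi clusterer_shtag_set_active vip
    opensipsctl fifo dlg_set_sharing_tag_active "$TAG"
fi
```

Run from cron or from a systemd unit ordered after opensips, this would also cover the restart case described above.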

Thank you.


On Wed, 14 Jul 2021 at 11:30, Kingsley Tart <kingsley at dns99.co.uk> wrote:

> If you maintain the VIP with keepalived for example, then in your VIP
> config for keepalived you could configure a notify_master script which
> keepalived would then run when the node picks up the VIP, eg:
>
>     notify_master /path/to/script/notify-up.sh
>
> and then in that script run something like this (assuming OpenSIPS
> 3.x):
>
>     opensips-cli -x mi clusterer_shtag_set_active your-tagname
>
> which would make the node that now has the VIP set the tag active on
> itself and, as far as I understand it, OpenSIPS clusterer would then
> set it to "backup" on the other node.
>
> Cheers,
> Kingsley.
>
>
> On Tue, 2021-07-13 at 14:08 +0100, solarmon wrote:
> > Hi Liviu,
> >
> > I took and used the 'recommended' config as advised at
> >
> https://blog.opensips.org/2018/03/23/clustering-ongoing-calls-with-opensips-2-4/
> >
> > Having to rely on an(other) script to 'fix' the issue does not sound
> > like a good idea, but I'll look into it further and try to understand
> > it.
> >
> > Thank you.
> >
> > On Tue, 13 Jul 2021 at 12:59, Liviu Chircu <liviu at opensips.org>
> > wrote:
> > > On 25.06.2021 15:12, solarmon wrote:
> > > > The typical recommended configuration for an active/standby setup
> > > > would be:
> > > >
> > > > node1:
> > > > modparam("dialog", "dlg_sharing_tag", "vip=active")
> > > >
> > > > node2:
> > > > modparam("dialog", "dlg_sharing_tag", "vip=backup")
> > > >
> > > > How can this be properly resolved or managed for these situations
> > > > where you want to take down an opensips node for maintenance?
> > >
> > > Hi,
> > >
> > > Actually, I would recommend starting with "backup / backup" (!!) to
> > > prevent exactly the scenario you described above, where you take
> > > down the former-active node for maintenance, yet it boots in
> > > "active" mode, completely opposite to the state of the VIP.
> > >
> > > Now, in order to fix the state where both tags are in "backup"
> > > mode, you'd deploy an external check-script that periodically scans
> > > the tag statuses and fixes a "backup" tag to "active" whenever a
> > > box owns the VIP.
> > > Best Regards,
> > > --
> > > Liviu Chircu
> > > www.twitter.com/liviuchircu | www.opensips-solutions.com
> > > OpenSIPS Summit 2021 Distributed | www.opensips.org/events
> >
> > _______________________________________________
> > Users mailing list
> > Users at lists.opensips.org
> > http://lists.opensips.org/cgi-bin/mailman/listinfo/users
>
>
>