[OpenSIPS-Users] Multi-Proxy Environment Routing

Tito Cumpen tito at xsvoce.com
Tue Jan 27 00:42:58 CET 2015


Duane,


Try reading this thread and see if it helps. I have an environment where
proxies handle their own db, and I am using RabbitMQ event queueing to
track where users are located, along with MongoDB to query and fork to
those locations.



---------- Forwarded message ----------
From: Bogdan-Andrei Iancu <bogdan at opensips.org>
Date: Thu, Jul 31, 2014 at 2:16 PM
Subject: Re: [OpenSIPS-Users] Distributed deployment
To: Tito Cumpen <tito at xsvoce.com>
Cc: OpenSIPS users mailing list <users at lists.opensips.org>, Rik Broers <
RBroers at motto.nl>


Tito,

In script, you can use any of the NoSQL backends via the cache-related
functions (cache_store(), cache_fetch(), cache_remove(), etc.), including
raw queries:
    http://www.opensips.org/Documentation/Script-CoreFunctions-1-11#toc3
    http://www.opensips.org/Documentation/Script-CoreFunctions-1-11#toc9
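
As an illustration only, a minimal opensips.cfg sketch of looking up an AOR
via those cache functions (the MongoDB URL, the "aor_<user>" key layout, and
the route name are my assumptions, not from the original mail):

```
# assumed setup: a MongoDB backend registered under the "mongodb" id
loadmodule "cachedb_mongodb.so"
modparam("cachedb_mongodb", "cachedb_url",
         "mongodb://127.0.0.1:27017/location.aors")

route[LOOKUP_AOR] {
    # fetch whatever was stored for this AOR (hypothetical "aor_<user>" keys)
    if (cache_fetch("mongodb", "aor_$rU", $var(nodes))) {
        xlog("AOR $rU has registrations on: $var(nodes)\n");
    } else {
        xlog("no remote registrations recorded for AOR $rU\n");
    }
}
```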

Regards,

Bogdan-Andrei Iancu
OpenSIPS Founder and Developer
http://www.opensips-solutions.com

On 31.07.2014 19:45, Tito Cumpen wrote:

Bogdan,

Thank you once again for sharing. I have looked into using
http://www.opensips.org/html/docs/modules/1.8.x/event_rabbitmq to subscribe
to events and delete entries as you have mentioned, although I am a bit
uncertain about how to engage NoSQL for a location lookup via the cfg
script. Are you executing an external script from the cfg that interfaces
with NoSQL in order to conduct a query?


Thanks,
Tito


On Fri, Jul 4, 2014 at 5:43 AM, Bogdan-Andrei Iancu <bogdan at opensips.org>
 wrote:

> Hi Tito,
>
> In my case I'm using one of the modules (for the nosql part), like
> mongodb, redis, couchbase, etc. The advantage of those nosql engines is
> that you get an out-of-the-box geo-distributed db cluster; there is no
> need to try to replicate something like that via sql + http + other. At
> least that's my opinion :).
>
> Regards,
>
> Bogdan-Andrei Iancu
> OpenSIPS Founder and Developer
> http://www.opensips-solutions.com
>
> On 03.07.2014 23:00, Tito Cumpen wrote:
>
> Bogdan,
>
>
> Thanks for sharing. I was hoping to do something similar with http_db and
> sql, by handling new posts with php and replicating them on a remote mysql
> db, which would then be queried for location-type requests. In your
> solution, are you employing the DB_Cache module? Also, are you using a
> queuing solution for events?
>
>
> Thanks,
> Tito
>
>
> On Wed, Jun 25, 2014 at 5:38 AM, Bogdan-Andrei Iancu <bogdan at opensips.org>
>  wrote:
>
>> Hi Tito,
>>
>> What I do for clustering usrloc is something like this:
>>     - each node manages the registrations independently (there is no
>> usrloc replication between nodes)
>>     - I have a nosql cluster available for all nodes
>>     - I use the AOR related events+routes to push/remove into the nosql
>> db the AOR (only) available on each node
>>     - basically the nosql "knows" which AORs are registered on which node
>>     - when a node handles a call, it looks into nosql to see which
>> nodes have registrations for the needed AOR -> the call is forked in
>> parallel to the local registrations (if any) and to the other nodes
>> (based on the nosql info)
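
The AOR event/route mechanism described above might be sketched in
opensips.cfg roughly as follows (the event names come from the usrloc
module; the cache backend id, the key layout, and the "proxy-a" node id are
assumptions for illustration):

```
loadmodule "event_route.so"

# fired by usrloc when the first contact for an AOR appears on this node
event_route[E_UL_AOR_INSERT] {
    # record in the shared NoSQL db that this node now holds the AOR
    cache_store("mongodb", "aor_$param(aor)", "proxy-a");
}

# fired when the last contact for an AOR is removed from this node
event_route[E_UL_AOR_DELETE] {
    cache_remove("mongodb", "aor_$param(aor)");
}
```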
>>
>> Regards,
>>
>> Bogdan-Andrei Iancu
>> OpenSIPS Founder and Developer
>> http://www.opensips-solutions.com
>>
>> On 24.06.2014 15:27, Tito Cumpen wrote:
>>
>> Rik,
>>
>>
>>
>>
>> My deployment is not dependent on a virtual IP, since the failover and
>> load-balancing logic resides on the client, and I intend to use SRV
>> records to define the weight of the proxies. The problem surfaces if a
>> user attempts to register and lands on server A, and soon after a second
>> client attempts to register and lands on server B. If an R-URI request
>> originates from server B, how can I fork the request in parallel to the
>> first entry on server A?
>> Thanks for your reply, but I have raised the question of using contact
>> replication before; please see below:
>>
>>
>>
>> ---------- Forwarded message ----------
>> From: Liviu Chircu <liviu at opensips.org>
>> Date: Wed, Jun 11, 2014 at 1:29 PM
>> Subject: Re: [OpenSIPS-Users] binary replication
>> To: users at lists.opensips.org
>>
>>
>> Hello Tito,
>>
>> Both dialog and user location replication were actually designed to work
>> with VIPs only! From the moment the "receiving" instance takes over, it
>> should have the same pool of registered users as instance #1, and it should
>> be able to process all existing dialogues.
>>
>> Best regards,
>>
>> Liviu Chircu
>> OpenSIPS Developer
>> http://www.opensips-solutions.com
>>
>> On 06/11/2014 03:27 PM, Tito Cumpen wrote:
>>
>> Group,
>>
>> Playing with the idea of using binary replication. Just curious if anyone
>> can provide a use case. Would this be coupled with a virtual IP? I am not
>> certain how the instance that accepts replications would take over.
>>
>>
>> Thanks,
>> Tito
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at lists.opensips.org
>> http://lists.opensips.org/cgi-bin/mailman/listinfo/users
>>
>>
>>
>>
>>
>> On Tue, Jun 24, 2014 at 5:35 AM, Rik Broers <RBroers at motto.nl> wrote:
>>
>>> I was also looking into this problem, which is very similar to yours.
>>>
>>>
>>>
>>> I found this, and it is a perfect solution to my problem.
>>>
>>> I think this would help you too.
>>>
>>>
>>>
>>> http://www.opensips.org/html/docs/modules/devel/usrloc#usrloc-replication
>>>
>>>
>>>
>>>
>>>
>>> Kind regards,
>>>
>>> *Rik Broers*
>>> Voice Engineer
>>>
>>> *From:* users-bounces at lists.opensips.org [mailto:
>>> users-bounces at lists.opensips.org] *On behalf of* Tito Cumpen
>>> *Sent:* Tuesday, June 24, 2014 04:54
>>> *To:* OpenSIPS users mailing list
>>> *Subject:* [OpenSIPS-Users] Distributed deployment
>>>
>>>
>>>
>>> Hello group,
>>>
>>>
>>>
>>>
>>>
>>> I am reaching out to you because I am hitting a roadblock in designing a
>>> distributed deployment. Currently I am entertaining the idea of using DNS
>>> SRV for the sake of load balancing and availability. The main problem is
>>> sharing AORs among the proxies. My requirement is to allow proxies to
>>> fork requests to the remote proxies to which a user could be registered,
>>> in addition to the local server. The binary replication component will
>>> not suffice because it is tailored to a virtual IP. I've noticed that
>>> opensips has a timer that runs every second to check which registrations
>>> have expired, with the intent of removing them. Assuming a shared mysql
>>> instance is the only option, each proxy will be querying mysql, which
>>> seems like a ton of activity. Can anyone advise what the best practice
>>> for scaling would be?
>>>
>>>
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Tito
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>



On Mon, Jan 26, 2015 at 6:25 PM, Duane Larson <duane.larson at gmail.com>
wrote:

> Correct.  I meant Priority.  I set my primary server to "0" and my backup
> to "1".  For the db_mode I use 3, which is "DB Only" mode.
>
> I did look into the possibility of using bin replication, but it sounds
> like in order to use it, both OpenSIPS instances have to use the same IP
> address.  I will see what I can do with t_relay'ing INVITEs to the correct
> proxy that the client is registered with.
>
> Thanks
>
> On Mon, Jan 26, 2015 at 10:42 AM, Bogdan-Andrei Iancu <bogdan at opensips.org
> > wrote:
>
>>  Hi Duane,
>>
>> First, in regards to your DNS records - I suppose you use higher priority
>> and not higher weight? The weights are used to control how many times (as
>> a ratio) each record is used. If you want to use R1 all the time, and fall
>> back to R2 only if R1 is down, then you should use priorities.
>>
>> Now, you mentioned that both proxies use the same location table in the
>> DB. But what db_mode is used in the usrloc module? If you have DB_ONLY
>> mode, I see no problem with the scenario you described (as both proxies
>> will read/write from/to a single DB). If you use other db modes, you will
>> probably need to use the bin replication in the usrloc module in order to
>> sync the caches too.
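
For reference, a minimal sketch of the DB_ONLY setup described above (the
MySQL URL is a placeholder):

```
loadmodule "usrloc.so"
# db_mode 3 = DB_ONLY: no in-memory usrloc cache; every read/write goes
# straight to the shared database, so both proxies see the same contacts
modparam("usrloc", "db_mode", 3)
modparam("usrloc", "db_url", "mysql://opensips:opensipsrw@db-host/opensips")
```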
>>
>> Regards,
>>
>> Bogdan-Andrei Iancu
>> OpenSIPS Founder and Developer
>> http://www.opensips-solutions.com
>>
>> On 25.01.2015 02:31, Duane Larson wrote:
>>
>> Before I try to reinvent the wheel I wanted to see if there is already a
>> way to do this.
>>
>>  For redundancy I have two proxies, and I am using DNS SRV with Proxy01
>> weighted higher so that it is the primary that all clients register with
>> and use.  Both proxies use the same location database.  In the event of a
>> failure on Proxy01, all clients would register with Proxy02.  Everything
>> is fine, but what happens when the clients' registration time expires and
>> they start re-registering with Proxy01, since it is weighted higher?  Now
>> some clients will be registered with Proxy01 and some with Proxy02.
>> Everything will work fine, but what if Client01 (who is registered with
>> Proxy01) calls Client02 (who is registered with Proxy02)?  I think the
>> INVITE should do the following:
>>
>>  Client01 -> Proxy01 -> Proxy02 -> Client02
>>
>>  On Proxy01 I could look in the location database to see what is set in
>> the "socket" field and then route the call to Proxy02, but I was
>> wondering if there is already a way to do this?
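
One hand-rolled sketch of that idea (untested; the socket string, the peer
address, and the assumption that lookup() exposes the registration socket
via $fs are mine, not from the thread):

```
if (lookup("location")) {
    # $fs is the socket the contact was saved on; if it belongs to the
    # peer proxy rather than this one, relay the INVITE there instead
    if ($fs != "udp:10.0.0.1:5060") {   # 10.0.0.1 = this proxy (placeholder)
        $du = "sip:10.0.0.2:5060";      # 10.0.0.2 = the peer proxy (placeholder)
    }
    t_relay();
}
```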
>>
>>
>>
>>
>>
>
>
>