How To Configure a Federated User Location Cluster (for OpenSIPS 2.4)

by Liviu Chircu

1. Description

The "federation" clustering strategy for the OpenSIPS user location is a complete, high-end solution, offering:

- distribution of the user location service across multiple datacenters (geographical locations)
- optional high availability (an active/backup pair behind a virtual IP) within each datacenter
- autonomy of each SIP node, which keeps serving its own subscribers even if its datacenter gets isolated from the rest of the platform
2. Configuration

For the smallest possible setup, you will need:

- at least two OpenSIPS nodes (for example, one per datacenter)
- a SQL database (e.g. MySQL) local to each node, used for restart persistence
- a single, global NoSQL database (e.g. Cassandra or MongoDB) that all nodes can connect to, where each node publishes its AoR metadata
Below are the relevant OpenSIPS config sections:
loadmodule "usrloc.so" modparam("usrloc", "use_domain", 1) modparam("usrloc", "working_mode_preset", "federation-cachedb-cluster") modparam("usrloc", "location_cluster", 1) # with Cassandra, make sure to create the keyspace and table beforehand: # CREATE KEYSPACE IF NOT EXISTS opensips WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'} AND durable_writes = true; # USE opensips; # CREATE TABLE opensips.userlocation ( # id text, # aor text, # home_ip text, # PRIMARY KEY (id)); loadmodule "cachedb_cassandra.so" modparam("usrloc", "cachedb_url", "cassandra://10.0.0.5:9042/opensips.userlocation") # with MongoDB, we don't need to create any database or collection... loadmodule "cachedb_mongodb.so" modparam("usrloc", "cachedb_url", "mongodb://10.0.0.5:27017/opensipsDB.userlocation") loadmodule "clusterer.so" modparam("clusterer", "current_id", 3) modparam("clusterer", "seed_fallback_interval", 5) modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")
INSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries,
                      priority, sip_addr, flags, description) VALUES \
(NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, '10.0.0.177', 'seed', NULL), \
(NULL, 1, 3, 'bin:10.0.0.179', 1, 3, 50, '10.0.0.179', 'seed', NULL);
clusterer table example (no HA)
INSERT INTO clusterer(id, cluster_id, node_id, url, state, no_ping_retries,
                      priority, sip_addr, flags, description) VALUES \
(NULL, 1, 1, 'bin:10.0.0.177', 1, 3, 50, '10.0.0.223', 'seed', NULL), \
(NULL, 1, 2, 'bin:10.0.0.178', 1, 3, 50, '10.0.0.223', NULL, NULL), \
(NULL, 1, 3, 'bin:10.0.0.179', 1, 3, 50, '10.0.0.224', 'seed', NULL), \
(NULL, 1, 4, 'bin:10.0.0.180', 1, 3, 50, '10.0.0.224', NULL, NULL);
clusterer table example (with HA)
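The OpenSIPS configuration shown earlier belongs to node 3 (current_id 3). On node 1 of the examples above (10.0.0.177), only the current_id needs to differ; a minimal sketch, assuming the same local database credentials:

loadmodule "clusterer.so"
modparam("clusterer", "current_id", 1)   # this box is node 1 (10.0.0.177)
modparam("clusterer", "seed_fallback_interval", 5)
modparam("clusterer", "db_url", "mysql://opensips:opensipsrw@localhost/opensips")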
3. Registration flows

The registration flow has two steps:

1. save() stores the contact in the local user location of the node that received the REGISTER (and persists it in the local SQL database);
2. the node publishes the AoR metadata (the AoR and its "home IP") to the global NoSQL database, so the other nodes know where this subscriber is registered.
We need not change anything in the default script in order to achieve the above. The working_mode_preset or cluster_mode module parameters take care of this.
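For reference, a minimal REGISTER handling block (a sketch only; your default script most likely already contains an equivalent) looks like this:

if (is_method("REGISTER")) {
    # save() stores the contact locally; with the federation preset,
    # the AoR metadata is then pushed to the NoSQL database behind the scenes
    if (!save("location"))
        sl_reply_error();
    exit;
}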
4. Call flows

When looking up an AoR, we have two options:

- a local lookup only, when the call was relayed to us by another node of the cluster (that node has already performed the global lookup);
- a global lookup, when the call comes from outside the platform: besides the local search, the NoSQL database is queried for the AoR metadata, and the request is routed towards the node where the subscriber is actually registered.

The snippet below distinguishes between the two cases using cluster_check_addr():
route {
    ...
    $var(lookup_flags) = "m";

    if (cluster_check_addr("1", "$si")) {
        xlog("$rm from cluster, doing local lookup only\n");
    } else {
        xlog("$rm from outside, doing global lookup\n");
        $var(lookup_flags) = $var(lookup_flags) + "g";
    }

    if (!lookup("location", "$var(lookup_flags)")) {
        t_reply("404", "Not Found");
        exit;
    }
    ...
}

5. NAT pinging

Some setups require periodic SIP OPTIONS pings originated by the registrar towards some of the contacts in order to keep the NAT bindings alive. Here is an example configuration:

loadmodule "nathelper.so"
modparam("nathelper", "natping_interval", 30)
modparam("nathelper", "sipping_from", "sip:pinger@localhost")
modparam("nathelper", "sipping_bflag", "SIPPING_ENABLE")
modparam("nathelper", "remove_on_timeout_bflag", "SIPPING_RTO")
modparam("nathelper", "max_pings_lost", 5)

We then enable these branch flags for some or all contacts before calling save():

...
setbflag(SIPPING_ENABLE);
setbflag(SIPPING_RTO);

if (!save("location"))
    sl_reply_error();
...
On an HA pair, the backup node will also try to send these pings, even though it does not currently own the virtual IP, so its syslog will fill up with errors such as:

May 30 06:09:58 localhost /usr/sbin/opensips[18833]: ERROR:core:proto_udp_send: sendto(sock,0x812ae0,294,0,0xbffd8800,16): Invalid argument(22) [1.2.3.4:38911]
May 30 06:09:58 localhost /usr/sbin/opensips[18833]: CRITICAL:core:proto_udp_send: invalid sendtoparameters#012one possible reason is the server is bound to localhost and#012attempts to send to the net
May 30 06:09:58 localhost /usr/sbin/opensips[18833]: ERROR:nathelper:msg_send: send() to 1.2.3.4:38911 for proto udp/1 failed
May 30 06:09:58 localhost /usr/sbin/opensips[18833]: ERROR:nathelper:nh_timer: sip msg_send failed

To silence these errors, you can hook the nh_enable_ping MI command into the active -> backup and backup -> active transitions of the VIP:

opensipsctl fifo nh_enable_ping 1    # run this on the machine that takes over the VIP (new active)
opensipsctl fifo nh_enable_ping 0    # run this on the machine that gives up the VIP (new passive)

6. FAQ

6.1 Should I deploy MySQL on each node?

Yes, the SQL database should be local to each OpenSIPS node.

6.2 Should I set up replication between the MySQL instances?

No! The only purpose of the SQL database is to persist the registration / dialog data across OpenSIPS restarts. This allows the SIP node to run on its own, even if its datacenter is isolated from the global platform.

6.3 Should I put a MongoDB instance on each node?

This is left up to you to decide. The only requirement is to have a single, global NoSQL database, with all OpenSIPS nodes being able to connect to it. This way, different datacenters can publish their AoR metadata to this database.

6.4 What is the recommended tool for the virtual IP management?

In our experience, both keepalived and vrrpd have worked very well, without any issues once the configuration was properly set up.
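For illustration only, here is a minimal keepalived sketch that ties the VIP transitions to the nh_enable_ping hook described above. The interface name, VRRP router id, priority and helper script paths are assumptions to be adapted to your setup; the VIP 10.0.0.223 is the one from the HA clusterer table example:

vrrp_instance OPENSIPS_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.223/24
    }
    # hypothetical helper scripts; each simply runs the matching MI command
    notify_master "/usr/local/bin/vip-up.sh"      # opensipsctl fifo nh_enable_ping 1
    notify_backup "/usr/local/bin/vip-down.sh"    # opensipsctl fifo nh_enable_ping 0
}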