<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">2013/8/20 Kaloyan Kovachev <span dir="ltr"><<a href="mailto:kkovachev@varna.net" target="_blank">kkovachev@varna.net</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<div class="im"><br>
On 2013-08-20 18:22, Leandro Dardini wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
I agree with you. Usually if you have database in multimaster<br>
replication mode, writing to one of the server is enough to make the<br>
info be written in both. Unfortunately the cdr_adaptive_odbc driver<br>
cannot use a failover configuration as working for example in the<br>
func_odbc driver. I had no choice other than to write the CDR info to<br>
both databases. I am not sure which one will be available. At the end<br>
I get two records for every call and I have to filter out the<br>
duplicates. Obviously I have no primary/unique index on the cdr table.<br>
<br>
</blockquote>
<br></div>
You can always use the call's uniqueid as the key - you need a key in order to detect a duplicate and make use of INSERT IGNORE or ON DUPLICATE KEY<div class="im"><br>
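A minimal sketch of the keyed-table idea, using Python's sqlite3 module so it is runnable anywhere (SQLite spells the statement INSERT OR IGNORE, whereas MySQL uses INSERT IGNORE; the table and column names here are illustrative, not the actual cdr_adaptive_odbc schema):

```python
import sqlite3

# A cdr table keyed on the call's uniqueid: a second insert of the
# same call is silently dropped instead of raising a key error.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cdr (uniqueid TEXT PRIMARY KEY, src TEXT, billsec INTEGER)"
)

row = ("1376990000.123", "1001", 42)
conn.execute("INSERT OR IGNORE INTO cdr VALUES (?, ?, ?)", row)
conn.execute("INSERT OR IGNORE INTO cdr VALUES (?, ?, ?)", row)  # duplicate, ignored

count = conn.execute("SELECT COUNT(*) FROM cdr").fetchone()[0]
print(count)  # 1 - one row despite two inserts
```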
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Maybe I was not clear, I was proposing a patch to add a parameter to<br>
cdr_adaptive_odbc.conf to allow the usage of INSERT IGNORE. This way I<br>
can configure a primary/unique index on the uniqueid of the cdr table<br>
and avoid having to deal with duplicates.<br>
</blockquote>
<br></div>
Unfortunately it won't fix your problem, and in fact it may cause even bigger problems ... consider that the second server was offline for a few hours and is currently synchronizing with the other master: you insert a row which is not yet replicated, so there is no duplicate, but some time later you try to insert a duplicate, which succeeds on the master but fails on the slave. This stops replication with an error, and it will keep happening for a long time until both masters are back in full sync</blockquote>
<div><br></div><div>I have thought about this possible problem and I don't think it will ever happen. With statement-based replication the master stores the exact command sent by the client, so the INSERT IGNORE will be replayed on the secondary as well. If there were some sort of "split brain", the INSERT with a duplicate key would simply be ignored.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im"><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
The patch was trivial, plus it seems not to be of any interest...<br>
sorry for bothering :-)<br>
<br>
</blockquote>
<br></div>
Do not patch - store the duplicate records and then filter them out, either with INSERT IGNORE into another table or with a SELECT ... GROUP BY 'uniqueid'</blockquote><div><br></div><div>That was my first idea, but it fails miserably once I start building statistics over the CDRs. It is about 20x slower to "select distinct" first and then apply the sum functions.</div>
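For reference, the two filtering approaches discussed above can be sketched as follows, again using Python's sqlite3 (SQLite's INSERT OR IGNORE standing in for MySQL's INSERT IGNORE; the schema is illustrative only):

```python
import sqlite3

# A raw table that has accumulated duplicate CDR rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cdr_raw (uniqueid TEXT, billsec INTEGER)")
conn.executemany(
    "INSERT INTO cdr_raw VALUES (?, ?)",
    [("call-1", 30), ("call-1", 30), ("call-2", 60)],
)

# Option 1: copy into a table with a unique key, dropping duplicates.
conn.execute(
    "CREATE TABLE cdr_clean (uniqueid TEXT PRIMARY KEY, billsec INTEGER)"
)
conn.execute("INSERT OR IGNORE INTO cdr_clean SELECT * FROM cdr_raw")
clean_total = conn.execute("SELECT SUM(billsec) FROM cdr_clean").fetchone()[0]

# Option 2: aggregate directly over the raw table with GROUP BY.
grouped_total = conn.execute(
    "SELECT SUM(b) FROM (SELECT uniqueid, MAX(billsec) AS b "
    "FROM cdr_raw GROUP BY uniqueid)"
).fetchone()[0]

print(clean_total, grouped_total)  # 90 90
```

Option 2 is what incurs the extra cost complained about above: every statistics query has to deduplicate first before aggregating, instead of summing over an already-clean table.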
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5"><br>
<br>
<br></div></div></blockquote></div></div></div>