[Asterisk-Users] 2 4-port T1 cards

Jon Pounder JonP at inline.net
Wed May 28 13:09:10 MST 2003


I'll save my typing fingers somewhat on this one - you are doing a great 
job arguing about all the shortcomings of MySQL and actually backing it 
up with real examples. It is nice to see that for a change, compared to 
all the MySQL lovers who love it just "because" but have no basis for 
comparing it to anything under heavy load. I, like you, don't consider 
massive numbers of selects heavy at all.

My example of a heavy load that MySQL could not even begin to handle was 
a project with real-time stock market data streamed in as bids, offers, 
and trades happened; statistics computed from that stream in real time; 
the database kept in sync live; and charts and graphs plotted in real 
time for users on the site. That situation had more than its share of 
inserts and updates, plus a massive wad of historical data being kept 
just to add to the fun.

Might I add, for the record, that Postgres did just fine.
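For anyone curious what "heavy" means here, the workload above can be sketched as a loop that interleaves inserts (new trades), updates (statistics kept in sync), and selects (chart queries), then measures throughput. This is a hypothetical illustration only - it uses SQLite in memory as a stand-in, since the actual project targeted a real networked database server:

```python
# Sketch of a mixed insert/update/select workload like the one described
# above. SQLite in :memory: is an assumed stand-in, NOT the setup from
# the original project; a real test would point at MySQL or PostgreSQL.
import random
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ticks (symbol TEXT, price REAL, ts REAL)")
db.execute("CREATE TABLE stats (symbol TEXT PRIMARY KEY, last REAL, n INTEGER)")

symbols = ["IBM", "MSFT", "T"]  # hypothetical tickers
start = time.time()
ops = 0
while time.time() - start < 1.0:  # hammer the database for ~1 second
    sym = random.choice(symbols)
    price = random.uniform(10, 100)
    # insert: a new trade arrives on the feed
    db.execute("INSERT INTO ticks VALUES (?, ?, ?)", (sym, price, time.time()))
    # update: per-symbol statistics are kept in sync live (upsert)
    db.execute(
        "INSERT INTO stats VALUES (?, ?, 1) "
        "ON CONFLICT(symbol) DO UPDATE SET last = excluded.last, n = n + 1",
        (sym, price),
    )
    # select: a user's chart queries recent activity
    db.execute("SELECT AVG(price) FROM ticks WHERE symbol = ?", (sym,)).fetchone()
    ops += 3

print(f"{ops / (time.time() - start):.0f} queries/sec under a mixed workload")
```

The point of the mix is that the writes continually invalidate whatever the reads have cached, which is exactly the pattern that a pile of selects alone never exercises.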





At 02:56 PM 5/28/2003 -0500, you wrote:
>On Wed, 2003-05-28 at 14:36, Ron Gage wrote:
> > On Wednesday 28 May 2003 02:30 pm, Steven Critchfield wrote:
> > > On Wed, 2003-05-28 at 11:02, Joe Antkowiak wrote:
> > > > 1.  Voicemail, and the voicemail itself will be stored on another box,
> > > > NFS mounted, or I might use mysql.  There will be a little bit of call
> > > > routing via iax to a separate * box with a channel bank on it.
> > >
> > > Don't use MySQL. If you ever have had to deal with it in a production
> > > environment that works it over, you will know that as it reaches its
> > > limits, it starts a death spiral that is very difficult to recover from.
> > > For our software on a dual P3 866 with a gig of RAM, the limit was
> > > around 1.5 queries a second with a fairly even mix of updates, inserts,
> > > and selects. Total file size of the database was under 200 MB, and it
> > > was fully cached, so even though we had hardware RAID 5 across four
> > > 10K rpm Ultra160 drives, disk speed shouldn't have mattered for the
> > > selects.
> >
> > Um...
> >
> > I suppose that if MySQL can't handle more than 1.5 operations a second,
> > someone should tell the folks at Slashdot and Yahoo that their choice of
> > a DB engine isn't going to scale too well.
>
>See other post about why Slashdot isn't a good argument here.
>
>It applies equally well to Yahoo. They are both select-heavy, and MySQL
>is able to parallelize selects just fine. I also doubt Yahoo does its
>updates on the production databases; they probably apply them to a
>backend that is then replicated out to multiple front-end servers that
>do the real serving of data.
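That backend-plus-replicas topology boils down to routing by statement type: writes go to one primary so the replicas can replay them, while reads fan out round-robin across the replicas. A minimal sketch, with plain strings standing in for connections (a real deployment would use MySQL replication plus a proxy layer, none of which is shown here):

```python
# Hypothetical read/write splitter for the replication topology
# described above. The "connections" are just labels; the routing
# logic is the only part being illustrated.
import itertools

class ReadWriteRouter:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # round-robin the reads

    def route(self, sql):
        # Statements that modify data must hit the primary so the
        # replicas can replay them; everything else can be served by
        # any replica.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            return self.primary
        return next(self._replicas)

router = ReadWriteRouter("primary", ["replica1", "replica2"])
print(router.route("UPDATE stats SET n = n + 1"))  # primary
print(router.route("SELECT * FROM ticks"))         # one of the replicas
```

The design trade-off is the usual one: replication lag means a replica can serve slightly stale reads, which is fine for serving pages but not for the write path.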
>
> > For that matter, I suppose that GIS database I have here (the entire Tiger
> > census data for the state of Michigan - 1.2 million type A records alone)
> > isn't capable of handling more than 1.5 transactions a second.  So, when I
> > generate an AutoCAD script from the database records (generates a
> > full-scale road map of the entire state of Michigan), it shouldn't be
> > capable of running in under 10 minutes (at 1.5 transactions a second, it
> > would take 13,333 minutes to run the script).
>
>That's great for a single user. Now put 30 processes doing random
>updates and inserts on it while 5 users try to generate that map.
>
> > I would strongly suggest that something is seriously messed up with your
> > MySQL implementation if you are only capable of getting 1.5 transactions
> > a second before the "spiral of death".
> >
> > For comparison, I am running MySQL 4.0.12 on Slackware 9.0.  AMD XP1800,
> > 1 gig of memory, single 73 gig SCSI-Wide drive, Adaptec 29160 controller.
> > It can do a select distinct across the 1.2 million records in under 4
> > seconds.
>
>Again, one user on decent hardware, whoopeee. Try scaling that out and
>watch it die.
>--
>Steven Critchfield  <critch at basesys.com>
>
>_______________________________________________
>Asterisk-Users mailing list
>Asterisk-Users at lists.digium.com
>http://lists.digium.com/mailman/listinfo/asterisk-users
