[asterisk-users] Asterisk spontaneous reboot
Mark Deneen
mdeneen at gmail.com
Sun Nov 7 16:26:53 CST 2010
On Sun, Nov 7, 2010 at 3:58 PM, Jonas Kellens <jonas.kellens at telenet.be> wrote:
> On 11/06/2010 09:18 PM, Sherwood McGowan wrote:
>> On Sat, Nov 6, 2010 at 2:45 PM, Jonas Kellens<jonas.kellens at telenet.be> wrote:
>>
>>> On 11/06/2010 07:18 PM, Tilghman Lesher wrote:
>>>
>>>> On Saturday 06 November 2010 11:22:06 Jonas Kellens wrote:
>>>>
>>>>
>>>>> Hello,
>>>>>
>>>>> I just experienced a spontaneous reboot of Asterisk. This is from my
>>>>> log file /var/log/messages:
>>>>>
>>>>> Nov 6 16:37:37 vps kernel: miniserv.pl invoked oom-killer:
>>>>>
>>>>>
>>>> First line. Your miniserv.pl tried to allocate more memory than is
>>>> available to the system, so the dreaded OOM killer came into play and
>>>> killed a selected process. Have you considered enabling swap memory?
>>>>
>>>>
>>> I have 512 MB of real RAM and 1024 MB of swap.
>>>
>>> bash-3.2# cat /proc/meminfo
>>> MemTotal: 524288 kB
>>> MemFree: 23760 kB
>>> Buffers: 28564 kB
>>> Cached: 348668 kB
>>> SwapCached: 6536 kB
>>> Active: 193972 kB
>>> Inactive: 231216 kB
>>> HighTotal: 0 kB
>>> HighFree: 0 kB
>>> LowTotal: 524288 kB
>>> LowFree: 23760 kB
>>> SwapTotal: 1048568 kB
>>> SwapFree: 949456 kB
>>> Dirty: 768 kB
>>> Writeback: 0 kB
>>> AnonPages: 46652 kB
>>> Mapped: 16884 kB
>>> Slab: 21000 kB
>>> PageTables: 8084 kB
>>> NFS_Unstable: 0 kB
>>> Bounce: 0 kB
>>> CommitLimit: 1310712 kB
>>> Committed_AS: 321288 kB
>>> VmallocTotal: 34359738367 kB
>>> VmallocUsed: 784 kB
>>> VmallocChunk: 34359737535 kB
>>>
>>>
>>> miniserv.pl... yes, I have Webmin running, and it was stopped after the
>>> restart of Asterisk...
>>>
>>> So the culprit in this story is Webmin, which was eating up all the memory?
>>>
>>>
>>> Jonas.
>>>
>>>
>> Yessir, that's the culprit in this case
>>
>
> Strange, today I saw this in the logs :
>
> Nov 7 17:02:18 vps kernel: crond invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
> Nov 7 17:02:18 vps kernel:
> Nov 7 17:02:18 vps kernel: Call Trace:
> Nov 7 17:02:18 vps kernel: [<ffffffff802bf74e>] out_of_memory+0x8b/0x203
> Nov 7 17:02:18 vps kernel: [<ffffffff8020f947>] __alloc_pages+0x27f/0x308
> Nov 7 17:02:18 vps kernel: [<ffffffff802138db>] __do_page_cache_readahead+0xc6/0x1ab
> Nov 7 17:02:18 vps kernel: [<ffffffff802141c7>] filemap_nopage+0x14c/0x360
> Nov 7 17:02:18 vps kernel: [<ffffffff80208e8c>] __handle_mm_fault+0x442/0x1445
> Nov 7 17:02:18 vps kernel: [<ffffffff8028866d>] deactivate_task+0x28/0x5f
> Nov 7 17:02:18 vps kernel: [<ffffffff8026769a>] do_page_fault+0xf7b/0x12e0
> Nov 7 17:02:18 vps kernel: [<ffffffff8025c8ff>] hrtimer_cancel+0xc/0x16
> Nov 7 17:02:18 vps kernel: [<ffffffff80263b14>] do_nanosleep+0x47/0x70
> Nov 7 17:02:18 vps kernel: [<ffffffff8025c7ec>] hrtimer_nanosleep+0x58/0x118
> Nov 7 17:02:18 vps kernel: [<ffffffff8026082b>] error_exit+0x0/0x6e
> Nov 7 17:02:18 vps kernel:
> Nov 7 17:02:18 vps kernel: Mem-info:
> <snip>
>
> So this time it is crond that invoked oom-killer...
Please read up on how the OOM killer works. crond isn't the memory hog
here; it just happened to be the task asking for memory at the moment the
system ran out, which is why its name shows up in the "invoked oom-killer"
line. The kernel then picks a victim to kill based on its badness
heuristic, and that can be an entirely different process.
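
If you want to see how that plays out on your box, here is a rough sketch
(it assumes a Linux /proc layout with per-process status and oom_score
files, which 2.6 kernels provide) that lists the biggest resident-memory
users alongside the kernel's OOM badness scores:

#!/usr/bin/env python
# Rough sketch: show each process's resident memory (VmRSS) next to the
# kernel's OOM badness score (/proc/<pid>/oom_score). The process that
# "invoked oom-killer" in the log is just the one that asked for memory
# at the wrong moment; the victim is whichever process scores highest.
import os

def read_file(path):
    try:
        f = open(path)
        try:
            return f.read()
        finally:
            f.close()
    except (IOError, OSError):
        return ""          # process exited, or file not supported

rows = []
for pid in os.listdir("/proc"):
    if not pid.isdigit():
        continue
    name, rss_kb = "?", 0
    for line in read_file("/proc/%s/status" % pid).splitlines():
        if line.startswith("Name:"):
            name = line.split()[1]
        elif line.startswith("VmRSS:"):
            rss_kb = int(line.split()[1])   # value is in kB
    score = read_file("/proc/%s/oom_score" % pid).strip()
    rows.append((rss_kb, int(score or 0), pid, name))

rows.sort(reverse=True)
print("%9s %10s %6s  %s" % ("RSS(kB)", "oom_score", "PID", "NAME"))
for rss_kb, score, pid, name in rows[:10]:
    print("%9d %10d %6s  %s" % (rss_kb, score, pid, name))

Running that when memory is getting tight shows which process the kernel
would actually pick, and it is often not the one named in the "invoked
oom-killer" line.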