[asterisk-biz] PBX Hacker IP List

John Todd jtodd at digium.com
Tue Mar 17 11:45:02 CDT 2009


On Mar 16, 2009, at 8:41 PM, Michael Jerris wrote:

>
> On Mar 16, 2009, at 8:18 PM, John Todd wrote:
>>
>> [Phil and Cyril - the quick synopsis here is that Asterisk systems  
>> are
>> being hit with some frequency with brute-force SIP password or
>> extension guessing attacks.  Asterisk can output logfiles (non-
>> customizable) of failures.]
>>
>> JR and I had been having parts of this conversation off-line, but  
>> it's
>> probably worth bringing it up here.
>>
>> I am of the opinion that a "blacklist" is probably useful for some
>> people, as an optional method to automatically configure certain
>> firewall filters or other ACLs which would deny certain IP addresses
>> from reaching the SIP stack.  This could be triggered by quantity of
>> requests within a certain time period, or number of failures, or
>> whatever.  In fact, there are people who have configured Fail2Ban
>> already to serve locally as a prophylactic for their own machines.
>> JR's point is that there would optimally be some distributed  
>> mechanism
>> which would serve to collect the IP addresses as reported by a wide
>> variety of endpoints, such that badly acting IP addresses would be
>> denied even the first step in blocking.
>
> My biggest concern is how do we handle issues such as an incorrectly
> configured client set to attempt to reconnect causing false positives,
> this seems it would be fairly common.  Is there any way we can work to
> make it depend on failures using different passwords to cause a ban
> only, instead of any sort of retry causing a ban (outside of more
> obvious dos attacks)
>
> Mike



I'd suggest that each entry carry a numeric severity indicating the  
suggested length of a ban.  That number is then used as a factor in  
calculating how long the ban stays in place.  This would allow for  
site-specific determination of "badness" (yes, everything is  
subjective in that model, but blacklisting is a subjective model in  
general), and weights would also allow for local determination of  
what to do, and when.

If each system has its own ability to enact a ban based on the  
numeric calculation and the time the data was received, this would  
work quite well: merely misconfigured clients would accumulate low  
weights and short bans, while deliberate attackers would not.

It seems that incorrectly configured clients would only typically ever  
hit a single host.  Or is this a false assumption?  Additional weight  
(a multiplication factor?) could be applied to IP addresses which are  
seen at multiple locations.   Thus, if an IP address is seen banging  
away on 5 different servers, each server would assign a "badness"  
value to the report (probably each logfile event regexp would have a  
static "weight" assigned) and then report it back to the central  
database.  Then the database would aggregate or multiply and create a  
final weight.  Each client of the database would then download the  
values (not a hardcoded timestamp!) and calculate for themselves how  
long to ban the remote IP address.
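To make the aggregation step concrete, here is a minimal sketch of the central database side.  All names are illustrative (there is no existing package implied); the assumptions are that each server reports a static per-event weight, and that reports from distinct servers multiply the final value:

```python
# Sketch of a central aggregator: sums per-server "badness" reports for
# an IP, then weights up IPs seen at multiple sites.  The multiplication
# factor of 1.5 per additional reporting server is an arbitrary example.

from collections import defaultdict

class CentralAggregator:
    def __init__(self, multi_site_factor=1.5):
        self.multi_site_factor = multi_site_factor
        # ip -> {server_id: summed weight reported by that server}
        self.reports = defaultdict(dict)

    def report(self, ip, server_id, weight):
        per_server = self.reports[ip]
        per_server[server_id] = per_server.get(server_id, 0) + weight

    def aggregate(self, ip):
        per_server = self.reports.get(ip, {})
        if not per_server:
            return 0.0
        base = sum(per_server.values())
        # an IP banging away on several servers is weighted up
        extra_sites = len(per_server) - 1
        return base * (self.multi_site_factor ** extra_sites)

agg = CentralAggregator()
agg.report("203.0.113.9", "pbx1", 10)
agg.report("203.0.113.9", "pbx2", 10)
print(agg.aggregate("203.0.113.9"))  # 20 * 1.5 = 30.0
```

Each client would then download these aggregate weights, not precomputed ban times.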

This method avoids a couple of problems.  Putting a timestamp in the  
central server is bad, because it allows known-interval window  
attacks.  Allowing each server to more-or-less randomly determine how  
long it will honor a "negative" entry is good, because servers then  
come uncloaked without predictability, and any scanning machine will  
trip a few triggers before the entire group is exposed all at once.

Additionally, using weights instead of static time durations allows  
the local administrator to choose how heavy-handed they wish to be on  
their local system.  They can perhaps take the weights and translate  
them into very long durations, or perhaps only a short duration.   The  
effect of being listed in the database becomes a function of local  
choice, not central administration.
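The client-side half might look like the following sketch.  The scale factor and jitter range are local policy knobs I've invented for illustration; the point is that the central weight is only an input, and the randomization keeps ban expiry unpredictable:

```python
# Sketch: translate a central "badness" weight into a locally chosen,
# randomized ban duration.  scale and jitter are local policy, not
# dictated by the central database.

import random

def ban_seconds(weight, scale=60.0, jitter=0.5, rng=random.random):
    """Seconds to ban, given a central weight.

    scale  -- local severity knob: seconds of ban per unit of weight
    jitter -- +/- fraction of randomization, so servers come uncloaked
              at unpredictable times rather than all at once
    """
    base = weight * scale
    return base * (1.0 + jitter * (2.0 * rng() - 1.0))

# A heavy-handed site might run scale=600; a lenient one, scale=10.
duration = ban_seconds(30.0)
assert 900.0 <= duration <= 2700.0   # 1800s nominal, +/- 50%
```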

The central database would gradually move a weight down to "0" if no  
additional bad behavior was reported, via some sort of well known  
reduction method (weight value -1 every 1 hour of no reports of bad  
behavior, as an example.)
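Using the example rate above (minus one weight unit per quiet hour), the reduction is trivial to express; the per-hour rate here is just the illustrative number from the text:

```python
# Sketch of the well-known reduction method: weight drops by 1 for every
# full hour with no new reports, and never goes below zero.

def decayed_weight(weight, seconds_since_last_report, per_hour=1.0):
    hours = seconds_since_last_report // 3600
    return max(0.0, weight - per_hour * hours)

print(decayed_weight(10.0, 5 * 3600))   # 5.0
print(decayed_weight(3.0, 24 * 3600))   # 0.0 -- entry has aged out
```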


Other ideas:
   - the central database should support the ability of setting  
(manually) the weight to -1 or some special value, which then would be  
imported by the clients and which would automatically remove any  
existing time blocks.  This would allow a central administrator to "un- 
block" someone and have it take effect on clients in some short  
period of time without waiting for local timers to independently wind  
down.  Of course this implies trust of the central administrators to  
do the right thing from a policy perspective... but that trust already  
implicitly exists by taking a blacklist feed in the first place.
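On the client side, honoring that special value could be as simple as the sketch below.  The sentinel of -1 and the function names are hypothetical, matching the proposal above rather than any existing implementation:

```python
# Sketch: client-side feed import honoring the proposed sentinel.  A
# weight of -1 means "remove any existing block immediately"; other
# weights would feed the normal local duration calculation.

UNBLOCK = -1

def apply_feed(feed, active_bans):
    """feed: {ip: weight}; active_bans: {ip: expiry_timestamp}, mutated."""
    for ip, weight in feed.items():
        if weight == UNBLOCK:
            # central administrator forced an un-block; drop it now,
            # without waiting for local timers to wind down
            active_bans.pop(ip, None)

bans = {"198.51.100.7": 1237300000}
apply_feed({"198.51.100.7": UNBLOCK}, bans)
print(bans)  # {}
```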


These sound like complex ideas, and they are.  Blacklisting is not  
trivial, and doing it correctly and in a manner that scales is not  
immediately obvious, but there are well-known models from which to  
work.  The good news is that I expect there are existing packages out  
there for this type of thing and all we need to do is glue Asterisk  
logs or other output into their model.

JT


---
John Todd                       email:jtodd at digium.com
Digium, Inc. | Asterisk Open Source Community Director
445 Jan Davis Drive NW -  Huntsville AL 35806  -   USA
direct: +1-256-428-6083         http://www.digium.com/





