[Asterisk-Users] Re: Advice on OS Choice

Joe Greco jgreco at ns.sol.net
Sat Oct 16 13:49:59 MST 2004


> On Friday 15 October 2004 23:28, Joe Greco wrote:
> > > I know what you're trying to do (we're both playing Devil's Advocate).
> 
> > Not really.  There is a product.  If you go into a hospital for major
> > surgery today, you stand a nontrivial chance of being hooked up to a
> > descendant of the device in question.
> 
> I'm guessing some kind of anaesthetic or vital sign monitoring system -- but 
> regardless, my argument stands -- with correct design and correct policy in 
> place Bad Things won't happen.  If you have a poor design or poor policy, or 
> if policy is not followed, well then Bad Things are MUCH more likely to 
> occur.  With or without source being available.
> 
> I suppose that having source available can make the occurrence of Bad Things 
> marginally more likely, but it all comes down to design and policy, IMO.

As a manufacturer, you build things and sell them, and you can recommend
whatever policies you like, but once a unit leaves the shipping department
you have no way to guarantee that any of those policies are followed.

> > > If it's life critical machinery it *should* be difficult to alter the
> > > images. Routine maintenance should not include ways to alter these
> > > critical aspects of the system.
> >
> > Remove hard drive.  Mount on other system.  Putz.  Reinstall hard drive.
> 
> The firmware that boots the system checks the image (or critical parts of it) 
> for tampering against a stored checksum (one that gets updated when the correct 
> update procedure is followed) -- putz away, the firmware will still bring you 
> to a full stop because it detected a problem.

That's highly complex; even Sun agreed there was no practical way to do it.
With a closed source system, it wasn't considered a risk, and since
everything up to the point where we received control from the OS was at
least very difficult to putz with, it wasn't checked /prior/ to execution.
Verification of the loaded kernel image happened after it was loaded, and
was designed specifically to catch things like disk blocks going bad.
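
Conceptually that post-load check was nothing exotic -- hash the image as
loaded, compare it against the digest recorded by the sanctioned update
procedure, and refuse to proceed on a mismatch.  Something along these lines
(a rough Python sketch for illustration only; the paths, digest file, and
hash choice are all assumptions, not what we actually shipped):

    # Illustrative only: verify a loaded image against a stored digest.
    # Paths and digest format are invented for the example.
    import hashlib

    IMAGE_PATH = "/boot/kernel.img"           # hypothetical image location
    DIGEST_PATH = "/boot/kernel.img.sha256"   # written by the update procedure

    def image_is_intact():
        h = hashlib.sha256()
        with open(IMAGE_PATH, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        expected = open(DIGEST_PATH).read().strip()
        return h.hexdigest() == expected

    if not image_is_intact():
        # e.g. a disk block went bad underneath the image
        raise SystemExit("integrity check failed; refusing to continue")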

Again, the black box approach has advantages.  Could you engineer something
to verify every step, just so you could go open source?  Sure, perhaps, but
at additional cost for more flash and more development, and then bad things
happen if you do a field swap on hard drives to fix a broken unit, etc.  It
quickly becomes impractical.

> > We had techniques to defend against that, because the system could do an
> > integrity check, and the first part of that - the init module - was a
> > black box unknown.  If you messed with the modules, it wouldn't start.
> > If you tried to sandbox the init, it wouldn't start.  If you tried to
> > replace the init with the OS vendor's init, nothing would work at all.
> 
> Sounds great, that's exactly what I'm saying.  :-)
> 
> > You can't say "well don't let them pull the hard drive", because drives
> > can and do fail, and different versions of the product required different
> > hard drives on the same base chassis anyways.
> 
> I didn't say that.  :-)
> 
> > That's the point.  When you give someone the source to the system, they can
> > *inspect* your protection system, or worse, modify it, install it, and
> > defeat it, because then they can see how it's supposed to work, and once
> > you understand that, causing it to do something else becomes much easier.
> 
> The protection should not be defeatable even with the source, just as an 
> encryption system is not defeated by having its algorithm publicly available.  
> If you can defeat it that easily then it was broken before you even sold a 
> single unit.

That's nice to say in theory.  I'm sure Microsoft has a job for you.  ;-)

> > An encryption algorithm can be crappy or excellent.  If it is excellent,
> > you can gain an understanding of how it works, but you will be unable to
> > decrypt data without the key, which is an external factor.
> 
> > The problem with any integrity verification system which can be altered
> > is that you can inevitably alter the decision logic to indicate that the
> > image is good.
> 
> I don't recall ever stating that the protection against software tampering 
> should be alterable or even on the hard drive -- it should be part of the 
> firmware that the system uses to boot and ideally a hardware protection that 
> is not (easily) alterable -- perhaps a vendor key that gets sent whenever 
> software updates are sent... 

That's nice in theory, but potentially pretty darn expensive.  Nobody seemed
to think that it was worth the trouble, expense, etc., to get so paranoid
about it.

> To upgrade you can install the CD or reimage 
> the drive with the new image, but you have to also replace the vendor key.

And how do you do /that/?  You now need to have a keyboard attached to the
system to enter and replace the key?

The point is that's all software.  If it's open to inspection and
recompilation, it's easily open to defeat.  I can make an init system that
is very difficult to reverse-engineer, complete with interlocks with
everything else that gets launched, such that NOTHING happens unless that
process is happy.  But if that init can be replaced by one that doesn't give
a fsck, because someone commented out all the checks and recompiled it, then
we have trouble.
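
To make that concrete: no matter how elaborate the verification is, it
funnels down to a single decision in software, and with source in hand that
decision is one edit away from always saying yes.  A toy sketch (Python,
with everything invented for illustration):

    # Toy illustration: the interlock is only as strong as one "if" statement.
    import subprocess

    def run_all_the_checks():
        # Stand-in for arbitrarily clever verification: image hashes,
        # interlocks with the other launched processes, and so on.
        return False   # pretend tampering was detected

    def init():
        if not run_all_the_checks():    # with source, this test is one
            raise SystemExit("halt")    # commented-out line from vanishing
        subprocess.Popen(["/opt/device/monitor"])   # hypothetical service

    if __name__ == "__main__":
        init()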

So, yes, you /could/ design such a system, and if you've open sourced all
your software, then you probably /have/ to.

However, if you've just got a closed source system, such levels of paranoia
are not necessary, because the amount of work someone would have to do
would be at least an order of magnitude higher.

> Hell, even simply signing the critical executables with the vendor's key will 
> stop any kind of dirty GNU hippie from making "insignificant" changes to the 
> system software.  Having the source does not make it any easier to update the 
> system; you still need the correct checks and balances.

We're talking specifically about the case where distributing the source
makes it trivial for someone to work around those correct checks and
balances.
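
Signature checks are no different when the verifier itself ships as source:
the vendor public key baked into the verifier is the entire trust anchor.  A
sketch of the idea, assuming the common Python 'cryptography' package (the
key path and function names are invented):

    # Sketch of "sign the critical executables"; all paths/names invented.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    VENDOR_PUBKEY_PEM = open("/etc/vendor_pub.pem", "rb").read()  # trust anchor

    def executable_is_genuine(exe_path, sig_path):
        pub = serialization.load_pem_public_key(VENDOR_PUBKEY_PEM)
        data = open(exe_path, "rb").read()
        sig = open(sig_path, "rb").read()
        try:
            pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # The rub: if this verifier ships as source, nothing stops someone from
    # rebuilding it with their own public key (or a bare "return True"),
    # which is exactly the "work around the checks and balances" case.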

> > This is substantially different from the encryption algorithm, where no
> > amount of modifying the decryption code will result in the decryption
> > code doing its work successfully without that key.
> 
> I disagree, and give my reasons in the above two paragraphs.

Your argument is "move the checks and balances up into the hardware".
Well, that has its own set of problems.

> > This is also why copy protection schemes have been successfully broken for
> > years and years and years.  It's not a question of /if/.  It's a question
> > of /how much work/.  A copy protection scheme is, after all, just a
> > variation on an integrity verification system.
> 
> Agreed.  You can't possibly tell me that you expect a court to hold a vendor 
> liable when you can prove that the person updating the software jumped through 
> significant hoops and drastically altered the way the system works to get 
> their rogue binaries running?  (OK, maybe you can; there have been some 
> *weird* outcomes in the justice system.)

No, I'm not worried about that.  The specific case of concern was this: what
happens when someone from the hospital campus electronics shop tampers with
the system, something bad happens, and then the system gets reloaded with a
non-tampered copy because hospital policy is to send a defective device back
to the shop?

> > If it were even possible to "properly design" such a system and arrive at
> > something that couldn't be gamed, I'm sure that the largest software
> > company on the planet would have done it by now.
> 
> This whole trusted computing initiative makes it damn hard to get around.  DVD 
> wouldn't have been broken if the implementation hadn't been "corrupted" by a 
> vendor screwing up the DVDs they produced; XBox no-modchip hacks wouldn't 
> exist without software bugs...  You get the idea.

Trusted computing is always a difficult thing.  At a certain point, you
need to draw the line.  Because we had a closed source solution, we could
fairly safely assume that when we got handed control at init, the system
was in a known state, and we could then verify the loaded kernel, module,
firmware, etc. images, which was considered a sufficient level of paranoia.
The point is that re-engineering a whole system with more checks, firmware,
keys, requirements, an added keyboard, etc., just so you can use GPL'd
software is really a non-starter.  So in the end, only BSD-licensed projects
were used, and only BSD-licensed projects received the benefit of having
some of our engineers working on, debugging, and improving them.

... JG
-- 
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I
won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.


