[Asterisk-Users] Re: Advice on OS Choice

Andrew Kohlsmith akohlsmith-asterisk at benshaw.com
Sun Oct 17 20:46:17 MST 2004


On October 16, 2004 04:49 pm, Joe Greco wrote:
> As a manufacturer, you build things and sell them, and you can recommend
> whatever policies you like, but after it leaves the shipping department,
> you're out of luck as to being able to guarantee any of that.

Then, as a manufacturer, you should not be liable for what some dickhead in a 
service department is doing to it.  :-)

Like I said in my last message, litigation has a way of making things 
nonsensical.

> > Firmware that boots checks image (or critical parts of image) for
> > tampering against stored checksum (checksum that gets updated when
> > correct update procedure is followed) -- Putz away, the firmware will
> > still bring you to a full stop because it detected a problem.

> That's highly complex; even Sun agreed there was no practical way to do it.
> With a closed source system, it wasn't considered a risk, and since
> everything up to the point where we received control from the OS was at
> least very difficult to putz with, it wasn't checked /prior/ to execution.
> Verification of the loaded kernel image happened after it was loaded, and
> was designed specifically to catch things like disk blocks going bad.

I dunno -- cryptographically sign the images and verify the signature on 
boot.  Hell, even a field hard drive swap would work in this case.
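
Something like this, even -- a toy sketch in Python (using the third-party 
"cryptography" package; a real bootloader would do the equivalent in C 
against a public key burned into ROM):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Vendor side, at release time: sign the image.  The private key never
    # leaves the factory, no matter how open the source is.
    vendor_key = Ed25519PrivateKey.generate()
    image = b"the open-source system image"
    signature = vendor_key.sign(image)

    # Boot ROM side: only the public key is stored on the device.
    rom_pubkey = vendor_key.public_key()

    def boot(image, signature):
        """Full stop unless the image carries a valid vendor signature."""
        try:
            rom_pubkey.verify(signature, image)
        except InvalidSignature:
            raise SystemExit("bad signature -- refusing to boot")
        # ...hand control to the verified image here...

    boot(image, signature)            # verified image boots
    # boot(image + b"X", signature)   # tampered image: full stop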

> Again, the black box approach has advantages.  Could you maybe engineer
> something to verify stuff at each and every step, just so you could go open
> source?  Sure, perhaps, but at additional cost for more flash, and
> additional cost for more development, and bad things then happen if you
> do a field swap on hard drives to fix a broken unit, etc., and really it
> becomes impractical.

See above.

> That's nice in theory, but potentially pretty darn expensive.  Nobody
> seemed to think that it was worth the trouble, expense, etc., to get so
> paranoid about it.

That's what I don't understand -- they're sufficiently paranoid when it comes 
to providing source, but security through obscurity is good enough to get 
past the legal department.  Curious, really.

> > To upgrade you can install the CD or reimage
> > the drive with the new image, but you have to also replace the vendor
> > key.

> And how do you do /that/?  You now need to have a keyboard attached to the
> system to enter and replace the key?

A physical cartridge or smartcard shipped with the updated firmware, and 
"signed off" by someone who has the access code to authorize the firmware 
update.  I dunno.

A cryptographic signature on the images, with the CA being the company 
releasing the firmware, is even easier.
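
To make the "signed off by someone with the access code" part concrete, 
here's a toy sketch (standard library only; the scheme and the names are my 
own invention, not anything a vendor actually ships):

    import hashlib, hmac

    # The device stores only a hash of the service access code, never the
    # code itself.
    stored_code_hash = hashlib.sha256(b"service-access-code").hexdigest()

    def apply_update(new_image, vendor_signature_ok, entered_code):
        """Flash a new image only if the vendor signature checked out AND
        someone accountable entered the authorization code."""
        if not vendor_signature_ok:
            raise SystemExit("vendor signature bad -- update rejected")
        entered_hash = hashlib.sha256(entered_code).hexdigest()
        if not hmac.compare_digest(stored_code_hash, entered_hash):
            raise SystemExit("not authorized -- update rejected")
        # ...write new_image to flash, update the stored checksum, and log
        # who signed off...

    apply_update(b"new firmware image", True, b"service-access-code")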

> The point is that's all software.  If it's open to inspection and
> recompilation, it's easily open to defeat.  I can make an init system that
> is very difficult to reverse-engineer, complete with interlocks with any
> other items that get launched, such that NOTHING happens unless that
> process is happy, but if that can be replaced by an init that doesn't give
> a fsck, because someone commented out all the code and recompiled it, then
> we have trouble.

*sigh* -- this is why I am saying that the boot firmware needs to make these 
checks, not the stuff you can tinker with when you have the source.  
Bootloaders only boot the end software; they're usually not too complex, and 
once done they require little to no maintenance.  Keep *that* black boxed.  
Put the interlocks *there* -- your core system is still open to many eyes 
and a lot of scrutiny.
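
All the black box has to do is something like this (a toy sketch with 
stdlib hashlib; the manifest would live in the closed boot flash and get 
rewritten only by the authorized update procedure):

    import hashlib

    # Images the field can read, rebuild, and scrutinize.
    stages = {
        "kernel": b"open-source kernel image",
        "rootfs": b"open-source root filesystem",
    }

    # Hashes recorded the last time the authorized update procedure ran;
    # these live in the closed boot flash, not in anything shipped as source.
    manifest = {name: hashlib.sha256(blob).hexdigest()
                for name, blob in stages.items()}

    def boot_chain(stages, manifest):
        for name, blob in stages.items():
            if hashlib.sha256(blob).hexdigest() != manifest.get(name):
                raise SystemExit(name + ": tamper detected -- full stop")
        # every stage checked out; hand off to the (open-source) init

    boot_chain(stages, manifest)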

> So, yes, you /could/ design such a system, and if you've open sourced all
> your software, then you probably /have/ to.

I would go on to say that you should have those checks and balances in place 
whether it's open or not...  Hell, those DURN TERRAISTS might decide to put 
rogue firmware out to make all the nuclear medicine machinery go critical.

Yes, this is getting silly. 

> We're talking specifically about the case where distributing the source
> makes it trivial for someone to work around those correct checks and
> balances.

You can't work around a check and balance like that -- if the firmware 
doesn't like the signature, it doesn't start up the executable.  Capiche?

We're talking about open-sourcing the main software, not the ROM bootloader 
(for lack of a better word: BIOS).

> No, I'm not worried about that.  The specific case that was of concern was
> what happens when someone from the hospital campus electronics shop tampers
> with the system, something bad happens, and then the system is reloaded
> with a non-tampered copy, because hospital policy would be to send a
> defective device back to the shop?

These devices don't have some kind of audit log in them?  Jesus.
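
Even a simple hash-chained log would make the tamper-break-reload-return 
cycle leave fingerprints.  A toy sketch, assuming the log lives in 
protected or write-once storage:

    import hashlib, time

    log = []  # entries of (timestamp, message, chained digest)

    def append(message):
        prev = log[-1][2] if log else "0" * 64
        stamp = time.time()
        digest = hashlib.sha256(
            ("%s|%s|%s" % (prev, stamp, message)).encode()).hexdigest()
        log.append((stamp, message, digest))

    def verify():
        prev = "0" * 64
        for stamp, message, digest in log:
            expect = hashlib.sha256(
                ("%s|%s|%s" % (prev, stamp, message)).encode()).hexdigest()
            if expect != digest:
                return False  # an entry was edited or removed
            prev = digest
        return True

    append("image verified, booting")
    append("unsigned image rejected")  # the shop visit leaves a trace
    assert verify()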

> Trusted computing is always a difficult thing.  At a certain point, you
> need to draw the line.  Because we had a closed source solution, we were
> able to fairly safely assume that when we got handed off at init, we had
> a system which was likely in a known state, and could verify the loaded
> kernel/module/firmware/etc images, which was considered extremely
> sufficient paranoia.  The point is that re-engineering a whole system with
> more checks, firmware, keys, requirements, adding a keyboard, etc., just
> so you can use GPL'd software is really a non-starter, so in the end, only
> BSD licensed projects were used and only BSD licensed projects received
> the benefits of having some of our engineers working on, debugging, and
> improving those projects.

I wasn't saying anything about a keyboard or implementing everything -- 
having the bootloader verify the system image would have been sufficient, 
and I gave several ways to ensure that.  I also gave several ways to ensure 
that a new image was "authorized" by someone who could be held liable.  
Adding $250 or even $2500 to a $50k machine for this kind of safety -- 
closed or open source -- just seems like good karma to me.

-A.


