Trust in IT

Ignore the sensationalist headline. This article is a good summary of the need for trust in IT, and provides some ideas for how to enable more of it.

Virtually everything we work with on a day-to-day basis is built by someone else. Avoiding insanity requires trusting those who designed, developed and manufactured the instruments of our daily existence.

All these other industries we rely on have evolved codes of conduct, regulations, and ultimately laws to ensure minimum quality, reliability and trust. In this light, I find the modern technosphere’s complete disdain for obtaining and retaining trust baffling, arrogant and at times enraging.

Posted on June 11, 2013 at 6:21 AM

Comments

phred14 June 11, 2013 7:46 AM

This is another of the many issues surrounding free vs. proprietary software.

On the security front, there are those who say that just because the code is open doesn’t mean that people will actually look at it, and that is true. However, just because proprietary code is kept under lock and key doesn’t mean it has been properly looked at, either.

It really does come down to trust. While I’m sure there may be scoundrels adding an obscure back door into free software, I believe that situation is very rare. On the other hand, with proprietary software I strongly suspect that the motivation is more “revenue encouragement” than it is fulfilling my needs. In general I trust the motivations behind free software more than proprietary. I’ll agree that competence may be a different matter, but that’s more easily subject to market correction.

bcs June 11, 2013 8:37 AM

I find that quote interesting because its last point is totally contrary to my own experience: my own employer (admittedly I’ve only had one) puts a conscientious effort into gaining and retaining (by earning) our users’ trust.

J.D. Bertron June 11, 2013 9:13 AM

That’s never going to happen. Replace software with ‘news’ in the article, and it will immediately be apparent why.

vas pup June 11, 2013 11:03 AM

“All these other industries we rely on have evolved codes of conduct, regulations, and ultimately laws to ensure minimum quality, reliability and trust.”
Most industries do have outside independent quality control when their product is vital, e.g. the FDA (drugs, diagnostic tools, etc.), the FAA (all flying objects, including drones), the FCC (communication devices), even professional organizations not tied directly to government, e.g. UL for electrical devices.
Trust what is declared, but verify (verification of quality and safety conducted by an independent, trustworthy body with the expertise, resources, professionals, etc.) before products get to the customers.
Do we have any independent body/mechanism for verification of the quality/security/reliability of IT products (software systems and applications, hardware, firmware) before they get to the market, as the other industries above have?

RH June 11, 2013 11:34 AM

Take it one step further: there’s a whole body of proofs that basically declare that it is impossible to wholly predict the outcome of a Turing machine. We have to trust the system used to generate the Turing machine to prove its output. If the problem of trust exists all the way at the theoretical/mathematical layer, it has to be an issue all the way through.

Petréa Mitchell June 11, 2013 11:45 AM

The Economist, on a similar theme, concludes:

The problem isn’t so much that we haven’t set up a legal architecture to preserve our online privacy from the government; it’s that we haven’t set up a legal architecture to preserve our online privacy from anyone at all. If we don’t have laws and regulations that create meaningful zones of online privacy from corporations, the attempt to create online privacy from the government will be an absurdity.

Simon June 11, 2013 11:47 AM

Trust is being used here in weird ways. In one place it seems to refer to trusting people, in another place it seems trust is a kind of substance.

I don’t trust a large institution to treat me with respect? I don’t trust a six-year-old child with a loaded gun? I don’t trust a drunken pilot? I don’t trust an advisor with my money?

Icicle June 11, 2013 1:54 PM

We used to say “use the source, Luke”, but actually how many of us have the time to read all the source code? So instead we either compile the sources without reading them or just download binaries.

Test Driven Development (TDD) is gaining ground in the software developing community. Besides catching bugs it could also be used to test software robustness.

Why not set up a local BuildBot with your own (or shared) test scripts?

When a new release of your favourite piece of software comes out, download it and run the build-and-test script yourself.

It is much easier to do a peer review on a test script than on 500K LOC.
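
Something like the following minimal sketch, say in Python. The release URL, the published checksum and the build commands are placeholders I made up; real projects will differ:

```python
#!/usr/bin/env python3
# Minimal sketch of a local build-and-test script: fetch a release,
# check it against the published checksum, build it, and run its own
# test suite. URL, checksum and build commands are made-up placeholders.
import hashlib
import subprocess
import tarfile
import urllib.request

RELEASE_URL = "https://example.org/myproject-1.2.tar.gz"  # placeholder
EXPECTED_SHA256 = "0123456789abcdef..."                   # published checksum (placeholder)
SRC_DIR = "build/myproject-1.2"                           # unpacked source tree (placeholder)

def main():
    # Fetch the release tarball.
    tarball, _ = urllib.request.urlretrieve(RELEASE_URL, "release.tar.gz")

    # Refuse to build anything that doesn't match the published checksum.
    with open(tarball, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise SystemExit("checksum mismatch -- not building this release")

    # Unpack, then run the project's own build and test targets.
    with tarfile.open(tarball) as tf:
        tf.extractall("build")
    for cmd in (["./configure"], ["make"], ["make", "check"]):
        subprocess.run(cmd, cwd=SRC_DIR, check=True)
    print("build and tests passed")

if __name__ == "__main__":
    main()
```

The specifics don’t matter; the point is that the script, not the binary, is the artefact small enough for many eyes to review.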

Arclight June 11, 2013 2:41 PM

I would assert that “verifying” information technology products is much more difficult than verifying drugs or engineering works.

In the case of pharmaceuticals, anyone can collect a sample from the drugstore and forward it to a lab for testing. It either will or will not contain the advertised compounds at a specified purity, dosage, etc. And the FDA can send inspectors to the factory, where it can be physically observed that the right ingredients are on-hand, the staff has access to the right documentation to do their jobs, etc.

The same thing applies when a bridge is built – someone pays for the design work, but it is subject to review by independent parties before and after construction, and an X-ray or core sample could be taken after the fact if there is some question about the quality of construction.

Unfortunately, our IT experience is mediated through many layers, including hardware, firmware, drivers, operating systems, networks and applications. At any one of these layers, our wishes could be subverted without us ever knowing.

As Bruce says “Bad encryption looks just like good encryption.” And “unauthorized data retention” at a data center looks exactly like “good backup procedures.”

There’s really no way to look at a cloud provider’s data center and have any idea what those racks of servers are doing. Are they deleting sent messages after 30 days? Are they forwarding your personal information to criminals overseas? Is a faulty code release causing the eCommerce server’s PRNG to only use the last 12 bits of entropy?

The only industry I’m aware of that takes this sort of thing seriously is the casino gaming sector. They actually require vendors to submit their code and tool chain for review and compilation into an “authorized” software image, which can be verified in the field at any time.
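
Field verification of an image like that can in principle be as simple as hashing the installed build and comparing it against the hash the regulator recorded at approval time. A hedged sketch; the path and hash below are invented:

```python
#!/usr/bin/env python3
# Hedged sketch of field verification of an "authorized" software image:
# hash the installed image and compare it with the hash recorded when
# the build was approved. Path and hash are invented placeholders.
import hashlib

INSTALLED_IMAGE = "/opt/machine/firmware.img"  # image found in the field (placeholder)
AUTHORIZED_SHA256 = "9f2a..."                  # hash recorded at approval (placeholder)

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so large images need not fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    if sha256_of(INSTALLED_IMAGE) == AUTHORIZED_SHA256:
        print("image matches the authorized build")
    else:
        print("ALERT: image does NOT match the authorized build")
```

Of course, the comparison only means something because the regulator also controls the toolchain that produced the authorized hash.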

This is really only possible in a small, closed system where vendors, casino operators and the government all have an interest in making sure that slot machines are “trusted” to take and pay out the right amount of money. Trust all around is central to the health of the business model.

Unfortunately, “general purpose IT” does not have a business model that aligns the tech vendors, governments, network operators and consumers. If anything, the incentives are for less security, more opaqueness all around, and a non-empowered consumer.

We can use open-source products and sophisticated computer-science tools to make software more trustworthy, but those tools can’t fix the business issues.

Arclight

Dirk Praet June 11, 2013 7:53 PM

Prominently missing in the article: the impact of FISA and CALEA II, when vendors can be legally forced to build in backdoors and any party involved (vendors, distributors, auditors) gagged from revealing they are there.

Clive Robinson June 12, 2013 4:41 AM

@ Dirk Praet,

It’s not just FISA and CALEAII.

A while ago I put a link up on one of the Friday squid pages to an article about the Feds coercing a software developer to back-door his software, by claiming they would make him an accessory to crime because his software was being used by fourth-party persons.

The coercion started with “swatting” his home, endangering his and his family’s lives as well as doing considerable damage.

After this first step in “softening him up” they threatened him with multiple counts of being an accessory and all sorts of other crimes, so that he would never see his family again (yes, this can be done in the US court system; there are special jails where prisoners are isolated indefinitely for administrative reasons and can only see court-appointed persons).

After this warming-up process they let slip it could all go away if he became an informant.

Eventually they told him he would have to put a back door in his software (which is actually illegal) and, worse still, he would have to use it on the authorities’ behalf (that is, he would have to commit repeated crimes on their behalf).

So what had this software developer done that demanded such draconian behaviour?

Simple: he wrote software for use in the leisure and entertainment industries, which was only sold to legitimate companies operating entirely outside of US jurisdiction.

The authorities wanted the back door in the software to download credit card numbers and all the other details you would want for identity theft… Supposedly to check that US citizens were not using the software…

As Nick P observed at the time it was a story Bruce should blog about.

Clive Robinson June 12, 2013 5:08 AM

@ Simon,

Your second paragraph touches on a big failing of current security models, and why they make security a whole lot worse than it really should be.

Because web browsers and almost all other end-user software treats you as a “single entity”, it forces you into the position of being a “single entity” in all aspects of your online existence.

Now in the real physical world, whilst you might be a “single entity”, your behaviour is such that you have several identities or roles.

That is you might be a parent, worker, boss, junior, club member, friend etc etc. In each identity / role you behave differently not just in manner but with the information you handle as well.

A simple example: you may have several different bank accounts or credit cards under your control that are completely unrelated, such as your personal cheque account, your personal savings account, one or more personal credit cards; you might also have a work credit card or work account under your control, as well as the accounts etc. of a club or association you act as an officer for. At more than one point in time I’ve had one or all of the above under my control.

A sensible person ring-fences or walls these accounts off from each other; that is, you generally would not use a work account/card for personal business, as doing so in many cases would be considered fraudulent behaviour.

However, try to use these accounts online and most end-user software does just about everything to stop you setting up sensible ring fences or walls around such activities.

In some cases, such as email clients, you cannot have multiple mailboxes using the same underlying mail transport mechanism because the protocol does not allow it under some limited set of circumstances, so the client’s developers apply the restriction in all circumstances to make their lives easier.

Web browsers and other clients do the same thing with security certificates.

It’s all a mess, of which only a little can be put down to legacy issues; most is laziness and the wrong models, driven on by marketing droids who want bells-and-whistles features that can be used for specmanship rather than real-world use (something the author of the piece alluded to).
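
About the closest you can get today is the blunt instrument of fully separate profiles, one per role. A small launcher sketch, assuming a typical Firefox install; the role and profile names are made up, and the -P / -no-remote flags are the relevant ones there:

```python
#!/usr/bin/env python3
# Sketch of per-role ring-fencing using separate browser profiles, so
# each identity keeps its own cookies, certificates and saved
# credentials. Profile names are made up; the -P and -no-remote flags
# assume a typical Firefox install.
import subprocess
import sys

ROLES = {
    "personal": "personal-profile",
    "work": "work-profile",
    "club": "club-officer-profile",
}

def launch(role: str) -> None:
    # Each named profile has its own cookie jar, certificate store, etc.
    subprocess.Popen(["firefox", "-P", ROLES[role], "-no-remote"])

if __name__ == "__main__":
    launch(sys.argv[1] if len(sys.argv) > 1 else "personal")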

Clive Robinson June 12, 2013 6:26 AM

@ RH,

    There’s a whole body of proofs that basically declare that it is impossible to wholly predict the outcome of a Turing machine.

Yes there is, but you have to remember that the proofs are with regard to the functioning of a Turing machine in its singular state, without reference to other entities.

This limits the scope of the proofs, so when you say,

    We have to trust the system used to generate the Turing machine to prove its output.

Are you saying that the system used to generate the Turing machine was itself a Turing machine, or not?

Because what you say is ambiguous, and thus significantly affects your next statement,

    If the problem of trust exists all the way at the theoretical/mathematical layer, it has to be an issue all the way through.

Which is also inaccurate.

Take for instance the halting problem, which very broadly indicates that for some problems we cannot know in advance whether they will ever conclude. Whilst true, and it can be used as a foundation for other proofs, it’s mainly not relevant to our use of computers. The reason for this goes back to the design of the atom bomb at Los Alamos: they had mathematical equations that would predict certain behaviour, but had problems with the number, type and behaviour of the inputs. It was realised that it was not possible in any meaningful way to get the specific answer (this problem is one weather forecasters are familiar with, as are many others).

Someone (Fermi or Ulam) realised that they did not need the specific answer but just the general behaviour, thus a limited number of runs with selected inputs that approximated events would give a trend that would indicate the likely result.

That is, they dodged the issue by using probability, and we call it Monte Carlo methods these days.
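
As a toy illustration of the idea – a limited number of randomised runs giving the general behaviour rather than an exact answer – here is the classic sketch of estimating pi by sampling:

```python
#!/usr/bin/env python3
# Toy illustration of the Monte Carlo idea: rather than computing an
# exact answer, run a limited number of randomised trials and read off
# the trend. Here the fraction of random points landing inside the unit
# quarter-circle estimates pi.
import random

def estimate_pi(trials: int) -> float:
    inside = sum(
        1 for _ in range(trials)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / trials  # ratio of areas -> pi

if __name__ == "__main__":
    for n in (100, 10_000, 1_000_000):
        # The estimate tightens as the number of trials grows.
        print(f"{n:>9} trials: {estimate_pi(n):.4f}")
```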

A similar approach can be taken to computer security (which I’ve discussed on this blog before).

Another important realisation is that the output of a Turing machine can be monitored by another entity that may be more or less than a Turing machine. That is an observer be they human or limited state machine can detect when a Turing machine is going outside of acceptable bounds in various ways.

There are two basic independent ways that this can be done,

1, Observe only the output.
2, Observe only the internal state.

Both methods can be combined, and as those old enough to have had to debug code before our more modern tools were available can testify, the combination is considerably greater than the individual methods.
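
A minimal sketch of the first method – an external observer that watches only the output and flags anything outside acceptable bounds. The worker command and the bounds are invented for illustration:

```python
#!/usr/bin/env python3
# Minimal sketch of method 1: an external monitor that observes only a
# worker process's output and raises an alert when a value falls
# outside acceptable bounds. Worker command and bounds are invented
# placeholders.
import subprocess

LOW, HIGH = 0.0, 100.0  # acceptable output range (placeholder)

def monitor(cmd):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        try:
            value = float(line.strip())
        except ValueError:
            print("ALERT: malformed output:", line.strip())
            continue
        if not LOW <= value <= HIGH:
            print("ALERT: out-of-bounds value:", value)
            # a real monitor might halt or quarantine the worker here

if __name__ == "__main__":
    monitor(["./worker"])  # hypothetical worker printing one number per line
```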

The two methods can be further improved by the design of the system. As a TEMPEST / EmSec engineer can tell you, the big problem to solve is “complexity”, because it rises at some positive power (greater than 2) of the number of entities that can combine. Thus the solution is to “divide and conquer”: you split up a problem into the minimum number of entities required to perform some small but distinct function of the overall problem, then you “encapsulate” and “monitor”, by either or both methods, the distinct and well-defined function.

Oddly, this sort of approach is very much what reliability engineers have done for years. You don’t monitor a 747 as a single entity; you break its three million odd components down into individual systems and subsystems and monitor those with active systems and routine / planned maintenance.

However, to do this in software requires a radical change in methods that the industry will fight against in any way it can. We know this by studying history, of which an apt example is the way both Science and Artisan methods were combined to form Engineering when laws were passed to prevent deaths by boiler explosions back in Victorian Britain, and subsequently in the rest of the world (in part by indirect force of Empire, but in the main due to common sense).

The issue of boiler explosion deaths is just one of many that tells us that regulation is actually good for a market, because it prevents the “race for the bottom” that cripples or kills markets before they can become well founded, provide both innovation and trust, and be deemed trustworthy by their customers.

Scott June 12, 2013 7:54 AM

I have a strong preference for (Free Libre) Open Source development, mainly because access to the source is valuable to me – but also for security reasons. No, I don’t hold that Closed Source is less secure – only that our broken trust model doesn’t allow us to measure the comparison.

Theo de Raadt wrote:

    What I am seeing is that we have a ridiculously upside-down trust model — “Trust the developers”.

and

    We never asked for people to trust us. We might have “earned some” in some people’s eyes, but if so it has always been false, even before this. People should trust what they test, but the world has become incredibly lazy.

To me that means that if we can’t test something ourselves – either because we lack the ability or the access – then we need to rely on “someone” to measure against some sort of baseline on our behalf. And we need to test them. Continually.

Clive Robinson June 12, 2013 12:48 PM

@ Scott,

With regard to the Theo de Raadt quote: if you do not trust the developers, who do you trust?

As Bruce and many others have pointed out such things as security and trust are based on chains of things be they people or technology and overall “they are only as strong as the weakest link”.

A developer, be they a “code cutter” or an actual “computer scientist”, is just one link in the chain, in the same way as an engineer is but one link in the design and construction of the bridge you cross, the aeroplane you fly in or the car you drive.

Do you trust the “engineer” or the worker tasked with constructing the bridge, plane or car or any of their component parts?

If not why not? And if so why? Or don’t you even think about them?

The answer, I suspect, in most people’s cases is that they never ever think about them, and on the rare occasion somebody does, they just assume that they are trusted.

As I’ve often complained on this blog and other places in the past I don’t like or trust the term “software engineer” because as an engineer in more than a couple of other domains I know that few if any software developers actually use engineering methodologies, tools or practices.

But I don’t blame them; history shows us that artisans can, by trial and error, arrive at the cart wheel and the beam steam engine. It is management who decide if they will pay for engineers or artisans, and their reasoning is almost entirely based on externalising risk and minimising cost.

The reason we have engineering as a discipline is because artisans designed boilers, and they exploded and killed people in Victorian Britain; it became a political nightmare and Parliament had to act.

The minimal regulations they came up with stopped management externalising risk, and forced careful investigation of all accidents and the testing required to establish the cause of an accident. The result was rapid progress in the application of science to artisan construction, and engineering as a profession began.

But importantly, the regulation stopped the headlong “race for the bottom” that management had engendered. This meant that new methods had to be investigated, which forced innovation to happen, and this actually caused the market to expand for the good of all.

I see no reason why a little careful regulation would not improve software development, giving rise to new well-founded innovation based on sound reasoning.

Importantly, it is because of that regulation-driven development that trust in engineering has reached the point where few if any people have cause not to trust it, which is why events such as the exploding fuel tanks on a certain model of car involved in a very minor rear-end shunt caused such shock and outrage.

Imagine the shock and outrage if, say, hair-dryer motors randomly worked in reverse and sucked wet hair in onto the bare heating elements, causing the person to be electrocuted.

Nobody thinks about it happening, because they trust the engineers to be aware of the risks involved and ensure it cannot happen, either by intrinsic design or by the addition of safety cut-outs or other preventative measures.

As such, there is no such trust in commodity software development, because nobody bothers with risk or its mitigation, only the mitigation of risk by externalising and legalese. And as has been repeatedly demonstrated with endless patching, a company’s reputation is not broken by issuing broken and dangerous software; in fact its reputation is enhanced if its badly produced product gets rapidly patched at the customers’ expense…

Go figure how that almost unique state of play came about…

Scott June 13, 2013 5:26 AM

@Clive Robinson

    Do you trust the “engineer” or the worker tasked with constructing the bridge, plane or car or any of their component parts?

I trust the Roman bridge engineer… he stood under it while a legion marched over (he made a commitment). I don’t trust the VW system engineer… (he only makes a contribution). For what I don’t trust I have suspicion and blind faith.

“Chains” (or layers) of trust – yes, decrease the trustworthiness.
Theo again:

    If anything, the collaborative model we use should decrease trust, except, well, unless you compare it to the other model — corporate software — where they don’t even start from any position of trust. There you are trusting the money, here you are trusting people I’ve never met.

I like the steam analogy. It’s apt. Software now is similar to early steam power – worked on by few, fully understood by even fewer, enormously more powerful than the technology it replaced, and it can make massive profits for its operator with low overheads. It’s magic. And like early steam power it’s full of bugs and prone to disastrous failures.

The only problem I have with the analogy is the theory that safety was the motivation for regulation. I suspect the real motivator was profit – boilers blowing up and killing people is bad for business. But then I don’t think Davy invented the safety lamp to make life better for miners (there weren’t many until then).

If a good model for software certification arises, it might be as a result of insurance companies and/or the businesses that rely on software, rather than governments acting to protect the privacy of their citizens. I note we already have a government software auditing program… (seriously): it’s the NSA.

(Ironically, the post I quote is from the OpenBSD thread about the hunt for a government back door in the IP stack – one that might have been put there so the FBI could spy on the NSA.)

vas pup June 14, 2013 9:05 AM

@Scott:
“that might have been put there so the FBI could spy on the NSA”.
Thank you for that part, which triggered the following thought:
Law enforcement and Intelligence (CIA, DIA, NSA, etc.) have had different sets of rules for data collection and usage, including interrogation tools, depending on their goals. For LE, the primary goal was prosecution (and LE’s own intel, like undercover operations, CIs, etc., has an established legal framework subordinate to that goal). For Intel, the goal is information itself and covert ops. The latter cares less about admissibility in a court than about actual protective remedies here and now, and basically was primarily targeting foreigners outside the US, protecting the interests of the US.
Now it is a new era of fighting terrorism, where the boundaries between LE and Intel are not clear and have become overlapping. LE has always been bound by the Constitution; Intel (before) was not. As Intel activity has now switched its focus inside the country due to changes in the nature of threats, that becomes a new paradigm for Intel, and a legal framework for their activity inside the country requires development in detail, including privacy issues, oversight and transparency, as well as defining who is watching whom and under what conditions (NSA v. FBI and vice versa) by law.
