Monthly Archives: November 2004

esr at UT with CACTUS

Austin seems to be attracting even more interesting people lately.

Eric S. Raymond, author of the books The New Hacker’s Dictionary, The Cathedral and the Bazaar, and The Art of Unix Programming, is speaking at the University of Texas on Monday, 29 November.

Eric S. Raymond
After the Revolution: Coping with Open Source Success
7PM Monday 29 November 2004
ACES 2.302
University of Texas at Austin

This talk is sponsored by the UT School of Information and the Capital Area Central Texas Unix Society (CACTUS). I believe EFF-Austin and the Austin Linux User’s Group are kibitzing as well.


Information Security Considered Difficult

In a 2001 paper, “Why Information Security is Hard — An Economic Perspective,” Ross Anderson of the University of Cambridge gives a number of reasons why technical means will never be adequate for information security, including Internet security. As he says in the abstract:
“According to one common view, information security comes down to technical measures. Given better access control policy models, formal proofs of cryptographic protocols, approved firewalls, better ways of detecting intrusions and malicious code, and better tools for system evaluation and assurance, the problems can be solved.

“In this note, I put forward a contrary view: information insecurity is at least as much due to perverse incentives. Many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons.”

He uses a number of examples to make his point, among them distributed denial of service (DDoS) attacks that use subverted machines to launch a combined attack at a target. Particularly good machines to subvert for this purpose are end-user machines, because the typical end-user has little incentive to pay anything to protect against their machine being used to attack some large corporate entity with which the user feels no identification. In many of the examples, the common thread is that
“In general, where the party who is in a position to protect a system is not the party who would suffer the results of security failure, then problems may be expected.”
Anderson amusingly points out that a tenth-century Saxon village had community mechanisms to deal with this sort of problem, while in the twenty-first century we don’t.
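The incentive gap behind the DDoS example can be made concrete with some entirely made-up numbers (the figures below are illustrative assumptions, not from Anderson's paper):

```python
# Illustrative (made-up) numbers for Anderson's misaligned-incentives point:
# the end user who could secure the machine bears almost none of the cost
# of its compromise, while the DDoS target bears almost all of it.

cost_of_protection = 50.0        # what the end user would pay (software, time)
p_compromise = 0.5               # chance an unprotected machine is subverted
user_loss_if_compromised = 10.0  # a slight slowdown; the user barely notices
target_loss_per_bot = 200.0      # attack damage attributable to each bot

user_expected_loss = p_compromise * user_loss_if_compromised
target_expected_loss = p_compromise * target_loss_per_bot

# Rational end user: paying 50 to avoid an expected loss of 5 makes no sense.
print(f"user: pay {cost_of_protection} to avoid an expected {user_expected_loss}")
# The target, who cannot secure the machine, would happily pay that and more.
print(f"target: expected loss per unprotected machine is {target_expected_loss}")
```

Whatever the real numbers, the structure is the same: the party positioned to protect the system gains almost nothing by doing so.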

The key here is that it is an aggregate problem, and we need collective measures to deal with it. In a Saxon village peer pressure may have been enough, and if that didn’t work they may have resorted to the stocks or some similarly subtle measure.

Today we may have made some progress in alarming end users by pointing out that 80% of end-user systems are infected with spyware and that botnets of compromised systems are widespread. On the other hand, such numbers indicate that education thus far hasn’t solved the problem. Similarly, that anyone is still using Internet Explorer after the events of this past summer indicates that users are not taking sufficient steps.

A more obvious method would be to make the software vendors liable. Why should operating systems still be sold with open security holes right out of the box, and why should applications still be sold with bad security designed in? An obvious answer that I don’t think the paper addresses is that some vendors of such software have enough lobbyists to prevent vendor liability laws from being passed. Anderson’s paper goes into more subtle reasons such as ease of use, number of users, switching costs, etc.

There’s an intermediate method that Anderson attributes to Hal Varian, which is to make the Internet Service Providers (ISPs) take responsibility for malign traffic originating from their users. This may be happening somewhat already, but it has its own problems, especially in implementation, which I may come back to in another post.

But the main point of Anderson’s article is clear and compelling: technical means are not sufficient to provide information security. Non-technical information security strategies are needed.


Ensuring Business Continuity for Banks

Here’s an interesting passage from a document published by the Basel Committee called “Risk management principles for electronic banking”:

Legal and Reputational Risk Management

To protect banks against business, legal and reputation risk, e-banking services must be delivered on a consistent and timely basis in accordance with high customer expectations for constant and rapid availability and potentially high transaction demand. The bank must have the ability to deliver e-banking services to all end-users and be able to maintain such availability in all circumstances. Effective incident response mechanisms are also critical to minimise operational, legal and reputational risks arising from unexpected events, including internal and external attacks, that may affect the provision of e-banking systems and services. To meet customers’ expectations, banks should therefore have effective capacity, business continuity and contingency planning. Banks should also develop appropriate incident response plans, including communication strategies, that ensure business continuity, control reputation risk and limit liability associated with disruptions in their e-banking services.

The document also says that the reason it sets forth principles instead of rules or even best practices is that it expects that innovation will outmode anything even as specific as best practices.


InnoTech and InfraGard

At InnoTech I was followed by the FBI. Chronologically, not physically: they spoke next.

They spoke about InfraGard, a public-private partnership in which the FBI organizes information sharing about intrusions and other security matters among businesses, academia, and law enforcement agencies. It has been going on since 1996, and has been national since 1998.

The InfraGard talk was mostly a good update on the current state of security, both online and physical, plus information on how to join.


The Bazaar

It’s been a while since the last post. I plead flu. It has advantages, though: I lost 10 pounds in 2 weeks.

I’m several conferences behind in writeups. Back at Linucon, I chatted a bit with Eric Raymond, author of The New Hacker’s Dictionary, The Cathedral & the Bazaar, and The Art of Unix Programming.

Of those books, the most relevant to this post is The Cathedral & the Bazaar. Its thesis is pretty simple, but let me paraphrase and oversimplify it: software built to elaborate specifications by teams of programmers, with flying buttresses and fancy rose windows, isn’t necessarily better (more capable, more robust, more user-friendly, better-selling, etc.) than software built by loosely knit teams of people building the parts they want to use. Closed source vs. open source. Back when I published the first printed version of Eric’s paper on this subject, this was a radical thesis. Not to its practitioners, of course, since the Berkeley Unix system, for example, had been produced by such methods back in the 1980s, and Linux was already spreading rapidly in the 1990s. Yet it was radical to those not familiar with it. Nowadays many companies are using this method, and much open source software has become quite popular.

However, the idea extends beyond software, and it appears that many people have worked out aspects of it from different directions. For example, David Weinberger’s Small Pieces Loosely Joined deals with many of the same ideas related to the World Wide Web. Eric’s most recent book is also relevant, since the Unix philosophy has always involved small pieces connected together in various ways instead of large monolithic programs.

John Robb’s Global Guerrillas blog has explicitly cited the Bazaar open source idea in relation to ideas of asymmetric warfare. Robb had previously cited a long list of books that are more obviously about warfare, the most seminal of which is probably Boyd: The Fighter Pilot Who Changed the Art of War by Robert Coram. This is a biography of John R. Boyd, who started out as a fighter pilot (never defeated), wrote a manual on aerial jet combat that is apparently still in use, “stole” a million dollars worth of computer time in order to develop his theory of why he never lost, which led to designing airplanes including the F-15 and F-16, and eventually, via intensive reading of history, to a theory of warfare that has since been adopted by the U.S. Marine Corps, as well as by other, less savory, parties. It is known by various names, such as “fourth generation warfare,” “asymmetric warfare,” or “highly irregular warfare.”

Someone else approaching many of the same topics is Albert-László Barabási in his book Linked, about scale-free networks; I’ve mentioned his book a number of times already in this blog.

What do all these things have to do with one another? They’re all about organizing loosely joined groups without rigid top-down command and control. They all also have to take into account how such organizations can deal with more traditional command-and-control organizations; which has what advantage; and how.

What does this have to do with Internet risk management strategies? The Internet is a loosely coupled non-hierarchical distributed network. No single organization can control it. Any organization that wants to use it would do well to accept that the Internet introduces sizeable elements that cannot be controlled and therefore risks that must be managed without direct control.