Saving the Internet?

The former CIA Director, George J. Tenet, has said what he thinks needs to be done to improve Internet security:

The way the Internet was built might be part of the problem, he said. Its open architecture allows Web surfing, but that openness makes the system vulnerable, Mr. Tenet said.
   
Access to networks like the World Wide Web might need to be limited to those who can show they take security seriously, he said.

“Tenet calls for Internet security,” by Shaun Waterman, United Press International, December 2, 2004

Well, that would exclude most governments from using the web.

He also says the Internet is a potential Achilles heel, and warns that

… al Qaeda remains a sophisticated group, even though its first-tier leadership largely has been destroyed.

It is "undoubtedly mapping vulnerabilities and weaknesses in our telecommunications networks," he said.

This makes me wonder several things.

  1. Is this "undoubtedly" the same sort as the one by which Saddam undoubtedly had WMD? Some evidence would be useful here.
  2. Suppose there actually is evidence of OBL mapping Internet vulnerabilities. If we destroy the open Internet ourselves before he can, how exactly is that a solution?
  3. Wouldn’t it make more sense to map them ourselves first, and fix them? Sure, it would be expensive to add redundancy in certain cases, but compared to what?

He also said:

"I know that these actions will be controversial in this age when we still think the Internet is a free and open society with no control or accountability," he told an information-technology security conference in Washington, "but ultimately the Wild West must give way to governance and control."

He said this at an event from which he excluded the press, which makes one wonder whether it is the Internet he is worried about, or a free and open society.

Meanwhile, several people have told me that it’s a common mantra inside Microsoft to say that the Internet is the terrorist’s best friend.  I don’t think that’s right, unless you want to extend the same argument to anything else that has free and anonymous communications, such as the Interstate Highway System.

What I think is the terrorist’s and criminal’s best friend is software that ships out of the box with vulnerabilities turned on and that has design flaws that prevent fixing easily exploited bugs.  Mr. Tenet seems to agree on that subject:

Mr. Tenet called for industry to lead the way by "establishing and enforcing" security standards. Products need to be delivered to government and private-sector customers "with a new level of security and risk management already built in."

Maybe that’s what his whole talk was about.  It’s too bad we’ll never know, due to his exclusion of the press.

-jsq

esr at UT with CACTUS

Austin seems to be attracting even more interesting people lately.

Eric S. Raymond, author of the books The Hacker’s Dictionary, The Cathedral and the Bazaar, and The Art of Unix Programming, is speaking at the University of Texas Monday 29 November.

Eric S. Raymond
After the Revolution: Coping with Open Source Success
7PM Monday 29 November 2004
ACES 2.302
University of Texas at Austin

This talk is sponsored by the UT School of Information and the Capital Area Central Texas Unix Society (CACTUS). I believe EFF-Austin and the Austin Linux User’s Group are kibitzing as well.

-jsq

Information Security Considered Difficult

In a 2001 paper, “Why Information Security is Hard — An Economic Perspective,” Ross Anderson of the University of Cambridge gives a number of reasons why technical means alone will never be adequate for information security, including Internet security. As he says in the abstract:
“According to one common view, information security comes down to technical measures. Given better access control policy models, formal proofs of cryptographic protocols, approved firewalls, better ways of detecting intrusions and malicious code, and better tools for system evaluation and assurance, the problems can be solved.

“In this note, I put forward a contrary view: information insecurity is at least as much due to perverse incentives. Many of the problems can be explained more clearly and convincingly using the language of microeconomics: network externalities, asymmetric information, moral hazard, adverse selection, liability dumping and the tragedy of the commons.”

He uses a number of examples to make his point, among them distributed denial of service (DDoS) attacks that use subverted machines to launch a combined attack at a target. Particularly good machines to subvert for this purpose are end-user machines, because the typical end-user has little incentive to pay anything to prevent their machine from being used to attack some large corporate entity with which the user feels no identification. In many of the examples, the common thread is that
“In general, where the party who is in a position to protect a system is not the party who would suffer the results of security failure, then problems may be expected.”
Anderson amusingly points out that a tenth-century Saxon village had community mechanisms to deal with this sort of problem, while in the twenty-first century we don’t.
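Anderson's incentive argument can be put in toy numbers. The figures below are invented purely to illustrate the externality, not drawn from his paper:

```python
# Hypothetical numbers illustrating Anderson's externality argument:
# the party who could secure the machine bears the cost, while the
# damage falls almost entirely on a third party.
cost_to_secure = 50.0        # end user's cost to harden their own PC (assumed)
damage_to_user = 1.0         # expected harm to that user if their PC joins a botnet
damage_to_target = 500.0     # expected share of DDoS damage inflicted on the victim

# Rational for the individual user: don't pay, since $50 > $1 of personal risk.
user_secures = cost_to_secure < damage_to_user
print(user_secures)  # False: the user has no incentive to act

# Yet society as a whole would be better off if the user did pay.
socially_worthwhile = cost_to_secure < damage_to_user + damage_to_target
print(socially_worthwhile)  # True: the classic misaligned-incentive gap
```

Whatever the real dollar amounts, the gap between the two comparisons is the point: the user's private calculation and the social calculation come out differently.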

The key here is that it is an aggregate problem and we need collective measures to deal with it. In a Saxon village peer pressure may have been enough, and if that didn’t work they may have resorted to the stocks or some similar subtle measure.

Today we may have made some progress in alarming end users by pointing out that 80% of end-user systems are infected with spyware and that botnets of compromised systems are widespread. On the other hand, such numbers indicate that education thus far hasn’t solved the problem. Similarly, that anyone is still using Internet Explorer after the events of this past summer indicates that users are not taking sufficient steps.

A more obvious method would be to make the software vendors liable. Why should operating systems still be sold with open security holes right out of the box, and why should applications still be sold that have bad security designed in? An obvious answer that I don’t think the paper addresses is that some vendors of such software have enough lobbyists to prevent vendor liability laws from being passed. Anderson’s paper goes into more subtle reasons such as ease of use, number of users, switching costs, etc.

There’s an intermediate method that Anderson attributes to Hal Varian, which is to make the Internet Service Providers (ISPs) take responsibility for malign traffic originating from their users. This may be happening somewhat, but it has its own problems, especially in implementation, which I may come back to in another post.

But the main point of Anderson’s article is clear and compelling: technical means are not sufficient to provide information security. Non-technical information security strategies are needed.

-jsq

Ensuring Business Continuity for Banks

Here’s an interesting passage from a document published by the Basel Committee called “Risk Management Principles for Electronic Banking”:

Legal and Reputational Risk Management

To protect banks against business, legal and reputation risk, e-banking services must be delivered on a consistent and timely basis in accordance with high customer expectations for constant and rapid availability and potentially high transaction demand. The bank must have the ability to deliver e-banking services to all end-users and be able to maintain such availability in all circumstances. Effective incident response mechanisms are also critical to minimise operational, legal and reputational risks arising from unexpected events, including internal and external attacks, that may affect the provision of e-banking systems and services. To meet customers’ expectations, banks should therefore have effective capacity, business continuity and contingency planning. Banks should also develop appropriate incident response plans, including communication strategies, that ensure business continuity, control reputation risk and limit liability associated with disruptions in their e-banking services.

The document also says that the reason it sets forth principles instead of rules or even best practices is that it expects that innovation will outmode anything even as specific as best practices.

-jsq

InnoTech and InfraGard

At InnoTech I was followed by the FBI. Chronologically, not physically: they spoke next.

They spoke about InfraGard, a public-private partnership in which the FBI organizes information sharing about intrusions and other security matters among businesses, academia, and law enforcement agencies. It has been going on since 1996, and has been national since 1998.

The InfraGard talk was mostly a good update on the current state of security, both online and physical, plus information on how to join.

-jsq

The Bazaar

It’s been a while since the last post. I plead flu. It has advantages, though: I lost 10 pounds in 2 weeks.

I’m several conferences behind in writeups. Back at Linucon, I chatted a bit with Eric Raymond, author of The Hacker’s Dictionary, The Cathedral & the Bazaar, and The Art of Unix Programming.

Of those books, the most relevant to this post is The Cathedral & the Bazaar. Its thesis is pretty simple, but let me paraphrase it and oversimplify it: software built to elaborate specifications by teams of programmers, with flying buttresses and fancy rose windows, isn’t necessarily better (more capable, more robust, more user-friendly, better-selling, etc.) than software built by loosely knit teams of people building the parts they want to use. Closed source vs. open source. Back when I published the first printed version of Eric’s paper on this subject, this was a radical thesis. Not to its practitioners, of course, since the Berkeley Unix system, for example, had been produced by such methods back in the 1980s, and Linux was already spreading rapidly in the 1990s. Yet it was radical to those not familiar with it. Nowadays many companies are using it, and much open source software has become quite popular.

However, the idea extends beyond software, and it appears that many people have worked out aspects of it from different directions. For example, David Weinberger’s Small Pieces Loosely Joined deals with many of the same ideas related to the World Wide Web. Eric’s most recent book is also relevant, since the Unix philosophy has always involved small pieces connected together in various ways instead of large monolithic programs.

John Robb’s Global Guerrillas blog has explicitly cited the Bazaar open source idea in relation to ideas of asymmetric warfare. Robb had previously cited a long list of books that are more obviously about warfare, the most seminal of which is probably Boyd: The Fighter Pilot Who Changed the Art of War by Robert Coram. This is a biography of John R. Boyd, who started out as a fighter pilot (never defeated), wrote a manual on aerial jet combat that is apparently still in use, “stole” a million dollars worth of computer time in order to develop his theory of why he never lost, which led to designing airplanes including the F-15 and F-16, and eventually, via intensive reading of history, to a theory of warfare that has since been adopted by the U.S. Marine Corps, as well as by other, less savory, parties. It is known by various names, such as “fourth generation warfare,” “asymmetric warfare,” or “highly irregular warfare.”

Someone else approaching many of the same topics is Albert-László Barabási in his book Linked, about scale-free networks; I’ve mentioned his book a number of times already in this blog.

What do all these things have to do with one another? They’re all about organizing loosely joined groups without rigid top-down command and control. They all also have to take into account how such organizations can deal with more traditional command-and-control organizations; which has what advantage; and how.

What does this have to do with Internet risk management strategies? The Internet is a loosely coupled non-hierarchical distributed network. No single organization can control it. Any organization that wants to use it would do well to accept that the Internet introduces sizeable elements that cannot be controlled and therefore risks that must be managed without direct control.

-jsq

Bandwidth Futures

Looking backwards a couple of years, here’s an interesting article about carriers hedging risks by taking out options on future bandwidth prices, among various other forms of risk management (anything except bandwidth trading): “Carriers Seek Rewards of Risk Management” by Josh Long in PHONE+, January 2002. One of the most interesting passages I think is this one about carriers not necessarily knowing themselves:
“Ciara Ryan, a partner in the bandwidth team at global consulting firm Andersen, agrees. Ryan explains the lack of visibility is due in part to mergers and acquisitions creating carriers that are an amalgam of many parts. The information pertaining to these assets has been integrated poorly, making it difficult to employ risk-management tactics, she says.

“Ryan says carriers must be able to extrapolate key bits of information from their databases to manage their network assets properly. This would include how much they have sold on a particular route, from which point of presence (PoP) it was sold, what the service level agreement (SLA) entailed, whether an option was sold on the contract, whether a contract was a short-term lease or indefeasible rights of use (IRU) agreement and what the definite and projected sales include on particular routes.

“‘Very, very few of them would be able to give you this information,’ Ryan adds.”
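The inventory Ryan describes could be sketched as a simple per-route record. The field names and values below are purely illustrative, not any carrier's actual schema:

```python
from dataclasses import dataclass

# A hypothetical per-route record capturing the data points Ryan lists.
@dataclass
class RouteInventory:
    route: str             # e.g. "London-New York"
    pop_sold_from: str     # point of presence the capacity was sold from
    capacity_sold: float   # how much has been sold on this route (Gbps)
    sla_terms: str         # what the service level agreement entailed
    option_sold: bool      # whether an option was sold on the contract
    contract_type: str     # "short-term lease" or "IRU"
    projected_sales: float # definite and projected sales on this route (Gbps)

inv = RouteInventory("London-New York", "Telehouse London", 10.0,
                     "99.99% availability", False, "IRU", 25.0)
print(inv.contract_type)  # IRU
```

Ryan's point is that very few carriers could populate even a record this simple for their own routes, which is what makes risk management so hard for them.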

And that’s before considering paths all the way from the carrier’s customer to that customer’s own customers. If the carriers don’t even know what their own networks consist of, it would appear they can’t be expected to provide a holistic and synoptic view of the Internet, either one by one or all together.

-jsq

Linucon Security

Among the 4 panels I was on at Linucon was one about network security. Panelists Stu Green and Michael Rash led a lively discussion about technical security measures, with much audience interaction. After a while, I asked, “So what do you do when all your technical security measures fail?” Dead silence. Then a number of the usual answers, including “Get another job.”

Then I launched into the Internet2 slides, about slowness several hops out, nonredundant links, congestion, etc. More lively discussion ensued. The panel was scheduled for an hour, but nobody wanted to stop, so we went on for another half hour.

Eventually I asked if anybody wanted to hear the Ancient Anasazi story.

They said yes, so I told them, or, rather, answered questions they asked in all the right places. You never know what will be a crowd pleaser.

-jsq

Negligence or Risk?

Here’s an interesting paper by Carter Schoenberg about “Information Security & Negligence: Targeting the C-Class.” Carter is a former homicide detective, and he takes a concrete case-study approach:
“Rather than analyzing a hypothetical situation where a company is hacked by one of several means or subjected to the involuntary mass-propagation of a virus or worm, let’s focus on a real-life incident, dissect each component supported by fact and effectively diagram a blueprint for how you can not only be targeted for a lawsuit or criminal prosecution, but demonstrate how you will lose. This loss will inflict a financial casualty, which may dramatically impact an organization’s fiscal health.”

His example is of a financial institution that had a web page defaced, apparently because it hadn’t applied a patch to its IIS 5 (Microsoft’s Internet Information Server version 5). Yet no customer or financial data was compromised as a result of the attack. Nonetheless, the financial institution had a responsibility to provide access to financial information and transactions to its customers, and according to news reports, customers had limited or no access during the attack. So was the financial institution negligent? How would you prove it?

Laws for negligence vary by state, but in the U.S. there are now a number of national laws that take precedence, such as Sarbanes-Oxley, HIPAA, and the Gramm-Leach-Bliley Act. These permit discussing this case in specific terms, including quantifiable harm, opportunity, and motive. Carter goes so far as to say:

“This scenario will ultimately lead to shareholders targeting corporate executives as being personally liable, seeking seizure of personal wealth and even criminal sanctions.”

Technical IT and security personnel probably won’t avoid involvement, either; at the least they may be deposed in a lawsuit.

Note that Carter’s example case did not involve a cracker specifically targeting that particular financial services institution for any reason other than that it happened to be running an unpatched copy of IIS, along with a number of other organizations that were attacked. So this was a force majeure event in the sense that it was not specifically targeted and had aggregate effect. However, there were things the financial institution could have done, but didn’t, of which patching the software was only the first.

Carter includes an interesting timeline showing that patching shortly after the point of discovery is manageable risk, and that a vulnerability left unpatched develops over time into negligence and eventually gross negligence. And he compares the typical cost for a system administrator to apply a patch with the fines and fees in a court case: they differ by more than a factor of 100.
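That escalating timeline and cost comparison can be sketched roughly as follows. The day thresholds and dollar figures are invented for illustration, not taken from Carter's paper:

```python
# A sketch of the escalating-liability timeline Carter describes; the
# day thresholds here are invented for illustration only.
def liability_exposure(days_unpatched: int) -> str:
    if days_unpatched <= 30:
        return "manageable risk"
    if days_unpatched <= 180:
        return "negligence"
    return "gross negligence"

# His cost comparison: patching is cheap, litigation is not.
admin_patch_cost = 500           # assumed sysadmin time to apply one patch
litigation_cost = admin_patch_cost * 100  # fines and fees run more than 100x higher

print(liability_exposure(7))    # manageable risk
print(liability_exposure(365))  # gross negligence
```

However the exact thresholds are drawn, the shape of the curve is the point: the same unapplied patch becomes steadily harder to defend in court as time passes.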

In such a situation I think I’d want all three of: a high standard of diligence, robust D&O insurance, and Internet business continuity insurance. And if I were a system administrator, I’d want specific written policies, approved by corporate executives, that I could follow to the letter, so as to avoid being the target of a lawsuit.

Carter also mentions settlement of a lawsuit as being preferable to litigation partly because of risk of public exposure and resulting loss of reputation and perhaps financial consequences. I wonder if we shouldn’t turn that around and establish reputation systems that discover unapplied patches, nonredundant routes, congestion, etc., inform the affected enterprise first, and after a decent interval make remaining problems public. Such a reputation system would be tricky to implement, since it would not want to invite crackers to attack. However, it might be possible to do it in such a way as to encourage enterprises to patch and deploy so as to avoid security problems. And while I’m not a lawyer or a police detective, I would guess that companies that patched when warned would be less likely to be held liable in legal cases.
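The notify-first, publish-later workflow suggested above might look something like this in outline. The thirty-day grace period and the status names are hypothetical, chosen only to make the idea concrete:

```python
from datetime import date, timedelta

# A sketch of the reputation-system workflow: notify the affected
# enterprise privately, then publish only if the problem persists
# past a decent interval.
GRACE_PERIOD = timedelta(days=30)  # hypothetical "decent interval"

def disclosure_status(found: date, fixed: bool, today: date) -> str:
    if fixed:
        return "closed"                      # enterprise patched; nothing to publish
    if today - found < GRACE_PERIOD:
        return "private notification only"   # decent interval not yet elapsed
    return "publish to reputation system"    # remaining problem made public

# A problem found Nov 1 and still unfixed in mid-December goes public.
print(disclosure_status(date(2004, 11, 1), fixed=False, today=date(2004, 12, 15)))
```

The hard part, as noted above, is doing this without handing crackers a target list, which is why the private-notification stage has to come first.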

Carter’s article is a very interesting nuts and bolts examination of just what legal liability might mean for a specific case. There are a number of points in there that I suspect many enterprises have not considered. Well worth a read.

-jsq