Category Archives: Internet risk management strategies

Linucon Security

Among the four panels I was on at Linucon was one about network security. Panelists Stu Green and Michael Rash led a lively discussion about technical security measures, with much audience interaction. After a while, I asked, “So what do you do when all your technical security measures fail?” Dead silence. Then a number of the usual answers, including “Get another job.”

Then I launched into the Internet2 slides, about slowness several hops out, nonredundant links, congestion, etc. More lively discussion ensued. The panel was scheduled for an hour, but nobody wanted to stop, so we went on for another half hour.

Eventually I asked if anybody wanted to hear the Ancient Anasazi story.

They said yes, so I told them, or, rather, answered questions they asked in all the right places. You never know what will be a crowd pleaser.

-jsq

Negligence or Risk?

Here’s an interesting paper by Carter Schoenberg about “Information Security & Negligence: Targeting the C-Class.” Carter is a former homicide detective, and he takes a concrete case-study approach:
“Rather than analyzing a hypothetical situation where a company is hacked by one of several means or subjected to the involuntary mass-propagation of a virus or worm, let’s focus on a real-life incident, dissect each component supported by fact and effectively diagram a blueprint for how you can not only be targeted for a lawsuit or criminal prosecution, but demonstrate how you will lose. This loss will inflict a financial casualty, which may dramatically impact an organization’s fiscal health.”

His example is of a financial institution that had a web page defaced, apparently because it hadn’t applied a patch to its IIS 5 (Microsoft’s Internet Information Services, version 5). Yet no customer or financial data was compromised as a result of the attack. Nonetheless, the financial institution had a responsibility to provide its customers access to financial information and transactions, and according to news reports, customers had limited or no access during the attack. So was the financial institution negligent? How would you prove it?

Laws for negligence vary by state, but in the U.S. there are now a number of national laws that take precedence, such as Sarbanes-Oxley, HIPAA, and the Gramm-Leach-Bliley Act. These permit discussing this case in specific terms, including quantifiable harm, opportunity, and motive. Carter goes so far as to say:

“This scenario will ultimately lead to shareholders targeting corporate executives as being personally liable, seeking seizure of personal wealth and even criminal sanctions.”

Technical IT and security personnel probably won’t avoid involvement, either; at the least they may be deposed in a lawsuit.

Note that Carter’s example case did not involve a cracker specifically targeting that particular financial services institution for any reason other than that it happened to be running an unpatched copy of IIS, along with a number of other organizations that were attacked. So this was a force majeure event in the sense that it was not specifically targeted and had aggregate effect. However, there were things the financial institution could have done, but didn’t, of which patching the software was only the first.

Carter includes an interesting timeline: patching shortly after the point of discovery counts as manageable risk, but the longer a system stays unpatched, the further the exposure shades into negligence and eventually gross negligence. And he compares the typical cost for a system administrator to apply a patch with the fines and fees of a court case: they differ by more than a factor of 100.
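
To make the economics concrete, here is a toy sketch in Python of that timeline and cost comparison. The day thresholds and dollar figures are invented for illustration only; Carter’s paper supplies the actual analysis.

    # Illustrative sketch of the patch-delay timeline. The thresholds
    # and costs below are hypothetical, not Carter's figures.

    PATCH_COST = 500          # hypothetical cost to apply one patch
    LITIGATION_COST = 60_000  # hypothetical fines and fees in a court case

    def risk_posture(days_unpatched: int) -> str:
        """Classify unpatched exposure as it ages."""
        if days_unpatched <= 30:
            return "manageable risk"
        elif days_unpatched <= 180:
            return "negligence"
        return "gross negligence"

    for days in (7, 90, 365):
        print(days, "days unpatched:", risk_posture(days))

    print(f"litigation costs ~{LITIGATION_COST // PATCH_COST}x the patch")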

In such a situation I think I’d want all three of a high standard of diligence, robust D&O insurance, and Internet business continuity insurance. And if I were a system administrator, I’d want specific written policies, approved by corporate executives, that I could follow to the letter, so as to avoid being the target of a lawsuit.

Carter also mentions settlement of a lawsuit as preferable to litigation, partly because of the risk of public exposure and the resulting loss of reputation, with perhaps financial consequences. I wonder if we shouldn’t turn that around and establish reputation systems that discover unapplied patches, nonredundant routes, congestion, and the like, inform the affected enterprise first, and after a decent interval make any remaining problems public. Such a reputation system would be tricky to implement, since it should not invite crackers to attack. However, it might be possible to build it in a way that encourages enterprises to patch and deploy so as to avoid security problems. And while I’m not a lawyer or a police detective, I would guess that companies that patched when warned would be less likely to be held liable in legal cases.
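
As a thought experiment, here is a minimal sketch of that “inform first, publish later” mechanism. Everything in it, including the thirty-day grace period and the data structure, is an arbitrary choice for illustration, not an existing system.

    # Hypothetical reputation-system record: notify the enterprise
    # privately, publish only after a grace period has elapsed.

    from datetime import date, timedelta

    GRACE_PERIOD = timedelta(days=30)   # the "decent interval", chosen arbitrarily

    class Finding:
        def __init__(self, enterprise: str, problem: str, reported: date):
            self.enterprise = enterprise   # notified privately on `reported`
            self.problem = problem
            self.reported = reported

        def publishable(self, today: date) -> bool:
            """True only once the enterprise has had the full grace period."""
            return today - self.reported >= GRACE_PERIOD

    f = Finding("example-bank.com", "unpatched IIS 5", date(2004, 9, 1))
    print(f.publishable(date(2004, 9, 15)))   # False: still in the quiet window
    print(f.publishable(date(2004, 10, 15)))  # True: remaining problems go public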

Carter’s article is a very interesting nuts and bolts examination of just what legal liability might mean for a specific case. There are a number of points in there that I suspect many enterprises have not considered. Well worth a read.

-jsq

Time for a de facto electronic mail authentication system?

David Berlind of ZDNet News says in “Catastrophic Loss for unencumbered Standards” that the IETF working group on the most promising mail authentication system (MARID) has been shut down, due to technical and business differences among its participants, plus, it seems, Microsoft’s attempt to patent the approach the group was standardizing.

That leaves Meng Weng Wong’s Sender Policy Framework (SPF) as the main non-proprietary solution in this space, not to mention the most widely adopted.
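
For the curious, here is roughly what an SPF policy looks like. The record and domain below are made up, and real validators implement the full specification rather than this toy parsing, but it shows the idea: a domain publishes in DNS the list of hosts allowed to send mail on its behalf.

    # A made-up SPF record; real ones are published as DNS TXT records.
    spf_record = "v=spf1 ip4:192.0.2.0/24 include:mail.example.com -all"

    def parse_spf(record: str) -> list[str]:
        """Split an SPF TXT record into its mechanisms (simplified)."""
        parts = record.split()
        assert parts[0] == "v=spf1", "not an SPF record"
        return parts[1:]

    for mechanism in parse_spf(spf_record):
        print(mechanism)
    # "-all" at the end means: reject mail from any host not matched above.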

Berlind calls for the Internet mail industry to follow the precedent of the financial industry, in which the principal vendors banded together and set a de facto standard for Electronic Funds Transfer (EFT).

One of the most likely groups to do this met in DC yesterday and today: the Anti-Phishing Working Group. Both Meng Weng and someone from Microsoft were there, as well as representatives from many well-known Internet security companies and many companies affected by phishing and spam.

I don’t see an industry-wide standard coming out of this meeting, but there are more meetings planned in short order….

-jsq

PS: Thanks to Bruce Sterling for blogging about Berlind’s article.

Congressional recommendations for Internet security

Previously I mentioned Government mandates in networking and security.

Here’s a Congressional subcommittee working on government recommendations in Internet security:

The Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census, chaired by Adam H. Putnam of Florida, part of Rep. Tom Davis’ Committee on Government Reform.

Back in June, Rep. Putnam remarked:

“Make no mistake. The threat is serious. The vulnerabilities are extensive. And the time for action is now.”

So far, Putnam’s subcommittee has been collecting information and testimony. However, he may go farther:

“Rep. Adam Putnam (R-Fla.) last fall drafted the Corporate Information Security Accountability Act of 2003, which would require companies to button down their information systems. The bill has not yet gone before the House of Representatives, but many of the proposals in Putnam’s draft as well as other recommendations are being batted about in a working group created by the subcommittee Putnam chairs, the Government Reform Subcommittee on Technology, Information Policy, Intergovernmental Relations and the Census.

In the name of protecting national infrastructure, you may be asked to conduct annual security audits, produce an inventory of key assets and their vulnerabilities, carry cybersecurity insurance and even have your security measures verified by independent third parties, if the core features of the proposed legislation make it to the floor of the House.”

So far, this appears to stop short of mandating technology, sticking instead to best practices. We’ll see.

-jsq

Government mandates in networking and security

Phil Libin remarks, regarding a recent White House common ID mandate for federal employees and contractors:

“Just as with the development of the Internet, the federal government is once again the main initial catalyst for new technology that’s going to change the foundations of mainstream business transactions in the near future.”

Indeed, ARPA (now DARPA) funded the early ARPANET, which led to the Internet, and DCA (now DISA), among other agencies, promoted it by buying equipment from fledgling Internet vendors.

However, let’s not forget that the federal government also promulgated GOSIP, a requirement that computer systems sold to the federal government support the ISO-OSI protocol suite, which was similar to TCP/IP but different. Different in that while TCP/IP was the result of multiple implementations interacting with standardization, ISO-OSI was a product of standards committees, and lacked not only implementations but, even more, users. GOSIP was a waste of time and money. Fortunately, the U.S. government wasn’t as serious about ISO-OSI as many European governments and the EU were; in Europe OSI held back internetworking until the rapid deployment of the Internet in the U.S. and elsewhere made it clear that OSI was going nowhere.

Where the U.S. government succeeded in networking was in promoting research, development, implementation, and deployment. Where it failed was when it tried to mandate a technical choice.

Hm, I see the White House directive gives the Dept. of Commerce six months to consult with other government agencies and come up with a standard. If there’s a requirement to consult with industry or academia, I don’t see it.

I hope this comes out better than, for example, key escrow, a previous government attempt to mandate security technology.

-jsq

Pirates, Then and Now

Andy Oram has posted a review of Villains of All Nations: Atlantic Pirates in the Golden Age by Marcus Rediker, in which he notes that old-time sea-pirates (har har!) weren’t just criminals; they were to some extent early capitalists, and pioneers of social methods such as a form of social security. The more basic point is that pirates existed partly because the more traditional economic systems of their day did not provide some things that many people wanted. One could turn that around and say that the widespread availability of deep-sea vessels enabled global piracy.

What does this have to do with the Internet? It is a new sea with its own pirates, some of them easy to spot, such as terrorists and crackers, and others in legal limbo, such as p2p software providers and users. Some people say p2p software providers are pirates, but a court just said they aren’t.

The relevance to Internet risk management is that there will be various uses of the Internet ranging from clearly legal through grey to plainly illegal, some of which may affect your enterprise. These are just risks that need to be managed. In some cases legal measures may be appropriate. In others, reputation systems may suffice to change behavior. For other cases, enterprises need to protect themselves via insurance or other financial instruments. Ignoring these risks won’t make them go away.

-jsq

Data Objects: Forts (Geer) or Spimes (Sterling)?

Speaking the other week at different conferences, Dan Geer and Bruce Sterling provided different views of the future of Internet and world governance, or, more specifically, the continued involvement of meritocracy in it.

Dr. Dan Geer is a famous security expert with more than a passing interest in the big picture. Bruce Sterling is a famous science fiction writer with more than a passing interest in technological details. I read both their stuff all the time. It’s interesting to see them produce such divergent prognostications.

Here’s Dr. Dan Geer:

“At the same time, increasing threat will, as it must, lead to shrinking perimeters thus away from a focus on enterprise-scale perimeters and more toward perimeters at the level of individual data objects. Security and privacy are, indeed, interlocking but, much as with twins in the womb, the neoplastic growth of the one will be to the detriment of the other hence the bland happy talk of there being no conflict between the two will soon be shown to be merely that. Finally, the Internet as a creature built by, of, and for the technical and ethical elite being no longer consistent with the facts on the ground, its meritocratic governance will yield to the anti-meritocratic tendencies of government(s).”

–Dan Geer, USENIX Security Symposium, 12 August 2004, page 20.

And here’s Bruce Sterling:

“You might think, now that Hollywood slums around your gig, and even novelists show up, and Pixar drags Disney around by its big financial nose, that there were no new worlds to conquer for SIGGRAPH. But there’s one world that you direly need to conquer anyway. Even if hobbits win Oscars by the bushel full.

“Having conquered the world made of bits, you need to reform the world made of atoms. Not the simulated image on the screen, but corporeal, physical reality. Not meshes and splines, but big hefty skull-crackingly solid things that you can pick up and throw. That’s the world that needs conquering. Because that world can’t manage on its own. It is not sustainable, it has no future, and it needs one.

[After much development of the idea of spimes, which are objects that tell the user all about themselves and everything related to them….]

“The upshot is that the object’s nature has become transparent. It is an opened object.

“In a world with this kind of object, you care little about the object per se; that physical object is just a material billboard for tomorrow’s vast, digital, interactive, postindustrial support system. This is where people like you, your evolved successors, rule the earth. This is a world where the Web has ceased to be a varnish on barbarism, and where the world is now varnish all the way down.

“By making the whole business transparent, a host of social ills and dazzling possibilities are exposed to the public gaze. Everyone who owns a spime becomes, not a mute purchaser, but a stakeholder. And the closer you get to it, the more attention it sucks from you. You don’t just use it, any more than I can pick up this Treo and just make a simple phone call. This device wants to haul me into the operating system; I’m supposed to tell all my friends about it. We’re all supposed to become its darlings and its cultists, we’re all supposed to help out. Sometimes we do that willingly, sometimes we just fight for breath. We’re not customers. We’re not consumers. And with spimes, we’re not even end-users. We spend our time wrangling with the real problems and opportunities of material culture. We’re wranglers.”

–Bruce Sterling, SIGGRAPH, 9 August 2004

I suppose comparing them next to each other like this is not completely fair, since Dan was speaking about Internet security over the next decade, and Bruce was talking about the entire material world longer term.

Maybe they’re both right. Maybe first we have to go through a defensive regimented unsustainable period before we can get to a transparent integrated enhancing future.

Or maybe the details of what Dan was talking about are part of the way to what Bruce was talking about. If, as Dan recommends, we beg, borrow, or steal metrics from public health, accelerated failure time testing, insurance, portfolio management, and physics; distribute the resulting measurements through various forms of information sharing; take many of the measurements in a distributed manner; and connect that up with gizmos for people to use, don’t we get pretty close to Bruce’s spimes?

With Dan’s recommended always-on sensor network, crackers and terrorists won’t be able to sneak in exploits without their becoming known. This doesn’t mean exploits won’t happen; however, it may mean that the perpetrators will be more likely to be caught, and faster. And that companies and individuals will have more incentive to install patches. And that vendors will have more incentive not to sell buggy software. And that insurers can better cover the business losses that happen anyway.

In other words, maybe increasing threat leads not to shrinking perimeters, but rather to expanding interdependence and transparency.

Security and privacy may or may not be a zero-sum game.

Security and liberty are not a zero-sum game.

-jsq

Traditional Security: the Arthashastra

According to tradition, around 300 B.C. Vishnugupta Kautilya wrote a book called the Arthashastra in which he spelled out in exhaustive detail the methods of statecraft, economics, law, war, etc. that he recommended, and that he had used to make Chandragupta Maurya emperor of India. Missing nothing, he identifies force majeure events in much the same way we do today:

Calamities due to acts of God are: fires, floods, diseases and epidemics, and famine.

Other calamities of divine origin are: rats, wild animals, snakes, and evil spirits. It is the duty of the king to protect the people from all these calamities.

He recommends the government be the guarantor not only of last resort but of first resort:

Whenever danger threatens, the King shall protect all those afflicted like a father and shall organize continuous prayers with oblations.

And he recommends specific measures:

All such calamities can be overcome by propitiating Gods and Brahmins. When there is drought or excessive rain or visitations of evil, the rites prescribed in the Atharva Veda and those recommended by ascetics shall be performed. Therefore, experts in occult practices and holy ascetics shall be honoured and thus encouraged to stay in the country so that they can counteract the calamities of divine origin.

He provides a handy table of which deities to propitiate for which calamity, for example Agni the god of fire for wildfires.

To be fair, he also includes practical instructions for specific calamities, such as:

During the rainy season, villagers living near river banks shall move to higher ground; they shall keep a collection of wooden planks, bamboo, and boats.

In addition, the King is to keep stores of food and seeds to distribute in case of famine. So Kautilya advises some collective action as practical insurance.

He also discusses relative seriousness of calamities, dismissing irremediability in favor of breadth of effect. Some previous pundits had ranked fire as the most serious, because it burns things up irremediably, but Kautilya ranks flood and famine as most serious because they can affect whole countries, followed by fire and disease, then by local problems such as rats. So the concept of aggregation as used by modern insurers is apparently at least 2300 years old.

Nonetheless, Kautilya does not mention pooling finances in a form that would be recognizable as insurance. That was a risk management strategy yet to be invented in India.

The Arthashastra by Kautilya, edited, rearranged, translated, and introduced by L.N. Rangarajan. Penguin Books, 1992.

-jsq

McNamara on Security

I’ve mostly been writing about contemporary events or reports. Let’s go back 38 years, to 1966, and listen to U.S. Secretary of Defense Robert S. McNamara speak in Montreal, after he got over his earlier enthusiasm for applying scientific management and engineering to the military, and as he saw a different path forward:

“There is still among us an almost [in]eradicable tendency to think of our security problem as being exclusively a military problem–and to think of the military problem as being exclusively a weapons-system or hardware problem.”

This sounds a lot like our contemporary Internet security problems: we have an ingrained tendency to think of them as technical problems. We keep adding more defensive systems, and sometimes, as with spam blocking lists, systems that amount to offensive ones.

Yet, as McNamara pointed out:

“The plain, blunt truth is that contemporary man still conceives of war and peace in much the same stereotyped terms that his ancestors did.

“The fact that these ancestors, both recent and remote, were conspicuously unsuccessful at avoiding war, and enlarging peace, doesn’t seem to dampen our capacity for cliches.”

Internet security problems keep getting worse no matter how many firewalls, patches, and intrusion detection systems we throw at them. These things are all necessary, but they are not sufficient.

“A nation can reach the point at which it does not buy more security for itself simply by buying more military hardware. We are at that point. The decisive factor for a powerful nation already adequately armed is the character of its relationships with the world.”

McNamara goes on to say security is development, and to define development as economic, social, and political progress. I don’t think we can push our analogy that far. Crackers will attack just for the hell of it.

However if we abstract his point slightly, we can see the analogy. Some security problems are beyond the capabilities of a single company, no matter how large and capable the company. The power grid can fail; the telephone system can fail; and the Internet can fail. No single company can prevent those things, nor hurricanes, tornados, fires, and floods.

In the politics of nation-state security, McNamara says development is the answer and sometimes military force is needed to provide order so development can happen.

In corporate Internet security, other means are available, just as they have been since the seventeenth century: insurance and its relatives. A corporation can ameliorate its risk by pooling it with the similar risks of other corporations, whether by buying insurance or by using other financial risk-transfer instruments.
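
A toy simulation shows why pooling works. The loss size and incident probability below are invented (the $2 million echoes the worm item later in this archive), but the effect is general: a single firm’s annual loss is all-or-nothing, while a large pool’s per-firm share settles near the expected value, which is what an insurance premium can price.

    # Toy risk-pooling simulation with invented numbers.
    import random
    random.seed(42)

    LOSS = 2_000_000    # cost of one bad incident (invented)
    P_INCIDENT = 0.1    # chance a given firm is hit in a given year (invented)

    def yearly_loss() -> int:
        return LOSS if random.random() < P_INCIDENT else 0

    # One firm alone faces either $0 or $2,000,000: wildly unpredictable.
    # Across 1,000 firms, the per-firm share clusters near the expected
    # value P_INCIDENT * LOSS = $200,000.
    pool = [yearly_loss() for _ in range(1000)]
    print("per-firm share of pooled losses:", sum(pool) // len(pool))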

McNamara also said:

“The plain truth is the day is coming when no single nation, however powerful, can undertake by itself to keep the peace outside its own borders. Regional and international organizations for peacekeeping purposes are as yet rudimentary, but they must grow in experience and be strengthened by deliberate and practical cooperative action.”

In Internet security, cooperative action can include reputation systems such as the incident reports by CERT and US-CERT. It can also include more direct action by groups such as the Anti-Phishing working group.

The main point is the same as McNamara’s: companies can’t go it alone anymore in Internet security; various forms of cooperation are needed. These forms are new Internet risk management strategies, including financial risk instruments and reputation systems.

The following year McNamara resigned from the U.S. government and became president of the World Bank, attempting to implement what he recommended. (Whether the World Bank has succeeded is another subject.)

This speech by McNamara is surprisingly hard to find online; thanks to Dave Hughes for making it available:

“Security in the Contemporary World,”
Robert S. McNamara, U.S. Secretary of Defense,
before the American Society of Newspaper Editors,
Montreal, Canada, May 18th, 1966

It is apparently also recorded in the Congressional Record, May 19, 1966, vol. 112, p. 11114.

-jsq

Attack of the $2M Worm

Talking about risk management strategies for the Internet is often like talking about backups: people don’t want to deal with it until they see significant damage that directly affects them. Companies don’t want to spend money on insurance or time on preparing a risk management plan until they’ve experienced undeniable damage.

This CNET item is relevant: “The attack of the $2 million worm.”

“Internet-based business disruptions triggered by worms and viruses are costing companies an average of nearly $2 million in lost revenue per incident, market researcher Aberdeen said on Tuesday.

“Out of 162 companies contacted, 84 percent said their business operations have been disrupted and disabled by Internet security events during the last three years. Though the average rate of business operations disruption was one incident per year, about 15 percent of the surveyed companies said their operations had been halted and disabled more than seven times over a three-year period.”
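
Taking the survey’s numbers at face value, the annualized loss expectancy arithmetic is straightforward; the figures below come directly from the quote, and nothing else is assumed.

    # Back-of-the-envelope annualized loss expectancy (ALE),
    # using only the survey figures quoted above.
    COST_PER_INCIDENT = 2_000_000   # average lost revenue per incident
    INCIDENTS_PER_YEAR = 1.0        # average disruption rate

    ale = COST_PER_INCIDENT * INCIDENTS_PER_YEAR
    print(f"expected annual loss: ${ale:,.0f}")

    # The ~15 percent of firms disrupted more than seven times in three
    # years face a rate of at least 7/3 incidents per year:
    heavy_ale = COST_PER_INCIDENT * (7 / 3)
    print(f"heavily hit firms: over ${heavy_ale:,.0f} per year")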

Of course, everyone has also heard about people and companies that didn’t have adequate backups when their equipment failed. Sometimes people listen to such stories and start making backups before their computers fail.

Backups are a risk management strategy. Other risk management strategies are like backups: they’re best put in place before they’re needed.

-jsq