Category Archives: Measurement

Common Sense Lacking for Big Perils such as Georgia Hurricane or Worst-Case Worm

Why it’s not good to depend on common sense for really big perils:
The models these companies created differed from peril to peril, but they all had one thing in common: they accepted that the past was an imperfect guide to the future. No hurricane has hit the coast of Georgia, for instance, since detailed records have been kept. And so if you relied solely on the past, you would predict that no hurricane ever will hit the Georgia coast. But that makes no sense: the coastline above, in South Carolina, and below, in Florida, has been ravaged by storms. “You are dealing with a physical process,” says Robert Muir-Wood, the chief scientist for R.M.S. “There is no physical reason why Georgia has not been hit. Georgia’s just been lucky.” To evaluate the threat to a Georgia beach house, you need to see through Georgia’s luck. To do this, the R.M.S. modeler creates a history that never happened: he uses what he knows about actual hurricanes, plus what he knows about the forces that create and fuel hurricanes, to invent a 100,000-year history of hurricanes. Real history serves as a guide: it enables him to see, for instance, that the odds of big hurricanes making landfall north of Cape Hatteras are far below the odds of them striking south of Cape Hatteras. It allows him to assign different odds to different stretches of coastline without making the random distinctions that actual hurricanes have made in the last 100 years. Generate a few hundred thousand hurricanes, and you generate not only dozens of massive hurricanes that hit Georgia but also a few that hit, say, Rhode Island.

In Nature’s Casino, by Michael Lewis, New York Times, August 26, 2007
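
To make the idea concrete, here is a minimal sketch (in Python) of the kind of synthetic-catalog simulation Lewis describes. The coastal segments and annual landfall rates below are invented purely for illustration; they are not R.M.S. numbers.

    import random

    # Hypothetical annual landfall probabilities per coastal segment.
    # These numbers are made up for illustration, not taken from any model.
    annual_landfall_rate = {
        "Florida": 0.60,
        "Georgia": 0.05,
        "South Carolina": 0.20,
        "North of Cape Hatteras": 0.02,
    }

    YEARS = 100_000
    hits = {segment: 0 for segment in annual_landfall_rate}

    random.seed(42)
    for _ in range(YEARS):
        for segment, rate in annual_landfall_rate.items():
            # Treat each year as an independent Bernoulli trial per segment.
            if random.random() < rate:
                hits[segment] += 1

    for segment, count in hits.items():
        print(f"{segment}: {count:,} simulated landfalls in {YEARS:,} years")

Even a low assumed annual rate produces many simulated Georgia landfalls over 100,000 synthetic years, which is the point: the invented history exposes risk that the short real record hides.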

And of course a hurricane did hit the Georgia coast before detailed records were kept, in 1898. The article notes that before Hurricane Andrew, insurers believed a Florida hurricane would cost at most a few billion dollars. The actual cost was more like $15.5 billion, predicted by only one person: Karen Clark, founder of A.I.R.

Sure, the Georgia coast doesn’t have any single concentration of wealth like Miami. But it does have a swath of wealth that could be taken down by a single storm, and complacent owners who think it can’t ever happen, just as people in Thailand didn’t believe Smith Dharmasaroja before the 2004 tsunami.

Meanwhile, on the Internet, the few insurers of Internet business continuity are winging it, and most companies have no insurance at all, even though online crime is becoming increasingly sophisticated, leveraging the global reach of the Internet, and the possibility of a global worm that could cause $100 billion in damage is still out there.

-jsq

Quantitative >= Qualitative

See Pete Lindstrom’s Spire Security Viewpoint for empirical evidence that mechanical quantitative diagnosis is almost always at least as good as clinical qualitative diagnosis.

There is still plenty of room for qualitative decision-making in arenas where there aren’t enough facts, or the facts haven’t been quantified, or there’s no baseline, or there’s no mechanical method yet. But where those things are available, it’s better to use them. You’ll still need qualitative judgement for cases where the algorithm is right but didn’t take unfortunate side effects into account, for instance. Even then, you’ve got a better chance of knowing what you’re doing.

-jsq

Outrage: Less and More

We’ve been discussing Outrage Considered Useful. Alex remarked in a comment:

The term "Outrage" suggests that risk cannot or should not be discussed in a rational manner.

What I think Sandman is getting at is that often risk isn’t discussed in a rational manner, because managers’ (and security people’s) egos, fears, ambitions, etc. get in the way. In a perfect Platonic world perhaps things wouldn’t be that way, but in this one, people don’t operate by reason alone, even when they think they are doing so.

Outrage x Hazard may be a means to express risk within the context of the organization, but I like probability of loss event x probable magnitude of loss better for quantitative analysis.

Indeed, quantitative analysis is good. However, once you’ve got that analysis, you still have to sell it to management. And there’s the rub: that last part is going to require dealing with emotion.
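
For concreteness, the arithmetic behind “probability of loss event x probable magnitude of loss” is a simple expected-value calculation. A minimal sketch, with made-up numbers:

    # Hypothetical figures, purely for illustration.
    annual_probability_of_breach = 0.10      # one chance in ten per year
    probable_magnitude_of_loss = 2_000_000   # dollars, if the breach happens

    expected_annual_loss = annual_probability_of_breach * probable_magnitude_of_loss
    print(f"Expected annual loss: ${expected_annual_loss:,.0f}")  # $200,000

The number itself is easy to compute; as noted above, the hard part is getting management to act on it.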

Continue reading

Metricon: Puzzle vs. Mystery

Here at Metricon 2.0, many interesting talks, as expected.

For example, Russell Cameron Thomas of Meritology mentioned the difference between puzzle thinking (looking only under the light you know) and mystery thinking (shining a light into unknown areas to see what else is out there). Seems to me most of traditional security is puzzle thinking. Other speakers and questioners said things in other talks like "that’s a business question that we can’t control" (literally throwing up hands); we can only measure where "we can intervene"; "we don’t have enough information" to form an opinion, etc. That’s all puzzle thinking.

Which is unfortunate, given that measuring only what you know makes measurements hard to relate to business needs, hard to apply to new, previously unknown problems, and very hard to use to deal with problems you cannot fix.

Let me hasten to add that Thomas’s talk, entitled "Security Meta Metrics—Measuring Agility, Learning, and Unintended Consequence", went beyond these puzzle difficulties and into mysteries such as uncertainty and mitigation.

Not only that, but his approach of an inner operational loop (puzzle) tuned by an outer research loop (mystery) is strongly reminiscent of John R. Boyd’s OODA loop. Thomas does not appear to have been aware of Boyd, which is perhaps evidence that Boyd was onto something: Thomas reinvented much the same process description independently.

-jsq

ROI v. NPV v. Risk Management

There’s been some comment discussion about security ROI. Ken Belva’s point is that you can have a security ROI, to which I have agreed (twice). Iang says he’s already addressed this topic, in a blog entry in which he points out that
Calculating ROI is wrong, it should be NPV. If you are not using NPV then you’re out of court, because so much of security investment is future-oriented.

ROI: security people counting with fingers? Iang, Financial Cryptography, July 20, 2007
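
To make the distinction concrete, here is a minimal sketch comparing the two calculations, with made-up numbers for a hypothetical security investment:

    # Hypothetical figures, purely for illustration.
    initial_cost = 100_000      # security investment made today
    annual_benefit = 40_000     # estimated losses avoided per year
    years = 4
    discount_rate = 0.10        # cost of capital

    # Simple ROI ignores when the benefits arrive.
    roi = (annual_benefit * years - initial_cost) / initial_cost
    print(f"ROI: {roi:.0%}")    # 60%

    # NPV discounts each future year's benefit back to today's dollars.
    npv = -initial_cost + sum(
        annual_benefit / (1 + discount_rate) ** t for t in range(1, years + 1)
    )
    print(f"NPV: ${npv:,.0f}")  # about $26,795

Same cash flows, two very different-looking answers, because NPV accounts for the fact that a dollar of avoided loss four years from now is worth less than a dollar today.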

Iang’s entry also says that we can’t even really do Net Present Value (NPV) because we have no way to calculate or predict actual costs with any accuracy. He also says that security people need to learn about business, which I’ve also been harping on. I bet if many security people knew what NPV was, they’d be claiming they had it as much as they’re claiming they have ROI.

Continue reading

Punching Hornets

What do science fiction writer William Gibson, global guerrilla theorist John Robb, libertarian Republican presidential candidate Ron Paul, and the late historian David Halberstam agree about?
Still, it is hard for me to believe that anyone who knew anything about Vietnam, or for that matter the Algerian war, which directly followed Indochina for the French, couldn’t see that going into Iraq was, in effect, punching our fist into the largest hornet’s nest in the world.

The Late Halberstam’s Final Verdict on Bush: “He’s No Truman”, by Adam Howard, alternet.org, 5:38 AM on July 5, 2007.

One could add Napoleon in Russia and the British in America. Funny how fighting in Russia in the winter wasn’t like Italy in the summer.

Continue reading

Usable Metrics

It’s not enough just to measure:
…most metrics that we security folks come up with are, well, boring, and effectively useless to upper management. At best they are focused on technical management such as the CIO and CSO. Like much of the rest of our industry, we metrics folks have again failed to relate our services to the business at large.

Attacking Metrics by arthur, Emergent Chaos, 20 June 2007

You need metrics that are comparable across companies, that subsume enough information to be interesting, and that are easy to explain to executives. Something like the Apdex performance measurements. Performance and security are more intertwined than most security people yet realize. And network performance people have been dealing with selling their measurements to management for some time now. Security folks might want to see how it’s already been done.
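
Apdex is a good example of that combination: the score is a single number between 0 and 1, computed as (satisfied samples + half the tolerating samples) divided by total samples, where satisfied means at or under a target time T and tolerating means between T and 4T. A minimal sketch in Python, with made-up response times:

    def apdex(response_times_s, threshold_s=0.5):
        """Apdex score: (satisfied + tolerating/2) / total samples.

        Satisfied:  response time <= T
        Tolerating: T < response time <= 4T
        Frustrated: response time > 4T (counts as zero)
        """
        satisfied = sum(1 for t in response_times_s if t <= threshold_s)
        tolerating = sum(
            1 for t in response_times_s if threshold_s < t <= 4 * threshold_s
        )
        return (satisfied + tolerating / 2) / len(response_times_s)

    # Made-up sample of page response times, in seconds.
    samples = [0.2, 0.4, 0.6, 1.1, 2.5, 0.3, 0.45, 3.0]
    print(f"Apdex(0.5s) = {apdex(samples):.2f}")

One number, comparable across applications and companies, and explainable to an executive in a sentence: the kind of properties a usable security metric needs too.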

-jsq

FISMA Failing

Shades of SOX complaints: the U.S. GAO reports that the Federal Information Security Management Act (FISMA) is failing:

When we go out and conduct our security control reviews at federal agencies, we often find serious and significant vulnerabilities in systems that have been certified and accredited. Part of it, I think, is just that agencies may be focusing on just trying to get the systems certified and accredited but not effectively implementing the processes that the certification and accreditation is supposed to reflect.

Q&A: Federal info security isn’t just about FISMA compliance, auditor says, Most agencies still have security gaps, according to Gregory Wilshusen, by Jaikumar Vijayan, Computerworld, June 14, 2007

Sounds like they haven’t implemented numerous simple security measures that were known before FISMA, they don’t have processes to do so, and they don’t adequately report what they’re doing, even with FISMA. What to do?

Continue reading

Breach Discovery

If people know about security breaches, maybe the companies whose customers they are, or the governments whose constituents they are, will have an incentive to do something about them, so this is good news:

New Hampshire, one of a handful of U.S. states that require breaches involving personal information to be reported to the state as well as to affected individuals, has made at least some breach notices it has received available on the net.

New Hampshire gets it, Chris Walsh, Emergent Chaos, 13 June 2007

Or at least if we know what’s really going on, maybe unfounded scare

Continue reading

Long Tail Field

Why long tail graphs are usually shown on a log scale:

Unfortunately, the illustration works only as a large graph, because graphed out on small paper gives us only two discernable lines, one on each axis.

A practical model for analyzing long tails, by Kalevi Kilkki, First Monday, volume 12, number 5 (May 2007)

The sports field graph is a clever way of showing how much higher the fat head of a long tail distribution can be than the long tail itself; this is normally not so clear on log-scale graphs.
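
A quick way to see why the linear version collapses into two lines hugging the axes is to plot a made-up Zipf-like distribution on linear and on log-log axes (a minimal matplotlib sketch; the data are invented for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    # A made-up long-tail distribution: popularity falls off as 1/rank.
    ranks = np.arange(1, 10_001)
    popularity = 1_000_000 / ranks

    fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(10, 4))

    linear_ax.plot(ranks, popularity)
    linear_ax.set_title("Linear axes: two lines hugging the axes")

    log_ax.loglog(ranks, popularity)
    log_ax.set_title("Log-log axes: the whole tail is visible")

    for ax in (linear_ax, log_ax):
        ax.set_xlabel("rank")
        ax.set_ylabel("popularity")

    plt.tight_layout()
    plt.show()

On the linear plot the head towers over everything and the tail vanishes into the axis; on the log-log plot the same data become a readable line, which is exactly why long tail graphs are usually drawn that way.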

Continue reading