Monthly Archives: July 2004

Financial Services Risk Tamed?

Here’s a report by PricewaterhouseCoopers and the Economist Intelligence Unit that says that quantifying credit, market, regulatory, and even IT risk isn’t enough.

“But what about those areas, like reputational risk, that are both harder to measure and more sudden and severe in their impact?”

According to the report, which is based on a survey, the internal corporate profile of risk management has risen in recent years, but not so much because of proactive measures as in reaction to outside pressures from regulators and rating agencies. It seems most companies still see risk management as a relatively low-level activity: crunching numbers of types they are already familiar with, rather than a strategic activity that involves both quantifying additional areas of risk and making plans for types of risk that may never be quantifiable to the extent of the traditional ones. The report says that those companies that have made the shift to viewing risk management as a strategic activity find it a source of competitive advantage.

“Such institutions accept that uncertainty cannot be tamed, only mitigated.”

Like the Chairman of Lloyd’s, the report recommends that risk management plans be overseen at the board level; however, it notes that this mostly isn’t happening yet. Reports like this will help make the lack of board oversight of such a plan a reputational risk.

Curiously, the report doesn’t say anything about insurance, which is one of the more obvious ways of mitigating risks that cannot be tamed.

I think a time will come not long from now when a company that does not have Internet business continuity insurance will suffer a reputational risk.

The report does mention Basel II, not only as a way of withholding enough capital to deal with risk, but also as an incentive to dramatically improve risk management policies and procedures. And it notes worries that if Basel II comes to define best practice, further risk management strategies might be inhibited.

It mentions geopolitical risks beyond the control of the corporation, such as regime change, and it emphasizes the importance of risks outside the corporation involving supplies and outsourcers and the like, yet the report does not mention Internet continuity problems that could result from such sources and affect business.

No report is perfect. This one makes some important points based on real data about what companies have done to manage risk and some more things they need to do.

-jsq

Attack of the $2M Worm

Talking about risk management strategies for the Internet is often like talking about backups: people don’t want to deal with it until they see significant damage that directly affects them. Companies don’t want to spend money on insurance or time on preparing a risk management plan until they’ve experienced undeniable damage.

This CNET item is relevant: “The attack of the $2 million worm.”

“Internet-based business disruptions triggered by worms and viruses are costing companies an average of nearly $2 million in lost revenue per incident, market researcher Aberdeen said on Tuesday.

“Out of 162 companies contacted, 84 percent said their business operations have been disrupted and disabled by Internet security events during the last three years. Though the average rate of business operations disruption was one incident per year, about 15 percent of the surveyed companies said their operations had been halted and disabled more than seven times over a three-year period. ”
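Taken at face value, those survey figures translate directly into annualized loss expectancy (ALE), the standard risk-management product of loss per incident and incidents per year. A quick sketch using the article’s numbers (the function name is mine):

```python
def annualized_loss_expectancy(loss_per_incident, incidents_per_year):
    """ALE: expected yearly loss = single-incident loss x annual incident rate."""
    return loss_per_incident * incidents_per_year

# Aberdeen's figures: ~$2M lost per incident, one incident per year on average.
typical = annualized_loss_expectancy(2_000_000, 1.0)
print(typical)  # 2000000.0

# The worst-hit ~15% reported more than seven incidents over three years,
# i.e., an annual rate above 7/3.
worst = annualized_loss_expectancy(2_000_000, 7 / 3)
print(round(worst))  # 4666667 -- well over $4.6M a year
```

Numbers like these are exactly what an insurer, or a board, would want before pricing the risk.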

Of course, everyone has also heard about people and companies that didn’t have adequate backups when their equipment failed. Sometimes people listen to such stories and start making backups before their computers fail.

Backups are a risk management strategy. Other risk management strategies are like backups: they’re best put in place before they’re needed.

-jsq

Internet collapse different this time?

Back in 1996, Bob Metcalfe, inventor of Ethernet and founder of 3Com, predicted,

“The Internet is collapsing; the question is who’s going to be caught in the fall. The Internet might possibly escape a “gigalapse” this year. If so, I’ll be eating columns at the World Wide Web Conference in April. Even so, Scott Bradner should still be concerned about the Internet’s coming catastrophic collapses.”

Bob Metcalfe, “From the Ether,” InfoWorld, November 18, 1996

Bob got a lot of press and ongoing discussion out of that prediction.

As it happened, he didn’t have any long-term data when he made it. He came to me and I supplied him some. Partly because of that data, he changed his prediction from a gigalapse to lots of little catastrophes, and ate his prediction.

It’s that time again: Internet collapse predicted. Several people have pointed me at the PFIR conference on Preventing the Internet Meltdown, which is taking place now at a hotel near LAX.

Lauren Weinstein announced this conference back in March, in conjunction with Peter G. Neumann (his usual collaborator) and Dave Farber. Farber has long been active in Internet forward thinking, and posted it on his Interesting People mailing list (which is like a blog, but in mail, and has been going on longer than any blog).

It looks like an interesting lineup, with many of the usual suspects who have been active in organizations from IETF to EFF to DHS. The first speaker listed is the same person Bob named: Scott Bradner of Harvard, long influential in IETF.

So how is 2004’s prediction any different from 1996’s? The concerns are different. Bob said:

“Let’s be concerned that large portions of the Internet might be brought down not by nuclear war but by power failures, telephone outages, overloaded domain name servers, bugs in stressed router software, human errors in maintaining routing tables, and sabotage, to name a few weak spots.”

In other words, mostly failures in the basic routing fabric of the Internet, or in its underlying physical infrastructure.

Lauren said:

“A continuing and rapidly escalating series of alarming events suggest that immediate cooperative, specific planning is necessary if we are to have any chance of avoiding the meltdown. “Red flag” warning signs are many. A merely partial list includes attempts to manipulate key network infrastructures such as the domain name system; lawsuits over Internet regulatory issues (e.g. VeriSign and domain registrars vs. ICANN); serious issues of privacy and security; and ever-increasing spam, virus, and related problems, along with largely ad hoc or non-coordinated “anti-spam” systems that may do more harm than good and may cause serious collateral damage.”

In other words, mostly problems external to the technical infrastructure of the Internet, most of them either attacks on parts of the Internet or reactions to such attacks. A lot has changed in 8 years. Basically, use of the Internet has skyrocketed since 2000, making it an attractive target for all sorts of nuisances and attacks.

Back in 1996, Bob Metcalfe described the problem:

“Because the Internet’s builders believed that it defies management — it’s alive, they say — they punted, leaving no organized process for managing Internet operations. Where are circuits inventoried, traffic forecasts consolidated, outages reported, upgrades analyzed and coordinated? As my programming friends would say, the Internet Engineering and Planning Group and the North American Network Operators’ Group are by most accounts no-ops — they exist, but they don’t do anything.

“But the Internet is not alive. It’s actually a network of computers. And somebody, hopefully cooperating ISPs, should be managing its operations.”

In 2004, Lauren Weinstein’s description of the root cause is essentially the same:

“Most of these problems are either directly or indirectly the result of the Internet’s lack of responsible and fair planning related to Internet operations and oversight. A perceived historical desire for a “hands off” attitude regarding Internet “governance” has now resulted not only in commercial abuses, and the specter of lawsuits and courts dictating key technical issues relating to the Net, but has also invited unilateral actions by organizations such as the United Nations (UN) and International Telecommunications Union (ITU) that could profoundly affect the Internet and its users in unpredictable ways.”

Bob’s specter back in 1996 was telephone companies taking over from traditional ISPs. That one happened.

Lauren’s specter of the UN and ITU is currently in progress. It may happen, too.

However, the telcos didn’t solve the problem. Will the UN or the ITU?

Maybe they’re trying to solve the wrong problem. Maybe what the Internet needs is not, as Bob put it sometimes, to be run like AOL. Maybe what the Internet needs is more cooperative decentralization, and new means to achieve it.

According to the conference program, Lauren’s conference is dealing with many of the usual approaches, from IETF operational coordination to copyright law to government cybersecurity policies. These are all important issues, and the speakers all appear to be knowledgeable experts in their fields.

Yet there are more things that could be done. What about software vendor liability, such as Hal Varian has been calling for since 2000?

Or software diversity, such as Dan Geer and Scott Charney recently debated at USENIX, and that Geer wrote about last fall (I was one of his co-signers), and that I wrote about the year before that?

What about financial risk instruments, such as insurance, catastrophe bonds, or performance bonds? Curiously, Wally Baer of RAND, who has written about importing such instruments from the electrical utility industry to the Internet, works down the street from the conference hotel.

Or capital withholding, as in Basel II? If big international banks, which tend to be rather competitive, can get their act together, it might be worth seeing whether the Internet can use any of their risk management techniques.

What about reputation systems, or the risk management plans that Lord Levene, Chairman of Lloyd’s, recommended back in April that every board should have at the top of its agenda?

The problem goes beyond technology or even the law, into society, politics, and finance. No single organization can run all that. Or at least I hope not.

More later.

-jsq

What is Perilocity?

It has to do with extending risk management strategies for the Internet. It starts with security and goes into new territory most people haven’t thought about in relation to the Internet.

Probably after Slammer and SoBig and Scob and the northeast power outage and assorted cable cuts, we’re all ready to admit that it’s no longer enough to say “the Internet just works.” And while we all try to keep up with patches and run firewalls, often along with intrusion detection, content caching, and the like, Slammer demonstrated that even Microsoft wasn’t keeping up with its own patches, and recently we saw that even Akamai can have at least a mini-outage. What do you do when all of the technical and procedural solutions fail?

It turns out that, especially if you look at other industries, there are many answers to that question, and this blog will talk about some of them, ranging from diversity to insurance to SOX to cat bonds to Basel II. For purchasable solutions involving such things, I recommend my employer.

Here in this blog I’m speaking for myself, not for anyone else. Here you’ll see opinions and pointers, musings and memes. Things I think might be useful, and things I just think are amusing.

Is it about security? Yes.
Performance? Yes.
Financial instruments? Yes.
Reputation systems? Yes.
Shipping and joint venture companies and chronometers? Yes.
About why you should care, and how these things are related? Yes.
Topics will vary widely, and the common thread will be perilocity.

Meanwhile, Peter Cassidy has suggested a more pithy definition:

Perilocity rhymes with velocity and stands for potential impact of a potent and manifest risk.

You may be wondering why I am doing this. Well, it seems almost everybody I know who has a blog (and that seems to be almost everybody I know) has provided cogent arguments for why a blog is just the ticket for getting a new idea into the communal thoughtstream. It also fits with my usual writing style of many short pieces (I started the first for-pay non-academic newsletter published over the Internet in 1991) getting expanded into longer pieces (I’ve been known to write two columns a month) and sometimes into books (seven and counting).

The blog is not a book; it’s more like my conference talks, except shorter. Sort of stand-up networking through the network.

So here’s Perilocity!

-jsq