Monthly Archives: October 2004

Bandwidth Futures

Looking back a couple of years, here’s an interesting article about carriers hedging risks by taking out options on future bandwidth prices, among various other forms of risk management (anything except bandwidth trading): “Carriers Seek Rewards of Risk Management” by Josh Long in PHONE+, January 2002. One of the most interesting passages, I think, is this one about carriers not necessarily knowing their own networks:
“Ciara Ryan, a partner in the bandwidth team at global consulting firm Andersen, agrees. Ryan explains the lack of visibility is due in part to mergers and acquisitions creating carriers that are an amalgam of many parts. The information pertaining to these assets has been integrated poorly, making it difficult to employ risk-management tactics, she says.

“Ryan says carriers must be able to extrapolate key bits of information from their databases to manage their network assets properly. This would include, how much they have sold on a particular route, from which point of presence (PoP) it was sold, what the service level agreement (SLA) entailed, whether an option was sold on the contract, whether a contract was a short-term lease or indefeasible rights of use (IRU) agreement and what the definite and projected sales include on particular routes.

“”Very, very few of them would be able to give you this information,” Ryan adds.”
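
To make Ryan’s list concrete, here is a rough sketch (my own invention, not from the article) of the kind of per-contract record a carrier would need to keep in order to answer those questions:

```
# Rough sketch (not from the article) of the per-contract record Ryan
# describes: what was sold on which route, from which PoP, under what SLA,
# whether an option was sold on it, and whether it is a short-term lease
# or an IRU.
from dataclasses import dataclass
from enum import Enum

class ContractType(Enum):
    SHORT_TERM_LEASE = "short-term lease"
    IRU = "indefeasible right of use"

@dataclass
class BandwidthContract:
    route: str                   # e.g. "New York - London"
    pop: str                     # point of presence it was sold from
    capacity_mbps: int           # how much was sold on the route
    sla_summary: str             # what the service level agreement entailed
    option_sold: bool            # was an option sold on the contract?
    contract_type: ContractType  # short-term lease or IRU
```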

And that’s before considering paths all the way from the carrier’s customers to those customers’ own customers. If the carriers don’t even know what their own networks consist of, it would appear they can’t be expected to provide a holistic and synoptic view of the Internet, either one by one or all together.

-jsq

Linucon Security

Among the four panels I was on at Linucon was one about network security. Panelists Stu Green and Michael Rash led a lively discussion about technical security measures, with much audience interaction. After a while, I asked, “So what do you do when all your technical security measures fail?” Dead silence. Then a number of the usual answers, including “Get another job.”

Then I launched into the Internet2 slides, about slowness several hops out, nonredundant links, congestion, etc. More lively discussion ensued. The panel was scheduled for an hour, but nobody wanted to stop, so we went on for another half hour.

Eventually I asked if anybody wanted to hear the Ancient Anasazi story.

They said yes, so I told them, or, rather, answered questions they asked in all the right places. You never know what will be a crowd pleaser.

-jsq

Negligence or Risk?

Here’s an interesting paper by Carter Schoenberg about “Information Security & Negligence: Targeting the C-Class.” Carter is a former homicide detective, and he takes a concrete case-study approach:
“Rather than analyzing a hypothetical situation where a company is hacked by one of several means or subjected to the involuntary mass-propagation of a virus or worm, let’s focus on a real-life incident, dissect each component supported by fact and effectively diagram a blueprint for how you can not only be targeted for a lawsuit or criminal prosecution, but demonstrate how you will lose. This loss will inflict a financial casualty, which may dramatically impact an organization’s fiscal health.”

His example is of a financial institution that had a web page defaced, apparently because it hadn’t applied a patch to its IIS 5 (Microsoft’s Internet Information Services, version 5). Yet no customer or financial data was compromised as a result of the attack. Nonetheless, the financial institution had a responsibility to provide its customers access to financial information and transactions, and according to news reports, customers had limited or no access during the attack. So was the financial institution negligent? How would you prove it?

Laws for negligence vary from state to state, but in the U.S. there are now a number of national laws that take precedence, such as Sarbanes-Oxley, HIPAA, and the Gramm-Leach-Bliley Act. These permit discussing this case in specific terms, including quantifiable harm, opportunity, and motive. Carter goes so far as to say:

“This scenario will ultimately lead to shareholders targeting corporate executives as being personally liable, seeking seizure of personal wealth and even criminal sanctions.”

Technical IT and security personnel probably won’t avoid involvement, either; at the least they may be deposed in a lawsuit.

Note that Carter’s example case did not involve a cracker specifically targeting that particular financial services institution for any reason other than that it happened to be running an unpatched copy of IIS, along with a number of other organizations that were attacked. So this was a force majeure event in the sense that it was not specifically targeted and had aggregate effect. However, there were things the financial institution could have done, but didn’t, of which patching the software was only the first.

Carter includes an interesting timeline showing that patching shortly after the point of discovery is a manageable risk, while leaving the patch unapplied shades over time into negligence and eventually gross negligence. And he compares the typical cost for a system administrator to apply a patch with the fines and fees in a court case: they differ by more than a factor of 100.
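
As a toy illustration of that escalation (the day thresholds and dollar figures below are invented for illustration; Carter’s article has the actual timeline and numbers):

```
# Toy sketch of the escalation Carter describes: the longer a known
# vulnerability goes unpatched, the worse the legal exposure looks.
# Thresholds and costs are hypothetical, not Carter's figures.
def exposure_after(days_unpatched: int) -> str:
    if days_unpatched <= 30:        # hypothetical threshold
        return "manageable risk"
    elif days_unpatched <= 180:     # hypothetical threshold
        return "negligence"
    else:
        return "gross negligence"

patch_cost = 500          # hypothetical cost of sysadmin time to apply a patch
litigation_cost = 75_000  # hypothetical fines and fees in a court case

print(exposure_after(14), exposure_after(90), exposure_after(365))
print(litigation_cost / patch_cost)  # the "more than a factor of 100" gap
```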

In such a situation I think I’d be for all three of a high standard of diligence, robust D&O insurance, and Internet business continuity insurance. And if I was a system administrator, I’d want specific written policies approved by corporate executives that I could follow to the letter, so as to avoid being the target of a lawsuit.

Carter also mentions settlement of a lawsuit as being preferable to litigation partly because of risk of public exposure and resulting loss of reputation and perhaps financial consequences. I wonder if we shouldn’t turn that around and establish reputation systems that discover unapplied patches, nonredundant routes, congestion, etc., inform the affected enterprise first, and after a decent interval make remaining problems public. Such a reputation system would be tricky to implement, since it would not want to invite crackers to attack. However, it might be possible to do it in such a way as to encourage enterprises to patch and deploy so as to avoid security problems. And while I’m not a lawyer or a police detective, I would guess that companies that patched when warned would be less likely to be held liable in legal cases.

Carter’s article is a very interesting nuts-and-bolts examination of just what legal liability might mean for a specific case. There are a number of points in there that I suspect many enterprises have not considered. Well worth a read.

-jsq

Linucon History

A few days ago I mentioned I was going to give a talk about Internet history at Linucon. That went well, although some of the audience seemed surprised that my estimate of the age of the Internet was about 4600 years older than the nearest contender.

Linucon itself was an interesting attempt to exploit or enable the intersection (maybe 30%) between computing and science fiction fandom. The con had a certain do-it-yourself charm, and the participants seemed pleased. They plan to do it again next year.

-jsq

Decentralizing Energy

This isn’t about the Internet, but it is about a scale-free network: oil production. The big problem with oil isn’t that it’s currently expensive, or that current sources are running short. The problem isn’t even that the U.S. gets most of its oil from the Middle East: it doesn’t; the U.S. imports only a fraction of its oil, and only a fraction of that comes from the Middle East. (One of the main interests of the U.S. in the Middle East and other oil-producing areas is to police them so that no other country decides it must develop the capability to do so.) The problem is that too much oil comes from too few suppliers, starting with Saudi Arabia and working down.

So why consider running out of oil a problem? Why not consider it an opportunity? An opportunity to shift to other and more distributed energy sources, thus removing the need to militarize the Middle East.

Here’s a detailed proposal to do just that, funded partly by Pentagon money, and written by people who have been making practical improvements in energy efficiency for companies large and small for many years: Winning the Oil Endgame, Rocky Mountain Institute; 309 pages; $40.

Amory Lovins proposes doing it not by abandoning suburbia, but rather by using profit and markets to drive efficiency and shifts in energy production and delivery.

The Economist said about the book:

“Given that America consumes a quarter of the world’s oil but has barely 3% of its proven reserves, it will never be energy-independent until the day it stops using oil altogether.

“How to get there? Amory Lovins has some sharp and sensible ideas. In “Winning the Oil Endgame”, a new book funded partly by America’s Defence Department, this sparky guru sketches out the mix of market-based policies that he thinks will lead to a good life after oil.

“First, he argues, America must double the efficiency of its use of oil, through such advances as lighter vehicles. Then, he argues for a big increase in the use of advanced “biofuels”, made from home-grown crops, that can replace petrol. Finally, he shows how the country can greatly increase efficiency in its use of natural gas, so freeing up a lot of gas to make hydrogen. That matters, for hydrogen fuel can be used to power cars that have clean “fuel cells” instead of dirty petrol engines. It would end the century-long reign of the internal-combustion engine fuelled by petrol, ushering in the hydrogen age.

“And because hydrogen can be made by anybody, anywhere, from windmills or nuclear power or natural gas, there will never be a supplier cartel like OPEC—nor suspicions of “blood for hydrogen”. What then will the conspiracy theorists do?”

In the near term there will no doubt be military actions. In the long run, apparently even the Pentagon thinks we can solve the real problem.

-jsq

Internet History?

In his book Linked, Albert-László Barabási refers to the early deployment of IMPs (Interface Message Processors) on the ARPANET, and says:
“The fifth was delivered to BBN, a Massachusetts consulting firm, late in 1970…”
That must have been a short delivery, considering that BBN was where IMPs were made.

I don’t hold it against ALB that he didn’t know that; when those things were happening, he was in Hungary, which at the time had certain difficulties communicating with the rest of the world. But how many of you, dear readers, have heard of BBN?

Meanwhile, over on Dave Farber’s Interesting People list, history came up. I mentioned my upcoming talk about Internet history at Linucon to some of the posters, which drew Farber to ask “What happened to CSNet!!!!!!!!!!” Nothing, so far as I know; I didn’t try to mention every historical network, but you can be sure I will mention CSNet. Especially considering that Peter Denning has provided a nice writeup about it, in addition to the one in my book, The Matrix.

If you’re in Austin, my history talk is tonight. Y’all come.

-jsq

Small World State Change

The U.S. government has now gone through four cybersecurity czars in less than two years, with the one-day-notice resignation of Amit Yoran, following Howard Schmidt, Rand Beers (who joined the Kerry campaign), and Richard Clarke (who wrote a best-selling book and testified before Congress).

Apparently one argument for pushing cybersecurity down into the bowels of DHS is that the Internet and computers in general are just another homeland infrastructure that needs securing, same as the electrical grid or airlines. Albert-László Barabási (ALB) in his book Linked remarks on how sufficiently close connectivity can cause a state change, in the same manner as water turning to ice. It isn’t electrical utility grids that are interconnecting the world as never before; it is communications systems, led by the Internet, which is rapidly subsuming many other communications systems. All the other infrastructures increasingly depend on the Internet. Even if it isn’t actually causing a state change, the Internet has global reach and high speed, producing both its great value via Metcalfe’s Law (many users) and Reed’s Law (many groups) and its potential for cascade failure.
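
As a rough numerical illustration of the difference between those two laws (a sketch of their usual formulations, not something from ALB’s book): Metcalfe’s Law counts possible pairwise connections, Reed’s Law counts possible groups, and the latter explodes much faster.

```
# Compare network-value scaling for n participants:
# Metcalfe's Law ~ number of distinct pairs (grows like n^2),
# Reed's Law ~ number of possible groups of two or more (grows like 2^n).
def metcalfe(n: int) -> int:
    """Number of distinct pairs among n users."""
    return n * (n - 1) // 2

def reed(n: int) -> int:
    """Number of nontrivial subsets (size >= 2) among n users."""
    return 2 ** n - n - 1

for n in (10, 20, 30):
    print(n, metcalfe(n), reed(n))
```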

The Internet’s potential for cascade failure also stems from its organization as a scale-free network with many hubs of various sizes. Yet this is also what makes it so robust unless the hubs are directly targeted. Meanwhile, we hear comparisons to the ability of the FAA to ground all aircraft in U.S. airspace. I don’t really see that an off switch for the U.S. portion of the Internet would make the Internet more robust, even if it were easy to separate out exactly what that portion was.
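
That robust-yet-fragile character of scale-free networks is easy to see in a toy model. The sketch below is my own, using the Barabási-Albert random-graph model rather than any real Internet topology data; it compares how much of the network stays connected after random node failures versus targeted removal of the biggest hubs.

```
# Toy comparison: random node failure vs. targeted removal of the
# highest-degree hubs in a scale-free (Barabasi-Albert) graph, measured
# by the fraction of nodes left in the largest connected component.
# Illustrative only; real conclusions need real Internet topology data.
import random
import networkx as nx

def largest_component_fraction(g: nx.Graph) -> float:
    if g.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def remove_and_measure(g: nx.Graph, victims) -> float:
    h = g.copy()
    h.remove_nodes_from(victims)
    return largest_component_fraction(h)

g = nx.barabasi_albert_graph(n=1000, m=2, seed=42)
k = 50  # remove 5% of the nodes

random_victims = random.sample(list(g.nodes()), k)
hubs = sorted(g.nodes(), key=lambda v: g.degree(v), reverse=True)[:k]

print("after random failures:     ", remove_and_measure(g, random_victims))
print("after targeted hub removal:", remove_and_measure(g, hubs))
```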

I think the U.S. government would benefit by appointing a new cybersecurity head with sufficient powers to get something done, and preferably somebody with deep understanding of the Internet. How about Bruce Schneier or Dan Geer?

-jsq

Hurricane History

Today is the anniversary of the Galveston Hurricane of 1867, which caused $1 million in damage ($12.3M in today’s dollars, or the equivalent of $1.3 billion as a share of GDP). It is also the anniversary of the Louisiana Delta hurricane of 1893, which had winds of 100 miles per hour and a 12-foot storm surge that killed 1,500 people. And on this day in 1882 a severe windstorm in northern California and Oregon did heavy crop damage and blew down trees.

How do we (or, in this case, Intellicast) know all this?

Let’s look at the fourth major hurricane of this date: the one that struck the Georgia coast in 1898, washing away an entire island, Campbell Island, and killing 50 residents there, as well as all but one of the residents of St. Catherine’s Island. The storm surge at Brunswick is estimated at 19 feet.

“Worthy of note is the brief period of time which has seen the widespread deployment of remote sensing systems, which may accurately place the center of a landfalling storm in data sparse or lightly populated coastal regions.”
“A Reevaluation of the Georgia and Northeast Florida Hurricane of 2 October 1898 Using Historical Resources,” by Al Sandrik, Lead Forecaster, National Weather Service Office, Jacksonville, FL (last modified 8 October 1998).

Sandrik provides a convenient table, Technical Advances in Systems for Observing Tropical Cyclones, 1871 through 1980. In 1898 there was some sort of Hurricane Watch Service; they had landline telegraph; and historians have access to ships’ logs. Ships didn’t get wireless telegraph until 1905, so ships’ logs were no use to people on shore at the time.

Sandrik mines newspaper reports, personal letters, and measurements of buildings that are still standing, as well as some oaks that were blown down but kept growing, which preserve the wind direction at that point. With all this data he concludes that the wind blew toward the west north of Camden County (the Georgia coastal county just north of Florida) but reversed and blew toward the east there; since a hurricane’s counterclockwise circulation drives winds westward north of the eye and eastward south of it, the eye had to have come ashore in Camden County, not 30 miles farther north as previously thought. Also, this was at least a category 3 hurricane, maybe a 4, not a category 2 as previously thought.

He compares records for other hurricanes earlier and later, and concurs with another researcher that for nineteenth century hurricanes,

“…the apparent low frequency on the Gulf coast between Cedar Key and St. Marks is not believed to be real. This area is very sparsely settled and the exact point where many of the storm centers reached the coast is not known, so there has been a natural tendency to place the track too close to the nearest observing point.”(1)

In other words, nineteenth-century hurricanes were more common than the records show in places that were then sparsely populated but are now beach resorts. That has consequences for disaster planning. Not only that, but nineteenth-century hurricanes were more severe than previously thought, which means that twentieth-century expectations of hurricane severity in the U.S. southeast may have been too rosy. (This is why the recent spate of hurricanes can’t be used as evidence of global warming, although there are plenty of other pieces of evidence for that.) Understanding the past has value for risk planning.

We don’t have to do all this forensic research and statistical interpolation for modern hurricanes, because we never miss seeing one anymore, nor its track or intensity. This is because we have better means of observation and coordination.

A radiosonde network (weather balloons) was added in the late 1930s, and organized reconnaissance (hurricane chasing airplanes) in the 1940s. To draw an Internet analogy, if ship logs were somewhat like web server logs, weather balloons are perhaps like web monitoring companies.

The satellites and radar we are all used to seeing on TV and the Internet all date from the 1960s and later. The Internet analogy is what InternetPerils is doing regarding a holistic and synoptic view of the Internet.

Aircraft satellite data links were added in the late 1970s; sort of flying ship log links. Ocean buoys were added about 1973; these are perhaps like honeypots and blacknets.

As Sandrik remarks:

“…the more diverse and accurate the base information used to generate a storm track, the greater the level of confidence that can be ascribed to that particular track.”

This is why multiple sources of different types of data, such as those collected by the DNS-OARC participants, are important, as is the coordination being attempted by that body. We need both sensing buoys in the ocean of the Internet and ship, radar, aircraft, and satellite reconnaissance, with networks of coordination and display among them.

-jsq

Nameserver Coordination

Today I’m attending by telephone the first-ever meeting of the Domain Name System Operations, Analysis, and Research Center (DNS-OARC). The attendees include operators of root DNS servers, top-level domain servers, and domain registries, as well as well-known Internet researchers. Much interesting research is going on, and perhaps some of it can be more coordinated. The group also has members from major vendors. InternetPerils is a charter member.

One reason for this meeting is that DNS-OARC has received an NSF grant of $2.38M; kc of CAIDA and other participants were most complimentary of NSF. I hope this grant is a sign that NSF is coming to see collective action as at least as important as faster networks.

I can’t say much about what else went on, given that members sign a confidentiality agreement. Suffice it to say that people with related projects that might not have been aware of each other now are.

One attendee has previously publicly remarked that the Internet won’t die, because nobody has more incentive to keep it running than the miscreants that feed off of it.

I have a request from the DNS-OARC administration to mention that everyone should use BCP 38 and not peer with people who don’t do source address verification at the edges. This is a relatively new Best Practice (four years old) that is already widely deployed, although not yet widely enough.
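
For readers who haven’t run into it, BCP 38 (RFC 2827) is ingress filtering: a provider drops packets arriving from a customer edge whose source addresses don’t belong to prefixes assigned to that customer, which makes spoofed-source attacks much harder to launch. Real deployments do this with router ACLs or unicast reverse-path forwarding checks, not application code; the following is only a minimal sketch of the decision, with the prefixes invented for illustration:

```
# Minimal sketch of the BCP 38 ingress-filtering decision: accept a packet
# from a customer-facing edge only if its source address falls within one
# of the prefixes assigned to that customer. The prefixes are invented
# (IETF documentation ranges); real networks use router ACLs or uRPF.
from ipaddress import ip_address, ip_network

# Hypothetical prefixes assigned to one customer edge.
CUSTOMER_PREFIXES = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/25")]

def permit_source(src: str) -> bool:
    """Return True if src is a legitimate source address for this edge."""
    addr = ip_address(src)
    return any(addr in prefix for prefix in CUSTOMER_PREFIXES)

print(permit_source("192.0.2.17"))   # True: inside an assigned prefix
print(permit_source("203.0.113.9"))  # False: spoofed or misrouted source
```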

One reason it’s still not widely enough deployed is the same reason nobody wanted to believe a tornado could hit Massachusetts. Many people see it as benefitting other people, but not themselves, because they don’t believe it could happen to them.

One thing I can do is link to my own presentation.

-jsq