Category Archives: Internet coordination

Community Flow-spec Project

A lightning talk at NANOG 48, Austin, Texas, 22 Feb 2010, by John Kristoff, Team Cymru. See RFC 5575.

Update: PDF of presentation slides here.

+--------+--------------------+--------------------------+
| type   | extended community | encoding                 |
+--------+--------------------+--------------------------+
| 0x8006 | traffic-rate       | 2-byte as#, 4-byte float |
| 0x8007 | traffic-action     | bitmask                  |
| 0x8008 | redirect           | 6-byte Route Target      |
| 0x8009 | traffic-marking    | DSCP value               |
+--------+--------------------+--------------------------+
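
As a concrete illustration of the first row, here’s a minimal Python sketch (the AS number is made up, not from the talk) that packs a traffic-rate extended community per RFC 5575; a rate of 0.0 tells peers to drop all matching traffic:

    import struct

    def traffic_rate(asn, bytes_per_second):
        """RFC 5575 traffic-rate extended community (type 0x8006):
        2-byte type, 2-byte AS number, 4-byte IEEE 754 float."""
        return struct.pack("!HHf", 0x8006, asn, bytes_per_second)

    # Rate 0.0 = discard all traffic matching the flow specification.
    print(traffic_rate(64511, 0.0).hex())  # 8006fbff00000000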

A few selected points:

  • Dissemination of Flow Specification Rules
  • Think of filters (ACLs) distributed via BGP
  • BGP possibly not the right mechanism
  • Multi-hop real-time black hole on steroids
  • Abuse Handler + Peering Coordinator
    = Abeering Coordinator?
  • Traditional bogon feed as source prefix flow routes
  • A la carte feeds (troublesome IP multicast groups, etc.)
  • AS path prepend++
  • Feed-specific community + no-export
He showed some examples of specs for flows (I can’t type fast enough to transcribe those).

Trust issues for routes defined by victim networks.

A research prototype is set up. For questions, comments, or setup, contact: http://www.cymru.com/jtk/

I like it as an example of collective action against the bad guys. How to deal with the trust issues seems the biggest item to me.

Hm, at least to the participating community, this is a reputation system.

Solving for the Commons

So simple!

BN > BE + C

Aldo Cortesi channels Elinor Ostrom and summarizes what we need to fix Internet security by enticing the providers and users of the Internet to manage it as a commons. But first, some background.

Since at least 1997 (“Is the Internet a Commons?” Matrix News, November 1997) I’ve been going on about how Garrett Hardin’s idea of the tragedy of the commons doesn’t have to apply to the Internet, because: Continue reading

Iranian Internet Disturbances

Here’s an example of some Internet routing in Iran, in this case on the way to the Ministry of Foreign Affairs on Monday 15 June 2009. Normally, routing and latency don’t change much. Starting Saturday 13 June, the day after the election, routing and latency have become increasingly disturbed. More here.

Twitter Reschedules

Twitter recognizes that a network upgrade is important, but the role Twitter is playing in Iran is more important, and reschedules the upgrade for 1:30 AM Iranian time. Now that’s risk management!

Would that U.S. states had all rescheduled Diebold and the like to the junk heap after the 2000 U.S. election.

Also notice who Twitter’s hosting provider is: NTT America. I’ve been predicting for years that the U.S. duopoly’s intransigence would lead to NTT and other competent international ISPs eating their lunch, and I see it’s beginning to happen.

-jsq

Van Meter on Barabasi and Doyle on Internet topology and risks

Rodney Van Meter, co-teaching a class with Jun Murai, posts notes on why Albert-László Barabási (ALB) is both right and wrong about the Internet. The Internet is more or less a scale-free network when considered as a network of Autonomous Systems (ASes). But, contrary to ALB’s assumption, John Doyle and others have pointed out that the bigger nodes are not central; an AS would be somewhat difficult to take out all at once as a node; there are both higher- and lower-layer topologies that make the Internet more robust; and the Internet’s biggest problem isn’t topology at all:

The most serious risks to the Internet are not to individual "nodes" (ASes), but rather stem from the near-monocropping of Internet infrastructure and end nodes, and the vulnerability of the system to human error (and political/economic considerations):

Monoculture, who would have thought it?

For that matter, the Internet’s ability to reroute has been very useful in ameliorating topological link breaks at the physical layer, as when undersea cables in the Mediterranean Sea were cut twice last year.
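
To get a feel for the scale-free point above, here’s a toy Python sketch (a synthetic preferential-attachment graph via networkx, not real AS data) showing the heavy-tailed degree distribution ALB describes: a few big hubs and thousands of small nodes.

    from collections import Counter
    import networkx as nx

    # Synthetic Barabási-Albert graph: new nodes prefer to attach to hubs.
    g = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)
    degrees = Counter(d for _, d in g.degree())

    print("max degree:", max(degrees))        # a handful of huge hubs
    print("nodes of degree 2:", degrees[2])   # thousands of leaf-like nodes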

MySpace Anti-Phishing

Shing Yin Khor of Fox Interactive Media, which owns MySpace, gave an entertaining talk at APWG in which she made a good case that MySpace has mostly eliminated phishing ads on MySpace and is busily suppressing other phishing.
Throwing money at the issue of phishing actually works.
MySpace’s anti-phishing forces include former law enforcement people: a former federal and state prosecutor, a former L.A. D.A., and a former FBI agent. They have successfully sued spam king Scott “ringtones” Richter and his CPA empire.

MySpace does have an advantage in actually hosting all displays and messages. It’s good to be a many-hundred-million-user shopping mall. She didn’t say that; I did. She did say they use MySpace-specific measures such as education via Tom’s profile. Tom was one of the founders of MySpace. Every new user gets Tom as a friend, so his online persona (pictured) has 240 million friends; that’s a channel that reaches most of their users. She did say:

Education is just as important as technical measures.
What works on MySpace will work on other social network sites.

But Shing’s theme of pro-active measures against phishing and spam is one other organizations could take to heart. Don’t think you can do nothing: you can.

Of course, if you have fewer than 200 million users, you may want to band together with other organizations, for example by joining APWG. Even MySpace does.

Debunking the Tragedy of the Commons

Interesting article here making a point that should have been obvious for forty years. When Garrett Hardin published his famous article about the “tragedy of the commons” in Science in December 1968, he cited no evidence whatsoever for his assertion that a commons would always be overgrazed; that community-owned resources would always be mismanaged. Quite a bit of evidence was already available, but he ignored it, because it said quite the opposite: villagers would band together to manage their commons, including setting limits (stints) on how many animals any villager could graze, and they would enforce those limits.

Finding evidence for Hardin’s thesis is much harder:

The only significant cases of overstocking found by the leading modern expert on the English commons involved wealthy landowners who deliberately put too many animals onto the pasture in order to weaken their much poorer neighbours’ position in disputes over the enclosure (privatisation) of common lands (Neeson 1993: 156).

Hardin assumed that peasant farmers are unable to change their behaviour in the face of certain disaster. But in the real world, small farmers, fishers and others have created their own institutions and rules for preserving resources and ensuring that the commons community survived through good years and bad.

“Debunking the ‘Tragedy of the Commons’,” by Ian Angus, Links: International Journal of Socialist Renewal, 24 August 2008

So privatization is not, as so many disciples of Hardin have argued, the cure for the non-existent tragedy of the commons. Rather, privatization can be the enemy of the common management of common resources.

What does this have to do with risk management? Well, insurance is the creation of a managed commons by pooling resources. Catastrophe bonds are another form of pooled resources, that is, a form of a commons.

On the Internet, the big problem with fighting risks like phishing, pharming, spam, and DDoS attacks is that the victims will fail if they go it alone. The Internet is a commons, and pretending that it isn’t is the problem. Most people and companies don’t abuse the Internet. But a few, such as spam herders and some extremist copyright holders (MPAA, RIAA), do. They need to be given stints by the village.

-jsq

Fast Flux Mapped

Australian HoneyNet tracks Fast Flux nodes and maps them:
Below are the current locations of the Storm Fast Flux hosts. This is updated every 15 minutes from our database.

I had to change it to only show the last 6 hours of new nodes, since Google Maps doesn’t scale very well when you’re reaching past a few thousand markers on a map 🙂

Fast Flux Tracking, Australian HoneyNet Project, accessed 7 Aug 2008

Fast Flux, in case you're not familiar with it, refers to various techniques used by bot herders, spammers, phishers, and the like to evade blocking by rapidly changing which IP addresses are mapped to which domain names.
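
As a rough heuristic (my sketch, not the HoneyNet project’s pipeline; the domain is a placeholder), you can spot fast-flux candidates by resolving a name repeatedly and counting how many distinct addresses turn up:

    import socket
    import time

    def observed_addresses(name, rounds=3, pause=10.0):
        """Resolve `name` several times; fast-flux domains tend to
        accumulate many distinct, short-lived A records."""
        seen = set()
        for _ in range(rounds):
            for info in socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP):
                seen.add(info[4][0])
            time.sleep(pause)
        return seen

    addrs = observed_addresses("example.com")  # placeholder domain
    print(len(addrs), "distinct addresses:", sorted(addrs))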

-jsq

U.A.E. Cable Cut of 30 Jan 2008

There’s been a lot of talk about the numerous cable cuts in the Mediterranean Sea and the Persian Gulf in the past few weeks. It’s interesting to see the Internet route around damage. Here is a visualization of the first cable cut, off Alexandria, on 30 Jan 2008.

-jsq

Web Panopticons: China and U.S.

Fergie points out a university project investigating censorship:

The "Great Firewall of China," used by the government of the People’s Republic of China to block users from reaching content it finds objectionable, is actually a "panopticon" that encourages self-censorship through the perception that users are being watched, rather than a true firewall, according to researchers at UC Davis and the University of New Mexico.

The researchers are developing an automated tool, called ConceptDoppler, to act as a weather report on changes in Internet censorship in China. ConceptDoppler uses mathematical techniques to cluster words by meaning and identify keywords that are likely to be blacklisted.

University Researchers Analyze China’s Internet Censorship System, News Report, Government Technology News, Sep 11, 2007
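
The “cluster words by meaning” step is roughly this kind of thing (a toy sketch with a made-up four-document corpus; ConceptDoppler itself works over a large real corpus): project terms into a low-dimensional concept space and group them, so that words landing near known-blocked keywords become candidates to probe against the firewall.

    from sklearn.cluster import KMeans
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Made-up corpus: two "political" documents, two "weather" documents.
    documents = [
        "protest demonstration censorship blacklist firewall",
        "rain storm forecast sunny weather",
        "censorship firewall blacklist keyword politics",
        "weather sunny rain forecast storm",
    ]

    vec = TfidfVectorizer()
    doc_term = vec.fit_transform(documents)  # docs x terms
    # Rows of doc_term.T are term vectors; reduce to a 2-D "concept" space.
    term_vecs = TruncatedSVD(n_components=2, random_state=0).fit_transform(doc_term.T)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(term_vecs)

    for term, label in zip(vec.get_feature_names_out(), labels):
        print(label, term)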

So the Great Firewall of China watches what users are doing by actively intercepting their traffic. Meanwhile, back in the U.S. of A., how about a passive web panopticon?

Continue reading