Category Archives: Diversity

John Quarterman on Mapping Spam and Politics (audio)

At a meeting on a completely different subject, I was interviewed about SpamRankings.net. Here's the audio, and here's the blurb they supplied:

John S. Quarterman, long-time Internet denizen, wrote one of the seminal books about networking prior to the commercialization of the Internet. He co-founded the first Internet consulting firm in Texas (TIC) in 1986, and co-founded one of the first ISPs in Austin (Zilker Internet Park, since sold to Jump Point). He was a founder of TISPA, the Texas ISP Association. Quarterman was born and raised in Lowndes County, where he married his wife Gretchen. They live on the same land where he grew up, and participate in local community and government.

Quarterman took some time during Georgia River Network's Weekend for Rivers to speak with the Nonprofit Snapshot about spam-mapping and small town politics.

More about Elinor Ostrom's Nobel-prize-winning work on organizing the commons, and how that applies to SpamRankings.net.

The water organization has since been incorporated as the Georgia non-profit WWALS Watershed Coalition:

WWALS is an advocacy organization working for watershed conservation of the Willacoochee, Withlacoochee, Alapaha, and Little River Systems watershed in south Georgia and north Florida through awareness, environmental monitoring, and citizen advocacy.

-jsq

eCrime Summit in Prague 25-27 April 2012

These ecrime meetings are always interesting and useful. -jsq

Press release of 29 March:

Containing the Global Cybercrime Threat is Focus of Counter eCrime Operations Summit (CeCOS VI) in Prague, April 25-27

CeCOS VI, in Prague, Czech Republic, to focus on harmonizing operational issues, cybercrime data exchange, and industrial policies to strengthen and unify the global counter-ecrime effort.

CAMBRIDGE, Mass.—(BUSINESS WIRE)—The 6th annual Counter eCrime Operations Summit (CeCOS VI) will convene in Prague, Czech Republic, April 25-27, 2012, as the APWG gathers global leaders from the financial services, technology, government, law enforcement, communications sectors, and research centers to define common goals and harmonize resources to strengthen the global counter-cybercrime effort.

CeCOS VI Prague will review the development of response systems and resources available to counter-cybercrime managers and forensic professionals from around the world.

Specific goals of this high-level, multi-national conference are to identify common forensic needs, in terms of the data, tools, and communications protocols required to harmonize cybercrime response across borders, and between private-sector financial and industrial responders and public-sector policy professionals and law enforcement.

Key presentations are listed in the full press release.

Davos discovers cyber attacks

Cyber attacks made the Davos Top 5 Global Risks in Terms of Likelihood. Davos, the annual conclave of the hyper-rich and the famously elected, has also discovered Severe income disparity and Water supply crisis, so maybe they’re becoming more realistic.

However, in Figure 17 on page 25 they’ve got Cyber attacks as an origin risk, along with Massive incident of data fraud or theft and Massive digital misinformation. I think they’re missing the point: the real origin risk is poor infosec, and the origin of that is vendors like MSFT knowingly shipping systems with design flaws, and people and organizations running those systems while hiding such problems.

The report also has an interesting comment on page 26.

Solving for the Commons

So simple!

BN > BE + C

Aldo Cortesi channels Elinor Ostrom and summarizes what we need to fix Internet security by enticing the providers and users of the Internet to manage it as a commons. But first, some background.

Since at least 1997 (“Is the Internet a Commons?”, Matrix News, November 1997) I’ve been going on about how Garrett Hardin’s idea of the tragedy of the commons doesn’t have to apply to the Internet.
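The inequality above can be read as a simple decision rule. Here is a minimal sketch under my own reading of the variables (these definitions are my assumption, not Cortesi’s or Ostrom’s exact terms): BN is the benefit of following the commons’ norms, BE the benefit of evading them, and C the cost of participating in monitoring and enforcement.

```python
def cooperates(bn, be, c):
    """Return True when the benefit of following the commons'
    norms (bn) exceeds the benefit of evading them (be) plus
    the cost of participating in enforcement (c)."""
    return bn > be + c

# A hypothetical ISP weighing whether to police outbound spam:
print(cooperates(bn=10.0, be=6.0, c=3.0))   # cooperation pays: True
print(cooperates(bn=10.0, be=9.0, c=3.0))   # evasion pays: False
```

The point of the formula is that enforcement cost C is a lever: anything that lowers the cost of monitoring (such as public reputation rankings) tips more participants into the cooperating case.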

Media Security: Consolidation or Diversity?

Despite a unanimous vote of the Senate Commerce Committee to delay, a direct question from one of its members, and overwhelming opposition in meetings across the country, FCC Chairman Kevin Martin plans to go ahead with the media consolidation vote scheduled for tomorrow, 18 December. Given the 3-2 Republican-Democrat makeup of the Commission, the vote will almost certainly result in more media consolidation.

Not only John Kerry, but even Trent Lott and Ted Stevens spoke against Martin’s plan. Martin claims that the only way to save newspapers is to let them buy television stations, pretending not to know that newspapers are one of the most profitable industries. (Nobody on the Commerce Committee thought to ask him directly whether he knew that; they only asked whether he had seen a specific report that said so.) The New York Times published Martin’s op-ed to this effect. (Today the Times did at least publish its own editorial criticizing his position.)

Meanwhile, three members of the House Judiciary Committee have written an op-ed calling for the impeachment of Vice President Cheney, and no major newspaper will carry it, even though one of them, Wexler of Florida, collected more than 50,000 names for it over one weekend (up to 77,000 as of this writing).

Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter.

Letter to Edward Carrington, Thomas Jefferson, January 16, 1787

What would Jefferson have thought about newspapers that wouldn’t publish a call for impeachment by members of the committee that is supposed to bring such charges? And why, given such a press, is anyone even considering more media consolidation? Which is better for the security of the Republic: more media consolidation or less?

-jsq

What It Will Take to Win

IT and Internet security people and companies act mostly as an aftermarket. Meanwhile, the black hats are a well-integrated economy of coders, bot herders, and entrepreneurs. This is what it will take for the white hats to win:
It can seem overwhelming for security people, who are typically housed in a separate organization, to begin to engage with software developers and architects to implement secure coding practices in an enterprise. While the security team may know that there are security vulnerabilities in the systems, they have to be able to articulate the specific issues and communicate some ideas on resolutions. This can be a daunting task, especially if the security team does not have a prior working relationship with the development staff and an understanding of their environment.

The task seems daunting also because there are so many developers compared to security people. I am here to tell you, though, that you don’t have to win over every last developer to make some major improvements. In my experience a small percentage of developers write the majority of code that actually goes live. The lead developers (who may be buried deep in the org charts) are the ones you need to engage; in many cases they really don’t want to write insecure code, they just lack the knowledge of how to build better. Once you have a relationship (i.e. you are not just there to audit and report on them, but are there to help *build* more secure code), it is surprisingly easy to get security improvements into a system, especially if the design is well thought out and clearly articulated. You don’t have to get the proverbial stardotstar, each and every developer, on board to make positive improvements; it can be incremental. See some more specific ideas on phasing security into the SDLC here. In the meantime, with security budgets increasing 20% a year, use some of that money to take your top developers out to lunch.

Secure Coding – Getting Buy In, Gunnar Peterson, 1Raindrop, 17 Sep 2007

The start of what it will take.
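As one concrete instance of the kind of incremental, lead-developer-level fix Peterson describes, consider replacing string-built SQL with parameterized queries. This sketch is my own illustration, not from his post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# The vulnerable pattern concatenates attacker-controlled input
# into SQL, so a name like "x' OR '1'='1" returns every row.

def find_user(conn, name):
    # Parameterized query: the driver binds `name` as data, so the
    # injection payload matches no rows instead of all of them.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))          # [('alice', 'admin')]
print(find_user(conn, "x' OR '1'='1"))   # []
```

A change like this is exactly the incremental sort: one lead developer adopting it in a shared data-access layer improves every caller at once.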

-jsq

Bill Gates Considered as Evil Primitive Bacterium

Has Freeman Dyson become an evolution denier?

Whatever Carl Woese writes, even in a speculative vein, needs to be taken seriously. In his "New Biology" article, he is postulating a golden age of pre-Darwinian life, when horizontal gene transfer was universal and separate species did not yet exist. Life was then a community of cells of various kinds, sharing their genetic information so that clever chemical tricks and catalytic processes invented by one creature could be inherited by all of them. Evolution was a communal affair, the whole community advancing in metabolic and reproductive efficiency as the genes of the most efficient cells were shared. Evolution could be rapid, as new chemical devices could be evolved simultaneously by cells of different kinds working in parallel and then reassembled in a single cell by horizontal gene transfer.

But then, one evil day, a cell resembling a primitive bacterium happened to find itself one jump ahead of its neighbors in efficiency. That cell, anticipating Bill Gates by three billion years, separated itself from the community and refused to share. Its offspring became the first species of bacteria—and the first species of any kind—reserving their intellectual property for their own private use. With their superior efficiency, the bacteria continued to prosper and to evolve separately, while the rest of the community continued its communal life. Some millions of years later, another cell separated itself from the community and became the ancestor of the archaea. Some time after that, a third cell separated itself and became the ancestor of the eukaryotes. And so it went on, until nothing was left of the community and all life was divided into species. The Darwinian interlude had begun.

Our Biotech Future, By Freeman Dyson, New York Review of Books, Volume 54, Number 12 · July 19, 2007

Has he sold out for an admittedly very fetching simile?


Precision Can Hide Accuracy

Metrics are good, but just because they’re precise doesn’t mean they’re useful:
I’ve been thinking a little bit about “threat/vulnerability” pairing. You know the drill, go out, get a scan – match the scan data to existing exploits, and voila! You’ve got risk.

Now regular readers and FAIR practitioners know that I don’t believe this exercise gives you risk at all. In fact, in FAIR terms, I’m not sure this exercise does much for finding Vulnerability.

My Assertion To You: The industry loves T/V pairing because it is precise. It looks good on paper, and if you’re a consultant doing it, it looks like you’ve earned your hourly rate. We love it. The precision of T/V pairing gives us a false sense of accuracy.

Accuracy, Precision, And Threat/Vulnerability Pairing, Alex, RiskAnalys.is, 23 July 2007

He goes on to point out that you also need to consider who’s likely to attack you: such Threat Agents, as he calls them, may be too stupid to use a given exploit, or too smart to use it because they’ve got a better way. He recommends some statistical analysis to help out.

I’d also recommend more basic steps, such as not using IE and shifting away from other monoculture software until you’ve got a mix of software from different sources. Those things will usually get you in trouble with sales and marketing, however, because hey, they’ve never had any problems, well, not many, and it’s not their job to fix them. The precise thing isn’t necessarily the right thing.
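The precision-versus-accuracy distinction is easy to show with a toy example. The numbers below are entirely made up for illustration, not from Alex’s post:

```python
# The "true" annual loss -- unknowable in advance, chosen here
# only to score the two estimates against each other.
true_annual_loss = 120_000

tv_pairing_estimate = 47_312.18   # five significant digits of precision...
crude_estimate = 100_000          # ...versus a round, hedged figure

def error(estimate):
    """Absolute error against the eventual outcome."""
    return abs(estimate - true_annual_loss)

print(error(tv_pairing_estimate))   # precise, but off by ~73,000
print(error(crude_estimate))        # imprecise, but off by only 20,000
```

Many decimal places say nothing about how close an estimate lands; precision describes the estimate, accuracy describes its relationship to reality.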

-jsq

European Firefox

Here’s some good news. Firefox market share in Europe is almost 28%, according to XitiMonitor. In Germany it’s 38%, and several other countries have higher usage. Opera is at 3.5% and Safari is at 1.7% in Europe.

I’d be more pleased if it were a quarter each for three different browsers, with half a dozen others taking the other quarter, but this is much better diversity than 98% IE.
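One way to put a number on that kind of diversity is the Herfindahl-Hirschman index, the sum of squared market shares. The shares below are rough figures from the post, with the remainder lumped into one bucket as my own simplification:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared market shares.
    1.0 means total monoculture; lower means more diverse."""
    return sum(s * s for s in shares)

# Approximate European shares: Firefox ~28%, Opera 3.5%,
# Safari 1.7%, everything else (mostly IE) lumped together.
europe = [0.28, 0.035, 0.017, 0.668]
monoculture = [0.98, 0.02]                          # 98% IE world
three_plus_six = [0.25] * 3 + [0.25 / 6] * 6        # the ideal above

print(round(hhi(monoculture), 3))     # 0.961
print(round(hhi(europe), 3))          # 0.526
print(round(hhi(three_plus_six), 3))  # 0.198
```

By this measure Europe’s browser mix is roughly halfway between a pure monoculture and the quarter-each ideal.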

-jsq

No Word?

Got my hopes up on this one:
It appears that Science, the journal of the American Association for the Advancement of Science, itself the largest scientific society in the world, has updated its authoring guidelines to include advice for Office 2007 users. The news is not good.
“Because of changes Microsoft has made in its recent Word release that are incompatible with our internal workflow, which was built around previous versions of the software, Science cannot at present accept any files in the new .docx format produced through Microsoft Word 2007, either for initial submission or for revision. Users of this release of Word should convert these files to a format compatible with Word 2003 or Word for Macintosh 2004 (or, for initial submission, to a PDF file) before submitting to Science.”
SCIENCE PUBS REJECT ARTICLES WRITTEN IN WORD 2007, by Rob Weir, Rob Weir Blog, Thursday, May 31, 2007
And here I thought maybe they were rejecting Word entirely. Ah, it could happen. Most papers in physics, mathematics, and computer science journals are already formatted in TeX, if I’m not mistaken. So there is some diversity in publishing software; it’s not all a monoculture.

Meanwhile, the main reason Science rejected Word 2007 is that it is not backwards compatible with previous versions of Word, illustrating the Microsoft dilemma: stick with the old and retain customers, or fix problems and lose some. Not so big a dilemma with Word, perhaps: how many submitters to Science are there, compared with business Word users? But it’s much more of a problem for security fixes that require breaking backwards compatibility.

-jsq