Category Archives: IT Security

Online Crime Pays

Why Internet security professionals are losing:

Today, few malware developers use their own code. They write it for the same reason commercial software developers do: to sell it for a healthy profit. If you’ve ever bought anything online, the experience of buying from them may be disconcertingly familiar. If you want to break into a computer or steal credit card numbers, you can buy the necessary software online, just like almost anything else. More than that, you can find user-friendly, point-and-click attack applications that have been pre-tested and reviewed by experts, and read through customer feedback before making your purchase.

You might even be able to buy technical support or get a money-back guarantee. Some developers offer their malware through a software-as-a-service model. If you prefer an even more hands-off approach, you can simply buy pre-screened credit card numbers and identity information itself, or sign a services agreement with someone who will do the dirty work for you. As in many other industries, money has given rise to professionalism.

Online crime and malware development has become a full-blown and extremely profitable commercial enterprise that in many ways mirrors the legitimate software market. "We’re in a world where these guys might as well just incorporate," says David Parry, Trend Micro’s Global Director of Security Education. "There’s certainly more money in the cybercrime market than the antivirus market. The internet security industry is a drop in the bucket; we’re talking about hundreds of billions of dollars."

Computer crime is slicker than you think, by David Raikow, CRN, 16 August 2007

Makes you wonder how long until traditional security companies get bought out by newly-IPOed offshore malware corps.

-jsq

To Insure or Not to Insure?

Iang reminds me that it was on his blog, Financial Cryptography, that I saw the rough estimate of what an identity theft costs: about $1,000.

He follows up on my post of yesterday about LifeLock, discussing a company called Integrity which insures identities in Second Life. Or, actually, insures any lawsuits resulting from "inappropriate content", whatever that is.

Then he gets to the real question:

How viable is this model? The first thing would be to ask: can’t we fix the underlying problem? For identity theft, apparently not, Americans want their identity system because it gives them their credit system, and there aren’t too many Americans out there that would give up the right to drive their latest SUV out of the forecourt.

On the other hand, a potential liability issue within a game would seem to be something that could be solved. After all, the game operator has all the control, and all the players are within their reach. Tonight’s pop-quiz: Any suggestions on how to solve the potential for large/class-action suits circling around dodgy characters and identity?

If Insurance is the Answer to Identity, what’s the Question?, Iang, Financial Cryptography, September 11, 2007

This wraps right around to the original reaction of the person from whom I heard it (hi, Anne Marie) on a list that is silent.

I have several thoughts about this:

Continue reading

Are You Ready for Some Football Storm?

What do you do with the world’s fastest supercomputer? Use it to follow football, of course!
Today we started seeing new Storm mails and the web pages changed layouts completely. Now the theme is National Football League (NFL) which is timely considering the 2007 NFL season started on the 6th of September. The website even has the correct score, statistics, and schedule information.

Storm and NFL, by Patrik, F-Secure Weblog, Sunday, September 9, 2007

It’s sort of like gambling on the game; gambling that some suckers will think the site is legit.

-jsq

PS: Seen on Fergie’s Tech Blog.

Aged Old Code

Old wine or whisky can become more complex and interesting. Old code becomes insecure:
Or at least become more vulnerable. I’ve recently been helping a client with their secure coding initiative and as a result I’ve been reading Mike Howard and Dave LeBlanc’s Writing Secure Code which reminded me of an important aspect of maintaining a secure code base which often gets overlooked: That is that as code ages it becomes insecure.

Evolve or Die, by arthur, Emergent Chaos, August 29, 2007

The state of the art in discovering vulnerabilities advances. I remember when nobody worried much about buffer overflows. Related to that, programs get used in environments they weren’t written for. Who really cared about buffer overflows on the early Internet, when just getting it working for a few researchers was the goal? Also related, the number of people motivated to break code keeps increasing, especially those with monetary motivation. “With enough eyes, all bugs are shallow” also means that with enough eyes all vulnerabilities become easy to find. Or, in this postmodern world, even computer programs are largely what people perceive them to be, and those perceptions change.
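
To make that concrete, here is a small, purely illustrative sketch (mine, not from the quoted post) of code aging into insecurity without a single line changing: unsalted MD5 password hashing was routine when it was written and is a finding on any audit today, while the second function reflects current expectations.

```python
import hashlib
import os

def hash_password_1999(password: str) -> str:
    # Unsalted MD5: unremarkable when it was written, trivially cracked now.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_today(password: str) -> bytes:
    # Salted, deliberately slow key derivation (PBKDF2-HMAC-SHA256),
    # the kind of thing reviewers now expect to see instead.
    salt = os.urandom(16)
    return salt + hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
```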

For example, Jeff Pulver perceives Facebook’s video messages as a videophone. How long before somebody perceives them as a phishing method? Where there’s humans there’s humint.

-jsq

Outrage: Less and More

We’ve been discussing Outrage Considered Useful. Alex remarked in a comment:

The term "Outrage" suggests that risk cannot or should not be discussed in a rational manner.

What I think Sandman is getting at is that often risk isn’t discussed in a rational manner, because managers’ (and security people’s) egos, fears, ambitions, etc. get in the way. In a perfect Platonic world perhaps things wouldn’t be that way, but in this one, people don’t operate by reason alone, even when they think they are doing so.

Outrage x Hazard may be a means to express risk within the context of the organization, but I like probability of loss event x probable magnitude of loss better for quantitative analysis.

Indeed, quantitative analysis is good. However, once you’ve got that analysis, you still have to sell it to management. And there’s the rub: that last part is going to require dealing with emotion.
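
For what it’s worth, the quantitative formula Alex prefers is easy to sketch. The scenarios and numbers below are invented purely for illustration, not drawn from any real analysis.

```python
# Expected annual loss = probability of loss event x probable magnitude of loss.
scenarios = {
    # name: (estimated annual probability, probable magnitude in dollars)
    "stolen laptop with unencrypted customer data": (0.30, 250_000),
    "web application breach exposing card numbers": (0.05, 1_200_000),
    "insider misuse of administrative credentials": (0.02, 2_000_000),
}

for name, (probability, magnitude) in scenarios.items():
    expected_loss = probability * magnitude
    print(f"{name}: expected annual loss ~${expected_loss:,.0f}")
```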

Continue reading

Outrage Considered Useful

There’s a bit of comment discussion going on in Metricon Slides, and Viewed as PR about counting vs. selling, in which the major point of agreement seems to be that even at a metrics conference there weren’t a lot of metrics presented that were strategic and business-like.

Let’s assume for a moment that we have such metrics, and listen to Peter Sandman, whose website motto is Risk = Hazard + Outrage:

Sometimes, of course, senior management is as determined as you are to take safety seriously. And sometimes when it’s not, its reservations are sound: The risk is smaller than you’re claiming, or the evidence is weak, or the precautions are untested or too expensive. But what’s going on when a senior manager nixes your risk reduction recommendation even though you can prove that it’s cost-effective, a good business decision? Assume the boss isn’t too stupid to get it. If the evidence clearly supports the precautions you’re urging, and the boss isn’t dumb, why might the boss nonetheless have trouble assessing the evidence properly?

As a rule, when smart people act stupid, something emotional is usually getting in the way. I use the term “outrage” for the various emotion-laden factors that influence how we see risk. Whether or not a risk is actually dangerous, for example, we are all likely to react strongly if the risk is unfamiliar and unfair, and if the people behind it are untrustworthy and unresponsive. Factors like these, not the technical risk data, pretty much determine our response. Risk perception researchers can list the “outrage factors” that make people get upset about a risk even if it’s not very serious.

The Boss’s Outrage (Part I): Talking with Top Management about Safety by Peter M. Sandman, The Peter Sandman Risk Communication Web Site, 7 January 2007

He goes on to outline several reasons management might get upset.

Continue reading

Count ‘Em All By Hand

I admire Matt Blaze, and I only hope he was being sarcastic in the entire post in which, after pointing out that California just decertified three major voting machine manufacturers due to massive security problems, he wrote:
How to build secure systems out of insecure components is a tough problem in general, but of huge practical importance here, since we can’t exactly stop holding elections until the technology is ready.

The best defense: Ad hominem security engineering, by Matt Blaze, Exhaustive Search, 6 August 2007

Well, yes, yes we can. Continue reading

Metricon: Puzzle vs. Mystery

Here at Metricon 2.0 there are many interesting talks, as expected.

For example, Russell Cameron Thomas of Meritology mentioned the difference between puzzle thinking (looking only under the light you know) and mystery thinking (shining a light into unknown areas to see what else is out there). Seems to me most of traditional security is puzzle thinking. Other speakers and questioners said things in other talks like "that’s a business question that we can’t control" (literally throwing up hands); we can only measure where "we can intervene"; "we don’t have enough information" to form an opinion, etc. That’s all puzzle thinking.

Which is unfortunate, given that measuring only what you know makes measurements hard to relate to business needs, hard to apply to new, previously unknown problems, and very hard to use to deal with problems you cannot fix.

Let me hasten to add that Thomas’s talk, entitled "Security Meta Metrics—Measuring Agility, Learning, and Unintended Consequence", went beyond these puzzle difficulties and into mysteries such as uncertainty and mitigation.

Not only that, but his approach of an inner operational loop (puzzle) tuned by an outer research loop (mystery) is strongly reminiscent of John R. Boyd’s OODA loop. Thomas does not appear to have been aware of Boyd, which maybe is evidence that by reinventing much the same process description Thomas has validated that Boyd was onto something.
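
A rough sketch of how I read that double-loop structure (my own rendering, not code from Thomas’s talk): the inner loop scores the metrics you already have, and the outer loop occasionally revises the metric set itself when the results look surprising. All names and thresholds here are assumptions for illustration.

```python
def operational_loop(metrics, observations):
    # Inner "puzzle" loop: apply the measurements we already know how to make.
    return {name: fn(observations) for name, fn in metrics.items()}

def research_loop(metrics, scores, surprise_threshold=0.2):
    # Outer "mystery" loop: when results look surprising, revise what we measure.
    revised = dict(metrics)
    if any(score > surprise_threshold for score in scores.values()):
        # Hypothetical new probe, added because existing metrics behaved unexpectedly.
        revised["unclassified_event_rate"] = (
            lambda obs: obs.count("unknown") / max(len(obs), 1)
        )
    return revised

# Usage sketch with made-up event labels.
metrics = {"known_malware_rate": lambda obs: obs.count("malware") / max(len(obs), 1)}
observations = ["malware", "unknown", "unknown", "benign"]
scores = operational_loop(metrics, observations)   # {'known_malware_rate': 0.25}
metrics = research_loop(metrics, scores)           # adds the new probe
```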

-jsq

ROI v. NPV v. Risk Management

There’s been some comment discussion about security ROI. Ken Belva’s point is that you can have a security ROI, to which I have agreed (twice). Iang says he’s already addressed this topic, in a blog entry in which he points out that
Calculating ROI is wrong, it should be NPV. If you are not using NPV then you’re out of court, because so much of security investment is future-oriented.

ROI: security people counting with fingers? Iang, Financial Cryptography, July 20, 2007

Iang’s entry also says that we can’t even really do Net Present Value (NPV) because we have no way to calculate or predict actual costs with any accuracy. He also says that security people need to learn about business, which I’ve also been harping on. I bet if many security people knew what NPV was, they’d be claiming they had it as much as they’re claiming they have ROI. Continue reading
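
For readers who, like me, suspect some of us would wave NPV around as readily as ROI, here is a minimal sketch of the difference, with every number invented for illustration: $50,000 spent up front on a control expected to avoid $20,000 of losses a year for four years, discounted at 10%.

```python
def simple_roi(cost, annual_benefit, years):
    # Naive ROI: total undiscounted benefit minus cost, over cost.
    return (annual_benefit * years - cost) / cost

def npv(cost, annual_benefit, years, discount_rate):
    # NPV: future avoided losses are worth less than money spent today.
    return -cost + sum(
        annual_benefit / (1 + discount_rate) ** t for t in range(1, years + 1)
    )

cost, benefit, years, rate = 50_000, 20_000, 4, 0.10
print(f"Simple ROI: {simple_roi(cost, benefit, years):.0%}")          # 60%
print(f"NPV at {rate:.0%}: ${npv(cost, benefit, years, rate):,.0f}")  # about $13,400
```

The gap between the two numbers is Iang’s point about future-oriented investment; his further objection, that we can’t reliably predict the avoided losses in the first place, is untouched by either formula.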

Security ROI: Possible, but Not the Main Point

Many people have argued about or wondered whether information security can have a computed Return on Investment (ROI). The man who co-wrote the book on ROI, Managing Cybersecurity Resources: A Cost-Benefit Analysis, says yes, it’s possible, but “maximizing the ROI (or IRR [real economic rate of return]) is, in general, not an appropriate economic objective.” What, then?
Rather than trying to derive the ROI of security investments, a much better strategy is to work on the related issues of deriving an optimal (or at least desirable) level of information security investments and the best way to allocate such investments. This strategy is the focus of the Gordon-Loeb Model (for a brief summary of the focus of this model, and a link to the actual paper, go to http://www.rhsmith.umd.edu/faculty/lgordon/Gordon%20Loeb%20Model%20cybersecurity.htm).

Email from Dr. Lawrence Gordon: Security ROI possible but not optimal, use other metrics, Posted by Kenneth F. Belva, bloginfosec.com, 18 July 2007

Belva reads the recommended paper and finds it to say:
The Gordon-Loeb Model also shows that, for a given level of potential loss, the optimal amount to spend to protect an information set does not always increase with increases in the information set’s vulnerability. In other words, organizations may derive a higher return on their security activities by investing in cyber/information security activities that are directed at improving the security of information sets with a medium level of vulnerability.
From which Belva concludes that “we do understand Information Security to have a return.” Well, yes. Continue reading
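
As a hedged illustration of the result quoted above, here is a toy search for the spend level that maximizes expected net benefit at several vulnerability levels. The breach-probability function is an assumed stand-in in the spirit of the model, not necessarily the one Gordon and Loeb use, and all dollar figures are invented.

```python
def breach_probability(spend, vulnerability, alpha=1e-5):
    # Assumed form: spending lowers breach probability, with diminishing returns.
    return vulnerability ** (alpha * spend + 1)

def optimal_spend(vulnerability, potential_loss, max_spend=200_000, step=1_000):
    # Grid search for the spend maximizing expected net benefit:
    # avoided expected loss minus the spend itself.
    best = (0, 0.0)
    for z in range(0, max_spend + step, step):
        avoided = vulnerability * potential_loss * (1 - breach_probability(z, vulnerability))
        if avoided - z > best[1]:
            best = (z, avoided - z)
    return best

for v in (0.1, 0.5, 0.9):
    spend, net = optimal_spend(v, potential_loss=1_000_000)
    print(f"vulnerability {v:.1f}: optimal spend ~${spend:,}, net benefit ~${net:,.0f}")
```

Under these assumptions the medium-vulnerability information set attracts the investment while the low- and high-vulnerability ones get little or none, which is the flavor of the non-monotonic result Belva quotes.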