Sunday, December 15, 2013

Who's driving this bus?

I saw a brief interview excerpt in a friend's Facebook post:  Carl Sagan interview.  I found the full interview here.  It got me thinking about what I do on a daily basis.  As a security consultant I am forever struggling to explain computer concepts and computer security concepts to clients, with the goal of eliciting an informed, risk-based decision on whatever issue I'm working on for them.  I'm constantly amazed at how little people understand, or even want to understand, about computers and computer security.

A day in the life


I overheard a conversation at work on Friday where one of the telecom people was working an incident: an employee had attempted to dial into a conference call and was asked to enter their social security number for access or verification or some such thing.  After the SSN went in, the system immediately hung up.  The person dialed back and got the normal teleconference prompt asking for the conference code.  Aside from the combination of simple tricks used to fool this person into divulging private personal information, the fact remains that far too many people immediately disengage their brains when their hands touch a keyboard.  They instantly assume that they know nothing about computers and don't bother to even try to think.  They simply react like rodents in an experimental lab pressing a button for a food pellet.  I have many stories like this one.

Now it is easy to rationalize my employment by saying that "if it weren't for people like that I wouldn't have a job."  Many people are fond of reminding me of this.  It is painfully true, and it pains me to admit as much.  However, when the majority of the computer-using public is so under-educated about computers and technology, and more importantly about computer security, it is extraordinarily dangerous.  Think for a few moments about how much of our lives is ruled by ones and zeros.

Road Trip!


Take a virtual trip with me to the grocery store as an example.  Simply getting into my car, I encounter several bits of technology that can be vulnerable to attack.  My key fob transmits a signal that unlocks my car door; that signal can be vulnerable to a replay attack.  Many cars have navigation systems.  Mine doesn't, but I use my phone for that, and we'll deal with phones separately.  A nav system has a cache, which can be read.  Where was I last, and when?  My house, my work?  You could certainly extrapolate when I'm not at home and how long it takes me to get to work.  Smartphones (remember, I don't have a nav system) are basically small versions of laptops, with many issues of their own: my calls, my locations, my contacts, my emails.  The list goes on.  What about the black box in my car, which insurance companies have quietly developed tremendous skill in decoding?  It was originally meant for the car makers and mechanics, but insurers have become quite the skilled hackers of these things.  Even the run-flat tire detection system in my car has recently been theorized to have vulnerabilities.  Of course there is all the tracking that government agencies do with my electronic toll pass, beyond the toll collection they tell you about.  Remember I said we were going to the store?  We haven't left my driveway yet.
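
To make the replay-attack point concrete, here is a minimal sketch contrasting a fixed unlock code, which an attacker can record and replay forever, with a simplified rolling-code check.  It is illustrative only; real key fob systems are considerably more involved, and the shared secret, counter handling, and function names here are my own assumptions.

```python
# Hypothetical sketch: fixed unlock codes are replayable, rolling codes are not.
import hmac
import hashlib

SECRET = b"shared-fob-secret"  # assumption: fob and car share this key

def fixed_code_unlock(captured_signal, car_expected=b"OPEN-1234"):
    """Fixed-code fob: whatever was recorded off the air works forever."""
    return captured_signal == car_expected

def rolling_code(counter: int) -> bytes:
    """Fob derives each transmission from the shared secret and a counter."""
    return hmac.new(SECRET, str(counter).encode(), hashlib.sha256).digest()

class Car:
    def __init__(self):
        self.counter = 0  # highest counter value accepted so far

    def try_unlock(self, counter: int, code: bytes) -> bool:
        # Reject replays: the counter must move forward, and the code
        # must match what the shared secret predicts for that counter.
        if counter <= self.counter:
            return False
        if not hmac.compare_digest(code, rolling_code(counter)):
            return False
        self.counter = counter
        return True

# An attacker replaying a sniffed fixed code succeeds every time...
print(fixed_code_unlock(b"OPEN-1234"))        # True

# ...but replaying a sniffed rolling-code transmission fails once used.
car = Car()
print(car.try_unlock(1, rolling_code(1)))     # True  (legitimate press)
print(car.try_unlock(1, rolling_code(1)))     # False (replay rejected)
```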

Ok, down the road we go


At the end of my neighborhood is a stop light, controlled by a computer, with a sensor so emergency vehicles can turn it red for cross traffic, sometimes red for all traffic, or sometimes green for themselves.  I can't imagine this is any less vulnerable to exploit than any other system made by humans.  Well, actually, I know these systems are vulnerable.  Along the way we pass a public park with sports fields; its lighting systems are computer controlled from a remote location several states away, maybe even in another country.  A few more traffic lights and we are at the store.  Before we even get into the store I'd point out the video surveillance cameras.  Let's head inside.  More cameras.  Lighting systems, cooling systems, heating systems, fire alarm systems, backup power systems...all computer controlled and likely capable of calling for help in the event of a system failure or alarm.  That's just the basics that run the place.  Of course there will be a panic alarm and security alarms in the event of robbery.  Then there are the systems for tracking and managing inventory, and computer networks designed to place orders to suppliers and distributors all over the country.  Coupons, cash, loyalty discounts, credit card transactions, instant coupons based upon my shopping habits, bar codes...all managed by computers and ALL vulnerable to compromise.  For the sake of this discussion, we'll ignore that my grocery store has a bank in it, but consider the more general case of another organization having a presence in the store: a coffee shop, pharmacy, cleaner, fast food counter, or florist.  Each of those is a third-party connection to that organization's infrastructure, possibly sharing the store's connectivity or even services like loyalty programs or credit card authorization.  My grocery store has wireless too.  Oh, and let's not even get into the electrical grid that powers all this stuff.  What fun!  Dizzy enough yet?

What could go wrong?


This is just a typical trip to one store, but all of these systems along the route are vulnerable to compromise, whether physically, over the air, or over the wire.  Yet most people go about their lives quite oblivious to the technological near-disaster that looms all around them every day.  The reasoning many fall back on is, "well, it works, so it must be safe, right?  And I'm sure they have technical people addressing all those security issues."  The good news is there are some very competent and skilled people doing just that.  The bad news is there simply aren't nearly enough of them to go around.  The worse news is there are even more who don't have a clue that there is a problem with many, or any, of these systems.  And worse still, there are those who refuse to acknowledge the problems exist, even when presented with very strong evidence and expert advice.  If someone doesn't think there is a problem, how likely do you think they are to try to fix it...or even monitor for bad things caused by bad people?

I'm not a huge fan of twelve-step programs, but I will cite a piece of their wisdom: the first step to recovery is acknowledging that you have a problem.  And people...Sagan was dead-on-balls accurate.  We live in a society driven by technology that few understand, especially when it comes to security risks.  We all need to learn more and demand more from ourselves.  We need to demand more from the people who make the products and technology.  Finally, and most importantly, we need to demand expertise and accountability from the regulators and lawmakers so that it is impossible to ignore the dangers.

Thursday, December 5, 2013

To do a great right, do a little wrong...

The oft-quoted Shakespeare play, The Merchant of Venice, Act 4, Scene 1, leads the commentary today.

It began with Edward Snowden releasing details of the NSA's classified and all-encompassing monitoring program.  As more and more details of this program continue to be revealed, I find it impossible to believe that any part of the government, in aggregate or individually, maintains oversight of the NSA's activities.  If they are operating in any way, shape, or form outside of the government's oversight, they are, by definition, breaking the law, even if they are simply lying by omission.

Having done many security assessments of organizations much smaller than the NSA, I routinely find volumes of surprising details that few, if anyone, knew about prior to the assessment.  Rules that were assumed to be in place and protecting the organization, but are not.  Commonplace.  So why would it be any different in government?  Agencies routinely operate under more pressing budgetary constraints than normal businesses.  You could likely argue, successfully, that they waste much more of that money as well, so whether the money is never there or is wasted away, the effect is the same.  And with less budget come fewer people available to do the work that should be done.

Time management meets IT process


Time management theory maintains that a task that can be done at any time shall be done at no time.  You can extend this precept to IT jobs and their related tasks: if it isn't someone's specific job to do, it will not get done.  Further, good security practice, including the practice of granting and renewing security clearances, mandates that the requester of a security action can never be the approver of that request.  No self-approval.
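
As a sketch of what "no self-approval" looks like when it is actually enforced in a system rather than in a policy document, here is a minimal, hypothetical example; the workflow, field names, and roles are invented for illustration.

```python
# Hypothetical sketch: separation of duties enforced in code, not by trust.

class SelfApprovalError(Exception):
    pass

def approve_request(request: dict, approver: str) -> dict:
    """Approve an access request only if the approver is a different person."""
    if approver == request["requester"]:
        raise SelfApprovalError(
            f"{approver} cannot approve their own request for "
            f"{request['resource']}"
        )
    return {**request, "status": "approved", "approved_by": approver}

request = {"requester": "analyst_a", "resource": "clearance-renewal"}

print(approve_request(request, approver="supervisor_b"))  # a second party: fine

try:
    approve_request(request, approver="analyst_a")         # self-approval
except SelfApprovalError as err:
    print("rejected:", err)
```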

"NSA, are you doing things that are on the up and up?"
"Yes, we are."

"NSA, do I want to know what you are doing?"
"No, you don't."

That's self-approval and it is a fundamentally flawed security concept.  Any security practitioner will tell you that when you break common security best practices, bad things happen.  If transparent, repeatable, auditable, and, most important, sensible security processes are not in place, you have no security.  You may sleep well at night because someone told you things are fine, but consider this question:

"NSA, that evidence in your database says Joe did something wrong.  Are you sure?"
"Yes."
"Can I see the evidence?"
"No, just trust us.  It's true."

You'll forgive me if I wish to see the proof *couMADOFFgh*, and the chain of custody (audit trail) that shows how the database entry got there.  I've touched on the topic of 'trust but verify' in previous blogs elsewhere, but I'll not cite the source for personal reasons.
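
For the curious, a chain of custody in this sense can be as simple as a tamper-evident, append-only log.  The sketch below is a minimal, hypothetical illustration of hash-chaining entries so that any quiet after-the-fact edit breaks verification; it is not any particular product's format.

```python
# Hypothetical sketch of a tamper-evident audit trail via hash chaining.
import hashlib
import json
import time

def _entry_hash(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditLog:
    def __init__(self):
        self.entries = []

    def append(self, actor: str, action: str, target: str) -> None:
        prev = self.entries[-1]["hash"] if self.entries else "GENESIS"
        record = {"ts": time.time(), "actor": actor,
                  "action": action, "target": target}
        self.entries.append({"record": record,
                             "hash": _entry_hash(prev, record)})

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted entry breaks it."""
        prev = "GENESIS"
        for entry in self.entries:
            if entry["hash"] != _entry_hash(prev, entry["record"]):
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("analyst_x", "INSERT", "db.joe.flag")
log.append("analyst_y", "REVIEW", "db.joe.flag")
print(log.verify())                            # True

log.entries[0]["record"]["actor"] = "nobody"   # a quiet after-the-fact edit
print(log.verify())                            # False: the chain exposes it
```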

Five billion mobile device records

One of the hot stories today is more information from the Snowden data saying the NSA is absorbing five billion mobile device records of geolocation data and call correlations daily.  Having worked in a security operations center monitoring far fewer than hundreds of millions of endpoints, I can attest that whenever we determined an incident was occurring, or changed the rules that monitor all that data, we made sure our logic was sound.  We used peer review in the open-source meaning of the term, so that our own viewpoint of one possible way to filter the facts didn't cloud what we were trying to see.  Others could and would weigh in on whether our solution was likely to deliver an accurate view.  The NSA sees what they want to see and are true believers, a notably dangerous mindset to rely on for logical analysis when it is your only measure.  Many times in security we find curious things.  It is far better to maintain an open mind than to instantly 'know' the answer before you have all the facts.

“The most elementary and valuable statement in science, the beginning of wisdom is ‘I do not know’.  I do not know what that is.” Mr. Data, ST:TNG

This is something true believers cannot do.  Without oversight (peer-review), we'll never know if their conclusions are correct.  ...And for the record, we should not trust that they are deleting the records that have no value.  A good security practitioner would have audit records to prove it and not simply say 'trust me.'
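
To put the peer-review point from the SOC discussion in concrete terms, here is a minimal, hypothetical sketch of a detection rule written down as code alongside the kind of checks a reviewer would run against it; the threshold, field names, and sample events are made up for illustration.

```python
# Hypothetical sketch: a detection rule plus the cases a peer reviewer checks.

def failed_login_alert(events, threshold=5, window_seconds=60):
    """Alert if one source IP has too many failed logins inside the window."""
    by_ip = {}
    for event in events:
        if event["outcome"] != "failure":
            continue
        by_ip.setdefault(event["src_ip"], []).append(event["ts"])
    alerts = []
    for ip, times in by_ip.items():
        times.sort()
        for i in range(len(times)):
            inside = [t for t in times[i:] if t - times[i] <= window_seconds]
            if len(inside) >= threshold:
                alerts.append(ip)
                break
    return alerts

# The reviewer's questions live here: does it fire when it should,
# and stay quiet when it shouldn't?
noisy = [{"src_ip": "10.0.0.9", "outcome": "failure", "ts": t} for t in range(5)]
quiet = [{"src_ip": "10.0.0.7", "outcome": "failure", "ts": t * 300} for t in range(5)]
assert failed_login_alert(noisy) == ["10.0.0.9"]
assert failed_login_alert(quiet) == []
print("rule behaves as reviewed")
```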

In The Merchant of Venice, Shylock got burned because he was so focused on proving a specific point that he lost track of the big picture.

It worked before...


The NSA continues to argue that their methods work in catching the bad guys, but they make such claims without proof.  "Trust us, we caught them before they did something bad."  Can you prove it? "We can't comment on existing legal cases..."

In closing, I'll leave you with two of my favorite quotes (both from the same paper):
“The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence.” – R. P. Feynman

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” – R. P. Feynman

Tuesday, December 3, 2013

Offensive Security - to play or not to play?

It has been quite a while since I have written, after being accustomed to posting weekly or even twice weekly in other blogs.  Well, suffice it to say that not much has been moving me to write lately.  That is not to say that there haven't been things happening in security.  There have been happenings, but it has all seemed so banal.  Maybe it is just that the noise has risen so high that the constant din makes it seem that nothing is happening.  Signal acquired.

I watched a show called Chasing Madoff and it was once again one of those moments where several major concepts coalesced in my mind to form an article.  Something worth saying.

One of the many compelling points of the Chasing Madoff program was the fact that the SEC was handed an investigation wrapped up with a bow on it and on numerous occasions flatly refused to do its job.  They refused to investigate when all they had to do was confirm the facts that were documented for them.  The idea then struck me with clarity...why should they?  They have no real incentive and face no recourse if they don't do their job.  Well, you could argue that what did happen could happen: global markets collapse, governments go bankrupt.  Of course, if you are lucky (or unlucky, depending on your point of view) enough to be a government employee, you still get paid even when the government shuts down; whether you do your job or you don't, the check still comes.  We've seen this very recently.

"The average person should be aware that when you get a brokerage statement it is only a piece of paper representing what that person thinks they own." -- quote from former Madoff Account holder #41-245711, from Chasing Madoff.

This is a personal point that I have made to people in deep conversations about the macro implications of computer security for many, many years.  The vital numbers we rely on to live our daily lives are simply bits and bytes in a computer database.  Whatever those bits and bytes say, people believe.  Today the database says you have $1000 in your checking account; tomorrow it says you are overdrawn by $500.  Even though you did nothing, you are suspect and have to prove your innocence.  Intelligent people willingly acknowledge this, but fail to really absorb what it means.

So what does it mean?  I would suggest that, at a minimum, you doubt just about every claim that the system works as long as you study hard and play by the rules.  There exists overwhelming evidence that rich people are rich because they cheated the system at some point, or are actively cheating it to gain or maintain their wealth.

By now you are certainly saying...so this is a blog about security, right...why the remedial history lesson on finance and wealth?

What's changed since then? (and what does it have to do with security?)


Having had numerous personal experiences, and others I call 'near-miss' experiences, with regulators, I can say this kind of willful ignorance goes on every day.  The Madoff incident was not a freak occurrence.  It is an everyday happening in nearly every industry I have seen that has government regulatory controls.  It is obvious that no lessons have been learned from the Madoff incident, because the system remains, for all intents and purposes, intact and without any noticeable change.

If the designated protectors refuse to protect you and do the job that they are supposed to do, what options do you have?

I was asked the following question in a job interview recently:  "What is your stance on active defense?  How do you feel about attacking those that attack you [in cyberspace]?"  Aside from the fact that it was definitely one of the toughest questions I have EVER been asked in a job interview, it is also a very pressing issue that involves all security practitioners today.  Much more so than I had even thought about up until that day.

Essentially, if there are those who refuse to play by the rules, how far is it reasonable to push beyond those same accepted norms in retaliation, using like weapons and tactics?

Until I actually spent many hours pondering this very question, I had dismissed it with a simple answer: 'not smart, too high a risk.'  For the record, this was not the answer that I gave that day in the interview.  Prior to that day, however, there just seemed to be too much to lose if you were an organization of any size: legal repercussions, impact to reputation, regulatory response...

Risk...what risk?


Hold it right there....  Regulators?  They are virtually useless, so there goes that risk.  So what about the other risks?

Legal?  What is an evil government going to do if you strike back at their theft of your intellectual property or data?  Surely they would mount a legal claim that you attacked them after they stole your information?  Ok, that is out.  Would you tell someone that they stole your data, that you are attacking them back, and that your response is illegal?  No, that won't work either.  Where could they possibly file such a claim?  No single country has proven legal jurisdiction over another in such cross-border matters, especially when it comes to the very fuzzy area of cyberweapons and attacks.  People don't care about a cyberattack even remotely as much as a real bomb going off.  Which is to say that if it isn't a real bomb in their back yard, they just don't care.  So there goes the legal risk.  Well, that leaves issues of risk to reputation.

If you are in a situation where you are considering striking back at someone for stealing valuable data, you are already through the looking glass when it comes to reputation.  Maybe you have an obligation to report that you lost some personal data, but that is about where this concern ends.  The marketing snow-job machine takes over and whitewashes away most of your worries.  A bit of credit protection here, a vague press release there, and you are virtually absolved.

If traditional issues of risk are gone, what's left?


Well, once you get rid of the traditional business risks, there is nothing left but battlefield tactical and strategic risk.  What remains are troop and weapon strength assessments coupled with the defensive strength of your position and posture.  Very few security practitioners have experience with such assessments, so your traditional trusted advisers are likely to be far outside of their element here.  Aside from the tactical battle assessment, you would need a strategic assessment as well.  How long is this fight likely to last?  Will it escalate, or will the adversary go find some low-hanging fruit elsewhere?  Is your adversary's weapon skill limited to long-development research weapons, or are they able to fashion improvised weapons and use guerrilla tactics?  This is getting into the deep weeds quickly.

If you choose this course and properly recognize that the traditional business rules of risk simply do not apply, you cannot be so foolish as to think that no rules of risk apply.

And always remember the final question of traditional war as it is quite applicable here....does anyone really win?

Thursday, October 3, 2013

The security implications of the US Government shutdown

Well, after much encouragement from friends and colleagues, I have created a new home in this blog.  Welcome to the inaugural edition!

What better to tackle than one of my favorite targets of high noise and low signal? Let's take a slightly different look at the government and their latest schoolyard sandbox silliness: their inability to agree on a budget, which shut down the government.  In all honesty, I didn't think they would get to this state.  I guessed they would cut a last-minute deal and work out their differences.  Well, like five-year-olds at the playground, they have quit playing together, picked up their marbles, and gone home.  Of course, they still get paid, which is a crime in its own right, once again showing that they are willing to hurt lots of people to prove their private points as long as they are not among those truly affected.

Ok, all that aside, I didn't want to write about that issue in particular.  As those of us who have worked in the government contracting infosec space will attest, the heavy lifting of the government's computer security business is done by contractors, who are now out of work, as far as I know.  I have had clients' government intel feeds announce they are shutting down due to the budgetary shutdown, so I'm extrapolating a bit here.  I'm sure there may be a few projects that have some secret or otherwise protected funding, but I doubt they could or would talk about it either way.  Ponder both of those possibilities for a few seconds....