Thursday, July 17, 2014

Implications of Quantum Capabilities, TAO, and other nasty tricks

From the 'just in case you weren't paying attention' file...I know I haven't been keeping up on my reading for quite some time.

Original source article here.

A comprehensive internal presentation titled "QUANTUM CAPABILITIES," which SPIEGEL has viewed, lists virtually every popular Internet service provider as a target, including Facebook, Yahoo, Twitter and YouTube. "NSA QUANTUM has the greatest success against Yahoo, Facebook and static IP addresses," it states. The presentation also notes that the NSA has been unable to employ this method to target users of Google services. Apparently, that can only be done by Britain's GCHQ intelligence service, which has acquired QUANTUM tools from the NSA.
 ...and, of course, Bruce Schneier clued me in that I missed it.  And because the article that led me to that one is hugely important too...here it is.

Now, yesterday I wrote about why 'they' do it, with 'they' being a reference to a certain group of bad guys.  The question today is...well, not why the NSA does it, we know the answer there, but rather how certain are you that they (the NSA) are only doing this stuff to the bad guys?  Because, to be honest, a lot of the monitoring tools sound like they are targeted at normal citizens, or at least at the widely used internet services that vast segments of the internet citizenry frequent.  The sites on that list, and the tools to exploit them, are not the domain of bad guys alone, but of regular people all over the world.  While I'm sure that people with bad intentions use those sites too, I would expect a bit more cloak and dagger than just hiding in all the noise in plain sight (or site?) on Facebook, Yahoo, Twitter and YouTube.  Then again, I suppose it is easier to poison the watering hole than to track the 'critters' as they move through the woods.

I think one of the most worrying aspects of this type of information is that while my peers and colleagues talk about this vulnerability or that vulnerability, there exists a whole host of exploits and tricks that we DON'T know about.  In fact, that even the vendors don't know about, as opposed to quietly knowing about and working on a fix they haven't yet mentioned publicly.  And it is not just governments keeping these things secret; it is a certainty that the bad guys have their own bags of tricks they are not keen to share (but are very willing to sell).

Other people making choices for me...


Now, this list of compromises got me thinking, and I noticed that on Facebook the videos people share of cute and funny things have started playing automatically.  I used to have to click on something to make it play, which I was happy with.  I really hate that someone else, at some point in time, decided that I automatically want to see every video of a cat or dog doing something odd, strange, cute, or funny.  I did learn that you can turn this functionality off, by the way.  Still, the assumption that you want to be opted in unless you specifically opt out is maddening.

There are a myriad of reasons why I may not want to drink from the massive bandwidth firehose that characterizes many popular sites these days; first among them is that I don't trust every bit of eye-candy left out there, and this list of government tools and capabilities is a big part of why.  An old trick by bad guys is to leave something out in the open that lures you to interact with it, and suddenly the trap is sprung.  Greek story of the Trojan horse, anyone?  Variations of this trick come in all forms.  Think of a vendor booth at a conference with a fishbowl of free USB memory sticks...complete with a chunk of stealth malware to infect your system when you plug one in.  It's an old trick that, measured on today's internet-speed timeline, is almost certainly a derivation of Ludicrous Speed.

Internet warning labels anyone?


I really like the trend in various states' legislation that requires the caloric content of restaurant menu items to be posted with the item itself.  It allows me to make a choice.  Now, obviously, like most people, I may choose to have that high-calorie dessert once in a while, but at least I know the implications of my choice.

We really need some legislation to require the choices be left to the individual when it comes to internet content...maybe some warning language like on cigarette cartons.  "This link cannot be guaranteed to be safe.  Clicking it may have dire consequences, including allowing your government or a foreign government or an evil hacker organization to follow your every move."  I would have no problem with any elective setting to turn off such warnings and allow all content to flow automatically based upon user choices.  User choice being a key concept here.

Of course, all these government tools and compromises could be a major part of the reason why we don't have such legislation...heck, they could even rig the polls that might sample public opinion as to whether we feel it would be a good idea or not.

Monday, July 14, 2014

Why do they do it?

Well, a completely different source from my usual dose of NPR got me to scratch my head and inspired me to write today.  I was reading a slightly older post from a colleague at work who shared a link to an article...and began to think that there was much more to the subject than was being discussed.

The article from mid-June about why Russian hackers are so good is here.

One point that is very much missed is the simple fact that the good guys have to be right all the time; the bad guys only have to be right once.  That certainly slants the numbers in your favor if your failures are basically ignored and only your successes count.  A very simple point, but consider this too: every country in the world could have iron-clad security protection laws, yet one does not.  As long as bad guys have a safe harbor of their own to ply their craft, they will operate with impunity from that base of operations, like the pirates of the 17th century that sailed the turquoise blue waters of the Caribbean.  Granted, a world where only one country has less than iron-clad laws is slightly shy of ideal, and unrealistic besides.  The reality is that anywhere in the world where economic disparity exists, there exist opportunities for money to be made by hook or by crook.  This lends a Robin Hood-like charm to those that would steal from the 'rich' and give to the 'poor.'  It also gives a voice to those that see themselves as akin to Robin Hood, and makes those that would otherwise play the role of the Sheriff of Nottingham less likely to enforce the laws, if any such exist, and to care much less than they might otherwise be inclined.

The Enemy of My Enemy


The Chinese have a saying, "the enemy of my enemy is my friend."  If there is a country that has my country under its economic or military thumb, how eager might I be to do anything other than encourage, albeit quietly, some computer hacker that is stealing from my enemy or causing them economic heartache?  Simple question, huh?  If I don't like my neighbor and you are stealing from my neighbor's house, why would I care?  Ok, maybe in good conscience you might care a little, but what if your neighbor was a rich, pompous jerk that did nothing but jump up and down and shout about how awesome they are, and it really sucked to have to live near them and see that all the time, and no one liked him or her?  Would you care then?  Not so much, huh?

Let's take that a step further, what if this horrible excuse for a human was your neighbor and this person stealing their stuff was selling it real cheap at the swap meet? And some other less fortunate people in your neighborhood were able to buy some of this stuff for cheap and have a better life...would you be so quick to cry foul and demand that your local lawmakers or law enforcers do something to try to stop it? Dumb question, huh?

Certainly there are lots of historic reasons why a group of people becomes practiced at what might otherwise be considered questionable skills when they are fighting against an oppressor to survive.  Without being too controversial (what? no controversy...I'm outta here.), I'll point to the US Revolutionary War as one easy point of emphasis where questionable skills were used by the 'oppressed' against an 'oppressor.'  The soon-to-be-US stole assets from the British overlords to fund their new country.  We call them 'startups' today.  Should that mean that such skills, perhaps an easier path to tread than the path of hard work and innovation, should culturally become the norm?  'Obviously not' would be the morally correct answer.

So when do you change from criminals to a respectable society?

I would hope that the answer to this question would be quite obvious...when you have something to lose.

Let's go back to the Chinese saying again.  What is the enemy of my enemy from our perspective? Hopelessness...or rather having nothing to lose.  Wouldn't it make more sense to help these fellow humans past the stage of hopelessness and teach them how to create their own intellectual valuables that they can cherish and thereby desire a system of laws of their own to protect those valuables?

Recognize the symptoms of the real disease.  Hopelessness, pure and simple.  If you have nothing to lose you are willing to ignore nearly every legal and moral precept to improve your condition.  The catalyzing event is when you suddenly accumulate enough capital (intellectual or real) that you feel you have to worry about someone else wanting to take it from you.

(Now that leaves no excuse for those three-letter-agencies out there that simply are evil because they can be...sorry...couldn't resist one controversial dig.)

Wouldn't it be better all around for those of us that have plenty to teach those that don't have much how to create their own businesses, with all the computer bells and whistles?  Better than going into their country and exporting our businesses there for the purpose of exploiting their resources so we can have more stuff?  Now there's a risk management tactic you won't learn in school.

Wednesday, June 11, 2014

The things we don't intend to share

Well, it's been quite a while since I have felt inspired to write, however, my local NPR station got me thinking once again.

I've heard NPR call them 'driveway moments': you're listening to a story and find yourself no longer driving but parked in your driveway with the engine running, still listening, because the story is so good.

The story began with a reference to Edward Snowden, which immediately perked my ears up, but it was just an intro line.  I have heard the teasers for this story for a bit now and was interested to hear the results.  Very interesting to say the least.  Here's the story about the things that can be learned from commercial software monitoring of your technology and the data that it sends and receives.  Now, this is distinctly different from what three letter agencies (TLAs) have at their disposal, so keeping that in mind is important when considering the revelations of this story.

As of this writing, the NPR web site shows two parts to the story, but I believe from my listening to the radio that there may be more coming.

Now, I know this subject matter very well; even so, it occasionally reminds me of how much I ignore out of simple convenience, like most other people do.  This, in and of itself, is an interesting aspect of the story that really isn't called out: how much of our privacy we give up as a matter of convenience.  This is not a characteristic of European privacy law, but that is an article for another day.  At any rate, the reporter commented on his surprise during the initial setup of the experiment, the confirmation that everything was working, and some back-and-forth questions as to whether or not he was actively using his cell phone.  He was not, but the monitoring team was seeing substantial traffic from his phone to the internet while it was 'quietly' sitting on his desk in lock mode.

One of the other things that this story highlights well is the 'side channel' way that adversaries can get your data.  Rarely does a compromise result from a direct frontal assault.  The story mentions that every system has old programs, and it can be those programs that leak data.  Add a few more bits and pieces of micro-facts from other programs and you have a significant piece of data, a coherent piece of information, or even a whole story about you.  A couple of illustrative points from the story show this.  The first is the way that 'Steve's iPhone' can lead from just any Steve in the country or world to a specific Steve at NPR in Menlo Park.  A simple Google search after that and you have your specific Steve Henn.  The other illustrative point is what I call the 'famous last words' compromise: 'well, I didn't expect them to do that!'  This was illustrated by the 'adversaries' in this instance using several other side channels of information: older programs that are less likely to be considered for patching, or are assumed to be running in a stable fashion and not to need patching, yet are leaking data and are not secure.
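To make the 'Steve's iPhone' side channel concrete, here is a minimal sketch (Python 3, standard library only; the specifics are my illustration, not NPR's actual monitoring setup) of how much a device announces about itself to anyone passively listening on the local network.  Phones and laptops broadcast their owner-chosen names over mDNS/Bonjour without the owner clicking a thing:

```python
# Passive mDNS/Bonjour listener sketch. Devices announce names like
# "Steves-iPhone.local" to the whole local network; no probing or
# interaction with the target is required.
import socket
import struct

MDNS_GROUP, MDNS_PORT = "224.0.0.251", 5353

def decode_name(data: bytes, off: int) -> str:
    """Minimal DNS name decoder, including compression pointers."""
    labels, jumps = [], 0
    while off < len(data) and jumps < 10:
        n = data[off]
        if n == 0:
            break
        if n & 0xC0 == 0xC0:                      # compression pointer
            off = ((n & 0x3F) << 8) | data[off + 1]
            jumps += 1
            continue
        labels.append(data[off + 1:off + 1 + n].decode(errors="replace"))
        off += 1 + n
    return ".".join(labels)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MDNS_PORT))
membership = struct.pack("4s4s", socket.inet_aton(MDNS_GROUP),
                         socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    data, (src, _) = sock.recvfrom(4096)
    qdcount, ancount = struct.unpack("!HH", data[4:8])
    if qdcount or not ancount:                    # only pure announcements
        continue
    name = decode_name(data, 12)                  # first answer record's name
    if name.endswith(".local"):
        print(f"{src} announced: {name}")
```

Run quietly on a shared network, a listener like this collects device names all day; pair one name with a single Google search and 'any Steve' becomes a specific Steve.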

So what is the takeaway?  Well, certainly we need some monumental legislative change in this country for privacy.  We need to change our legal privacy expectations from 'opt-out' to 'opt-in.'  This means that any provider of hardware or software would only be allowed access to your personal information if you specifically grant it (opt-in).  This is the basic model that European privacy law follows.  By contrast, our legislative structure allows organizations to collect our data with minimal notification in legalese, rather than in plain and easy-to-understand language, UNLESS we specifically say they cannot (opt-out).  That simple fact alone is the foundation of many of our continuing software problems that allow adversaries to perpetrate their craft.

The second take away is a bit more ambitious and far reaching and could have serious economic impacts.  Strap yourself into your chair for this one....  Software and hardware manufacturers need to be held criminally and fiscally responsible for security flaws.  I'll let that thought sit with you for a moment, because it is a big one.

I'll wait.  Go ahead and ponder it.

Yes, that does mean...

Yeah, and that too...

But it also means that security would be mandated by design and become an integral part of the economic product design decisions that every maker of software and hardware goes through at some level.  When security becomes a required feature of our computer landscape, much as car makers are held responsible for the proper functioning of required safety features like seat belts and air bags, you will see some effective security happen.

Until then...we will continue to have 'well, I didn't expect them to do THAT!?!?'

Sunday, December 15, 2013

Who's driving this bus?

I saw this brief interview excerpt in a friend's Facebook posting:  Carl Sagan interview.  I found the full interview here.  It got me thinking about what I do on a daily basis.  As a security consultant I am forever struggling to explain computer concepts and computer security concepts to clients for the purpose of eliciting an informed, risk-based decision on some issue that I'm working on for them.  I'm constantly amazed at how little people understand, or even want to understand, about computers and computer security.

A day in the life


I overheard a conversation at work on Friday where one of the telecom people was working an incident: an employee attempting to dial into a conference call was asked to put in their social security number for access or verification or some such thing.  After the employee put in the SSN, the system immediately hung up.  The person dialed back and got the normal prompt for the teleconference that asked for the conference code.  Now, aside from the combination of simple tricks that were used to fool this person into divulging private personal information, the fact remains that far too many people immediately disengage their brains when their hands touch a keyboard.  They instantly assume that they know nothing about computers and don't bother to even try to think.  They simply react like rodents in an experimental lab pressing a button for a food pellet.  I have many stories like this one.

Now it is easy to rationalize my employment by saying that "if it weren't for people like that I wouldn't have a job."  Many people are fond of reminding me of this fact.  It is painfully true and it pains me to admit as much.  However, when the majority of the computer using public is so under-educated about computers and technology, and more importantly, computer security, it is extraordinarily dangerous.  Think for a few moments on how much of our lives are ruled by ones and zeros.

Road Trip!


Take a virtual trip with me to the grocery store as an example.  Simply getting into my car, I encounter several bits of technology that can be vulnerable to attack.  My key transmits a signal that unlocks my car door; this can be vulnerable to a replay attack.  Many cars have navigation systems.  Mine doesn't, but I use my phone for that, so we'll deal with those separately.  A nav system has a cache, which can be read.  Where was I last, and when?  My house, my work?  You could certainly extrapolate when I'm not at home and how long it takes me to get to work.  Smartphones (remember, I don't have a nav system) are basically just small versions of laptops, and there are many issues there: my calls, my locations, my contacts, my emails.  The list goes on.  What about the black box in my car, which the insurance companies have quietly developed tremendous skill in decoding?  It was originally meant for the car makers and mechanics, but insurance companies have become quite the skilled hackers of these things.  Even the run-flat tire detection system in my car has recently been theorized to have vulnerabilities.  And of course there is all the tracking that government agencies do with my electronic toll pass, beyond the toll collection that they tell you about.  Remember I said we were going to the store?  We haven't left my driveway yet.
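As an aside on that key fob replay attack, the fix the industry settled on is rolling codes.  Here is a toy sketch (Python; the secret, counter width, and resync window are made up for illustration, and this is not any real fob protocol such as KeeLoq) showing why a recorded transmission unlocks a fixed-code car forever but gets rejected by a rolling-code receiver:

```python
# Toy rolling-code sketch: each press signs a fresh counter value, so a
# recorded transmission cannot be replayed. Illustrative only.
import hashlib
import hmac

SECRET = b"per-fob-pairing-secret"   # shared between fob and car at pairing
WINDOW = 16                          # how far ahead the car will resync

def press(counter: int) -> bytes:
    """A button press transmits the counter plus a MAC over it."""
    ctr = counter.to_bytes(4, "big")
    return ctr + hmac.new(SECRET, ctr, hashlib.sha256).digest()

def car_accepts(msg: bytes, last_seen: int) -> bool:
    """The car demands a valid MAC AND a counter it has not yet seen."""
    ctr = int.from_bytes(msg[:4], "big")
    mac_ok = hmac.compare_digest(
        msg[4:], hmac.new(SECRET, msg[:4], hashlib.sha256).digest())
    return mac_ok and last_seen < ctr <= last_seen + WINDOW

captured = press(42)                        # attacker records this unlock
print(car_accepts(captured, last_seen=41))  # True: legitimate press
print(car_accepts(captured, last_seen=42))  # False: replay is rejected
```

A fixed-code fob is the same sketch minus the counter check: the transmitted code verifies every time, so the recording works forever.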

Ok, down the road we go


At the end of my neighborhood is a stop light, controlled by a computer, with a sensor so emergency vehicles can make it go red for cross traffic, sometimes red for all traffic, or sometimes green for them.  I can't imagine this is any less vulnerable to exploit than any other system made by humans.  Well, actually, I know these systems are vulnerable.  Along the way we pass a public park with sports fields, its lighting systems computer controlled from a remote location several states away, maybe even in another country.  A few more traffic lights and we are at the store.  Before we even get into the store, I'd point out the video surveillance cameras.  Let's head into the store.  More cameras.  Lighting systems, cooling systems, heating systems, fire alarm systems, backup power systems...all computer controlled and likely capable of calling for help in the event of a system failure or alarm.  That's just the basics that run the place.  Of course there will be a panic alarm and security alarms in the event of robbery.  Then there are the systems for tracking and managing inventory, and computer networks designed to place orders to suppliers and distributors all over the country.  Coupons, cash, loyalty discounts, credit card transactions, instant coupons based upon my shopping habits, bar codes...all managed by computers and ALL vulnerable to compromise.  For the sake of this discussion, we'll ignore that my grocery store has a bank in it, but consider the more general case of another organization having a presence in the store: coffee, pharmacy, dry cleaner, fast food, or florist.  Each is a third-party connection to that organization's infrastructure, possibly shared with the store for connectivity or even services like loyalty or credit card authorization.  My grocery store has wireless too.  Oh, and let's not even get into the electrical grid that powers all this stuff.  What fun!  Dizzy enough yet?

What could go wrong?


This is just a typical trip to one store, but all of these systems along the route are vulnerable to compromise, whether physically, over the air, or over the wire.  Yet most people go about their lives quite oblivious to the technological near-disaster that looms all around every day.  The basic reasoning many people follow is, "well, it works, so it must be safe, right?  And I'm sure they have technical people who are addressing all those security issues."  Well, the good news is there are some very competent and skilled people doing just that.  The bad news is there simply aren't nearly enough of them to go around.  The worse news is there are even more people who don't have a clue that there is a problem with many or any of these systems.  And worse still, there are those that refuse to acknowledge the problems exist, even when presented with very strong evidence and expert advice.  If someone doesn't think there is a problem, how likely do you think they will be to try to fix it...or even monitor for bad things caused by bad people?

I'm not a huge fan of twelve step programs, but I will cite a piece of their wisdom.  The first step to recovery is acknowledging that you have a problem.  And people...Sagan was dead on balls accurate.  We live in a society driven by technology that few understand, especially when it comes to security risks.  We all need to learn more and demand more from ourselves.  We need to demand more from the people that make the products and technology.  Finally and most importantly, we need to demand expertise and accountability from the regulators and lawmakers so that it is impossible to ignore the dangers.

Thursday, December 5, 2013

To do a great right, do a little wrong...

The oft-quoted Shakespeare play, The Merchant of Venice, Act 4, Scene 1, leads the commentary today.

It began with Edward Snowden releasing details of the NSA's classified and all-encompassing monitoring program.  As more and more details of this program continue to be revealed, I find it impossible to believe that any part of the government, in aggregate or individually, maintains oversight of the NSA's activities.  If they are operating in any way, shape, or form outside of the oversight of the government, they are, by definition, breaking the law.  Even if they are simply lying by omission.

Having done many security assessments of organizations much smaller than the NSA, I can say it is routine to find volumes of surprising details that few, if any, knew about prior to the assessment.  Rules that were assumed to be in place and protecting the organization, but are not.  Commonplace.  So why would it be any different in government?  Government agencies routinely operate within more pressing budgetary constraints than normal businesses.  You could likely argue, successfully, that they waste much more of that money as well; whether the money is never there or is wasted away, the effect is the same.  And with less budget come fewer people available to do the work that should be done.

Time management meets IT process


Time management theory maintains that a task that can be done at any time shall be done at no time.  You can extend this logical precept to IT jobs and their related tasks: if it isn't someone's specific job to do, it will not get done.  Further, good security practice, including the practice of granting and renewing security clearances, mandates that no one who is the requester of a security method can be the approver of the request.  No self-approval.
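For what it's worth, this rule is trivial to enforce in software rather than on paper.  A minimal sketch (Python; the class and field names are mine, purely illustrative):

```python
# Separation-of-duties sketch: the system itself refuses self-approval,
# rather than trusting a policy document. Names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClearanceRequest:
    requester: str
    approver: Optional[str] = None

def approve(req: ClearanceRequest, approver: str) -> ClearanceRequest:
    # The check lives in code, not in someone's good intentions.
    if approver == req.requester:
        raise PermissionError("requester cannot approve their own request")
    req.approver = approver
    return req

req = ClearanceRequest(requester="nsa")
approve(req, approver="congress")   # fine: an independent approver
approve(req, approver="nsa")        # raises PermissionError
```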

"NSA, are you doing things that are on the up and up?"
"Yes, we are."

"NSA, do I want to know what you are doing?"
"No, you don't."

That's self-approval and it is a fundamentally flawed security concept.  Any security practitioner will tell you that when you break common security best practices, bad things happen.  If transparent, repeatable, auditable, and, most important, sensible security processes are not in place, you have no security.  You may sleep well at night because someone told you things are fine, but consider this question:

"NSA, that evidence in your database says Joe did something wrong.  Are you sure?"
"Yes."
"Can I see the evidence?"
"No, just trust us.  It's true."

You'll forgive me if I wish to see the proof *couMADOFFgh*, and the chain of custody (audit trail) that shows how the database entry got there.  I've touched on the topic of 'trust but verify' in previous blogs elsewhere, but I'll not cite the source for personal reasons.
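Since I keep saying 'audit trail,' here is what I mean in the smallest possible form: a hash-chained log, where each entry commits to the one before it, so a record cannot be quietly inserted, altered, or deleted after the fact.  A minimal sketch (Python; purely illustrative, not any agency's actual system):

```python
# Tamper-evident audit trail sketch: each entry's hash covers the previous
# entry's hash, so rewriting history breaks the chain. Illustrative only.
import hashlib
import json
import time

GENESIS = "0" * 64

def append_entry(log: list, event: str) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    entry = {"ts": time.time(), "event": event, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log: list) -> bool:
    prev = GENESIS
    for e in log:
        body = {k: e[k] for k in ("ts", "event", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, "record added: Joe did something wrong")
append_entry(log, "record reviewed by independent auditor")
print(verify(log))                   # True
log[0]["event"] = "rewritten history"
print(verify(log))                   # False: the tampering is detectable
```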

Five billion mobile device records

One of the hot stories today is more information released from the Snowden data, saying the NSA is absorbing five billion mobile device records of geolocation data and call correlations daily.  Having worked in a security operations center monitoring far fewer than hundreds of millions of endpoints, I can attest that when we determined an incident was occurring, or made a change in the rules that monitor all that data, we made sure our logic was sound.  We used peer review in the open-source meaning of the word, so that our own viewpoint of one possible way to filter the facts didn't cloud what we were trying to see.  Others would and could weigh in on whether or not our solution would likely deliver an accurate view.  The NSA sees what they want, and they are true believers, a notably dangerous psyche to employ for logical analysis when it is your only measure.  Many times in security we find curious things.  It is far better to maintain an open mind than to instantly 'know' the answer before you have all the facts.

“The most elementary and valuable statement in science, the beginning of wisdom is ‘I do not know’.  I do not know what that is.” Mr. Data, ST:TNG

This is something true believers cannot do.  Without oversight (peer-review), we'll never know if their conclusions are correct.  ...And for the record, we should not trust that they are deleting the records that have no value.  A good security practitioner would have audit records to prove it and not simply say 'trust me.'

In the Merchant of Venice, Shylock was so burned because he was so focused on proving a specific point, he lost track of the big picture.

It worked before...


The NSA continues to argue that their methods work in catching the bad guys, but they make such claims without proof.  "Trust us, we caught them before they did something bad."  Can you prove it?  "We can't comment on existing legal cases..."

In closing, I'll leave with two of my favorite quotes (both from the same paper):
“The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence.” – R. P. Feynman

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.” – R. P. Feynman

Tuesday, December 3, 2013

Offensive Security - to play or not to play?

It has been quite a while since I have written, after being accustomed to writing weekly or even twice weekly in other blogs.  Suffice to say that not much has been moving me to write lately.  That is not to say that there haven't been things happening in security.  There have been happenings, but it has all seemed so banal.  Maybe it is just that the noise has risen so high that the constant din makes it seem that nothing is happening.  Signal acquired.

I watched a show called Chasing Madoff and it was once again one of those moments where several major concepts coalesced in my mind to form an article.  Something worth saying.

One of the many very compelling points of the Chasing Madoff program was that the SEC was given an investigation wrapped up with a bow on it and on numerous occasions flatly refused to do its job.  They refused to investigate when all they had to do was confirm the facts that were documented for them.  The idea then struck me with clarity: why should they?  They have no real incentive and face no recourse if they don't do their job.  Well, you could argue that what did happen could happen: global markets collapse, governments go bankrupt.  Of course, if you are lucky (or unlucky, depending on your point of view) enough to be a government employee, you still get paid even when the government shuts down, so whether you do your job or you don't, you still get paid.  We've seen this very recently.

"The average person should be aware that when you get a brokerage statement it is only a piece of paper representing what that person thinks they own." -- quote from former Madoff Account holder #41-245711, from Chasing Madoff.

This is a personal point that I have made to people in deep conversations about the macro implications of computer security for many, many years.  The vital numbers that we believe in to live our lives daily are simply bits and bytes in a computer database.  Whatever those bits and bytes say, people believe.  Today the database says you have $1000 in your checking account; tomorrow it says you are overdrawn by $500.  Even though you did nothing, you are suspect and have to prove your innocence.  Intelligent people willingly acknowledge this, but fail to really absorb what it means.

So what does it mean?  I would suggest that, at a minimum, you doubt just about everything that says the system works if you study hard and play by the rules.  There exists overwhelming evidence that rich people are rich because they cheated the system at some point in time, or are actively cheating the system to gain or maintain their wealth.

By now you are certainly saying...so this is a blog about security, right...why the remedial history lesson on finance and wealth?

What's changed since then? (and what does it have to do with security?)


Having had numerous personal experiences, and other 'near-miss' experiences, with regulators, I can say this kind of willful ignorance by regulators goes on every day.  The Madoff incident was not a freak occurrence.  It is an everyday happening in nearly every industry that I have seen that has government regulatory controls.  It is obvious that no lessons have been learned from the Madoff incident, because the system remains, for all intents and purposes, intact and without any noticeable change.

If the designated protectors refuse to protect you and do the job that they are supposed to do, what options do you have?

I was asked the following question in a job interview recently:  "What is your stance on active defense?  How do you feel about attacking those that attack you [in cyberspace]?"  Aside from the fact that it was definitively one of the toughest questions that I have EVER been asked in a job interview, it is also a very pressing issue for all security practitioners today.  Much more so than I had even thought about up until that day.

Essentially, if there are those that refuse to play by the rules, how far beyond those same accepted norms is it reasonable to push in retaliation, using like weapons and tactics?

Until I actually spent many hours pondering this very question, I had dismissed it with a simple answer: 'not smart, too high a risk.'  For the record, this was not the answer that I gave that day in the interview.  Prior to that day, however, there just seemed to be too much to lose if you were an organization of any size: legal repercussions, impact to reputation, regulatory response...

Risk...what risk?


Hold it right there....  Regulators?  They are virtually useless, so there goes that risk.  So what about the other risks?

Legal?  What is an evil government going to do if you strike back at their theft of your intellectual property or data?  Surely they would mount a legal claim that you attacked them, right after they stole your information?  Ok, that is out.  Would they tell someone that they stole your data, that you attacked them back, and that your response is illegal?  No, that won't work either.  Where could they possibly file such a claim?  No single country has proven legal jurisdiction over another in such international border matters, especially when it comes to the very fuzzy area of cyberweapons and attacks.  People don't care about a cyberattack even remotely as much as a real bomb going off; which is to say that if it isn't a real bomb in their back yard, they just don't care.  So there goes the legal risk.  Well, that leaves issues of risk to reputation.

If you are in a situation where you are considering striking back at someone for stealing valuable data, you are already through the looking glass when it comes to reputation.  Maybe you have an obligation to report that you lost some personal data, but that is about where the concern ends.  The marketing snow-job machine takes over and whitewashes away most of your worries.  A bit of credit protection here, a vague press release there, and you are virtually absolved.

If traditional issues of risk are gone, what's left?


Well, once you get rid of the traditional business risk, there is nothing left but battlefield tactical and strategic risk.  What remains are troop and weapon strength assessments, coupled with defensive strength due to your position and posture.  Very few security practitioners have experience with such assessments, so your traditional trusted advisers are likely to be far outside their element here.  Aside from the tactical battle assessment, you would need a strategic assessment as well.  How long is this fight likely to last?  Will it escalate, or will the adversary go and find some low-hanging fruit elsewhere?  Is your adversary's weapon skill limited to long-period research weapons, or are they able to fashion improvised weapons and use guerrilla tactics?  This is getting into the deep weeds quickly.

If you choose this course and properly recognize that the traditional business rules of risk simply do not apply, you cannot be so foolish as to think that no rules of risk apply.

And always remember the final question of traditional war, as it is quite applicable here: does anyone really win?

Thursday, October 3, 2013

The security implications of the US Government shutdown

Well, after much encouragement from friends and colleagues, I have created a new home in this blog.  Welcome to the inaugural edition!

What better to tackle than one of my favorite targets of high noise and low signal?  Let's take a slightly different look at the government and its latest schoolyard sandbox silliness: the inability to agree on a budget, which shut down the government.  In all honesty, I didn't think they would get to this state.  I guessed they would cut a last-minute deal and work out their differences.  Well, like five-year-olds at the playground, they have quit playing together, picked up their marbles, and gone home.  Of course, they still get paid, which is a crime in its own right; once again showing that they are willing to hurt lots of people to prove their private points, as long as they are not among those truly affected.

Ok, all that aside, I didn't want to write about that issue in particular.  As those of us that have worked in the government contracting infosec space will attest, the heavy lifting of the government's computer security business is done by contractors, who are now out of work, as far as I know.  I have had clients' government intel feeds announce they are shutting down due to the budgetary shutdown, so I'm extrapolating a bit here.  I'm sure there may be a few projects that have some secret or otherwise protected funding, but I doubt they could or would talk about it either way.  Ponder both of those possibilities for a few seconds....