Wednesday, May 16, 2018

Cell carriers are selling your private location data

This article got me thinking about my standing policy of always opting out of every option available in privacy agreements.

Fortunately, living in California, I have that option, plus the additional guarantee that I must be notified annually of my right to opt out of companies selling my private data. Of course, there are so many of these agreements that I can never remember whether I've accidentally forgotten to send one in, so whenever I get one of these privacy notices, I make sure to check all the opt-out boxes and mail it back.

I know that some people would say "Who cares if some company knows where I go? It's not like I work on a secret military base or for the CIA or something." Well, that's just not the point.

One of the 'obscure' things that I like to point out when people ask me about information security is that in the information age data is money. Pure and simple, it's just that easy. I refuse to simply give away my information for free so that someone else can go and sell it for their own profit, and give me nothing in exchange.  I know that in the marketing departments of these organizations, someone, somewhere says "Look, people are stupid. They'll just give us this info for free and we can make lots of money off of that!"

Two really big things piss me off about that sentiment. One, they're assuming that I'm stupid and incapable of understanding the question if it were put to me openly. Two, they assume that I wish to automatically opt into giving away my money (private information) for free, and they hide that fact in some ridiculously long privacy agreement that includes every legal term in the language, stealthily burying my implicit agreement deep in a document the length of a bible of legal-ese.

The reason this "P.T. Barnum - there's a sucker born every minute" business model works in the US is our nationwide de facto information privacy policy: everyone must opt out, or you are automatically assumed to have opted in. In the European Union (EU), they have a different model; citizens are assumed to have automatically opted out unless they specifically indicate that they wish to opt in. This is the general basis of the whole GDPR thing that everyone here in the US is fussing over when doing business with citizens in the EU; and for that matter, 'doing business' can mean that an EU citizen simply visits your web site and reads your blog without transacting any business other than looking at your information.

When every citizen is automatically assumed to want to opt into the broadest possible interpretation of information sharing, the result is a basic erosion of our personal freedoms. Specifically mentioned in the article is that law enforcement agencies are claiming they are not performing warrant-less searches because they are simply looking at data provided by a third party; it was that third party that performed the search, and the subject of that search specifically opted into it.

First...really? Law enforcement performing a real time search of my location via a third party application somehow isn't an unconstitutional warrant-less search? So by that logic, if a bank robber steals from a bank, and I mug him as he is leaving the bank with all the stolen money, I can keep the money and have committed no crime, right? Somehow I feel the bank wouldn't see it that way.

Second, I don't recall opting into allowing warrant-less searches of my personal data, and had I been explicitly made aware that I might be doing so, I would never have allowed it. [*update - A hacker strikes back and exposes law enforcement's weak passwords, which would enable others unrestricted access to this resource and the ability to track virtually any cell phone in the US and Canada in real time.]

The whole issue with Facebook and Cambridge Analytica follows this general pattern. Someone that knows me in some capacity opts into a survey or Facebook app and shares their information...fine, it's their right to do so. However, when Facebook and Cambridge Analytica assume that, because that person mentioned me in a post somewhere on their page, I too wish to be opted into their information harvesting, that is absolutely wrong. I don't know the specific legal term for the concept involved here, but I know that my rights can't be waived without my specific permission. I really hope that the 'shadow profiles' these companies keep, which have come out as part of this particular controversy, are given thorough consideration by our legislature and rapidly brought under some legal framework. One of the biggest problems with these shadow profiles is that if I don't know such a profile exists, have no direct business relationship with the organization, and have never been presented with a privacy policy to agree or disagree with, then I have no knowledge or warning that I should request to opt out.

So the lesson today is this...when it comes to privacy agreements, always opt out. [**even though my standing preferences with my cell carrier were saved as opt out of everything, they have added a new option for 'third party sharing.' I had to go in and set this to off, or opt out, for each phone number on my account, because they assumed that I wished to opt in when they added this new choice.] Ultimately, you have no idea what future use of your data someone might come up with that may not exist today for them to warn you about, even if you could comprehend the tiny printed legal-ese of most privacy agreements. Since they are not willing to be clear about what they may do with your data, the safest policy is to say no; you can't have it, you can't share it, and you certainly can't sell it without paying me for it.
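As a footnote to that carrier update: the failure mode is easy to see in code. Here is a minimal sketch (the preference keys and values are hypothetical, not any carrier's actual API) contrasting a privacy-respecting merge of newly added options with the opt-in default that was actually applied:

```python
# Hypothetical saved preferences and option names, for illustration only.
saved = {"marketing": "opt-out", "location-sharing": "opt-out"}
available = ["marketing", "location-sharing", "third-party-sharing"]

def merge_privacy_respecting(saved, available):
    """Carry over saved choices; default any NEW option to opt-out."""
    return {opt: saved.get(opt, "opt-out") for opt in available}

def merge_as_carriers_do(saved, available):
    """What the update above describes: new options default to opt-in."""
    return {opt: saved.get(opt, "opt-in") for opt in available}

print(merge_privacy_respecting(saved, available)["third-party-sharing"])  # opt-out
print(merge_as_carriers_do(saved, available)["third-party-sharing"])      # opt-in
```

The only difference between the two is a single default value, which is exactly why you have to go back and re-check your settings every time a new choice quietly appears.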

Thursday, June 22, 2017

Just because Hollywood makes it scary, doesn't mean it isn't real or really scary

I was reading an article from Dark Reading that came through my inbox, and it looked pretty interesting from the start...but it took me about 2 seconds to see past the author's 'severe limitation' scenario with Bluetooth and cars.

How about a cell phone or smart phone attached to the car somewhere? That gives me a long-range, persistent platform sitting within range of your local Bluetooth. There are tons of other ways these types of exploits exhibit themselves. My car has a nav system, so there is a cell phone somewhere in the mix of that system, which makes my car pretty much online all the time.

Yeah, Hollywood takes liberties with cyber issues and science issues, but it's a common shortcut in storytelling. Give the audience enough to get the general point across and don't delve too deeply into the details, because they don't care. It's not relevant to the story. Does that mean that there isn't risk there? Absolutely not. There is no question that there is risk there. But the real question when it comes to these CIA/NSA (et al.) scenarios is this: am I a high enough value target for someone to come after me in this way? For most of us, the answer is a resounding 'no.' However, there are a myriad of other scenarios where these types of exploits can be leveraged for less dramatic, but still damaging, ends...like say, a suspicious spouse or an overly aggressive background check.

These devices were never created with security as a design requirement. So why should it be a surprise that they have a poor track record when put to the test? To say there is low risk with things like IoT and connected cars is doing just what Hollywood is doing: telling part of the story to make a point. Bad guys don't take no for an answer when it comes to accessing something that they want. The question you need to ask is "Is there anything in this that makes it worth their while to come after it (usually this means money), or any common scenarios where I could inadvertently come into the line of fire?" Just because someone isn't at war with you, specifically, doesn't mean you won't become collateral damage. A few worms (as a delivery mechanism) have made a resurgence of late, and just because we thought they were all but dead doesn't remove them as a viable means of propagation under the right conditions.

It gets a lot scarier when it comes to things like medical devices and 'SCADA.' Just look to Ukraine...oh yeah, that was in the news this week too.

Thursday, May 7, 2015

Learn the rules to the game or get off the field!

The war of data classification and preventing data exfiltration

I've been working for a company for a few months now, and it's been pretty interesting to start.  However, as time has dragged on and the new and shiny things aren't so shiny any more, the badness starts to rear its head.  It's following the same pattern as most engagements.  You start off and they love your energy and ideas.  Eventually, you get a feel for the lay of the land and start making some recommendations that require real work.  After initially balking at doing real security work, they start throwing out terms and catchphrases that they hope will buy points and effectively allow them to buy some security.  Spoiler alert...yeah, those advanced technologies that are all the buzz right now...they require work too.  And what's worse, they have some heavy prerequisites, like defining and learning more than a few rules.

Some notes from a debate with a friend have been sitting here in my account as a draft, just waiting for me to polish them off into some complete and coherent thoughts.  This particularly long flight between JFK and SAN seemed like a good time to weave together a few ideas that have been pooling up in my idea box. Enough of the side trip; let's get back on the path.

My friend and I were discussing the initial problem stated above, and a point was made that the tools of the trade need to be dumbed down.  First, I find that idea pretty offensive.  The idea that we Americans, or to be correct, North Americans, and specifically all those flabby people that can't pull themselves away from the latest reality show to bother to learn a bit about the computers that have become embedded in our lives...what was I saying?  Oh yeah, that we USA folk can't be bothered to learn enough to innovate or even understand something unless you put it in a beer commercial during an NFL game, and so we need to have security dumbed down.  Heck, we can dig into the NFL rules enough to know the expected inflation pressure of an NFL football, but we have a really tough time figuring out an effective way to put a security label on data and consistently handle that data in a secure manner.

So what does that rant boil down to?  Companies need to pull their heads out of their collective backsides and realize that security is not just a footnote, but a key component of their business plan, as much as finance, HR, and legal.  It is as important to have solid security strategy as it is to have solid business strategy and solid legal and HR strategy. When I keep seeing security as an afterthought or relegated to a sub-group of IT, I know that it is going to be a bumpy engagement and I'll need a Valium at the end of each day just to keep going back.  And it's not just mid-size corporations.  Look at what happened to Sony just recently (well, not so recently, these notes are old, but still...the point is salient).  I wouldn't call them mid-sized.  These are not isolated incidents.  It is a systematic and willful ignorance of the rules of the game that is being played.  How can you ask the team owner for the right player skills and equipment when you don't understand the basic rules of the game, much less the advanced strategy that is built upon the thorough understanding of the basic rules?

Bad assumptions make the game harder 

Another point brought up in our discussion was the importance of watching for and noticing the exfiltration of important data. Well, yes, that is very important, but I argued that my friend was making a false assumption: that companies know what is important and what is not, and can readily recognize the difference between the two.  How can you expect most companies to know what is important when they put dumb policies in place that append stupid legalese to the end of emails, needlessly labeling every single email as confidential and proprietary data?  That is an abomination.  We all intuitively know that not every email is confidential or proprietary.  So why would you label it as such?  Now you have to handle it as such, or admit that you don't know what is confidential and what is not, and don't know how to handle it anyway.  Friends, this is the stuff that goes on out there every day.  I wish I were making this up, but I'm not.
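To make that concrete, here is a toy sketch (the footer text and messages are made up, and real data-loss-prevention tools are far more sophisticated) of why a blanket confidentiality footer renders label-based filtering useless:

```python
# Hypothetical legal footer appended to every outgoing email.
FOOTER = "This email contains confidential and proprietary information."

def flag_confidential(message: str) -> bool:
    """Naive exfiltration rule: flag anything carrying the label."""
    return "confidential and proprietary" in message.lower()

lunch_invite = "Pizza in the break room at noon!\n\n" + FOOTER
trade_secret = "Draft acquisition target list attached.\n\n" + FOOTER

# The blanket footer makes both messages look identical to the rule,
# so the label carries zero information about actual sensitivity.
print(flag_confidential(lunch_invite), flag_confidential(trade_secret))  # True True
```

When every message matches the rule, the filter either blocks everything or gets turned off, which is another way of admitting you never knew what was confidential in the first place.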

My current client and numerous past clients all had some level of concern about data exfiltration.  Of course the obvious questions are asked of their 'trusted security advisor;' I put that in quotes because that is how they think of you until you tell them that it's gonna take some work, and then they label you a nut-job.  So, they ask you, how do we keep the important stuff from going out the front door?

"Well, what stuff is important?"
"You know...the secret stuff."
"No, I don't know.  How would you suggest that I tell the very complex tools what to watch for if you can't tell me?"

So much for dumbing things down; it's more important to learn the rules before you begin to play and have any expectation of winning.

So where do you begin?

How do you begin playing a game?  Well, you start simple and work your way up.  Start with bulk classification, putting some rules in place to ensure that a big pile of data that is pretty clearly important to us can get nowhere near the front door. Make sure everyone understands the rule and knows their role in ensuring the team plays by it.  That idea alone is pretty big, because you have to build on it as the rules get more complex. As you get better, you can start to add more complexity: different data types, different rules about who can see them.  However, it is important to keep in mind that in every game there has to be a referee.  In this game it is the data custodian, basically a fancy term for someone who gets to decide whether a bit of data makes the cut for a particular classification or not.  This person needs to be on their game and engaged at all times, because we all know that if players aren't constantly watched, someone will try to sneak something past you.
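Here is what that starting point might look like as a toy rule. This is just a sketch with made-up labels, not any real product's policy language: every piece of data gets a bulk classification, and anything the custodian hasn't labeled is denied by default:

```python
# Hypothetical classification levels, lowest to highest sensitivity.
LEVELS = {"public": 0, "internal": 1, "confidential": 2}

def may_leave(doc_label: str, boundary: str = "internal") -> bool:
    """Allow data out the 'front door' only if its label is at or
    below the boundary level. Unlabeled data is denied by default,
    because the referee (data custodian) hasn't ruled on it yet."""
    if doc_label not in LEVELS:
        return False
    return LEVELS[doc_label] <= LEVELS[boundary]

print(may_leave("public"))         # True
print(may_leave("confidential"))   # False
print(may_leave("mystery-blob"))   # False - no label, no exit
```

The deny-by-default branch is the important part: it forces someone to actually make a classification call before data can move, which is exactly the referee's job described above.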

Friday, November 14, 2014

What is the outlook for the future of cyber-defenses?

I was recently asked to comment on what I felt the outlook was for cyber defensive initiatives into the next ten years.  One of the specific aspects of the question was if I felt there might be a serious global cyber incident that resulted in billions of dollars of damage and/or massive loss of life.

Below are some of the thoughts that were within my answer and some more elaboration on the raw examples that I gave.

First, nearly all organizations in every sector that I have worked and consulted for are living in denial that any significant threat exists. As long as those individuals that sit in the ivory towers of public and private (non-military) organizations fail to acknowledge that there is a threat, there will be no call to action.  Without a call to action, those that control the purse strings will not feel the need to authorize spending on anything other than old, outdated, and ineffective cyber defenses that were last known to be effective, albeit marginally, over twenty years ago.

Why would they go after THAT?

When there is no perceived threat, or the threat doesn't seem relevant to your area of influence, there is no felt need to act.  Therefore far too few resources are dedicated to defense, while the bad guys continue to up their games, refine their tools, and develop their offensive technologies and strategies. Consider a recent hack on NOAA (National Oceanic and Atmospheric Administration), which also includes the NWS (National Weather Service), both US governmental organizations. The flawed thinking of leadership today is "Who would bother to hack that?  There's no money there, the information they have is freely shared.  No need to spend much to protect that, right?" WRONG! Until we start thinking like the bad guys, we will never be able to defend against their strategies.  When the weather is good, it's not such a big deal to be without this information for a few days, but what about when the weather is bad?  What about critical forecasting data during times of bad weather? Approaching hurricanes? Storms that generate tornadic activity? Floods? Imagine if, during the approach of Hurricane Katrina, the weather data had said it wasn't going to be a Category 5 hurricane, but rather a Category 1, with no real need to evacuate.  Even a delay of 12 hours could cause a catastrophic loss of additional lives.

Now, there are other communications systems and methods that would help to mitigate bad data on the NOAA and NWS web sites; however, let's keep following this line of thinking.  What other systems might have similarly low levels of security resources dedicated to their protection?  Well, in my area, after a previous round of wildfires, the local government implemented a 'reverse 911' system.  Unlike 911, where you call for emergency service and they have your location data to be able to find you, this service is an opt-in system.  You have to sign up to receive notifications from the government about local disaster risk, threat information, evacuation routes or the need to evacuate, etc.  Obviously, there is a risk that hackers could steal that personal information (the government promises not to otherwise share it), but that is not what we are talking about here.  What if that information were unavailable or outright wrong during an emergency, when minutes and seconds may count?  Compromising such a system and causing it to distribute wrong information during an emergency would be quite an effective weapon.

But wait...there's more.

Now, the latest information is that this was 'just' a compromise of the web sites that report weather data, but let's start thinking like the really bad guys.  What other technologies do these organizations have access to?  Well, weather radar...could be interesting...and satellites.  Hey, now that could be really interesting! If, as a bad guy, I could break into NOAA and NWS, I might be able to get at the computer systems or protocols that communicate with and control the weather satellites.  Now, I'm sure this is a reach, and I know nothing about satellite communication protocols and frequencies, but it might be possible to attempt to communicate with other satellites using these systems...maybe communications satellites (data or broadcast media).  Maybe it would be enough just to be able to cause a DoS attack against them.  Taking out some of those alternate forms of communication would be pretty effective if a compromise of the web data were showing incorrect information during a time of crisis.

This is how the bad guys think and this is exactly what management doesn't want to hear and is far too quick to dismiss as pure fantasy; the stuff of science fiction and an over-imaginative, over-paranoid brain...until it happens and they see it on TV.

A war without rules

While our military and other governments' militaries are engaging in a cyber arms race, there are few regulations or laws globally that prevent the use of military-grade cyber weapons against civilian targets. Meanwhile, China, most notably, has a huge and growing middle class. It is those middle-class people that crave jobs, and while China's research and development investments are low, its efforts to steal the intellectual capital and technology that are the basis of middle-class job creation are very high. Even the systems that control the fundamental elements of our capital markets have been compromised, and will continue to be. In some instances governments are buying the tools from the hackers and keeping them secret, rather than allowing the rest of us to know about them and try to protect ourselves.

Prospects for the future

The talent pool of people trained in security technologies and strategies is far too thin to cover the demand, and that demand is only escalating. Management is reluctant to invest in cutting-edge security training, often believing it to be gratuitous and generally not a good return on investment. Every client I work for has many talented people who simply don't think in terms that lead to effective security.  They trust that everything works and don't suspect that things will go wrong.  Many of them are stretched so thin that simply making the system work is all they have time to do. If you don't think about how things can go wrong, you certainly won't devote a lot of time to pondering what to do when they do.  There are also people out there that are at the limit of their training and barely able to understand the systems they are working on. It is tough to think in creative directions, like the bad guys do, when you are struggling just to control the systems in your area of responsibility.

Will there be attacks in the next ten years that will cost billions of dollars and massive loss of life? Most definitely. Preventing them would require a paradigm shift in our view of the technology that we currently possess and the risks that we are facing daily.

The famous last words spoken when a new attack is discovered are "I didn't expect them to do that." It is that very refrain that has become our battle cry of defeat.

Thursday, July 17, 2014

Implications of Quantum Capabilities, TAO, and other nasty tricks

From the just in case you weren't paying attention file...I know I haven't been keeping up on my reading for quite some time.

Original source article here.

A comprehensive internal presentation titled "QUANTUM CAPABILITIES," which SPIEGEL has viewed, lists virtually every popular Internet service provider as a target, including Facebook, Yahoo, Twitter and YouTube. "NSA QUANTUM has the greatest success against Yahoo, Facebook and static IP addresses," it states. The presentation also notes that the NSA has been unable to employ this method to target users of Google services. Apparently, that can only be done by Britain's GCHQ intelligence service, which has acquired QUANTUM tools from the NSA.
...and, of course, Bruce Schneier clued me in that I missed it.  And the article that led me to that one is hugely important as well.

Now, yesterday I wrote about why 'they' do it, with 'they' being a reference to a certain group of bad guys.  The question today is...well, not why the NSA does it, we know the answer there, but rather: how certain are you that they (the NSA) are only doing this stuff to the bad guys?  Because, to be honest, a lot of the monitoring tools sound like they are targeted at normal citizens, or at least at the widely used internet services that vast segments of the internet citizenry frequent.  The listed sites and the tools to exploit them are not only the domain of bad guys, but of regular people all over the world.  While I'm sure that people with bad intentions use those sites too, I would expect a bit more cloak and dagger than just hiding in all the noise in plain sight (or site?) on Facebook, Yahoo, Twitter and YouTube.  However, I suppose it is easier to poison the waterhole than to track the 'critters' as they move through the woods.

I think that one of the most worrying aspects of this type of information is that when my peers and colleagues talk about this vulnerability or that vulnerability, there exists a whole host of exploits that we DON'T know about.  In fact, even the vendors don't know about them; as opposed to quietly knowing and working on a fix that just hasn't been mentioned publicly yet.  And it is not just governments keeping these things secret; the bad guys certainly have their own bag of tricks they are not keen to share (but are very willing to sell).

Other people making choices for me...

Now, this list of compromises got me thinking when I noticed that on Facebook, the videos people share of cute and funny things have started playing automatically.  I used to have to click on something to make it play, which I was happy with.  I really hate that someone else, at some point in time, decided that I automatically want to see every video of a cat or dog doing something odd, strange, cute, or funny.  I did learn that you can turn this functionality off, by the way.  Again, the assumption that you want to opt in unless you specifically opt out is maddening.

There are a myriad of reasons why I may not want to drink from the massive bandwidth firehose that characterizes many popular sites these days; first among them is that I don't trust every bit of eye-candy left out there, and this list of government tools and capabilities is a big part of why.  An old trick by bad guys is to leave something out in the open that lures you to interact with it, and suddenly the trap is sprung.  Greek story of the Trojan horse, anyone?  Variations of this trick come in all forms.  Think vendor conferences and a vendor booth with a fish bowl of free USB memory sticks...complete with a chunk of stealth malware to infect your system when you plug one in.  An old trick, by today's timeline measured in internet speed; almost certainly a derivation of Ludicrous Speed.

Internet warning labels anyone?

I really like the trend in various states' legislation requiring the caloric content of restaurant menu items to be posted with the item itself.  It allows me to make a choice.  Now, obviously, like most people, I may choose to have that high-calorie dessert once in a while, but at least I know the implications of my choice.

We really need some legislation to require the choices be left to the individual when it comes to internet content...maybe some warning language like on cigarette cartons.  "This link cannot be guaranteed to be safe.  Clicking it may have dire consequences, including allowing your government or a foreign government or an evil hacker organization to follow your every move."  I would have no problem with any elective setting to turn off such warnings and allow all content to flow automatically based upon user choices.  User choice being a key concept here.

Of course, all these government tools and compromises could be a major part of the reason why we don't have such legislation...heck, they could even rig the polls that might sample public opinion as to whether we feel it would be a good idea or not.

Monday, July 14, 2014

Why do they do it?

Well, a completely different source from my usual dose of NPR got me scratching my head and inspired me to write today.  I was reading a slightly older post from a colleague at work who shared a link to an article...and began to think that there was much more to the subject than was being discussed.

The article from mid-June about why Russian hackers are so good is here.

One point that is very much missed is the simple fact that the good guys have to be right all the time; the bad guys only have to be right once.  That certainly slants the numbers in your favor if your failures are basically ignored and only your successes count.  A very simple point, but consider this too: every country in the world could have iron-clad security protection laws, yet one does not.  As long as bad guys have a safe harbor of their own to ply their craft, they will operate with impunity from that base of operations, like the pirates of the 17th century that sailed the turquoise blue waters of the Caribbean.  That is, of course, an idealized description of a world where only one country would have less than iron-clad laws.  The reality is that anywhere in the world where economic disparity exists, there exist opportunities for money to be made, by hook or by crook.  This lends a Robin Hood-like charm to those that would steal from the 'rich' and give to the 'poor.'  This condition also gives a voice to those that see themselves akin to Robin the Hood, and makes those that would otherwise play the role of the Sheriff of Nottingham less likely to enforce the laws, if any such exist, and to care much less than they might otherwise be inclined.

The Enemy of My Enemy

The Chinese have a saying: "the enemy of my enemy is my friend." If there is a country that has my country under its economic or military thumb, how eager might I be to do anything other than quietly encourage some computer hacker that is stealing from my enemy or causing them economic heartache?  Simple question, huh?  If I don't like my neighbor and you are stealing from my neighbor's house, why would I care?  Ok, maybe in good conscience you might care a little.  But what if your neighbor was a rich, pompous jerk that did nothing but jump up and down and shout about how awesome they are, and it really sucked to have to live near them and see that all the time, and no one liked him or her...would you care then? Not so much, huh?

Let's take that a step further, what if this horrible excuse for a human was your neighbor and this person stealing their stuff was selling it real cheap at the swap meet? And some other less fortunate people in your neighborhood were able to buy some of this stuff for cheap and have a better life...would you be so quick to cry foul and demand that your local lawmakers or law enforcers do something to try to stop it? Dumb question, huh?

Certainly there are lots of historic reasons why a group of people becomes practiced at what might otherwise be considered questionable skills when they are fighting against an oppressor to survive.  Without being too controversial (what? no controversy...I'm outta here.), I'll point to the US Revolutionary War as one easy point of emphasis where questionable skills were used by the 'oppressed' against an 'oppressor.'  The soon-to-be US stole assets from the British overlords to fund their new country.  We call them 'startups' today.  Should that mean that such skills, maybe being the easier path to tread than the path of hard work and innovation, should culturally become the norm?  Obviously not...would be the morally correct answer.

So when do you change from criminals to a respectable society?

I would hope that the answer to this question would be quite obvious...when you have something to lose.

Let's go back to the Chinese saying again.  What is the enemy of my enemy from our perspective? Hopelessness...or rather having nothing to lose.  Wouldn't it make more sense to help these fellow humans past the stage of hopelessness and teach them how to create their own intellectual valuables that they can cherish and thereby desire a system of laws of their own to protect those valuables?

Recognize the symptoms of the real disease.  Hopelessness, pure and simple.  If you have nothing to lose you are willing to ignore nearly every legal and moral precept to improve your condition.  The catalyzing event is when you suddenly accumulate enough capital (intellectual or real) that you feel you have to worry about someone else wanting to take it from you.

(Now that leaves no excuse for those three-letter-agencies out there that simply are evil because they can be...sorry...couldn't resist one controversial dig.)

Wouldn't it be better all around for those of us that have plenty to teach those that don't have much how to create their own businesses, with all the computer bells and whistles?  Better than exporting our businesses into their country for the purpose of exploiting their resources so we can have more stuff?  Now there's a risk management tactic you won't learn in school.

Wednesday, June 11, 2014

The things we don't intend to share

Well, it's been quite a while since I have felt inspired to write; however, my local NPR station got me thinking once again.

I've heard NPR call them 'driveway moments': you are listening to a story and find yourself no longer driving but parked in your driveway with the engine running, still listening because the story is so good.

The story began with a reference to Edward Snowden, which immediately perked up my ears, but it was just an intro line.  I had heard the teasers for this story for a while and was interested to hear the results.  Very interesting, to say the least.  Here's the story about what can be learned from commercial software monitoring of your technology and the data it sends and receives.  Now, this is distinctly different from what the three-letter agencies (TLAs) have at their disposal, so keeping that in mind is important when considering the revelations of this story.

As of this writing, the NPR web site shows two parts to the story, but I believe from my listening to the radio that there may be more coming.

Now, I know this subject matter very well; however, it still occasionally reminds me of how much I ignore for the sake of simple convenience, like most other people do.  This, in and of itself, is an interesting aspect of the story that really isn't called out: how much of our privacy we give up as a matter of convenience.  This is not a characteristic of European privacy law, but that is an article for another day.  At any rate, the reporter commented on his surprise during the initial setup of the experiment, when the team confirmed that everything was working and went back and forth over whether or not he was actively using his cell phone.  He was not, but the monitoring team was seeing substantial traffic from his phone to the internet while it 'quietly' sat on his desk in lock mode.

One of the other things this story highlights well is the 'side channel' way that adversaries can get your data.  Rarely does a compromise result from a direct frontal assault.  The story mentions that every system has old programs, and it can be those programs that leak data.  Add a few more bits and pieces of micro-facts from other programs and you have a significant piece of data, a coherent piece of information, or even a whole story about you and your information.  A couple of illustrative points from the story show this.  The first is the way that 'Steve's iPhone' can lead from just any Steve in the country, or the world, to a specific Steve at NPR in Menlo Park; a simple Google search after that and you have your specific Steve Henn.  The other is what I call the famous-last-words compromise: 'Well, I didn't expect them to do that!'  Here, the 'adversaries' used several other side channels of information from older programs that are less likely to be considered for patching (or that run stably and seem not to need patching), yet still leak data and are not secure.
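To make the aggregation idea concrete, here is a toy Python sketch of how individually harmless micro-facts, each leaked by a different program, can whittle a large pool of candidates down to one person.  All of the names, leak sources, and population data below are hypothetical, invented purely for illustration; they are not from the NPR experiment.

```python
# Each record is a micro-fact that one leaky program might reveal on its own.
leaks = [
    {"source": "device name",  "fact": ("first_name", "Steve")},
    {"source": "weather app",  "fact": ("city", "Menlo Park")},
    {"source": "mail client",  "fact": ("employer", "NPR")},
]

# A toy population; in reality this would be everyone who could match.
population = [
    {"first_name": "Steve", "city": "Menlo Park", "employer": "NPR"},
    {"first_name": "Steve", "city": "Austin",     "employer": "NPR"},
    {"first_name": "Steve", "city": "Menlo Park", "employer": "Acme"},
    {"first_name": "Maria", "city": "Menlo Park", "employer": "NPR"},
]

def narrow(candidates, leaks):
    """Apply each leaked micro-fact as a filter and watch the pool shrink."""
    for leak in leaks:
        key, value = leak["fact"]
        candidates = [p for p in candidates if p[key] == value]
        print(f"after {leak['source']!r}: {len(candidates)} candidate(s) left")
    return candidates

matches = narrow(population, leaks)
```

No single fact above identifies anyone, but the intersection of all three leaves exactly one candidate, which is why data that seems trivial program by program is anything but trivial in aggregate.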

So what is the takeaway?  Well, certainly we need some monumental legislative change in this country for privacy.  We need to change our legal privacy expectations from 'opt-out' to 'opt-in.'  This means that any provider of hardware or software would only be allowed access to your personal information if you specifically grant it (opt-in).  This is the basic model that European privacy law follows.  By contrast, our legislative structure allows organizations to collect our data with minimal notification in legalese, rather than in plain and easy-to-understand language, UNLESS we specifically say they cannot (opt-out).  That simple fact alone is the foundation of many of the continuing software problems that allow adversaries to perpetrate their craft.

The second takeaway is a bit more ambitious and far-reaching, and could have serious economic impacts.  Strap yourself into your chair for this one....  Software and hardware manufacturers need to be held criminally and fiscally responsible for security flaws.  I'll let that thought sit with you for a moment, because it is a big one.

I'll wait.  Go ahead and ponder it.

Yes, that does mean...

Yeah, and that too...

But it also means that security would be mandated by design and become an integral part of the economic product-design decisions that every maker of software and hardware goes through at some level.  When security becomes a required feature of our computing landscape, much as car makers are held responsible for the proper functioning of required safety features like seat belts and air bags, you will see some effective security happen.

Until then...we will continue to have 'Well, I didn't expect them to do THAT!?!?'