Monday, February 3, 2014

Get the Boiling Oil Ready 

I've blogged before on the topic of computer security and the need for approaches like "asymmetric warfare" to the security problems that our industry – actually our entire society – is experiencing. The recent Target breach is yet another example of how out-of-control the situation is becoming.

I believe we are now on the cusp of a large shift in the corporate and governmental stance on this problem. And this shift may finally begin to turn the tide.

Going On Offense

First, you have to understand that, as an industry, we've always been in a defensive posture when it comes to cyber attacks. This is a natural consequence of US law providing no protection for retaliatory responses: any actions you take against an attacker must not violate the same laws the attacker violated when attacking you.

This stance is a purely defensive one – meaning only the US government has the right to retaliate against the hackers, whether that be by legal means or cyber attack. The problem is that the US government doesn't have the resources to effectively track, prosecute and/or retaliate against the hackers. It is not that there are so many hackers; it's that there are so many weak spots for them to attack.

The Internet is like the Wild West, where one US Marshal covered many hundreds of square miles while bands of bandits roamed around. The key difference is that back in the Wild West, everybody was armed to defend themselves. In the current state of cyber attacks, victims get to wear all the body armor they like – but they cannot raise a hand in response.

You cannot win a war if you are always on defense.

An Internet Castle Doctrine

The clear precedent for changing this situation is the concept of self-defense. You can legally take the life of another human being if you do so in defense of your own. This concept has been around for a very long time and is well tested and supported by the law and the courts.

In addition to self-defense, there is the Castle Doctrine. While laws supporting this doctrine do not exist in all states, the concept is pretty simple – the immunity of self-defense is extended to your abode. In other words, your home is treated as your "castle" and you can use lethal force to defend it.

What I believe is needed now is a cyber version of the Castle Doctrine – an "Internet Castle Doctrine". Laws supporting an Internet Castle Doctrine would closely follow the principles of the Castle Doctrine and self-defense. These laws would protect you or your organization if you choose to retaliate against a cyber attack in an offensive fashion.

It seems that most cyber security professionals agree that it is time for this change: only 30% of IT security leaders were not ready to pursue non-defensive responses to cyber attacks because "too many legal and ethical questions" remain.

Weaponizing Cyber Security

In the same way that the need for self-defense feeds the gun industry, an Internet Castle Doctrine is likely to feed an industry producing cyber self-defense weaponry. CrowdStrike is a startup that came out of stealth mode in 2013 pursuing new approaches to responding to cyber attacks. While they are not offering cyber weaponry yet, they are working on "active defense" systems. There are bound to be more startups quietly working on this too.

Along with the creation of this cyber weapons market will come the inevitable counter arguments that will fuel "cyber weapon control" efforts. As James Lewis, a senior fellow with the Center for Strategic and International Studies, put it, enabling counter attacks

Create[s] the risk that some idiot in a company will make a mistake and cause collateral damage that gets us into a war with China.

Yes, collateral damage and unintended consequences are a real concern. However, we have the same concern with guns and self-defense and yet seem to manage well enough. And, at the moment, there doesn't appear to be any other viable alternative to weaponizing and counterattacking.

So, the next time the barbarians start attacking your castle walls, don't just fill the moat and raise the drawbridge – start thinking about boiling some oil.

Friday, November 8, 2013

Get Rid of Your Safety Net

I just read a remarkable article about the rise and fall of a startup in San Francisco called Everpix. They developed a web-based photo organizing and archiving application. As is the trend, they used a freemium model for the service they offered:

The service seamlessly found and uploaded photos from your desktop and from online services, then organized them using algorithms to highlight the best ones.

As is also the trend, the company was filled with young and talented entrepreneurs. The detailed explanation of why Everpix failed is understandable:

The founders acknowledge they made mistakes along the way. They spent too much time on the product and not enough time on growth and distribution. The first pitch deck they put together for investors was mediocre. They began marketing too late. They failed to effectively position themselves against giants like Apple and Google, who offer fairly robust — and mostly free — Everpix alternatives. And while the product wasn't particularly difficult to use, it did have a learning curve and required a commitment to entrust an unknown startup with your life's memories — a hard sell that Everpix never got around to making much easier.

At the micro level, that makes a lot of sense. But there is a larger macro effect going on here that they (and the author of the article) touch on without even realizing it:

“You look at all the problems that we’ve had, and it’s still nothing,” he said. “I have more respect for someone who starts a restaurant and puts their life savings into it than what I’ve done. We’re still lucky. We’re in an environment that has a pretty good safety net, in Silicon Valley.”

The main problem in my opinion was not that they didn't execute business strategy correctly – at most that is just a symptom. The real problem is that they operated with a "pretty good safety net".

Working Without a Net

The major problem that many companies fall into when receiving angel or venture funding is that they aren't challenged every moment of every day to do what it takes to survive. Looked at from the perspective of Maslow's hierarchy of needs, the moment the pressure of survival is removed from a fledgling entrepreneurial effort, its focus will tend to drift towards longer-term and less relevant aspects of starting a business.

This shift of focus is a death sentence for many companies because, at this early stage, they still have no inherent ability to survive without the funding – just like a newborn infant. If care is not taken to make the company self-sustaining early on, it likely won't be by the time the money runs out.

I'm not saying that everybody should be bootstrapping every entrepreneurial effort. Many times, funding enables strategic maneuvers and is the fuel that feeds a "fire" with growth – and with good maneuvering and good growth often comes more stability and longevity.

But if bootstrapping is a viable option, my belief is that the majority of the time, the company that emerges will be a lot stronger and healthier.

So, get rid of unnecessary safety nets around your entrepreneurial efforts and focus on the details of basic survival. Pay close attention to the lifeblood of your company – its profitability – early and often, and make sure it is on a trajectory that will make the company self-sustaining as quickly as possible.

As the saying goes, "live each day as if it were your last."

Tuesday, October 8, 2013

Apple Will Be the King of Indoor Location Services

Without saying so, Apple has entered the battlefield of indoor location services. And it appears they are going to win – and win big.

Their first move was releasing the iPhone 4s with Bluetooth Low Energy (LE) support. Many observers have been expecting Apple to ship NFC support in their phones for a long time. Not only do they continue to disappoint in that area, they instead shipped Bluetooth LE with little fanfare.

Their second move was the acquisition of WiFiSLAM in March of 2013. WiFiSLAM was a small startup doing some amazing work marrying machine learning with WiFi triangulation and raw sensor input from a mobile device's compass, inertial and gyroscopic sensors. Their work promised to dramatically improve a mobile device's ability to determine its location indoors.

Then in June of 2013 Apple announced iOS 7, and there was a little-remarked feature buried in the slides: iBeacons. While official information about iBeacons from Apple is under NDA, some reverse engineering and a lot of speculation have revealed that iBeacons are a protocol enhancement to Bluetooth LE that lets conforming devices integrate with iOS Core Location services. The result is that iOS applications can detect and respond to events indicating that the device has moved into or out of a region defined by an iBeacon.
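To make that concrete, here is a minimal sketch in Python of how such an advertisement might be decoded. It follows the publicly reverse-engineered packet layout only – not Apple's NDA'd specification – and the UUID, major and minor values in the example are made up for illustration:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific data of a BLE advertisement into
    iBeacon fields, following the publicly reverse-engineered layout."""
    # Apple's company ID (0x004C, little-endian) followed by type 0x02
    # and payload length 0x15 (21 bytes) mark an iBeacon advertisement
    if len(mfg_data) < 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon advertisement
    proximity_uuid = uuid.UUID(bytes=mfg_data[4:20])
    # major/minor distinguish individual beacons within a UUID's region;
    # tx_power is the calibrated signal strength (dBm) measured at 1 meter
    major, minor, tx_power = struct.unpack(">HHb", mfg_data[20:25])
    return proximity_uuid, major, minor, tx_power

# Example payload (illustrative UUID, major=1, minor=2, tx power -59 dBm)
packet = bytes.fromhex(
    "4c000215" "e2c56db5dffb48d2b060d0f5a71096e0" "0001" "0002" "c5"
)
print(parse_ibeacon(packet))
```

Decoding yields the region UUID plus the major/minor identifiers – exactly the values an application would need to decide which region event applies to it.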

Finally, when Apple announced the iPhone 5s, there was an interesting new chip onboard the phone: the M7 coprocessor. This chip is "designed specifically to measure motion data from the accelerometer, gyroscope, and compass".

Individually, each of these moves seems relatively incremental until you put them all together in this context:

  • Create inexpensive iBeacons that can be placed indoors, each not only defining a region but registered at a particular location
  • Use Bluetooth LE in mobile devices to communicate with them without draining the battery
  • Pull high-precision motion data from the M7 coprocessor, again without draining the battery
  • Integrate all of the above using WiFiSLAM's algorithms
  • Make it all simple to integrate into iOS applications (a sketch of the underlying math follows below)
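A common, publicly documented way to turn a beacon's calibrated transmit power into a distance estimate is the log-distance path-loss model. Whether Apple uses exactly this is unknown, so treat the exponent and the sample RSSI values below as illustrative assumptions:

```python
def estimate_distance(rssi: int, tx_power: int,
                      path_loss_exponent: float = 2.0) -> float:
    """Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d),
    where tx_power is the calibrated RSSI (dBm) at 1 meter."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

# A beacon calibrated at -59 dBm but observed at -72 dBm is roughly 4.5 m away
print(round(estimate_distance(rssi=-72, tx_power=-59), 1))
```

A real system would fuse many such noisy estimates with the M7's motion data – presumably where WiFiSLAM's algorithms come in.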

The result is going to be an amazingly inexpensive, power-efficient indoor location system that is simple to use and operate. In typical fashion, Apple is addressing the entire ecosystem of the indoor location problem in a very innovative way – the result of which will again be significant market domination.

Tuesday, September 24, 2013

Rise of the Virtual Machines

UPDATE 2: Since my last update, I discovered Mainframe2, another pretty amazing take on virtualization.

UPDATE: Since I wrote this post, I discovered Docker, another very interesting direction in virtual machines.

"Virtualization" is a term that's used pretty regularly – but exactly what do we mean when we use that term? There's roughly three levels of virtualization in use today:

  1. Hardware and CPU emulation. This is the lowest level of virtualization. Most recent implementations emulate a standard PC architecture and run native machine code. Despite the recent growth in this area (VMware, Parallels), this type of virtualization actually dates back to 1972, when IBM released VM/370 for mainframes.
  2. Byte code emulation. This higher-level emulation implements an abstract computing environment (registers, memory, instructions) only – there is no true hardware emulation. This type of VM assumes it is running in the context of an operating system that provides all of the facilities hardware peripherals would normally provide. The Java VM, Microsoft's CLR, Python, Ruby and Perl are all examples of this type of VM (a toy example follows after this list).
  3. Sandboxing. This high-level virtualization modifies the behavior of operating system calls to implement a container ("sandbox") in which normal applications are isolated from other applications and system resources as a matter of policy. Some examples are Apple's App Sandbox on OS X, the iOS App Sandbox and Linux Containers.
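To make level 2 concrete, here is a toy byte-code interpreter in Python (itself a level 2 VM). The opcodes are invented and it is nowhere near the JVM or CLR in scale, but it has the same shape: an abstract stack machine executing an instruction set with no hardware emulation, leaning on the host for everything else:

```python
# Invented opcodes for a minimal stack machine
PUSH, ADD, MUL, PRINT, HALT = range(5)

def run(program):
    stack, pc = [], 0
    while True:
        op = program[pc]; pc += 1
        if op == PUSH:
            stack.append(program[pc]); pc += 1   # next cell is the operand
        elif op == ADD:
            b, a = stack.pop(), stack.pop(); stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop(); stack.append(a * b)
        elif op == PRINT:
            print(stack.pop())   # I/O is delegated to the host runtime/OS
        elif op == HALT:
            return

# (2 + 3) * 4 -> prints 20
run([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT])
```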

What all three of these techniques have in common is that they mediate access between an application (or operating system) and its execution environment. The goals are to increase security, manage resources more reliably and efficiently, and simplify deployments. These benefits are behind the rapid rise of hardware virtualization over the last 5 years.

What's interesting are the parallels between virtualization and the web. A virtualization instance (or virtual machine – VM) creates a virtual environment for each application/operating system to operate within, for both security and efficiency reasons. Web browsers do the same thing with Javascript – each web page has its own execution environment. You could call it level 2.5 virtualization, as it shares aspects of level 2 and level 3 virtualization.

Virtualization can be a mind bending exercise – especially when you start looking at things like JSLinux. JSLinux is a hardware VM implemented in Javascript that runs inside of a web page. The demo is pretty amazing – it boots a relatively stock Linux kernel. The mind bending part is when you realize this is a level 1 VM implemented inside a level 2.5 VM. Technically, you should be able to run a web browser inside of JSLinux and launch yet another nested VM instance.

The Blue Pill

Where is all of this going? With almost four different types of VMs and proofs of concept that intermix them in reality-altering ways (The Matrix, anyone?), it seems we haven't reached the apex of this trend yet.

One path this could follow is Arc. Arc takes the browser in a slightly different direction from JSLinux: it packages VirtualBox into a browser plugin and combines it with a specification for describing web-downloadable virtual machines. This makes the following possible: you install the Arc plugin into your browser and visit a web page with an Arc VM on it; the plugin downloads the spec, assembles and launches the VM, and you wind up securely running a native application via the web.

In other words, visiting a web page today could turn into launching a virtualized application tomorrow.

While there are clear efficiency and security benefits to this, there's also a huge developer benefit: developers would be able to implement web applications in almost any conceivable fashion. They can choose the operating system, the GUI toolkit and anything else they like as the basis of their application. The situation where the web tends to be the lowest common denominator gets turned on its head. Developers are now freed to develop with whatever tools provide the best experience and value to the end-user.

This fanciful scenario implies a possible future trend: the death of standards. Web standards exist and are adopted because they benefit developers – they create a homogeneous ecosystem which is easier to write for and deploy into. But if virtualization takes us to the point where the characteristics of the execution environment are so isolated from the application that they have no impact on it, why bother with standards?

If this outcome comes about, the irony will be that the Java VM shot too low when it first arrived on the scene claiming "write once, run anywhere". Maybe we are really going to end up with "create anything, run anywhere".

Friday, September 6, 2013

The Economics of RFID Performance

I've built many types of RFID applications in the past: passive/active, presence/area/position, HF/UHF/UWB. Regardless of the technology and the application, there's always been a disconnect between users' concept of how RFID should work and its real-world behavior.

The basic concept of RFID is that you can identify an object at a distance using some sort of wireless protocol. There are many different technologies and techniques for doing this but they all fundamentally rely on some sort of RF communication protocol.

I've written previously about the issues with unlicensed RF communications. The unstated implication is that if you could see RF communications the way we can see visible light, we would be stunned by the storm of RF constantly swirling around us. It would be like the inside of a dance club with disco balls, lasers and colored lights shining everywhere.

But even in "quiet" environments, RF communications still suffer significantly from fundamental limitations of physics. For instance, water absorbs most RF signals – and the human body is the most common mobile container of water in RFID applications. If the line of sight between a WiFi base station and a WiFi device is blocked by more than three bodies, the signal can be lost entirely. RF signals are also affected by metal, which can reflect, absorb and even redirect them.

As a result, performance can be unpredictable in even the simplest deployments.

The root of the disconnect I referred to earlier is that end-users don't perceive these complexities because they aren't visible to the naked eye – you have to visualize the situation mentally to understand what is really going on. Their naive (but rational) assumption is that RFID should just work – and work reliably.

While you could spend a lot of energy educating end-users about these environmental complexities, you are probably better off framing the entire issue in economic terms, which can be summed up in the following chart:

What this chart is saying is that most RFID systems (and applications) have to make a tradeoff between cost and performance. The tradeoff is made such that, on average, you get a reasonable level of performance for a fixed cost. Many times, "on average" will be something like 75% of the time, and "reasonable" performance will be 95% read accuracy.

So, generally speaking, a fixed investment in equipment gets you a high (but not perfect) level of performance. Within that fixed cost you can tune things, rearrange equipment, adjust application parameters and take other steps to linearly improve performance by, say, 1%. Once you reach the limit of those techniques, you can begin to add redundant equipment for another linear increase in performance of, say, another 1% – but with a faster increase in cost.

Now you are at the stage where actions are best described as "heroic" and costs begin to rise exponentially. For instance, you start looking into building custom antennas, boosting transmit power beyond regulatory limits and hand-selecting RFID tags for their individual performance characteristics. Yet all of this might buy you only another 1% of performance.
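To put toy numbers on that curve, here is a sketch of a geometrically rising cost per percentage point. The base cost, baseline accuracy and cost multiplier are invented for illustration, not measurements from any real deployment:

```python
# Illustrative only: each extra point of read accuracy beyond the baseline
# costs multiplicatively more than the last (all numbers are assumptions).
BASE_COST = 50_000       # hypothetical fixed equipment investment
BASE_ACCURACY = 95.0     # "reasonable" accuracy that investment buys

def cost_of_accuracy(target_pct: float, cost_multiplier: float = 3.0) -> float:
    """Cost grows geometrically with each percentage point above baseline."""
    extra_points = max(0.0, target_pct - BASE_ACCURACY)
    return BASE_COST * cost_multiplier ** extra_points

for target in (95, 96, 97, 98, 99):
    print(f"{target}% accuracy -> ${cost_of_accuracy(target):,.0f}")
# 95% -> $50,000 ... 99% -> $4,050,000
```

Push the target toward 100% and the cost diverges – which is exactly the lesson below.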

And therein lies the lesson: there is no cost-effective way to get 100% accuracy out of an RFID system.

So take my advice and start RFID projects off with the graph above and a lesson in economics. It will save you and your customer a lot of grief.