Wednesday, March 30, 2011

Patterns of Success - Sam Adams


I first met Sam Adams back in 1992. I was an independent consultant giving advice on object technology and Sam was working at Knowledge Systems Corporation, helping customers learn how to develop applications using Smalltalk.
He had this kind of magic trick where he would sit in front of the computer and ask somebody to describe a business problem, and as the person talked he would build the application before their eyes. Every 5-10 minutes he would present the latest iteration and ask if it was the solution they had been describing. Very Agile development before its time. Sam and I both moved on to IBM, where we were part of IBM's first Object Technology Practice. In 1996, Sam was named one of IBM's first Distinguished Engineers, and he has spent the past 10 years in IBM Research.

John - Thanks for joining me on the Patterns of Success interview series. What kind of projects have you been working on recently?

Sam - Last year I worked on IBM's Global Technology Outlook (GTO). Every year IBM Research goes through an extensive investigation of major trends and potential disruptions across all technologies that are relevant to IBM's business. My GTO topic area was peta-scale analytics and ecosystems. This topic emerged from our thinking about commercialization of our current BlueGene high performance computing technology as we push higher toward exascale computing. Another major influence was the coming disruptions in systems architecture anticipated when very large Storage Class Memories (SCM) become affordable over the next 5 years.

John - Let me calibrate this another way. When you talk about BlueGene and peta-scale, how does that compare to the recently popular Watson computer that won the Jeopardy! match?

Sam - In terms of raw computing power, Watson is about an order of magnitude less powerful than a BlueGene/P, which can provide sustained calculations at 1 petaflop.

John - That helps.

Sam - Another trend that we considered, and an area I have been working on for the last three years, is the single-core to multi-core to many-core transition. How are we going to program these things? How are we going to move everybody to a massively parallel computing model? One problem we are working on is that CPU availability is no longer the limiting factor in our architectures. The most critical factor these days is I/O bandwidth and latency. As we move to a petaflop of computing power we need to be able to feed all those cores, as well as empty them of results, very, very quickly. One of the things we realized is that this scale of compute power will need a new model of storage, something beyond our current spinning-disk-dominated approach. Most current storage hierarchies are architected on the assumption that CPU utilization is the most important factor. In the systems we envision, that is no longer the case. Current deep storage hierarchies (L1 - L2 - DRAM - Fast Disk - Slow Disk - Tape) have lots of different latencies and buffering built in to deal with the speed of each successive layer. Petascale systems such as those we envision will need a very flat storage hierarchy with extremely low latency, much closer to DRAM latency than that of disks.
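To put the flatness Sam describes in rough perspective, here is a small sketch comparing the tiers he lists. The latency figures are commonly cited order-of-magnitude illustrations I have filled in myself, not measurements from any specific IBM system:

```python
# Rough, order-of-magnitude latencies (illustrative assumptions, not benchmarks).
LATENCY_NS = {
    "L1 cache": 1,
    "L2 cache": 5,
    "DRAM": 100,
    "Storage-class memory (assumed)": 1_000,
    "Fast disk": 5_000_000,
    "Slow disk": 10_000_000,
    "Tape (seek)": 10_000_000_000,
}

dram = LATENCY_NS["DRAM"]
for tier, ns in LATENCY_NS.items():
    # Show how far each tier sits from DRAM, the target Sam mentions.
    print(f"{tier:32s} {ns:>14,} ns  ({ns / dram:,.0f}x DRAM)")
```

Even with generous assumptions, disk sits tens of thousands of times further from the cores than DRAM, which is the gap a "flat" hierarchy is meant to close.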

John - It seems to me that one of the more significant successes in this area has been the map/reduce, Hadoop movement used by Google for their search engine. How does the research you are working on compare/contrast to this approach?

Sam - We see two converging trends: the supercomputing trend, with massively parallel computing being applied to commercial problems, and the big data / big analytics trend, which is where Hadoop is being used. The growth of data on the internet is phenomenal, something like tenfold growth every five years. The business challenge is how you gain insight from all this data and avoid drowning in the flood. Companies like Google and Amazon are using Hadoop architectures to achieve amazing results with massive data sets that are largely static or at least "at rest". In the Big Data space, we talk about both data-at-rest and data-in-motion. The storage problem and map/reduce analytics are largely focused on massive amounts of data at rest. But with data-in-motion you have extreme volumes of fast-moving data with very little time to react. For instance, imagine a stream of data like all the transactions from a stock exchange being analyzed in real time for trends. IBM has a product called InfoSphere Streams that is optimized for such data-in-motion applications.
So the combination of many-core supercomputers, data-at-rest analytics, and data-in-motion analytics at the peta-scale is where the leading edge is today.
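For readers unfamiliar with the map/reduce model mentioned above, here is the canonical word-count teaching example written as a minimal sketch in plain Python. It only illustrates the programming model (map, shuffle, reduce); it is not Hadoop's or Google's implementation:

```python
from collections import defaultdict
from itertools import chain

documents = [
    "big data at rest",
    "big data in motion",
]

# Map phase: emit (key, value) pairs independently for each input record.
def map_phase(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle: group the intermediate pairs by key.
grouped = defaultdict(list)
for word, count in chain.from_iterable(map_phase(d) for d in documents):
    grouped[word].append(count)

# Reduce phase: combine the values for each key.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # {'big': 2, 'data': 2, 'at': 1, 'rest': 1, 'in': 1, 'motion': 1}
```

Because the map calls are independent and the reduce calls only see one key at a time, each phase can be spread across many machines, which is what makes the model attractive for data-at-rest analytics.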

John - So with data-in-motion stream analytics, is one not limited by the performance of the front-end dispatcher that looks at each event in the stream and decides where to pass it? If the stream keeps doubling, won't that component eventually choke?

Sam - Everything is bound by the ingestion rate. However, the data is not always coming in on the same pipe. Here you are getting into one of the key architectural issues... the system interconnect. Most data centers today use a 1GbE or 10GbE interconnect. This becomes a bottleneck, especially when you are trying to move hundreds of terabytes of data all around the data center.
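A quick back-of-the-envelope calculation shows why the interconnect dominates. This assumes ideal, sustained throughput with no protocol overhead, which real networks never achieve:

```python
terabytes = 100                       # data to move around the data center
bits_to_move = terabytes * 1e12 * 8   # decimal terabytes -> bits

for name, bits_per_second in [("1GbE", 1e9), ("10GbE", 10e9), ("100GbE", 100e9)]:
    seconds = bits_to_move / bits_per_second
    print(f"{name:>7}: {seconds / 3600:8.1f} hours to move {terabytes} TB")
# 10GbE works out to roughly 22 hours for 100 TB, even under ideal conditions.
```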

John - So as much as we hold Google up as a massive computing system with its exabytes of storage and its zillions of processors, it is dealing with a very parallel problem, with all the search queries coming in over different communications infrastructure to different data centers dealing with largely independent data sets. Compare this to a weather forecasting application that can reduce the problem to separate cells for parallel operation but must assemble all those results to produce the forecast.

Sam - The most difficult parallel computing problems are the ones that require frequent synchronization of the data and application state. This puts a severe strain on I/O, shared resources, locking of data, etc. 
At the end of the day, the last bastion we have for performance improvements is reducing the latency in the system. And to reduce end-to-end latency, we must increase the density of the system. Traditional chip density has just about reached its limit because of thermal issues (there has been some work at IBM Zurich that could shrink a supercomputer to the size of a sugar cube). Beyond increasing chip density, there has been growth in the number of cores, then the number of blades in a rack, the number of racks in a data center, and the number of data centers that can be shared for a common problem. While each tier of computing increases the computing power enormously, the trade-off is that the interconnect latency increases significantly and eventually halts further improvement in overall system performance.
One big area for innovation in the next 5-10 years will be how we increase this system density, primarily by reducing the interconnect latency at each computing tier. The ultimate goal would be for any core to access any memory element at almost the same speed.

John - So in your area of research on high performance computing, particularly working with customers who have tried to adopt some of these emerging ideas, what have been the successful outcomes, and did customers do anything special to be successful? I guess because you are in IBM Research, even the work with a customer is considered an experiment with a high risk of failure.

Sam - If you look at the whole shift towards massive parallelism, the successes have, unfortunately, all been in niches. I say unfortunately because we would love to have some general solution that applies to all computing problems. Take the example we spoke of earlier: Google using massively parallel computing to solve its search problem. They have optimized their solution stack from the hardware up through the OS to their application architecture. It solves their problem, but it is a niche solution.
The functional programming folks have introduced languages like Haskell that support concurrency and parallelism. The problem with functional programming is that the programming model provided in these languages is not intuitive enough and is difficult for the large majority of programmers to grasp. Contrast this with the success of the object-oriented movement. The programming model mapped cleanly to the real world and still allowed the programmer to manage the organizational complexity.

John - And in the OO programming model each object is separated from other objects by a defined set of sending and receiving communications. So, in theory, these objects could be distributed and run concurrently.

Sam - We need something like that to be successful with high performance parallel computing... a programming model that allows someone to develop in the abstract without explicitly thinking about the issues involved with the underlying system implementation, and then a very clever virtual machine that can map the code to the chips / cores / blades / servers / data centers so that the best performance is achieved.

John - It seems like some of the successes have been because the nature of the problem happened to fit the ability of the technology at that time. 

Sam - To a point. Even for the Google search problem, it is often quite challenging for the programmer to figure out the map and reduce details so that it works efficiently. So the successes have been niche areas where the application could be structured to exploit parallelism successfully.

John - Like with weather forecasting. The forecast is based on the combination of many cells, with each cell representing the physical conditions within a given space, so the calculations for each cell are the same, with the results varying depending on the initial conditions. To increase the accuracy of the forecast, increase the number of cells in the model. The algorithm stays the same. You just need more resources.

Sam - If you increase the number of cells (for example, going from 10km resolution to 1km resolution), you also have to increase the frequency of the calculation, because the physical conditions change more rapidly for any one cell at that resolution. This requires a lot more resources. But the algorithm does stay basically the same. An excellent example of a niche solution. IBM Research actually did this with a project called Deep Thunder.
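A toy sketch of the pattern being described: the same per-cell calculation applied independently across a grid, so finer resolution mostly means more cells and more cores. The "physics" here is a made-up stand-in, not anything from Deep Thunder:

```python
from multiprocessing import Pool

def step_cell(cell):
    """Advance one grid cell by one time step (hypothetical physics stand-in)."""
    temperature, pressure = cell
    return (temperature + 0.1 * pressure, pressure * 0.999)

if __name__ == "__main__":
    # A finer grid (1 km vs 10 km) simply means many more cells like these.
    grid = [(15.0 + i * 0.01, 1013.0 - i * 0.02) for i in range(100_000)]
    with Pool() as pool:
        next_grid = pool.map(step_cell, grid)   # same algorithm applied to every cell
    print(next_grid[:3])
```

The map step parallelizes cleanly; the expensive part in a real model is the synchronization between neighboring cells at each time step, which is exactly the I/O and latency pressure Sam describes.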

John - Now tell me about some failures to launch. Examples of where the technology just did not work out as expected, and some of the reasons why.

Sam - Rarely do I see the issue being the emerging technology. More often, it is the surrounding ecosystem of people, business models, and other systems not willing to adapt to the disruption the emerging technology introduces. Could we have built an iPhone thirty years ago? Well, maybe. But it would not have mattered. The ecosystem was not in place: a wireless internet, an app store business model, third-party developers building apps like Angry Birds or Twitter, a generation of consumers familiar with carrying cell phones. All these elements needed to be in place. Somebody has to come up with a compelling application of the emerging technology that demonstrates real value in order to move people over Moore's chasm.

John - So bringing us back to the area of emerging high performance computing... Is this a reason why IBM develops computers like Watson? To demonstrate a compelling application of the technology?

Sam - We tackle these grand challenge problems for a couple of reasons. One of them is to actually push technology to new levels. But the other is to educate people on what might be possible. After developing Watson to solve a problem on the scale of Jeopardy!, we will see pilots using data in fields like medicine, energy, and finance... domains that have enormous amounts of unstructured data.

John - Final topic is THE NEXT BIG THING. In the area of high performance computing what do you think we will see in about three years that will be a disruptive innovation?

Sam - I think there will be widespread adoption of storage class memory. This means hundreds of gigabytes to petabytes (on high-end systems) of phase-change memory or memristor-based memory. Flash memory will be used early on, but it has some issues that will not let it scale to the higher end of what I envision. What you are going to see is a movement away from disk-based systems. Even though disks will continue to decrease in cost, you reach a tipping point where storage class memory is cheap enough once you consider its roughly 10,000 times lower latency.
The other significant change will be the many-core processors available for servers. By many-core, I mean at least 100 cores. This will dramatically increase the capacity for parallel processing on typical servers and open up fresh territory for innovation.
Taken together, these two trends will produce systems that are very different architecturally from those we see today. For example, we will see the emergence of operating systems based on byte-addressable persistent memory instead of the classic file metaphor. Content-addressable memories will also become more common, which will support more biomorphic styles of computing.
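Content-addressable storage can be hard to picture, so here is a toy software analogy: data is retrieved by (a hash of) its content rather than by a file path or block address. This is only an illustration of the idea, not how a hardware CAM or a persistent-memory operating system would actually be built:

```python
import hashlib

class ToyContentStore:
    """Store values under a digest of their content instead of a location."""
    def __init__(self):
        self._store = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        self._store[key] = data
        return key          # the content itself determines the address

    def get(self, key: str) -> bytes:
        return self._store[key]

store = ToyContentStore()
address = store.put(b"petascale analytics")
print(address[:12], store.get(address))
```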

John - So if this three year projection of many-core processors and storage class memory comes to pass, how will our day-to-day lives be different?

Sam - I think you will see a lot more mass customization of information. Custom analytics, tuned to your needs at that time, will produce predictions of what you might be interested in at that very moment. Aside from the obvious retail applications, like the shopping scene in "Minority Report", think how this could impact healthcare, government, engineering and science. Consider how these timely yet deep insights could affect our creativity.

John - Thanks for sharing your insights with us Sam.

Wednesday, March 23, 2011

Patterns of Success - Ward Cunningham

When I joined the Object Technology Practice at IBM, Sam Adams taught me about a cool way of capturing an object model called CRC. He had gotten this technique from an ex-colleague at Tektronix... Ward Cunningham.
My use of CRC and other personal interactions with Ward are covered on my web site.
As his LinkedIn profile states:
"I have devoted my career to improving the effectiveness of technical experts, mostly by creating new computer tools, but also by radically simplifying methods."


This will be the focus of my interview with Ward for Patterns of Success.


John - Ward, thanks for taking the time for this interview. As I explained in the email I am looking to cover three topic areas:

  • Patterns of Success
  • Failures to Launch
  • THE NEXT BIG THING
But first I wanted to ask you about the work you are doing at AboutUs as the Chief Inventor. I got an account at AboutUs back in 2008, but never really used it that much. Then in preparation for this interview I thought I would go back and dust it off to become familiar with the changes. I currently use Google Sites, Blogger, LinkedIn, and Twitter to give eTechSuccess an internet presence. What is the value add that AboutUs will provide me?


Ward - I would say our focus now is on helping small businesses use those services, and especially on search engine optimization. We realize that getting traffic to the site, the right people to the site, is what matters.


John  - So in the areas of Patterns of Success, what are some patterns that you have seen over the years?


Ward - I think there are a couple of different kinds of success. One is getting your job done on time. And the thing there is not to make the job bigger than it needs to be. We are sometimes unsure of what we are supposed to do, so we do everything we might be asked to do. Sometimes developers avoid having a conversation with the customer asking, "Would it be OK if we just did this?" A large part of Agile is the notion that we plan often, so we do not make these giant plans of everything we might want. Instead we say, "Maybe we should do the first half and see if maybe that's enough."


John - Is that just a matter of not knowing where we are going until we get there, meaning these big plans try to anticipate things way out in the future, OR is it that we think better in the small, in smaller units of complexity?


Ward - It's more that it's easy to imagine software with an almost unbounded number of problems that you don't think about in the beginning. For example, I wrote a report program once that sorted on the first column only, in ascending order. People told me it was a terrible program.


John - Why did they think it was terrible?


Ward - Oh, because it should sort on any column or select any combination of columns to sort on. And it was not that it could not be programmed that way; the problem was that I could not make an easy-to-use interface that would explain how it worked. When my users opened my report sorted on the first column, it was easy to understand and they could get on to reading the report.


John  - So it was good enough to get the job done?


Ward - It was good enough for the moment. In XP terms that would be called taking a split. Let's split the functionality into a release that is basic and then talk about adding extras in a later release.
So the idea is to be willing to do less, and that is a skill that comes with confidence. If you do not feel that you need to defend your programming ability or your ability to conceive a system, then it is easier to do something in a minimal way. I think that has grown into the concept of a minimally marketable product. That is at the product level, but as an individual programmer it sure feels good to get something done at the end of the day.
So a very important skill is the ability to separate out of a big project lots of little projects that are worth doing and doing quickly.


So that is one type of success. But I want to shift to another kind of success that I call exceeding expectations. When it comes to exceeding expectations I have a little saying... "The path to exceeding expectations probably does not go through meeting expectations."
In other words, if you are going to delight somebody, you are going to give them something that they didn't expect. So if the first thing you do is everything that is expected, and the second is something beyond that... it is too linear. It is like delivering the asked-for twelve sort functions and saying you are exceeding expectations by giving the customer fourteen sort functions.
For example, nobody asked for wiki, so how was it that I was able to make something so popular? Well, there is a certain minimalism that allowed me to make it, but more important, there are things in there that were not expected, like the linking, just because I was playing around with HyperCard and trying to figure out what it could do. So instead of trying to meet expectations you have to redefine the problem and ask, what if they asked for this? Could I do that better than this?
One thing I discovered pretty early on is that if I went into staff meetings and delighted people with one thing they would forget about all the things I was supposed to do.


John - But there is a kind of genius... an inventing light bulb going on in the developer's head when they're listening to the customer say what they want and offering up the unexpected. There's something about their own domain knowledge, thinking outside the box, their inventiveness that allows them to give back the unexpected. What is it? Are there just some individuals who can do this? Or is there a prescription that someone can follow to achieve the result?


Ward - The formula is to do a lot of it. Over many attempts to build software you build up patterns that you can draw on to solve the next problem. I look back at my own career: I started computer programming for fun. I did not take the class my high school offered when they got a computer; instead I sneaked in during my free period, made up problems, and solved them. And even during my professional career I have done a lot of good work for my employers and clients, but the stuff I am most known for I just did for fun. That willingness to invest your own time in a project gives you the freedom to turn a problem around and play with different solutions.


John  - We've been speaking of wiki as a collaboration tool. Have you had a chance to play around with Google Wave?


Ward - Yes. I thought Wave was fantastic. I told people that Wave was more like wiki than wiki. I think that one of the things that happened to Wave was that people did not know how to write in the medium. When wiki first started, people did not understand that you need to revise the document relentlessly to make it match your current understanding.
People ended up using Wave in a very conversational way instead of this document emergent way. When they could not get in touch with the people they needed to, they would just stop using Wave.


John - Well, let's hope Google has learned some patterns from Wave and refactors that knowledge into some of their new and improved services.


Ward - If we want to talk about a Failure to Launch, then Wave would be a good example. You need a critical mass of participation to be successful. That is also a classic problem with wikis. Companies will tell me that they need some of that wiki stuff, and if it fails it is because the community around the wiki never formed correctly. First people need to be given a sense of what they are supposed to do in the wiki. Then you have to help them do it until they get good. Ah! Here is a formula for being successful at propagating ideas:


  1. You have to have a technology, a computer tool that supports the propagation.
  2. You have to have a methodology, a way to use the tool to deliver its promise.
  3. You need to have a community, the correct number of people using the tool and following the methodology.
In fact, I talked to a group at Microsoft once and they told me that they had a wiki but were not getting much use from it. I asked them how they were using it, and they told me they would put meeting notes in. I asked what the entry was called and they said "Meeting notes December 19th." And I said that entry name did not roll off the tongue... they were just replacing a paper system. They did not have a methodology... so I gave them one: in the last five minutes of the meeting, ask what the three most important ideas were that surfaced and what the proper name for each idea would be, so that it becomes a page on the wiki and enters the vocabulary of the community. This is a style of note taking that gives the wiki power.


John - Have you written down this methodology of how to use a wiki?


Ward - No, you might get me started on that, though. There is a nice book on wikipatterns that I wrote the foreword for. It includes several patterns for how an organization should launch a wiki.


John  - So give me another dramatic failure to launch.


Ward - Hmmm. You know, most of my ideas flop. But each failure sensitizes me to the missing element for success... so whenever I fail it teaches me something I don't know how to do.


John  - Let me change gears to our final topic... THE NEXT BIG THING. What do you think it will be three years out?


Ward - I recently made a prediction for Cutter Consortium. Something that could happen but isn't happening. And that something is in the way systems are evolving... Software as a Service. I think that there will be refactoring across system and organizational boundaries. We need to allow APIs to evolve without allowing things to break.


John - What technology does this refactoring run on?


Ward - Well I have seen something in the Eclipse platform called refactoring scripts. I can save a refactoring script and can send it to you, where you can run it against your program without me having to know what the internals of your program are. In order for this to work, I would need dozens of examples of use of my API including refactoring scripts that can be applied to my demo programs. As part of my SLA I would promise to provide refactoring scripts for each of my API demo programs whenever I made a change to the interface.
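Eclipse's refactoring scripts are Java-centric, but the idea Ward describes can be sketched in a few lines of Python: an API provider ships a small script that mechanically rewrites client code when a published function changes. The names here are hypothetical, the sketch uses only the standard ast module, and a real cross-organization scheme would need far more (versioning, semantic checks, suites of demo programs):

```python
import ast

OLD_NAME, NEW_NAME = "create_order", "submit_order"   # hypothetical API change

class RenameCall(ast.NodeTransformer):
    """Rewrite references to the old API name so client code tracks the new interface."""
    def visit_Name(self, node):
        if node.id == OLD_NAME:
            node.id = NEW_NAME
        return node

client_source = "order_id = create_order(customer, items)\n"
tree = RenameCall().visit(ast.parse(client_source))
print(ast.unparse(tree))   # order_id = submit_order(customer, items)  (Python 3.9+)
```

The point of Ward's proposal is that a script like this would travel with the API change, so consumers of the service can apply the same mechanical edit to their own code.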


John - So let's use a concrete example. Suppose we both work at Walmart and are working with Procter & Gamble on a new Order Management System. We have an API and would send P&G some refactoring scripts that could modify their Order Management System. Right?


Ward - We have this dream that if we hold the API constant we can change anything behind it without impacting our users. But I believe that is false. Anything worth doing is exposed through the API.
If this is going to really work, we will need to evolve those services so they do not have intimate knowledge of the other side. That's why I say refactoring across organizations. Suites of demo programs, and scripts that can be applied to them, available to the community of cooperating companies.


John - So if I go to the Eclipse Foundation, will I find an example of refactoring scripts?


Ward - Yep. In fact, when I was doing research for my Cutter article, I found a blog post that gave that example, but it did not suggest refactoring across organizations. In fact, when I spoke with the authors they did not think it would be a good idea. They thought the API should remain stable. But if we are going to make something that has emergent properties, instead of re-writing big programs over and over again, we need to figure out a way for them to evolve.


John - Well thank you for taking the time to speak with me and share your ideas.





Friday, March 4, 2011

Mobile Device Dilemma part IV

I finally upgraded my mobile device from a Blackberry Bold to an HTC Inspire. Many of the trade-off issues are documented in parts I-III. The four Blackberries below are my last four mobile devices (photo courtesy of HTC Inspire).



My AT&T contract for the Bold was due to expire this April, and that was the trigger for me to upgrade. In the process of getting the Inspire I considered a few other options. Below are the factors in my decision to go with the Inspire.

Atrix - This device was available at the same time as the Inspire, and while I liked the screen resolution, I was not attracted to the screen size. However, the main reason for not going with the Atrix was that I would not take advantage of the peripheral options like the laptop and multi-media docks.

Infuse - This was a close second in my decision process. However, I did not want to wait until May/June to get the device, and Samsung's build quality and Android update frequency had turned me off. I was attracted to the Super AMOLED Plus display with Gorilla Glass, but not enough to wait.

Samsung S2 - While announced for Europe at Mobile World Congress, this device probably would not reach AT&T until late summer. I am not sure I would get that big a boost from the dual core. I was intrigued by the Near Field Communication chip, but I think that will be something I latch onto in my next phone two years from now.

Thunderbolt - The "4G" from AT&T is a joke out of the box. If I were willing to take my family plan over to Verizon, I could have jumped on LTE. However, LTE where I live in Raleigh would not light up until midsummer, and I did not want to wait.

Given all the above options, I chose the Inspire because of HTC build quality, and the fact that in a few weeks I will root the phone to get much faster network data speeds. Stay tuned for my adventures in rootville.

Twenty-four hours later:

  • I am still getting used to the on-screen keyboard but can see myself going up the learning curve.
  • The battery life was a concern before purchase, but is not really an issue for me now. I have the power saver enabled and have not run low yet. With my typical use, I got a full fifteen hours.
  • I have been downloading some apps and widgets from the HTC Sense Hub and the Android Market. Very easy. Instead of using one of the six alternative screens beyond Home, I find myself opening the All Apps list and scrolling around. Figuring out a better way to get to a specific app is something I need to work on.
  • I saw a post on malware from Android apps and decided to purchase an anti-virus app (Anti-Virus Pro).
  • Finally, I am disappointed that there is not tighter integration with my Google Apps account. Gmail, Calendar, and Contacts are well integrated, but others like Docs are based on a web interface, and you have to manually create a bookmark or shortcut for quick access.

Wednesday, March 2, 2011

Patterns of Success - Jim Stikeleather

One of the benefits of our modern social networking tools like LinkedIn is being able to meet people virtually. Jim reached out to me and invited me to join his network on LinkedIn. For people I have not met before, I like to review their background a bit before hitting the accept button. In Jim's case, he had been a CTO at Perot Systems, MeadWestvaco, and others. I asked him if he would be interested in participating in Patterns of Success and he said yes.

John - Thanks for spending some time with me on this interview. First off, how did you find out about me? What drove you to send me an invitation on LinkedIn?

Jim - The tool itself makes recommendations based on common connections. We had several people in our intersecting networks so I asked you to join my network.

John - By the way, have you played with the LinkedIn Social Map?

Jim - Yes! It is very interesting how it clusters individuals into different groupings that show the concentrations of your career over time. I got 6-7 clusters, mainly associated with companies I had worked for.

John -  Tell me about what you do at Dell Services as the Chief Innovation Officer.

Jim - We are still forming the Innovation Group here at Dell Services. We have worked up the team's initial charter, and our charter is likely to be a constant work in progress; in fact, the role of an innovation office always should be a work in progress. In prior work at places like Perot Systems, where I was CTO, I was looking over the horizon at emerging technologies and figuring out their impact on our business. That is sort of what I am doing at Dell, but at Dell the CTO is much more focused on products with an 18-24 month horizon. So, Innovation is the new title. As we were combining the acquired Perot Systems into the existing Dell Services, we decided to create this office to look farther and more broadly over the horizon than the current CTO does. Initially we needed to get a good definition of what innovation is at Dell: how to measure it, how to know when you are successful. We also needed to develop a repeatable innovation process. In most companies innovation occurs in an ad hoc fashion, almost by accident.
We have defined the process and it starts with Visioning. What we do in Visioning is to look at environmental trends. Trends in laws or culture or business that could influence the adoption of technology. For example, there are more and more laws dealing with privacy on the internet. How will these laws impact current application portfolios or future development?
We don't look initially at trends in technology because we feel that the legal/cultural/business trends need to be in place first before a technology will take hold.
So we paint this picture of the direction that the world wants to move and then use techniques like Metcalfe's Law to understand the value of connections between the different trends. This helps us decide what technologies to focus on and understand what applications of the technologies could bring most value.
Next we go into an Innovating phase where we will pick a promising technology and do some trial applications to see how well it really works. Based on these results we will select a few to take into the final phase of Production.
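Metcalfe's Law, which Jim mentions using to weigh the value of connections between trends, says that a network's value grows roughly with the number of possible pairwise connections, n(n-1)/2. A tiny illustration, with purely illustrative participant counts:

```python
def metcalfe_value(n_participants: int) -> int:
    """Possible pairwise connections, the rough proxy Metcalfe's Law uses for value."""
    return n_participants * (n_participants - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} participants -> {metcalfe_value(n):>12,} potential connections")
# Value grows roughly with the square of participants, not linearly.
```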



John - So when you are doing these steps of innovation is this only for Dell Services OR are you creating a services offering to take to your clients?

Jim - We are taking this down two paths. One is for Dell Services, but the other is an offering for our customers. For example, a customer who has a particular problem and wants to issue an innovation challenge to solve it. We can help the customer understand where innovation can be applied, not only to Products/Services but also to the Processes used to manufacture and sell those Products/Services. So a customer can issue the challenge to some community (inside the company, outside the company, or both) to feed ideas into the process that changes Products/Services/Processes.

John - A while back I listened to a lecture on YouTube by Douglas Merrill about innovation at Google. He described innovation as being a combination of transformational, incremental, and incremental with a side effect. Do you see innovation in similar shades?

Jim - Yes. We see innovations that, if applied to existing Products/Services/Processes, are almost Six Sigma continuous improvements. Then as you move farther away from existing Products/Services/Processes, the changes are larger and require organizational change or new technologies. Finally, if it is a completely new business model with a completely new technology, then it is a game changer... a disruptive innovation.


John - Is there a correlation between the high risk/reward of a game changer and the low risk/reward of an incremental innovation?

Jim - That is where the practical and academic literature just falls on its face. I don't think there is a correlation. You can't predict the financial reward from the potential of the disruption. Geoffrey Moore talks about companies that focus on differentiating parts of their business falling into one of three categories. The first is when you are competing in a market alongside all the other competitors, and you need to be constantly optimizing to compete against them. The second is when the market shifts and you need to innovate to keep up. Finally, there is the opportunity to be in a new market. This is where I disagree with some of my colleagues who think you need to move your company to where that market is. I think you should try to move the market to where your company is already able to operate effectively. The problem is figuring out where the market is going to move. There is a famous quote attributed to Henry Ford: "If I had listened to all the market researchers, I would have built a faster horse."
I think the key is not to swing for home runs. Instead try out several innovative ideas at relatively low investment and see which ones gain market acceptance before investing significantly in those innovations.

John - That reminds me of what I had learned about McDonalds' innovation program. This was many years ago so it might have changed, but at the time they had an innovation program that would collect ideas on changes for the restaurants. Hundreds would enter an evaluation cycle and an initial short list would be based on analysis. Then the short list ideas would each be tried out in a test restaurant. Those that worked out well would be rolled out to the entire franchise. What I thought was really exciting was that McDonalds saw this as an on-going program of improvement and waited until an innovation was proven before heavy investment.

Jim - We tell people that innovation is not R&D. R&D is about taking capital and turning that into knowledge. Innovation is about taking knowledge and turning it into capital. The key with innovation is to discover how to modify what I already know into something that is better. One of the neat things that is starting to happen is that with cloud computing platforms a start-up can try out a new innovation at very low cost. I think you will see more and more of the innovation taking place in small start-ups because the cost to fail is so low. Then if they reach a point of sustainability they will be acquired by a larger company.

John - I did an interview with Ed Yourdon a few weeks back, and we had a similar point of view. He thought that with the advances in mobile technology and cloud computing, we would see new apps and businesses created by high school students, quickly creating apps that go into app stores, with some of them becoming wildly popular.

Jim - Right. As the technology has advanced, as the costs have come down, the cost to fail has reduced so people are more willing to try something out. 

John - Over the last few years, as you were dealing with these waves of technology, what have been the things customers have done in innovation to be successful?

Jim - That's an extremely interesting question. I do suspect that going forward the patterns might be different than they were in the past. In the past, innovations were driven largely to satisfy the business world. What you are seeing now is a lot of innovation being driven from the consumer side. Mobile devices, converged communications, social networks like Facebook. All of these game changers were initially developed to satisfy a consumer need. Business innovations often followed from them. On the consumer side of innovation there is less concern about being perfect. If it is good enough and can offer the opportunity for follow-on improvements, then the rate of innovation goes up.

John - Well, let's use Facebook as an example of a success. We even have a movie we can use as a reference. We have the initial game-changing idea launched after several weeks of furious work, then the business becomes almost self-sustaining, because all the real value is in the content created by the users. And the more users you have, the more people want to join. There is a critical mass of success driven by the participation of the consumer.

Jim - The value proposition on the social media side really follows Metcalfe's Law. The more connections you have, the more you value the network. And Wikipedia is a source of information that is good enough. It is not as authoritative as a refereed research paper, but it is your first source of information, and because each article has a network of authors, you can be sure of its currency.
So the real value of a product/service is no longer its stand-alone value, but its value in the context of its ecosystem.
So one of the predictors of success will be a company's ability to create a network, an ecosystem, around their product/service.

John - It is always easy in retrospect to highlight one of the big winners of today and talk about how innovative they were starting out. But I do believe that a lot of the success is a matter of luck. Having the right product/service available when the market is ready for it.

Jim - In management theory there is this thing called superstitious learning. We were successful, we did the right things, we followed our strategy... when in fact, we were lucky. When you take lessons away from these successes, you need to maintain the context.
The key to long term success is that once you get lucky and have an initial success, can you execute as a business to grow the success?
I think a very common pattern of failure is a company that is initially successful but cash-starves itself and is not able to meet market demand.

John - The flip side to success is failure. I call it a Failure to Launch. You have mentioned a few patterns of failure. Any other examples?

Jim - Oh wow! It's funny, because you tend to forget the failures, but those are the ones you should remember the best. One of the ones I always felt bad about was Convergent Technologies. They built AT&T's first Unix servers, and they also built computers for Unisys and Burroughs. They were brilliant engineers. They built a tablet computer called the Workslate that had a word processor, a spreadsheet, and a voice recorder. A very early iPad. It was ahead of its time, and the market ecosystem was not ready for it.
Along the same branch of technology we have the Apple Newton. The device itself had remarkable stand alone technology, but because it predated wireless connectivity, it could not link its user to the wider world. In general, while PDAs were not a dramatic failure, they did not take off nearly like the smartphones of today.

John - Final topic is THE NEXT BIG THING. What do you think will be a game changing innovation that we will be talking about three years from now?

Jim - I think the big game changer is that we will be thinking of software applications in a radically different way. In Anderson's Long Tail model, there is a market of very specialized applications in the tail that have been difficult to create and market because the traditional costs have been too high. But with cloud computing for development and hosting, and app stores and internet marketplaces for distribution, the costs go way down. So we will see 1-2 person companies developing very unique solutions for mini-markets. So the NEXT BIG THING will be the opposite of what we have seen from vendors like Microsoft, with broad functionality, shallow domain depth, Swiss Army knife Office products.

John - As a consumer, will I go to a trusted brand like a Microsoft and through configuration get the special app I want OR will I be searching in an app store and finding the unique app that fits my needs developed by a high school student?

Jim - That is the question of the broker. It might provide some life to traditional systems integration firms to assemble the solution for you. It is very difficult to predict the ecosystem of companies that will provide these unique solutions.
More on the 10-year horizon, we will see a lot more metadata and semantics being associated with data and application functionality so that the searching and assembling can be done automatically. Then, based on our query to the computer, a one-time, ephemeral application and its data will be assembled by the system to solve our need.

John - Thank you very much for sharing your insights with us.