
Silicon Valley Ageism Versus the Productivity of Famous Inventors

A few weeks ago I was having lunch with a friend who half-jokingly asked me if I was ready to retire yet.  I half-jokingly quipped that I was well past the age of “fundable” established by Silicon Valley venture capitalists, and would therefore be relocating to Puerto Rico in the near future.  Jokes aside, ageism in the technology industry is a real phenomenon, and these perceptions are unfair on two counts.  First, venture capitalists with any common sense do in fact frequently fund entrepreneurs of all ages, although there are more than a few seemingly without any common sense.  Second, productivity and age are not correlated, but productivity, health, and wealth probably are.

I made a wager with my friend that a cursory analysis of famous inventors would show no correlation of age to productivity.  I wanted to minimize the distortions of the modern market on intellectual property, so I just took the first few off the list of famous inventors from the last century.  I cannot claim that this is scientific or fully conclusive, but I do claim that someone owes me $20.  The data is actually a little difficult to find because the USPTO database is not searchable before 1976.  If someone wants to do a complete analysis of the famous or prolific inventors of the last century, I would be willing to reward you with the proceeds of my $20 wager.  Suffice it to say that you would be unwise to “hire young” as some people have suggested, even if you were comfortable with breaking the law.






Your Startup Can Succeed With Offshore Development Teams


A headline reads “We gave all of our most boring and tedious work to people we never met, and here is what happened next”.  Let me guess: a total failure?

Recently there have been a few rather nasty articles deriding offshore development teams.  As an entrepreneur in Silicon Valley I want to set the record straight: Every successful start-up I have been involved with learned how to successfully leverage offshore teams, and several of these companies would not have made it otherwise. Some of the most talented, hard-working engineers I have ever had the privilege of working with were based in India, China, and Russia.  Although there are rare exceptions where the model does not make sense, if your software company is struggling to make the model work, in all likelihood the problem is you.  If your company cannot leverage offshore teams, your company cannot hire the best people in the world wherever they might be.  In other words, you have a serious competitive disadvantage.


The photo every tourist is obliged to take

The reason that many are quick to dismiss offshore development is simple: Creating distributed teams that are productive together is not easy. To help understand why, I reached out to several colleagues who have been very successful building teams in Russia, China, and India, to get their perspective:

“When outsourcing begins, there can be issues of confidence and trust,” says Rajiv Sinha, VP Engineering, Citrix Systems.  Rajiv says that developing mutual respect between development teams is essential.  “You have to get someone with a face-to-face presence whose sole function is to represent the remote team.  For the people in the home office, this person is the remote team.”

Avinash Agrawal, Principal Consultant (formerly VP of Engineering at Guavus, Gale Technologies) says “total two-way transparency and relentless communication” are the key to creating clear responsibility and accountability between teams.  “Offshore teams often suffer from ‘remote site syndrome’, and feel excluded from information and the decision making hierarchy”.

Leaders must “endeavor to create opportunities where offshore and onshore teams get to work closely together”, says Vipin Sharma, Head of India Engineering for a stealth mode Silicon Valley big data start-up.  “Regular technical summits and workshops where key team members travel to each other’s sites can go a long way to boost ‘one team’ spirit and overall efficiency”.

I had the benefit of working with the teams these leaders created, so I know firsthand that their advice is sage.  To their list, I will add this observation from years of experience: Many companies begin by trying to create a sustaining engineering model, where the offshore team is primarily responsible for bug-fixing and other maintenance.  Don’t do it.  You cannot hire the best engineers in the United States and relegate them to sustaining engineering, and you should not expect that people are different anywhere else.  Talented engineers need hard problems to solve.  Give your remote teams ownership, authority, responsibility, and accountability, and you will see results.  Perception issues and blame games do arise from time to time; you have to get ahead of them immediately by facilitating communication and cultivating mutual respect between your teams.

None of this is to say that there are not problems in every offshore jurisdiction.  Finding office space in hotspots like Bangalore and New Delhi can be a nightmare.  In India there are many cases where people still do not show up on their first day of work after accepting an offer.  Competition in tech hotspots means that employees sometimes try to float ridiculous pay increases of 75% or more.  Language barriers frequently show up in source code.  The power goes out frequently, sometimes 2-3 times a day, every day.  Different countries have a confusing myriad of employment laws, intellectual property laws, licenses, taxes, and permits.  This is all just part of the process of doing business in any foreign country.

Let’s distill this into some clear guidance for creating offshore development teams:

  • Representation needs to be formal. The remote team needs a dedicated representative at the home office.  Instead of kicking an email over the wall while the remote team is sleeping, this person is always available for face-to-face communication, and up to date on all development status.  A senior engineering manager or product manager is ideal; it needs to be someone with incredible communication talent, patience, and persistence.
  • Daily communication is key. Teams should be in daily contact at the leadership level, preferably twice every day.  It feels ridiculous having to say this in 2015, but get video conferencing running.  GoToMeeting is great, and Fuze is wonderful.  There are no excuses for not videoconferencing in this day and age.  Seeing people, their body language, their expressions, keeps you on the level of human beings instead of keystrokes.  Of course, it should go without saying that you should also have a chat system like Slack, where engineers and product teams can communicate instantly, ad hoc.  Bug tracking and other collaboration tools like a wiki need to be open, accessible, and performant.  Put those babies in EC2 or Digital Ocean with high bandwidth and unfettered access.
  • Frequent travel is mandatory. Your team is on the other side of the world, not on another planet.  As a leader, you need to get yourself there to meet, plan, and share your vision, strategy, customer success, and customer failure.  If it has been more than 100 days since your last visit, you are very, very late.  There is no substitute for frequent face-to-face interaction, because too much is lost in written communication, and there is frequently a language barrier to overcome.
  • Empowering offshore teams is essential.  This is so important that it bears repeating: Hire brilliant people and give them the hardest work you have to do.  You need great leadership on the ground to hire the best people, so begin your offshore team by finding that leader, and give them complete authority to make hiring decisions.  Set the tone with the rest of the company that you are finding and hiring the best people in the world, and share their successes with everyone.


2015 Year of the Whale and Other Disruptive Trends in IT


The year 2014 was a very exciting period in cloud computing, as several emerging technologies and trends began to shape the future of IT.  In late 2014 the overarching trend disrupting technology and industries everywhere is still the growth of software.  Software is now nearly ubiquitous due to the proliferation of general purpose processing power, low-cost personal computing devices, and the advent of universal connectivity through the Internet.  The biggest challenge still facing developers and IT operations teams today is the ability to build and scale complex distributed systems, and to continuously evolve those systems in response to rapidly changing customer and market needs.

Three major trends that are taking shape to meet this challenge are containerization, hybrid cloud, and converged infrastructure:

2015 is the Year of the Whale

Remember way back when the world tried to pretend that VMware was no big deal?  Yes, so do I.  It is a rare and enjoyable circumstance in IT when we get to see a single technology create a nearly universal upheaval.  Containerization is rapidly taking over as the de facto mechanism for packaging and deploying distributed applications, creating widespread disruption to virtually every part of IT.  The excitement and popularity of Docker has to do with how the technology provides a consistent mechanism to build, deploy, and scale application components with isolated system resources. Docker neatly controls the “dependency hell matrix” of application dependencies and configurations at container build time, enabling an immutable infrastructure pattern for continuous integration and deployment.
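To make that build-and-run mechanism concrete, here is a minimal sketch using the Docker SDK for Python; the image tag, build context, and port mapping are hypothetical stand-ins for your own application, and it assumes a Dockerfile in the current directory:

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Build an image from the Dockerfile in the current directory.
# All dependencies are resolved here, at build time, and frozen into
# the image -- this is what tames the "dependency hell matrix".
image, build_logs = client.images.build(path=".", tag="myapp:1.0")

# Deploy: run an isolated container from the immutable image.
container = client.containers.run(
    "myapp:1.0",                 # hypothetical image tag
    detach=True,
    ports={"8080/tcp": 8080},    # hypothetical port mapping
)
print(container.short_id)
```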

Docker disrupts Operating Systems 

The big news in late 2014 was the announcement by CoreOS, the minimalist operating system distribution designed to simplify container deployment at scale, that it will pursue its own containerization technology called Rocket.  It has been very clear for some time that Docker and CoreOS were on a collision course, but this announcement coinciding with DockerCon sent a clear message about the current state of affairs: Docker and CoreOS are not friends anymore.  The two open source companies are competing for the same customers because, without an operations platform, Docker containers are ultimately just a developer tool.

Competition is always healthy for emerging technologies, and we now have a two-horse race.  The unquestionable leader in this race is Docker, which has by far the larger share of the open source development community, mindshare, and partnership traction.  This is a particularly difficult situation for CoreOS since, unlike Docker, which is effectively just software that can be packaged for any Linux distribution, CoreOS is a standalone operating system.  This means that, in addition to Docker, CoreOS also competes with industry giants like Microsoft, VMware, Red Hat, and Ubuntu. Those four giant incumbents also just happen to be Docker’s key partners.

Meanwhile, Ubuntu and Red Hat have formulated their first response by introducing some CoreOS-like capabilities in two new projects, Ubuntu Core and Project Atomic, and Microsoft announced that future versions of Windows Server will support Docker containers natively.

Docker disrupts Platform as a Service

The bigger news in late 2014 was that Docker announced support for orchestration of multi-container distributed applications at DockerCon with the introduction of Docker Swarm and Docker Compose.  Docker Swarm is a clustering system for Docker-enabled hosts, providing basic discovery of hosts, and scheduling of Docker containers to hosts.  Docker Compose provides a simple YAML-based configuration language to describe and assemble multi-container distributed applications, and is clearly what became of Docker’s acquisition of Fig. This means the future is also quite uncertain for Platform as a Service.
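To give a feel for that Compose configuration language, here is a sketch that writes a two-service application definition in the early Fig/Compose YAML syntax and brings it up; it assumes the alpha-era docker-compose CLI is installed and that a Dockerfile exists for the hypothetical web service:

```python
import subprocess
from pathlib import Path

# A minimal two-service application in the early Fig/Compose syntax:
# a hypothetical web service built from a local Dockerfile, linked to
# a stock redis container.
COMPOSE = """\
web:
  build: .
  ports:
    - "5000:5000"
  links:
    - redis
redis:
  image: redis
"""

Path("docker-compose.yml").write_text(COMPOSE)

# One command assembles and starts the whole distributed application.
subprocess.run(["docker-compose", "up", "-d"], check=True)
```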

Cloud Foundry, OpenShift, and ActiveState have all moved quickly to incorporate basic support for Docker containers.  PaaS platforms also have a lot of security and multi-tenancy capabilities that are valuable to large businesses. But with its own native container orchestration, Docker can provide much of the developer-centric “git push” user experience for deploying distributed applications with less complexity than deploying a full-blown PaaS solution.  That user experience is a prime motivator for companies that are exploring the PaaS option in the first place.

For right now, that simplicity still comes with tradeoffs in functionality, since Docker Swarm and Docker Compose are still alpha-quality software.

Docker disrupts Virtualization, Private Cloud, Configuration Management

For years now, virtual machine images have often been abused as the de facto mechanism for packaging and deploying distributed applications.  From an IT operations perspective, virtual machines offer superb resource isolation, security, and stability.  But virtual machines are also full-blown operating system instances that require full management, each with their own storage and network settings, dependencies, and environment-specific configuration.  Virtual machine images are large and unwieldy live filesystems (or a plurality of files these days).  Once deployed, virtual machines tend to “drift” from their original pristine state, as they are modified from time to time by software, scripts, and human beings to suit as-of-this-moment requirements.  Indeed, the need to manage an explosion in the number of virtual servers under management, the so-called “VM sprawl” problem, has helped configuration management systems like Chef, Puppet, Ansible, and Salt become fundamental tools of the system administration trade.  It is also unclear where developer responsibility ends and system administrator responsibility begins with a virtual machine, frequently making troubleshooting an “all hands on deck” experience.
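As a toy illustration of the drift problem those tools exist to solve, here is a sketch of the kind of desired-state check a system like Ansible or Puppet performs at scale; the package manifest is hypothetical and the check assumes a Debian/Ubuntu guest:

```python
import subprocess

# Desired state: package -> version we expect on every VM (hypothetical).
DESIRED = {"nginx": "1.4.6", "openssl": "1.0.1f"}

def installed_version(package):
    """Ask dpkg for the installed version of a package (Debian/Ubuntu)."""
    result = subprocess.run(
        ["dpkg-query", "-W", "--showformat=${Version}", package],
        capture_output=True, text=True,
    )
    return result.stdout.strip() or None

# Any mismatch means the VM has drifted from its pristine image.
for package, wanted in DESIRED.items():
    actual = installed_version(package)
    if actual != wanted:
        print("DRIFT: %s is %s, expected %s" % (package, actual, wanted))
```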

You have probably heard that containers and virtualization are perfectly complementary – that is certainly true.  However, containers have less performance overhead, use resources more efficiently, and are faster to deploy than full-blown virtual machines.  Containers have another very important advantage over virtual machines in that they are software-defined (they are created from metadata that specifies their composition).  Together, these characteristics enable a powerful pattern for managing distributed applications – immutable infrastructure.  In the immutable infrastructure paradigm, minimalist bare-metal operating systems are automatically discovered, configured, and assigned container images through automation.  Containers are created and provisioned when needed, and destroyed when altered or no longer needed.  The delineation of responsibility between IT and development teams is crystal clear: IT runs the ships, and developers run the containers. In 2015 we will see many early adopters begin to standardize on fully automated container-to-metal architectures for their private cloud offerings.
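Here is a minimal sketch of that pattern using the Docker SDK for Python: rather than patching a running container, a fresh one is created from a pre-built image and the old one is destroyed; the image name, label, and port are hypothetical:

```python
import docker

client = docker.from_env()

def deploy(tag):
    """Immutable rollout: never modify a running container; replace it."""
    # Destroy any container altered or superseded by this release.
    for old in client.containers.list(filters={"label": "app=web"}):
        old.stop()
        old.remove()
    # Create and provision a fresh container from the new image.
    client.containers.run(
        "myapp:" + tag,              # hypothetical image name
        detach=True,
        labels={"app": "web"},
        ports={"8080/tcp": 8080},
    )

deploy("1.0.1")
```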

Yet for all those advantages, IT will continue to struggle with the management of virtual machines for the foreseeable future. There are a number of very good reasons, many of which have to do with legacy applications and multi-tenancy.  For a very long time into the future, there will be (mostly legacy) applications that simply may not work well with containers, so it will likely take years before containers can claim to be the dominant model.

Docker disrupts Infrastructure as a Service

Any cloud provider that has an appropriate base system image can host Docker containers, but cloud providers are moving quickly to make containers easier to manage.  Amazon introduced Docker support in Elastic Beanstalk, Google has Container Engine in the Google Cloud Platform, CenturyLink introduced Panamax, and Digital Ocean now has CoreOS as a base image.  Expect that in 2015 we will begin to see a whole new set of container-centric APIs and cloud services emerge from these providers.
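As a sketch of how little a provider actually needs to offer, here is an EC2 example using boto3 that boots a stock image and installs Docker via user data on first boot; the AMI ID, instance type, and application image are placeholders:

```python
import boto3

# First-boot script: install Docker, then start a hypothetical container.
USER_DATA = """#!/bin/bash
curl -sSL https://get.docker.com/ | sh
docker run -d -p 80:8080 myapp:1.0
"""

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder: any Docker-capable base image
    InstanceType="m3.medium",   # placeholder instance type
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,         # boto3 base64-encodes this automatically
)
```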

Hybrid Clouds 


Hardware refresh cycles, coupled with the demand for more agility in IT services, are finally causing large businesses to take a serious look at hybrid cloud architectures.  Many businesses are continuing to operate with infrastructure that has long outlived its planned obsolescence, and virtually every IT leader I have spoken with in the last few months is looking to leverage public cloud as part of their overall IT strategy.  The bad news is that IT teams remain constrained by legacy infrastructure, regulatory and compliance issues, and a long list of security concerns.  This is more or less the same set of barriers that has been stalling public cloud adoption in the enterprise for several years.

Add to that list the complexity of managing multiple clouds and vendor relationships, along with the difficulty of migrating workloads and data between clouds, and it’s easy to understand why businesses have taken their time to get to hybrid.  For hybrid cloud in particular, this complexity may ultimately motivate businesses to acquire cloud services through an intermediary like a cloud service brokerage or marketplace, which provides tools to manage multiple clouds in a single user experience.  Right now development and test use cases still predominate for hybrid clouds, and that is likely to remain the case throughout 2015.




Automation is probably the mantra I have heard repeated most among IT professionals and leadership in the last half of 2014, and it goes hand in hand with the goal of enabling self-service for their users.  Businesses everywhere are still struggling to scale IT operations under constant budget pressure.  The only way to get more work done with ultimately fewer people is to automate.  Continuous integration and deployment is also a very common goal among the engineering and operations teams I have spoken with recently.  Along with Docker itself, there are some very nice tools like Shippable emerging to take advantage of containerization.

In 2015, I expect that we will see some of the existing automation frameworks like Puppet, Chef, Salt, and Ansible develop features to handle difficult tasks like bare-metal hardware orchestration (several of these already have some capabilities in this respect).  We call this getting the infrastructure to “ready-state”, which is the first moment the infrastructure is available to accept a workload after first-time hardware provisioning.
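To give a flavor of what driving hardware to ready-state involves underneath those frameworks, here is a toy sketch that uses the standard ipmitool CLI to point a bare-metal node at PXE and power-cycle it; the node inventory and credentials are entirely hypothetical:

```python
import subprocess

# Hypothetical inventory of bare-metal nodes and their BMC endpoints.
NODES = [{"bmc": "10.0.0.101", "user": "admin", "password": "secret"}]

def ipmi(node, *command):
    """Run one ipmitool command against a node's baseboard controller."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus",
         "-H", node["bmc"], "-U", node["user"], "-P", node["password"],
         *command],
        check=True,
    )

for node in NODES:
    # Boot from the network next time, so the node picks up its OS image,
    # then power-cycle it; when it comes back up it is at ready-state.
    ipmi(node, "chassis", "bootdev", "pxe")
    ipmi(node, "chassis", "power", "cycle")
```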

The problem with third-party automation tools is that they have great difficulty keeping pace with hardware vendors, who are constantly refactoring and shipping new products, hardware revisions, firmware updates, and management software.  Ultimately, it is most likely hardware vendors themselves that will have to deliver superior automation and orchestration.

This brings us to our last major disruptive trend, converged infrastructure:

Converged Infrastructure


For virtually as long as IT has been supporting businesses, the primary function of the IT organization has been to evaluate, acquire, and integrate different hardware and software to create business systems.  And, since the dawn of the PC era, hardware vendor lock-in has been a primary design consideration.  As web-scale architectures have matured and become widely known, and with the advent of virtualization (containers included), the days of IT acting as its own hardware integrator may be coming to an end.  Virtualization of any kind means that IT teams can be less concerned about hardware vendor lock-in, since they can quickly re-deploy these systems without downtime. As discussed before, modern IT teams are becoming primarily concerned with creating and maintaining software.  The imperative to move quickly to respond to customer and market trends means there is less lead time than ever.  What if deploying infrastructure on-premise was no more complicated than assembling Lego blocks?

Converged infrastructure, or what Gartner calls “rack based computing”, is the inevitable culmination of many of the trends discussed herein.  Converged systems help ordinary businesses create web-scale infrastructures, delivering pre-integrated compute, network, storage, and virtualization.  For on-premise IT, as well as service providers, converged infrastructure is probably the most exciting development in a dozen years.  Back in 2007, I joined a hot young start-up called 3Leaf Systems creating a converged network, storage, and compute fabric.  It is a great example of exactly how long these technology trends can take to actually come to fruition.

Today, every major hardware vendor has a converged line of business, and there are a number of start-ups doing very well in this space (like Nutanix).  In 2015, we can expect to see a lot of vendor activity in this area, as the next generation of these systems begins to come to market.

Moving Forward

No doubt about it, early adopters of these technologies will have something of a wild ride in 2015 and beyond.  Even so, early investors are likely to see significant ROI, especially in companies seeking better agility and lower costs.  Are you an IT leader, and are these trends and others on your radar?  I would love to learn about it, so please drop me a note on LinkedIn.  I hope that you had an excellent holiday to start your New Year, and I wish you a very successful 2015.