How Digital Advertisers Destroyed Themselves

It’s one of those weird things. I can remember the exact moment when I stopped using Yahoo forever.  It was many years ago, on the day that Yahoo placed a new and intrusive advertisement on its home page.  This particular ad was beyond anything I had ever experienced.  A few seconds after the home page loaded, a flock of crows would take off and fly around the screen.  After about five seconds they would land on top of an advertisement.  Do you remember this horror?  When Yahoo ran this advertisement, it kept me from finding meaningful search results.  I switched right then and there to Google and never went back.  It was a sign of things to come.

Since then there has been a steady increase in the number of ever more annoying display ads.  This has created two negative consequences for media companies and advertisers.  Users have learned to ignore display advertisements, leading to lower click-through rates.  And record numbers of users are now deploying ad blocking software to stop intrusive ads.  Today there is a nasty arms race between media companies and ad blocking technology, and on the open web it is clear that ad blocking is winning.

Anti-ad-blocking startup PageFair says that ad blocking grew by 41% to 198 million users, and is expected to cost media companies an estimated $22 billion in 2015.  That helps to explain why every media site on the web seems so eager to have you install its useless mobile app. I say useless because these apps often have the same features as the mobile site, except that it’s impossible for the user to block ads.  This means that the company invests in a mobile app no one wants to use while continuing to destroy its own revenue stream with click bait.  Lose, lose.

The web is not a passive form of entertainment. Users are on a quest for information in a world where they are being driven to distraction.  Media companies and advertisers have now trained users to be experts at avoiding advertising.  To get beyond this problem advertisers will need to quickly improve the information content of their ads.  There are some signs that the trend towards more informative and less distracting advertising is already underway.

Adblock Plus, the leading ad blocking solution, has made a business out of whitelisting advertisements that conform to its acceptable ads criteria. Natasha Lomas wrote a nice piece on TechCrunch in which Adblock Plus says that very few users opt out of these whitelisted advertisements.  Adblock Plus now faces competition from anti-adblocking start-ups PageFair and Sourcepoint.  These solutions attempt to create a sense of guilt in users to discourage them from using ad blocking software.  Good luck with that one.  Natasha writes, “you can’t help but feel that if the digital ad industry just stopped to look at how awful ads and ad tactics have become … there might be no need for adblockers (or anti-adblockers) at all”.

Is the advertising industry finally starting to catch on? The Interactive Advertising Bureau (IAB) recently released a report detailing the effectiveness of a new display ad format. In the report, the IAB cited a number of metrics showing that users regarded these new ads as less intrusive and annoying.  Users liked the new ad format because the ads did not flash, take over the screen, or interrupt what the user was doing.  Those sentiments align well with at least a few of the Adblock Plus criteria, even though the IAB format is still very far from qualifying for the whitelist.  Only time will tell whether the trend toward less intrusive ads will take hold quickly enough to save the digital advertising industry from itself.

The technology genie is already way out of the bottle with ad blocking software. That battle cannot be won by advertisers.  Time to stop investing in ways to force users to look at ads, and to start investing in ways to make them want to look at ads.

Silicon Valley Ageism Versus the Productivity of Famous Inventors

A few weeks ago I was having lunch with a friend who half-jokingly asked me if I was ready to retire yet.  I half-jokingly quipped that I was well past the age of “fundable” established by Silicon Valley venture capitalists, and would therefore be relocating to Puerto Rico in the near future.  Jokes aside, ageism in the technology industry is a real phenomenon, and these perceptions are unfair on two counts.  First, venture capitalists with any common sense do in fact frequently fund entrepreneurs of all ages, although there are more than a few seemingly without any common sense.  Second, productivity and age are not correlated, but productivity, health, and wealth probably are.

I made a wager with my friend that a cursory analysis of famous inventors would show no correlation of age to productivity.  I wanted to minimize the distortions of the modern market on intellectual property, so I just took the first few names from a list of famous inventors from the last century.  I cannot claim that this is scientific or fully conclusive, but I do claim that someone owes me $20.  The data is actually a little difficult to find because the USPTO database is not searchable before 1976.  If someone wants to do a complete analysis of the famous or prolific inventors of the last century, I would be willing to reward you with the proceeds of my $20 wager.  Suffice it to say that you would be unwise to “hire young” as some people have suggested, even if you were comfortable with breaking the law.

[Charts: inventions by age for Tesla, Ford, Browning, and Edison, plus all of these inventors combined]

Your Startup Can Succeed With Offshore Development Teams

A headline reads “We gave all of our most boring and tedious work to people we never met, and here is what happened next”.  Let me guess: a total failure?

Recently there have been a few rather nasty articles deriding offshore development teams.  As an entrepreneur in Silicon Valley I want to set the record straight: Every successful start-up I have been involved with learned how to successfully leverage offshore teams, and several of these companies would not have made it otherwise. Some of the most talented, hard-working engineers I have ever had the privilege of working with were based in India, China, and Russia.  Although there are rare exceptions where the model does not make sense, if your software company is struggling to make the model work, in all likelihood the problem is you.  If your company cannot leverage offshore teams, your company cannot hire the best people in the world wherever they might be.  In other words, you have a serious competitive disadvantage.

[Photo: “The photo every tourist is obliged to take”]

The reason that many are quick to dismiss offshore development is simple: Creating distributed teams that are productive together is not easy. To help understand why, I reached out to several colleagues who have been very successful building teams in Russia, China, and India, to get their perspective:

“When outsourcing begins, there can be issues of confidence and trust,” says Rajiv Sinha, VP Engineering, Citrix Systems.  Rajiv says that developing mutual respect between development teams is essential.  “You have to get someone with a face-to-face presence whose sole function is to represent the remote team.  For the people in the home office, this person is the remote team.”

Avinash Agrawal, Principal Consultant (formerly VP of Engineering at Guavus, Gale Technologies) says “total two-way transparency and relentless communication” are the key to creating clear responsibility and accountability between teams.  “Offshore teams often suffer from ‘remote site syndrome’, and feel excluded from information and the decision making hierarchy”.

Leaders must “endeavor to create opportunities where offshore and onshore teams get to work closely together”, says Vipin Sharma, Head of India Engineering for a stealth mode Silicon Valley big data start-up.  “Regular technical summits and workshops where key team members travel to each other’s sites can go a long way to boost ‘one team’ spirit and overall efficiency”.

I had the benefit of working with the teams these leaders created, so I know firsthand that their advice is sage.  To their list, I will add this observation from years of experience: Many companies begin by trying to create a sustaining engineering model, where the offshore team is primarily responsible for bug-fixing and other maintenance.  Don’t do it.  You cannot hire the best engineers in the United States and relegate them to sustaining engineering, and you should not expect that people are any different anywhere else.  Talented engineers need hard problems to solve.  Give your remote teams ownership, authority, responsibility, and accountability, and you will see results.  Perception issues and blame games do arise from time to time; get ahead of them immediately by facilitating communication and cultivating mutual respect between your teams.

None of this is to say that there are not problems in every offshore jurisdiction.  Finding office space in hotspots like Bangalore and New Delhi can be a nightmare.  In India there are many cases where people still do not show up on their first day of work after accepting an offer.  Competition in tech hotspots means that employees sometimes try to float ridiculous pay increases of 75% or more.  Language barriers frequently show up in source code.  The power goes out frequently, sometimes 2-3 times a day.  Different countries have a confusing myriad of employment laws, intellectual property laws, licenses, taxes, and permits.  This is all just part of the process of doing business in any foreign country.

Let’s distill this into some clear guidance for creating offshore development teams:

  • Representation needs to be formal. The remote team needs a dedicated representative at the home office.  Instead of kicking an email over the wall while the remote team is sleeping, this person is always available for face-to-face communication and up to date on all development status.  A senior engineering manager or product management person is ideal; it needs to be someone with incredible communication talent, patience, and persistence.
  • Daily communication is key. Teams should be in daily contact at the leadership level, preferably twice every day.  It feels ridiculous having to say this in 2015, but get video conferencing running.  GoToMeeting is great, and Fuze is wonderful.  There are no excuses for not videoconferencing in this day and age.  Seeing people, their body language, their expressions, keeps you on the level of human beings instead of keystrokes.  Of course, it should go without saying that you should also have a chat system like Slack, where engineers and product teams can communicate instantly ad-hoc.  Bug tracking and other collaboration tools like a wiki need to be open, accessible, and performant.  Put those babies in EC2 or Digital Ocean with high bandwidth and unfettered access.
  • Frequent travel is mandatory. Your team is on the other side of the world, not on another planet.  As a leader, you need to get yourself there to meet, plan, and share your vision, strategy, customer success, and customer failure.  If it has been more than 100 days since your last visit, you are very, very late.  There is no substitute for frequent face-to-face interaction, because too much is lost in written communication, and there is frequently a language barrier to overcome.
  • Empowering offshore teams is essential.  This is so important that it bears repeating: Hire brilliant people and give them the hardest work you have to do.  You need great leadership on the ground to hire the best people, so begin your offshore team by finding that leader, and giving them complete authority to make hiring decisions.  Set the tone with the rest of the company that you are finding and hiring the best people in the world, and share their successes with everyone.

 

2015 Year of the Whale and Other Disruptive Trends in IT

The year 2014 was a very exciting period in cloud computing, as several emerging technologies and trends began to shape the future of IT.  In late 2014, the overarching trend disrupting technology and industries everywhere is still the growth of software.  Software is now nearly ubiquitous due to the proliferation of general purpose processing power, low-cost personal computing devices, and the advent of universal connectivity through the Internet.  The biggest challenge still facing developers and IT operations teams today is the ability to build and scale complex distributed systems, and to continuously evolve those systems in response to rapidly changing customer and market needs.

Three major trends that are taking shape to meet this challenge are containerization, hybrid cloud, and converged infrastructure:

2015 is Year of the Whale

Remember way back when the world tried to pretend that VMWare was no big deal?  Yes, so do I.  It is a rare and enjoyable circumstance in IT when we get to see a single technology create a nearly universal upheaval.  Containerization is rapidly taking over as the de-facto mechanism for packaging and deploying distributed applications, creating widespread disruption to virtually every part of IT.  The excitement and popularity of Docker has to do with how the technology provides a consistent mechanism to build, deploy, and scale application components with isolated system resources. Docker neatly controls the “dependency hell matrix” of application dependencies and configurations at container build time, enabling an immutable infrastructure pattern for continuous integration and deployment.
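
To make that concrete, here is a minimal sketch of the workflow, assuming a hypothetical web service (the base image, file names, image tag, and port are illustrative only):

    # Assuming a Dockerfile in the current directory that pins every dependency
    # at build time, along the lines of:
    #
    #   FROM python:2.7
    #   COPY requirements.txt /app/requirements.txt
    #   RUN pip install -r /app/requirements.txt
    #   COPY app.py /app/app.py
    #   EXPOSE 8000
    #   CMD ["python", "/app/app.py"]
    #
    # a single build produces an image that runs identically on any Docker host:
    docker build -t myorg/webapp:1.0 .
    docker run -d --name webapp -p 8000:8000 myorg/webapp:1.0

Because the image is rebuilt rather than patched, the exact same artifact moves unchanged from a developer laptop through CI and into production.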

Docker disrupts Operating Systems 

The big news in late 2014 was the announcement by CoreOS, the minimalist operating system distribution designed to simplify container deployment at scale, that it will pursue its own containerization technology called Rocket.  It has been very clear for some time that Docker and CoreOS were on a collision course, but this announcement, coinciding with DockerCon, sent a clear message about the current state of affairs: Docker and CoreOS are not friends anymore.  The two open source companies are competing for the same customers because, without an operations platform, Docker containers are ultimately just a developer tool.

Competition is always healthy for emerging technologies, and we now have a two-horse race.  The unquestionable leader in this race is Docker, which has by far the larger share of the open source development community, mindshare, and partnership traction.  This is a particularly difficult situation for CoreOS since, unlike Docker, which is effectively just software that can be packaged for any Linux distribution, CoreOS is a standalone operating system.  This means that, in addition to Docker, CoreOS also competes with industry giants like Microsoft, VMWare, RedHat, and Ubuntu. Those four giant incumbents also just happen to be Docker’s key partners.

Meanwhile, Ubuntu and RedHat have formulated their first response by introducing some CoreOS-like capabilities in two new projects, Ubuntu Core and Project Atomic, and Microsoft announced that future versions of Windows Server will support Docker containers natively.

Docker disrupts Platform as a Service

The bigger news in late 2014 was that Docker announced support for orchestration of multi-container distributed applications at DockerCon with the introduction of Docker Swarm and Docker Compose.  Docker Swarm is a clustering system for Docker-enabled hosts, providing basic discovery of hosts, and scheduling of Docker containers to hosts.  Docker Compose provides a simple YAML-based configuration language to describe and assemble multi-container distributed applications, and is clearly what became of Docker’s acquisition of Fig. This means the future is also quite uncertain for Platform as a Service.
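
As a rough sketch of what that looks like in practice (the service names and images are made up, and the syntax shown is the early Compose format):

    # Assuming a docker-compose.yml describing a hypothetical two-container app:
    #
    #   web:
    #     build: .
    #     ports:
    #       - "8000:8000"
    #     links:
    #       - redis
    #   redis:
    #     image: redis
    #
    # Compose assembles and starts the whole application, and can scale out a
    # single tier with one command:
    docker-compose up -d
    docker-compose scale web=3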

Cloud Foundry, OpenShift, and ActiveState have all moved quickly to incorporate basic support for Docker containers.  PaaS platforms also have a lot of capabilities around security and multi-tenancy that are valuable to large businesses. But with its own native container orchestration, Docker can provide much of the developer-centric “git push” user experience for deploying distributed applications with less complexity than deploying a full-blown PaaS solution.  That user experience is a prime motivator for companies that are exploring the PaaS option in the first place.

For right now, that simplicity still comes with tradeoffs in functionality, since Docker Swarm and Docker Compose are still alpha-quality software.

Docker disrupts Virtualization, Private Cloud, Configuration Management

For years now, virtual machine images have often been abused as the de-facto mechanism for packaging and deploying distributed applications.  From an IT operations perspective, virtual machines offer superb resource isolation, security, and stability.  But virtual machines are also full-blown operating system instances that require full management, each with their own storage and network settings, dependencies, and environment-specific configuration.  Virtual machine images are large and unwieldy live filesystems (or a plurality of files these days).  Once deployed, virtual machines tend to “drift” from their original pristine state, as they are modified from time to time by software, scripts, and human beings to suit as-of-this-moment requirements.  Indeed, the need to manage an explosion in the number of virtual servers under management, the so-called “vm sprawl” problem, has helped configuration management systems like Chef, Puppet, Ansible, and Salt become fundamental tools of the system administration trade.  It is also unclear where developer responsibility ends and system administrator responsibility begins with a virtual machine, frequently making troubleshooting an “all hands on deck” experience.

You have probably heard that containers and virtualization are perfectly complementary – that is certainly true.  However, containers have less performance overhead, use resources more efficiently, and are faster to deploy than full-blown virtual machines.  Containers have another very important advantage over virtual machines in that they are software-defined (they are created from metadata that specifies their composition).  Together, these characteristics enable a powerful pattern for managing distributed applications: immutable infrastructure.  In the immutable infrastructure paradigm, minimalist bare-metal operating systems are automatically discovered, configured, and assigned container images through automation.  Containers are created and provisioned when needed, and destroyed when altered or no longer needed.  The delineation of responsibility between IT and development teams is crystal clear: IT runs the ships, and developers run the containers. In 2015 we will begin to see many early adopters standardize on fully automated container-to-metal architectures for their private cloud offerings.
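
In practice, “created when needed and destroyed when altered or no longer needed” means a container is never patched in place; it is replaced by a fresh one built from the updated image. A minimal sketch, with hypothetical image and container names:

    # Immutable pattern: replace the running container, never modify it.
    docker pull myorg/webapp:1.1                # pull the new, pristine image
    docker stop webapp && docker rm webapp      # retire the old container
    docker run -d --name webapp -p 8000:8000 myorg/webapp:1.1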

Yet for all those advantages, IT will continue to struggle with management of virtual machines for the foreseeable future. There are a number of very good reasons, many of which have to do with legacy applications and multi-tenancy.  For a very long time into the future, there will be (mostly legacy) applications that simply may not work well with containers, so it will still take years before containers can claim to be the dominant model.

Docker disrupts Infrastructure as a Service

Any cloud provider that has an appropriate base system image can host Docker containers, but cloud providers are moving quickly to make them easier to manage.   Amazon introduced Docker support in Elastic Beanstalk,  Google has Container Engine in the Google Cloud Platform, CenturyLink introduced Panamax, and Digital Ocean now has CoreOS as a base image.  Expect that in 2015 we will begin to see a whole new set of container-centric APIs and cloud services emerge in these providers.

Hybrid Clouds 

Hardware refresh cycles, coupled with the demand for more agility in IT services, are finally causing large businesses to take a serious look at hybrid cloud architectures.  Many businesses are continuing to operate with infrastructure that has long outlived its planned obsolescence, and virtually every IT leader I have spoken with in the last few months is looking to leverage public cloud as part of their overall IT strategy.  The bad news is that IT teams remain constrained by legacy infrastructure, regulatory and compliance issues, and a long list of security concerns.  This is more or less the same set of barriers that has been stalling public cloud adoption in the enterprise for several years.

Add to that list the complexity of managing multiple clouds and vendor relationships, along with the difficulty of migrating workloads and data between clouds, and it’s easy to understand why businesses have taken their time to get to hybrid.  For hybrid cloud in particular, this complexity may ultimately motivate businesses to acquire cloud services through an intermediary like a cloud service brokerage or marketplace, which provides tools to manage multiple clouds in a single user experience.  Right now development and test use cases still predominate for hybrid clouds, and that is likely to remain the case throughout 2015.

Automation

 

Automation is probably the mantra I have heard repeated most among IT professionals and leadership in the last half of 2014, and it goes hand in hand with the goal of enabling self-service for users.  Businesses everywhere are still struggling to scale IT operations under constant budget pressure.  The only way to get more work done with ultimately fewer people is to automate.  Continuous integration and deployment is also a very common goal among the engineering and operations teams I have spoken with recently.  Along with Docker itself, there are some very nice tools like Shippable emerging to take advantage of containerization.

In 2015, I expect that we will see some of the existing automation frameworks like Puppet, Chef, Salt, and Ansible develop features to handle difficult tasks like bare-metal hardware orchestration (several of these already have some capabilities in this respect).  We call this getting the infrastructure to “ready-state”, which is the first moment the infrastructure is available to accept a workload after first-time hardware provisioning.

The problem with third-party automation tools is that they have great difficulty keeping pace with hardware vendors, who are constantly refactoring and shipping new products, hardware revisions, firmware updates, and management software.  Ultimately, it is most likely the hardware vendors themselves that will have to deliver superior automation and orchestration.

This brings us to our last major disruptive trend, converged infrastructure:

Converged Infrastructure

For virtually as long as IT has been supporting businesses, the primary function of the IT organization has been to evaluate, acquire, and integrate different hardware and software to create business systems.  And, since the dawn of the PC era, hardware vendor lock-in has been a primary design consideration.  As web-scale architectures have matured and become widely known, and with the advent of virtualization (containers included), the days of IT acting as its own hardware integrator may be coming to an end.  Virtualization of any kind means that IT teams can be less concerned about hardware vendor lock-in, since they can quickly re-deploy these systems without downtime. As discussed before, modern IT teams are becoming primarily concerned with creating and maintaining software.  The imperative to move quickly to respond to customer and market trends means there is less lead time than ever.  What if deploying infrastructure on-premise were no more complicated than assembling Lego blocks?

Converged infrastructure, or what Gartner calls “rack based computing”, is the inevitable culmination of many of the trends discussed herein.  Converged systems help ordinary businesses create web-scale infrastructures, delivering pre-integrated compute, network, storage, and virtualization.  For on-premise IT, as well as service providers, converged infrastructure is probably the most exciting development in a dozen years.  Back in 2007, I joined a hot young start-up called 3LeafSystems creating a converged network, storage, and compute fabric.  It is a great example of exactly how long these technology trends can take to actually come to fruition.

Today, every major hardware vendor has a converged line of business, and there are a number of start-ups doing very well in this space (like Nutanix).  In 2015, we can expect to see a lot of vendor activity in this area, as the next generation of these systems begins to come to market.

Moving Forward

No doubt about it, early adopters of these technologies will have something of a wild ride in 2015 and beyond.  Even so, early investments are likely to deliver significant ROI for companies seeking better agility and lower costs.  Are you an IT leader, and are these trends and others on your radar?  I would love to learn about it, so please drop me a note on LinkedIn.  I hope that you had an excellent holiday season to start your New Year, and I wish you a very successful 2015.

Why Docker and Containerization is a Boon for Software Startups

This is a repost from the Dell Entrepreneur Blog.

In the 10 or so startups I have helped build since 1995, one of the biggest challenges we always faced was the “scale problem”. Scale is a problem of success, a great problem you get to solve when you succeed at creating some initial value for your customers. If the measure of a software company’s ability to innovate is the velocity of software creation, then the measure of a web software company’s ability to innovate includes getting that software through testing and integration, and successfully deployed into production. The scale problem can impact each of these functional areas differently and sometimes in surprising (and interdependent) ways. Often, the resolution of one scale problem simply reveals another previously unknown scale problem, leading to a seemingly unending list of issues and remediation activities.

Over the last 10 years, creating value in web software applications, and scaling them up, has become a lot easier. The growth of agile software development practices has greatly accelerated software development, testing, and delivery. At the same time, the advent of cloud services like Amazon, Google, and Azure has reduced the time to acquire and deploy web infrastructure to near zero. Continuous integration and deployment has taken hold as an operations and architecture style, automating test and integration processes, and reducing the overall time between development and customer benefit. Today, software startups can build world-class web applications, scale them to millions of users, and ensure their availability with less money, less time, and fewer employees than ever before.

These advances in scale have not been without tradeoffs, however, and a new set of problems has emerged around development and operations complexity. While the horizontal scaling pattern increases capacity along with the number of application server instances, the difficulty becomes managing the sheer number of application server deployments, along with their configuration and dependencies on other infrastructure and services. This also has severe implications for continuous integration and deployment processes, where functionality often has to be “stubbed out” owing to differences in the production and development environments. While further infrastructure abstraction in the form of platform-as-a-service can improve developer workflows and further reduce operations complexity, these issues are not simply negated by PaaS. As it turns out, many of the startups who initially adopted PaaS have eventually moved on to more traditional infrastructure models as they have scaled, owing to better economics, and increased flexibility to manage problems and make changes. These problems and complexities are at the root of why so many people are excited about containerization, and more specifically, Docker (the de-facto containerization standard for Linux).

One of the major benefits of Docker and containerization is that they enable an architectural style known as immutable infrastructure. In an immutable infrastructure, components of the infrastructure are never modified once deployed, but rather they are replaced by new components generated from a pristine state. For example, in the “old world” of 2008, an upgrade to the Java version across an infrastructure of 1000 servers would require:

  • logging in to each server
  • downloading the Java release
  • installing Java
  • restarting the application servers.

To deal with the scale challenge, these steps would be automated through scripting (ssh for loop), configuration management (Puppet, Chef), and other external systems. The problem, of course, is that this type of process works when it works; when it doesn’t, teams are left to discover differences between systems and environments and recover on the fly.
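
For illustration, the “ssh for loop” approach looks roughly like the sketch below (the host list, package, and service names are made up); it assumes every one of the 1000 servers is reachable and in exactly the state the script expects, which is precisely where it tends to fall apart:

    # Old-world upgrade: push the same imperative steps to every server and
    # hope that nothing has drifted since the last time this was run.
    for host in $(cat app-servers.txt); do
      ssh "$host" "sudo apt-get install -y openjdk-7-jre && sudo service app-server restart"
    done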

In the new world of immutable application containers, applications are built in place with their dependencies and configuration. Wherever those containers are deployed, they run with the same dependencies and configuration. Containers are built without infrastructure dependencies such as hostnames and IP addresses; those are injected at runtime when the container is deployed on a target host. This delineation provides a very clean separation of concerns between the application and the host infrastructure, and ultimately reduces the number of configuration endpoints by orders of magnitude (to just one).
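
One common way to perform that injection is through environment variables and port mappings supplied when the container is started; the variable names below are purely illustrative:

    # The image hard-codes nothing about its surroundings; the target host
    # injects the infrastructure specifics at run time.
    docker run -d --name webapp \
      -e DB_HOST=10.0.3.17 \
      -e DB_PORT=5432 \
      -p 8000:8000 \
      myorg/webapp:1.0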

The benefits for software development are vast. Already, Docker (and others) are working on various new forms of service discovery in order to solve the infrastructure dependency injection problem, and consequently the “awareness” of dependencies between application components on different servers and infrastructure. This means that in the future, development teams can focus on creating and maintaining micro services, another important scaling pattern for the software development process. Micro services are discrete pieces of functionality packaged as network-available services, often through REST interfaces, which are completely independent of other services. By leveraging the immutable architecture style, integration testing becomes a matter of deploying the necessary micro services from their pristine state in the repository and executing the test harnesses against the complete system.
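
A sketch of what that can look like with the tooling described above (the compose file and test script are hypothetical):

    # Integration testing against a throwaway, pristine environment: bring up
    # every micro service from the repository, run the test harness, tear down.
    docker-compose up -d
    ./run-integration-tests.sh
    docker-compose stop && docker-compose rm -f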

For operations, the immutable style provides a means to deploy major releases fractionally, testing them with small portions of the user base before rolling them out across the entire infrastructure. Containers themselves can be automatically introspected to derive dependencies on other containers, networks, storage, and other systems. Rolling an upgrade back merely requires deploying the previous version of the container(s) in question and terminating the more recent version.
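
A fractional rollout can be as simple as running one canary container on the new image alongside the stable version, with a load balancer splitting traffic between them; rolling back means retiring the canary (again, image names and ports are illustrative):

    # Canary sketch: most traffic stays on 1.0, a single canary runs 1.1.
    docker run -d --name webapp-stable -p 8001:8000 myorg/webapp:1.0
    docker run -d --name webapp-canary -p 8002:8000 myorg/webapp:1.1
    # Rolling back is just removing the canary and keeping the pristine 1.0.
    docker stop webapp-canary && docker rm webapp-canary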

In the last few months there has been a relentless outpouring of new orchestration systems for Docker containers, including Kubernetes (Google), Mesos (Mesosphere), as well as PaaS capabilities from Flynn.io, Deis, and OpenShift (RedHat). As these new tools continue to emerge and mature, all of these developments will ultimately translate to a lot less software startup equity being spent on overhead and a lot more software startup equity being spent on creating value through working software that scales.

That is a huge win for start-ups, their founders, their employees, their investors – and their customers.

Here at Dell Cloud Marketplace, we’re creating a new generation of tools to help IT and developer teams compare, consume, and control cloud services. Emerging technologies like Docker and containerization are a big part of what we’re building and we’re excited to showcase some of our progress in the forthcoming public beta of Dell Cloud Marketplace.