The term serverless computing may sound a little misleading to the uninitiated. After all, every online service needs to be hosted somewhere. The term is most meaningful to developers on the customer end of offerings such as AWS Lambda, Microsoft’s Azure Functions, and IBM’s OpenWhisk, since those services remove the need for customers to manage hosting, scaling, and the other under-the-hood concerns that come with servers.
Orange Silicon Valley’s principal cloud architect Jeremy Huylebroeck sat down with us to answer a few questions about why serverless options have taken off and what they mean for how developers approach their work. As these services continue to evolve, it will be an interesting space to observe, and Jeremy explained what he will be watching.
OSV: You stated in a report you wrote that “going serverless” pushes boundaries in terms of scalability and reactivity, but isn’t that what elastic computing has always been about? Is there something new here in terms of how fast and how big?
JEREMY: The elastic computing marketing term came at a time when getting IT resources was commonly still a matter of days or worse. The new benchmark became minutes. Serverless typically pushes it down to 100-200 milliseconds maximum. It reaches the level of just the time it takes to call a function over the internet.
Elastic clouds are known to allow quick scaling of the number of servers in order to follow demand. Serverless takes the idea further by completely hiding the complexity of scaling. Resources are allocated transparently in about 100 milliseconds, and — most importantly — they disappear entirely when not used. True elasticity is not only about how big, but about how small it can be.
OSV: So what does “going serverless” entail? How does it affect what we do inside the DevOps model?
JEREMY: Going serverless means having a discussion with the developers. Are they comfortable with the offered service level agreement — and most of all the supported languages and libraries? Serverless platforms are much closer to the code.
Serverless services have the advantage of integrating naturally with the DevOps transformation. They help developers and operations work together because they move the responsibilities to the developer side. IT made the serverless platform specific enough to reduce complexity, but generic enough to keep it flexible and controllable by the developer. Serverless comes as a great complement to the work done in containerizing applications and breaking them into microservices.
OSV: So, microservices and containers seem to be the table stakes here. What about the rest of the world that is still trying to formulate a strategy for cloud (public or private)? Does this help the late adopters move any faster?
JEREMY: Serverless platforms’ scope is still limited today. That makes them pretty simple to use, and very efficient when properly targeted to where it matters (e.g., spikes in traffic). They complement an existing architecture rather than replace everything. Private serverless solutions may make little sense today, unless they are part of a DevOps/CI/CD transformation. They make more sense in public or hybrid deployments, and even more in architectures leveraging services from others. Serverless services are a typical way to run part of a “microservice”-oriented architecture.
Late adopters are typically doing hybrid clouds. There is no doubt that if serverless touches part of their infrastructure and software, it will accelerate their transformation. That is part of the cloud providers’ bet: creating cloud SDK stickiness.
OSV: What kinds of services can benefit the most from this model? How do we think about its utility in terms of front-end versus back-end?
JEREMY: Typically, unpredictable traffic scenarios benefit from serverless scaling capabilities. Anything that runs infrequently, at scale or not, can also benefit by removing idling servers. Today we see a lot of data manipulation done as the data flows (for example, log formatting and image or video processing), and large-scale big data processing with Hadoop. Serverless is also used frequently as the glue between different web services. The serverless code is triggered by an event (for example, a sensor making a small HTTP call with its information). The code then starts a cascade of other web services that need to process or react to the data.
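The event-triggered pattern Jeremy describes can be sketched in a few lines. The handler below is a minimal, hypothetical example, not tied to any particular provider’s API: the function, payload shape, and downstream service names are all illustrative assumptions. A platform like Lambda or OpenWhisk would invoke such a function with the sensor’s HTTP payload, and the function would fan out to other services as needed.

```python
import json

def handle_sensor_event(event):
    """Hypothetical serverless handler.

    A sensor POSTs a small JSON reading; the platform invokes this
    function with the event, and the function decides which downstream
    services to notify. All names here are illustrative assumptions.
    """
    reading = json.loads(event["body"])
    downstream = []
    # React to the data: fan out to an alerting service on hot readings.
    if reading["temperature_c"] > 30.0:
        downstream.append("alerting-service")
    # Every reading is archived, regardless of its value.
    downstream.append("metrics-store")
    return {"statusCode": 200, "notified": downstream}
```

The function holds no state and allocates nothing up front, which is what lets the platform spin instances up on demand and scale back to zero between events.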
OSV: If a startup or a big player is already working with IBM Bluemix, AWS Lambda, MS Azure, or Google Cloud, will we see price competition in 2017, or is this year going to be about building out the product offerings?
JEREMY: Pricing is the same on all the platforms today. I believe the value for the provider is in extending the product offering. That is going to broaden the scope of use cases first of all. But more importantly, it is going to increase stickiness.