
Lambda is an AWS internal efficiency driver. So why no private serverless models?

I’ve been in a number of conversations recently about Functions as a Service (FaaS), and more specifically, AWS’ Lambda instantiation of the idea. For the lay person, this is where you don’t have to actually provide anything but program code — “everything else” is taken care of by the environment.

You upload and press play. Sounds great, doesn’t it? Unsurprisingly, some see application development moving inexorably towards a serverless, i.e. FaaS-only, future. As with all things technological, however, there are plusses and minuses to any such model. FaaS implementations tend to be stateless and event-driven — that is, they react to whatever they are asked to do without remembering what position they were in.

This means you have to manage state within the application code. FaaS frameworks are vendor-specific by nature, and tend to add transactional latency, so are good for doing small things with huge amounts of data, rather than lots of little things each with small amounts of data. For a more detailed explanation of the pros and cons, check Martin Fowler’s blog (HT Mike Roberts).
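To make the statelessness point concrete, here is a minimal sketch of what a Lambda-style handler can look like in Python. The table name, event fields and counting logic are assumptions for illustration, not anything from the article; the point is simply that any state the function needs has to be pushed out to an external store, because nothing is remembered between invocations.

```python
import json
import boto3

# State must live outside the function, e.g. in a DynamoDB table
# (the table name here is hypothetical), because nothing local is
# guaranteed to survive from one invocation to the next.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("order-counters")


def handler(event, context):
    """Event-driven entry point: receives an event, returns a result,
    and keeps no state of its own between calls."""
    customer_id = event["customer_id"]  # hypothetical event field

    # Read the previous value from the external store...
    item = table.get_item(Key={"customer_id": customer_id}).get("Item", {})
    count = int(item.get("order_count", 0)) + 1

    # ...and write the new value back, since the next invocation may run
    # on an entirely different node with no memory of this one.
    table.put_item(Item={"customer_id": customer_id, "order_count": count})

    return {"statusCode": 200, "body": json.dumps({"order_count": count})}
```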

So, yes, horses for courses as always. We may one day arrive in a place where our use of technology is so slick, we don’t have to think about hardware, or virtual machines, or containers, or anything else. But right now, and as with so many over-optimistic predictions, we are continuing to fan out into more complexity (cf. the Internet of Things).

Plus, each time we reach a new threshold of hardware advances, we revisit many areas which need to be newly understood, re-integrated and so on. We are a long way from a place where we don’t have to worry about anything but a few lines of business logic.

A very interesting twist on the whole FaaS thing is around its impact on server efficiency. Anecdotally, AWS sees Lambda not only as a new way of helping customers, but also as a model which makes better use of spare capacity in its data centres. This merits some thought, not least because serverless models are anything but.

From an architectural perspective, these models involve a software stack which is optimised for a specific need — think of it as a single, highly distributed application architecture which can be spread over as many server nodes as it needs to get its current jobs done. Unlike relatively clunky and immobile VMs, or the somewhat less flexible containers, you can orchestrate your serverless capabilities much more dynamically, to use up spare headroom in your server racks.
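As a rough illustration of that fan-out, here is a sketch in Python of how a coordinator might push chunks of a job to a worker function asynchronously via boto3. The worker's name and the payload shape are assumptions; the point is that the caller only submits events, and the platform decides where, and on how many nodes, the work actually runs.

```python
import json
import boto3

lambda_client = boto3.client("lambda")


def fan_out(chunks):
    """Dispatch each chunk of work as a fire-and-forget invocation.

    Capacity is allocated per request rather than per server, which is
    what lets the provider soak up spare headroom across its racks.
    """
    for chunk in chunks:
        lambda_client.invoke(
            FunctionName="process-chunk",          # hypothetical worker function
            InvocationType="Event",                # asynchronous invocation
            Payload=json.dumps({"chunk": chunk}),  # hypothetical payload shape
        )


if __name__ == "__main__":
    # Example: split a job into ten pieces and let the platform place them.
    fan_out([{"offset": i * 1000, "limit": 1000} for i in range(10)])
```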

Which is great, at least for cloud providers. A burning question is: why aren’t such capabilities available for private clouds, or indeed, traditional data centres? In principle, there is no reason they shouldn’t be. Yet despite a number of initiatives, such an option has yet to take off, which raises a very big question: what’s holding them back?

Don’t get me wrong: there’s nothing wrong with the public cloud model as a highly flexible, low-entry-cost outsourcing mechanism. But nothing technological gives AWS, or any other public cloud provider, some magical advantage over internal systems: the same tools are available to all.

As long as we live in a hybrid world, which will be the case as long as it keeps changing so fast, we will have to deal with managing IT resources from multiple places, internal and external. Perhaps, like the success story of Docker, we will see a sudden uptake in internal FaaS, with all the advantages — not least efficiency — that come with it.



from Gigaom https://gigaom.com/2018/03/23/lambda-is-an-aws-internal-efficiency-driver-so-why-no-private-serverless-models/
