#63: In this first episode about serverless, we attempt to define the fundamentals of serverless in 2020.
Viktor Farcic is a member of the Google Developer Experts and Docker Captains groups, and a published author.
His big passions are DevOps, Containers, Kubernetes, Microservices, Continuous Integration, Delivery and Deployment (CI/CD) and Test-Driven Development (TDD).
He often speaks at community gatherings and conferences (the latest can be found here).
He has published The DevOps Toolkit Series, DevOps Paradox and Test-Driven Java Development.
His random thoughts and tutorials can be found in his blog TechnologyConversations.com.
If you like our podcast, please consider rating and reviewing our show! Click here, scroll to the bottom, tap to rate with five stars, and select “Write a Review.” Then be sure to let us know what you liked most about the episode!
Also, if you haven’t done so already, subscribe to the podcast. We're adding a bunch of bonus episodes to the feed and, if you’re not subscribed, there’s a good chance you’ll miss out. Subscribe now!
Viktor Farcic 0:00
There are two ways I think we can split that conversation. One is that it can be a function or a container.
Darin Pope 0:08
This is episode number 63 of DevOps Paradox. Serverless 101.
Darin Pope 0:17
Welcome to DevOps Paradox. This is a podcast about random stuff in which we, Darin and Viktor, pretend we know what we're talking about. Most of the time we mask our ignorance by putting the word DevOps everywhere we can, and mix it with random buzzwords like Kubernetes, Serverless, CI/CD, team productivity, islands of happiness, and other fancy expressions that make us sound like we know what we're doing. Occasionally, we invite guests who do know something, but we do not do that often, since they might make us look incompetent. The truth is out there and there is no way we are going to find it. PS: it's Darin reading this text, and feeling embarrassed that Viktor made me do it. Here are your hosts, Darin Pope and Viktor Farcic.
Darin Pope 1:09
Now, if you were with us last week, and you should have been (if you weren't, go listen to last week's episode), our conversation with Adam sort of devolved into what comes next after Kubernetes, and we said serverless. We've also got the new DevOps Catalog course and book; if you haven't picked it up, go pick it up. The link is down in the show notes. Shameless plug. We realized we've talked a little bit about serverless before, but since we're taking this detour in the trainings, we should focus a little more on how we see serverless. So we've got a handful of episodes coming up over the next few weeks. I'm not sure exactly how many, but we're going to start today with what's maybe a serverless 101, or how we see serverless. If you go back in the history books five years ago, a little more than that, you could see the progression over time. There used to be Parse, which was a cool standalone service, a Backend as a Service. The "as a Service" part has been changing over time, right? We had Platform as a Service and then Backend as a Service, and now we have Function as a Service; it's like we're decomposing everything down to functions. Is that how we think of serverless today? So here we are in mid 2020. What is serverless today?
Viktor Farcic 3:00
I think it's important to decouple the idea from specific implementations of whatever the subject is. In my head, the idea behind serverless is that we focus more on writing code, and testing and stuff like that as well, and let somebody else do the rest. So I see serverless as an evolution of providing more stuff on top of Infrastructure as a Service. We are no longer giving you just pure servers, like VMs, which is really what cloud started with. We're giving you more and more so that you can focus more and more on your core business. Serverless, in that context, is giving you more services so that your applications run well: not only the server where your stuff runs, but monitoring, auto scaling, and so on, until you get to the point where all you have to do is say, this is my application, run it, and I'm going to stay focused on my application. That's how I see serverless. It's not necessarily a definition of serverless, but it's how I see it. Now, the problem is, and this is normal, that when we get a term like serverless, we inevitably associate it with certain implementations of that term. So when we talk serverless, most people first think Lambda, right? The same way when you say Google, you don't say search, even though Google is not the only search engine; it became that big for basically a good reason. So people associate Lambda with serverless, which is a fair association to be honest, and therefore they associate serverless with functions. In my case, I think functions are a subset, a flavor, of serverless. Anything that allows me to just say, this is my application, do whatever needs to be done with it, in my head is serverless in a way.
Darin Pope 5:24
I sort of agree, because you're saying my application, but I can't take a whole application and run it as a single Lambda. I could, but should I? That's where we're going to talk about all those details in the next episode. But let's talk about it just for a second.
Viktor Farcic 5:43
Yeah, but can you repeat that sentence and say serverless instead of Lambda? In that same sentence?
Darin Pope 5:50
I don't even remember what I said.
Viktor Farcic 5:52
So, you said you cannot run your whole application as a Lambda. If I changed that and said you cannot run your whole application as serverless, then I would disagree. Your full application cannot be a function, therefore it cannot be a Lambda; that is correct. But there is nothing really saying that your whole application, no matter whether it is ten lines of code or a thousand or a hundred thousand, cannot be treated as serverless. Serverless in the sense of: here's the application, do what needs to be done. That's why, while there are many groups, there are two ways I think we can split that conversation. One is that it can be a function or a container. There are other flavors, but those are the most common: function and container. You can see that all the providers are offering those two flavors of serverless. Google has Google Functions and Google Cloud Run, Azure has Azure Functions and Azure Container something, I don't remember, and AWS has Lambda and ECS with Fargate. So everybody has those two flavors, which have different pros and cons. And then we can split it maybe into managed and self-managed. Will you let somebody outside of your company manage everything after "here is my application"? Or will you have a department in your company that is going to provide that type of service? In both cases, for you as a developer, it is managed. But depending on whether it is a third-party company like AWS or a different department doing that managing, we can call it managed or self-managed. OpenWhisk would be an example of a self-managed service.
Darin Pope 7:55
So, OpenWhisk, OpenFaaS... what was the third one I said today? Kubeless. Knative. Kubeless. All of these could be considered self-managed, even though there may also be managed versions of those available as well.
Viktor Farcic 8:11
Yes, there might be, but yes, exactly.
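To make the "function" flavor Viktor describes a bit more concrete: in the function model you hand the platform a single handler and it takes care of running and scaling it per event. A minimal sketch, assuming an AWS Lambda-style Node.js runtime behind an HTTP trigger (the event shape and names are illustrative, not from the episode):

```javascript
// Minimal sketch of the "function" flavor of serverless: one handler that the
// platform invokes per event and scales for you (AWS Lambda-style signature).
exports.handler = async (event) => {
  // The event shape depends on the trigger; an HTTP (API Gateway proxy) trigger is assumed here.
  const name = (event.queryStringParameters || {}).name || "world";

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}` }),
  };
};
```

The container flavor (Cloud Run, ECS with Fargate, and similar) works the same way conceptually, except the unit you hand over is a container image rather than a single handler.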
Darin Pope 8:14
So if we were having this conversation four years ago,
Viktor Farcic 8:19
hmm,
Darin Pope 8:20
all of those names that we just listed off didn't really exist. Some of them did, but most of them didn't.
Viktor Farcic 8:28
Four years ago, it was mostly Lambda. I think Azure appeared maybe five years ago, or something like that. Maybe OpenWhisk was there; I'm not sure.
Darin Pope 8:40
And we're sort of coming to this time where all of these things are ready for primetime. Whether you want to call it ready for primetime is a different story, but from a usage perspective, as something that I could build my business on. Again, this is what we're going to talk more about in the next episode, but I believe that everything is at a reasonable GA level. It's not version one.
Viktor Farcic 9:18
No, it is GA. I do think it is GA. However, like with any relatively new idea or technology, something becomes GA, and initially GA means GA for a small subset of cases, and over time that expands, right? I do think it's GA, but I also think it's relatively limited to certain specific use cases. It will never be a hundred percent, but I don't think we've reached the desired scope of use cases that would fit into serverless. I don't think we are there yet. But for those where the current implementation fits their needs, it would be GA, at least. And this is where it gets tricky. GA as a managed service? Definitely GA; Lambda has been there forever. Self-managed? Yes, but self-managed poses additional obstacles. Like, if you're managing your own serverless infrastructure, does it still bring all the benefits? But that's the next episode, I guess.
Darin Pope 10:37
We're making the statement that everything is GA. Not just version one GA, but everything's at least version three or version four GA. We're sort of making up numbers, but everything's stable enough now that I would be willing to risk a good portion of my business on it.
Viktor Farcic 10:54
I wouldn't say a good portion, but a portion of your business.
Darin Pope 10:59
A measurable portion, let's say. Not just 1%; I'd be willing to go 20-25% of my business.
Viktor Farcic 11:06
That's the part I wouldn't agree with. Let me rephrase that. I think that, generally speaking, the Function as a Service flavor of serverless would be GA, and I don't think the Container as a Service flavor of serverless is GA yet. I think there is work to be done there; we're close, but not there yet. So even if the function flavor is GA, I still wouldn't say you can take an average company and say 25% of what you have is going to be nano services, not to say functions; if you have microservices, then this would be nano services, right? I'm not sure that companies are ready to split things into such tiny pieces at 25%. If we say five, maybe yes, and even five is a considerable number. I might be too pessimistic, but I would say the percentage of workloads in companies ready for Kubernetes, not companies, but workloads, is not a hundred percent; it's not even 50. We are talking about low two-digit numbers.
Darin Pope 12:32
So I will concede that. And you said one thing that I agree with you on, and I hadn't really thought much about it, because Lambda has been out there for so long; they're sort of the original of the Big Three. Google Functions, they were the last to the game, I think.
Viktor Farcic 12:49
They've been around maybe three years, give or take.
Darin Pope 12:53
So functions have been around for a long time, in internet time a long time, but the container-based ones, the ECSs, the Cloud Runs, or ECS with Fargate, are relatively new,
Viktor Farcic 13:12
yes,
Darin Pope 13:13
in comparison to their function brethren. And in theory that should be simpler. But if you've been running containers already on a bare Docker node without orchestration, Mesos, Kubernetes, it doesn't matter, it's the same but different, right?
Viktor Farcic 13:31
I tend to look at it from different directions that are kind of colliding now. I think there is the serverless movement that is oriented towards functions, right? And then, in parallel, after Lambda, so containers are newer than Lambdas, right, containers appeared, Kubernetes appeared, and, putting serverless completely aside, there is the effort to make containers easier to deploy, abstracting layers on top of schedulers like Kubernetes, though it doesn't have to be Kubernetes. Those two separate efforts are kind of combining into the same pinnacle of being called serverless. What I'm trying to say is that different approaches are coming from different directions but ultimately going towards the same objective, which is what we today call serverless. I wouldn't say the idea behind containers started with "let's make a better version of serverless." I don't think that happened; it just kind of naturally evolved and came to a point where we say, look at that, actually, that's also serverless, in a way.
Darin Pope 14:52
When you started down the path for the serverless section of the course, you made a conscious decision not to write everything from scratch. You chose the Serverless Framework to do your development, right?
Viktor Farcic 15:15
Yes.
Darin Pope 15:16
Now, of course, they've been around for a very long time. I used them on a project a few years ago, back when it was less than 1.0, like 0.1. I was a very early adopter and felt the pain, because I was trying to do stuff with Java; that's still my primary language. Because JavaScript, sorry for the kids, I'm not a kid anymore. Callbacks? What are callbacks?
Viktor Farcic 15:52
Sorry for interrupting. I do share that, actually. If I had to choose between Java and JavaScript, I would have a slight preference for Java; I wouldn't say it's my favorite, but I would go towards it. But when you think in terms of functions, then JavaScript, if those are my only two choices, makes much more sense, because it's tiny, it's small. If you ignore the fact that you need 5,000 dependencies for anything, it feels better for something that's going to be a three-digit number of lines.
Darin Pope 16:29
Right. And if you're coming at serverless from Spring Boot, this is prior to Quarkus, prior to everything, going back three or four years, you had, I think it was two JAR files that Lambda gave you that you had to have in order to spin it up. But hey, I'm a Spring Boot developer, so let's bring all the Spring Boot dependencies in. Then you're back to where you were with the 500 dependencies. I think what I'm trying to say here is that having a developer framework, an application developer framework, is really important to help keep you within the guardrails. Because, and I'm going to step out on a limb here, you cannot create functions the same way you create full applications.
Viktor Farcic 17:18
No. I was about to say something, but I'm keeping it for the next episode.
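As a rough illustration of the guardrails a framework gives you, a Serverless Framework project is described by a serverless.yml along these lines. This is only a sketch; the service name, runtime, memory, timeout, and handler path are made up for illustration, not taken from the course:

```yaml
# Sketch of a Serverless Framework service definition (illustrative values).
service: catalog-demo

provider:
  name: aws
  runtime: nodejs12.x   # pick a runtime your provider actually supports
  memorySize: 128       # the platform's limits are declared up front...
  timeout: 10           # ...which is part of what keeps you inside the guardrails

functions:
  hello:
    handler: handler.hello    # maps to exports.hello in handler.js
    events:
      - http:                 # an HTTP endpoint is one of many possible triggers
          path: hello
          method: get
```

The point Darin is making is that the framework forces you to describe each function, its limits, and its triggers explicitly, which is a very different shape from a monolithic application project.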
Darin Pope 17:24
Okay. So, this is still trying to stay around the 101. Even if you've been an application developer for 30 years like me, you've got to completely retrain your brain.
Viktor Farcic 17:41
Yes,
Darin Pope 17:42
because it is a different way of thinking. Can you still use your dependencies? Sure, no problem. State? What is this state you speak of? Well, what's funny though, now with Lambda and EFS... okay, you can do it; doesn't mean you should.
Viktor Farcic 18:03
Yeah. And even before, you shouldn't have been doing stateful applications, no matter whether we're talking about serverless or not. You should be doing stateless applications, and when I say stateless, what I really mean is that your state should not be in your application. Use a database.
Darin Pope 18:21
Yeah. Store it somewhere outside of the application. It should not be inside it.
Viktor Farcic 18:26
Exactly. And then we can discuss whether you should, no, you have to: for any serious use case, it doesn't matter whether you use Lambdas or functions, you will have to have state somewhere; it's impossible not to have state. It's just not in your function, or application, or even monolith; not where you're developing.
Darin Pope 18:50
Right. So that's probably one of the biggest things: if you're not used to writing stateless applications, if you're coming from stateful applications, you think making that jump to stateless is going to be simple. But your fingers will tell you different when you sit down to type things out, because you're going to fall back into stateful habits. That's not a bad thing, but you will quickly find out that scaling becomes a big issue really fast. And that's one of the big pluses of good serverless. Yes, there are servers there; we haven't even mentioned that. Yes, there are still servers there. But your application can scale infinitely. That's a positive and a negative.
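To illustrate the "state outside the application" point: instead of keeping anything in memory between requests, the function writes to an external store on every invocation, so any number of instances can come and go. A minimal sketch, assuming an AWS Lambda-style Node.js function and an existing DynamoDB table; the table, key, and attribute names are made up for illustration:

```javascript
// Sketch of keeping state outside the function instead of in memory.
const AWS = require("aws-sdk");
const db = new AWS.DynamoDB.DocumentClient();

// let counter = 0;  // in-memory state like this is lost whenever a new
//                   // instance spins up, so it cannot be relied on

exports.handler = async () => {
  // Atomically increment a counter in the external store on every request.
  const result = await db
    .update({
      TableName: "page-counters",        // assumed pre-existing table
      Key: { page: "home" },
      UpdateExpression: "ADD visits :one",
      ExpressionAttributeValues: { ":one": 1 },
      ReturnValues: "UPDATED_NEW",
    })
    .promise();

  return { statusCode: 200, body: JSON.stringify(result.Attributes) };
};
```

Because every invocation round-trips to the store, the function itself stays stateless and can scale out freely, which is exactly the property Darin and Viktor are describing.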
Viktor Farcic 19:39
You know, a while ago when I was exploring Functions as a Service, Lambdas and whatnot, I was kind of frustrated. Okay, so you support only those three languages? One instance cannot run more than whatever number of minutes? To be honest, I was frustrated with that. And now I have the completely opposite reaction. Actually, I'm frustrated that they're removing those restrictions. Like the news the other day that you read about being able to attach, what was it, yeah, EFS to Lambda. So actually, now I think those restrictions were a good thing; it just took time to get used to them. And now they're removing them because they want to increase the customer base. I understand that, but they're removing the constraints that were guiding you towards better usage of the technology, the constraints that were telling you, if you cannot fit into those guardrails, then this is not a good use case for you. Those guardrails are getting removed, and that's not a good thing. Maybe.
Darin Pope 20:53
Yeah, there are probably a couple of good lightweight cases. Again, if you're listening from AWS, thank you for listening. But we know that EFS is not really a performant data store. It's not true NFS; it's NFS compliant, but it's not fast. Sure, you can provision throughput; it still doesn't matter. So it's a toss-up, right?
Viktor Farcic 21:21
And, you know, when I think about the types of functions that would potentially benefit from storage like that, I always come up with use cases that do not really need more than one zone. I don't know, big data processing, right? Most likely, if the whole zone is down for 15 minutes, it's not going to be the end of the world. That's usually not a user-facing application.
Darin Pope 21:53
Right? That's a batch job that's running.
Viktor Farcic 21:56
Exactly. I'm not saying always, but most of the cases I can imagine that would need that would be just as well off with block storage, which is limited to a zone. But yeah, you're in a single zone; so what if the zone goes down? It's not pretty, it's not nice, but you're not running Amazon's homepage, amazon.com. That's not the type of application we're talking about anyway.
Darin Pope 22:24
Well, maybe this is one we'll talk about later, but I'll go ahead and mention it: even if we're in a single zone, should I be reliant upon real disk, whether it's EBS or EFS? Shouldn't I instead be using S3 and just making API calls for data? It's those kinds of things. Alright, let's hold that, because that's a very good question for the next one. One thing we touched on with JavaScript versus Java: one of the things that is usually considered a con for serverless is cold starts. If there's not an instance available, it takes X amount of time for that function, that service, to become available for usage. So an event comes in and there's nothing running to handle it. Okay, let me go spin one up, and then we'll pass the request on to that. JavaScript, no big deal, right? Reasonably-ish fast. Was that what you saw when you were doing your tests?
Viktor Farcic 23:34
Yes, reasonably fast. I did test this batch only with JavaScript; I haven't compared it with Java. That would actually be a good idea.
Darin Pope 23:45
Well, then let me tell you: with standard Java, and I'm not talking Graal, I'm not talking the higher-performing JVMs, just a vanilla JVM, you're talking it could take seconds sometimes for it to warm up.
Viktor Farcic 24:03
But it doesn't matter. It doesn't matter, because I don't see a sufficiently good use case for a single function being hit with thousands or millions of requests per day. I don't think that's the use case. I think some kind of batch processing is a good use case; maybe a trigger from a webhook is a good use case. For most of the good use cases, I don't think it's that critical whether it starts in a millisecond or three seconds. Not for the use cases I'm imagining for functions, right?
Darin Pope 24:49
So one that I could think of that would be interesting: if you're sort of a martech, a marketing technology type play, you're doing ad serving. With ad serving, you've got to have fast response, because there are a lot of things in play. If you've never looked into that, it's a very interesting side of our business.
Viktor Farcic 25:15
Yes, actually, I almost worked for such a company; spent some time with them. But you wouldn't use Functions as a Service for that. It doesn't matter; JavaScript is also, kind of, slow. If you need to spin up an instance, it's already too late, whatever that instance is. So you need to have an instance running, and what you're really looking for in cases like that is probably the ability to scale that instance depending on the traffic. But still, that's more like: I can handle up to 1,000 requests per second, or whatever; I'm getting close to 900, so let me spin up another one; and while that spinning up of additional instances is happening, you're still handling your requests, right? Now, this is important to understand. If you have an instance of something, whatever that something is, in this case functions, per request, then you need to think twice about what is a good use case. Let's take a simple use case, not the simplest, but simple: a silly website, one of those you do with Jekyll or whatever the cool kids do today. I open the homepage. A normal homepage has a few JavaScript libraries, some CSS, and some images; that's probably going to generate at least 50 requests. A single hit on the homepage: 50 requests. Architecturally, forget about what is a service and what is not, it doesn't make sense to have 50 instances of something being created because I have 50 requests coming from a single user request. That's not the use case. And that's the big majority; almost everything has a huge number of requests these days.
Darin Pope 27:31
And that's a good place to stop, because we're leaning back into when you may want to use serverless and when you may not. So, a couple of things we called out. Functions as a Service today: Viktor and I both agree that Functions as a Service is very usable, completely GA from a managed perspective. If you wanted to use self-managed, I don't know that that's completely GA yet. Self-managed functions, maybe.
Viktor Farcic 28:09
I think it is GA. OpenWhisk is good enough. FaaS is a misleading name; I'm not sure whether it really characterizes functions, and it's pronounced "fass." But let's say OpenWhisk is good. It's really a question of how much you want to do that on your own. That's the real question, rather than whether it's GA.
Darin Pope 28:31
And the biggest con, of course: you can Google anything about serverless and you'll find out about cold starts. But just because there's a cold start issue, I don't believe you should throw out the baby with the bathwater. Cold starts are just something you have to deal with. So that's where we'll stop for today. Next week, we're going to talk more about whether you should even consider using serverless, and if you do, what the use cases are. Sort of like what you were saying: do I have 50 different serverless functions to load up a webpage? Pretty extreme, but I have a bad feeling somebody's probably done that before. Okay, thanks for hanging out, Viktor. I think this one was good, and I think the next one will be a little bit better, because in this one we tried to stay not too opinionated. In the next one, opinions will fly.
Viktor Farcic 29:38
It takes a lot of effort for me to not be opinionated.
Darin Pope 29:44
We hope this episode was helpful to you. If you want to discuss it or ask a question, please reach out to us. Our contact information and the link to the Slack workspace are at https://www.devopsparadox.com/contact. If you subscribe through Apple Podcasts, be sure to leave us a review there. That helps other people discover this podcast. Go sign up right now at https://www.devopsparadox.com/ to receive an email whenever we drop the latest episode. Thank you for listening to DevOps Paradox.