Summary
Communication is a fundamental requirement for any program or application. As the friction involved in deploying code has gone down, the motivation for architecting your system as microservices has gone up. This shifts the communication patterns in your software from function calls to network calls. In this episode Idit Levine explains how the Gloo platform that she and her team at Solo have created makes it easier for you to configure and monitor the network topologies for your microservice environments. She also discusses what developers need to know about networking in cloud native environments and how a combination of API gateways and service mesh technologies allows you to more rapidly iterate on your systems.
Announcements
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- Your host as usual is Tobias Macey and today I’m interviewing Idit Levine about what developers need to know about service-oriented networking and her work at Solo on the Gloo project
Interview
- Introductions
- How did you get introduced to Python?
- Can you describe what Solo is and the story behind it?
- How much should developers need to know about the ways that their applications and services are communicating?
- What is the current state of networking for applications across physical, cloud, and containerized environments?
- How do service mesh features influence the architectural decisions that software teams make while building their applications?
- What operational capabilities do they unlock?
- What are the aspects of application networking that are simplified or enhanced by service mesh platforms?
- In what ways has service mesh introduced new complexity to operating software systems?
- How can developers mirror the network topologies for production environments while working on new features?
- What are the most interesting, innovative, or unexpected ways that you have seen Gloo used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Gloo?
- When is Gloo the wrong choice?
- What do you have planned for the future of Gloo?
Keep In Touch
- @Idit_Levine on Twitter
Picks
- Tobias
- Shadow and Bone on Netflix
- Idit
- The Inventor: Out for Blood in Silicon Valley on HBO
Closing Announcements
- Thank you for listening! Don’t forget to check out our other show, the Data Engineering Podcast for the latest on modern data management.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you’ve learned something or tried out a project from the show then tell us about it! Email hosts@podcastinit.com with your story.
- To help other people find the show please leave a review on iTunes and tell your friends and co-workers
Links
- Solo
- Computational Biology
- Microservices
- Kubernetes
- Service Mesh
- Istio
- Linkerd
- Envoy Proxy
- API Gateway
- CRD == Custom Resource Definition
- Gloo Edge
- Bazel Build System
- GraphQL
- mTLS
- GitOps
- Dagger
- WASM == WebAssembly
- Kubernetes Gateway API
- Consul Connect
- eBPF
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. When you're ready to launch your next app or want to try a project you hear about on the show, you'll need somewhere to deploy it. So take a look at our friends over at Linode. With the launch of their managed Kubernetes platform, it's easy to get started with the next generation of deployment and scaling powered by the battle tested Linode platform, including simple pricing, node balancers, 40 gigabit networking, dedicated CPU and GPU instances, and worldwide data centers.
Go to pythonpodcast.com/linode, that's L-I-N-O-D-E, today and get a $100 credit to try out a Kubernetes cluster of your own. And don't forget to thank them for their continued support of this show.
[00:00:57] Unknown:
Your host as usual is Tobias Macey. And today, I'm interviewing Idit Levine about what developers need to know about service oriented networking and her work at Solo on the Gloo project. So, Idit, can you start by introducing yourself?
[00:01:08] Unknown:
First of all, thank you so much for having me. I'm Idit Levine. I'm the founder and CEO of Solo. Most of my life I worked as a software engineer, so I'm very, very technical. Most of my career I worked as a cloud focused engineer. I worked in cloud before cloud was called cloud. Before that, we called it virtualization. So that's where I was. And then when all the new stuff started in the cloud, or the cloud native technology started, I was there from the beginning. So that's who I am as a technologist. And, yeah, I started Solo, honestly, to have an amazing environment for engineers to work at. After I worked at a lot of companies, I thought that we could make something that would be more fun for people to work in.
[00:01:48] Unknown:
So that's basically why I started it. So, hopefully, that's helpful. And do you remember how you first got introduced to Python?
[00:01:54] Unknown:
Actually, when I was in college, I did 2 majors. 1 of them was computer science, and the other 1 was actually bio, so basically, biology. And it was a degree that's called computational biology. So, you know, Python was becoming very, very strong back then in the biotech industry. So that's where I actually got introduced to it from the first day, and I was doing a lot of projects related to bioinformatics and computational biology with Python. So that's where I started, basically, getting to know that language, and wrote some cool code there.
[00:02:26] Unknown:
As you mentioned, you founded Solo to serve as an opportunity for helping to support the cloud native ecosystem. And I'm wondering if you can describe a bit about what it is that you're building there and some of the story behind how it came to be and why that's a particular area of focus that you wanted to spend your time and energy on?
[00:02:44] Unknown:
So as I said first, I mean, I wish I could say to you that it started from an idea. Honestly, it did not start because of an idea. It seriously was me working in a few startups that got acquired, and then working in a big company, and just feeling that we could do it better. So, basically, the idea was, you know, I felt that we could manage it better, be more optimized. You know? No politics. It's just like, this is a technology. That's what it should be about. I started Solo with that proposition. And then when I looked at the market, it was, like, the end of 2017. I think that the biggest thing that happened in the market at that point was the fact that everybody recognized that they needed to rearchitect their applications and write them as microservices.
So that was, like, the huge movement. And I think that people were also, basically, kind of, like, starting. Like, you could guess that Kubernetes would be the winner, you know, the platform that they would wanna run on. So to me, as, kind of, like, you know, someone who worked in that technology quite a lot, I tried to figure out what the problems would be that people would have once they did that migration and got all microservices. And the thing that I basically realized is that, you know, it's pretty simple. If you have 1 big binary, you cut it into small pieces. You know, that's great. But somehow, these pieces need to, let's call it, glue somehow together. Right? To the point that they need somehow to be able to connect to each other, and they need to make sure that they're connecting in a secure way, because, you know, specifically because now everything is on a wire, you need to make sure that no third party is coming in the middle.
And, also, honestly, like, just observability by itself became a big problem, because there are so many microservices. You don't even know where a request went. So how can you go and, for instance, collect all the logs? Where are the logs? Or, you know, in general, observability became a very, very big problem. How do you debug that? Right? And I think that that's basically the problem that I, kind of, like, looked at. So when I looked at the market itself back then, I did notice that there were other people trying to solve that problem. And, specifically, they called that technology service mesh.
There were only 2 platforms back then, or projects, that basically tried to attack that. 1 of them is Linkerd, which came from a company called Buoyant, founded basically by Twitter people. So they came from Twitter. The other 1 is Istio, Google trying to, basically, together with IBM and Lyft, kind of, like, create their own version of service mesh. So that was the landscape when the company started. It was clear to me that that's exactly what was going to solve the problem. Though, if you looked at those implementations, you would discover that they really were new, and they weren't ready. Right? There were a lot of, like, open design choices, and, honestly, they were more like a POC than actually a real product.
So it was clear to me that that would be the solution to the problem that I wanted to address, but it also was clear to me that it would take a while until the market got there. So in order to do this, what we did is we basically started by trying to figure out what would be the piece that I could take and focus on until the market got there, but that I could sell right now. Right? Because I'm a startup. Somehow, I need to sell. I don't think that my investors would appreciate it if I told them, just wait 4 years, and I will give you a customer. So, basically, what I bet on is the proxy. And, specifically, it was Envoy Proxy.
It was the most mature piece there. Like, it was running already in production at Lyft. I really liked a lot of the functionality of it. It was very, you know, API driven. And, you know, they have these things called filter chains, so you can, kind of, like, customize it. So I felt that this was a really, really interesting piece, and the most complex 1, because it's C++ and async. I knew that this is something that, if we owned it, when service mesh got everywhere, that would be a big piece for us. So, basically, we took the proxy and tried to figure out which market we could attack. And we basically started by attacking the API gateway market. You know, we looked at that market, and we saw that there is quite a lot there. It's a very mature market. Everybody is using an API gateway. You have to, right, basically.
Right now, if everything is microservices, you probably need it even more. But the problem with this is that there wasn't any innovation whatsoever in that market. So, like, while we were, kind of, like, changing the world, you know, cloud native, eventually consistent, declarative, and so on, we looked at the market of API gateways, and the only thing that they changed is the messaging. Now it's an API gateway for microservices. That's the only change. Right? Nothing in the software changed. So I basically said, that's definitely not the API gateway that I wanna run in production. And I took, basically, Envoy and built an API gateway the way I would wanna run it, which is, as you said, like, you know, eventually consistent, and declarative, and CRD based in Kubernetes, and so on. So you don't need an active-active Cassandra cluster and so on. So, basically, that's where we started. So that's Gloo. Now it's called Gloo Edge. We open sourced it. It's an open source project. There are a lot of people using it, which is amazing.
Like, honestly, like, you're probably using it every day. So it's pretty, pretty insane. And then besides that, as I said, today, like, we also attacked the service mesh market. And today, we have also the leading service mesh product, basically, which is also open source, and it's based on Istio. And it's basically focused on taking Istio and making it very easy to use, with a focus also on multi cluster. So that's, kind of, like, in general, what Solo does.
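To make the CRD-based approach she describes concrete, here is a minimal sketch of declaring a route through Gloo Edge from Python with the official Kubernetes client. It assumes a cluster that already has Gloo Edge installed, and the upstream name is a hypothetical placeholder for a service that Gloo's discovery would have created.

```python
# A minimal sketch: declaring a route as a Gloo Edge VirtualService CRD
# from Python, using the official kubernetes client. Assumes a cluster
# with Gloo Edge installed; the upstream name is hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

virtual_service = {
    "apiVersion": "gateway.solo.io/v1",
    "kind": "VirtualService",
    "metadata": {"name": "orders-routes", "namespace": "gloo-system"},
    "spec": {
        "virtualHost": {
            "domains": ["api.example.com"],
            "routes": [{
                # Match /orders and forward to a discovered upstream.
                "matchers": [{"prefix": "/orders"}],
                "routeAction": {
                    "single": {
                        "upstream": {
                            "name": "default-orders-8080",  # hypothetical
                            "namespace": "gloo-system",
                        }
                    }
                },
            }],
        }
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="gateway.solo.io",
    version="v1",
    namespace="gloo-system",
    plural="virtualservices",
    body=virtual_service,
)
```

Because the route lives in a CRD, updating it is just another write to the Kubernetes API; no proxy restart or external datastore is involved.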
[00:07:59] Unknown:
And as you mentioned, the introduction of Kubernetes and the ability to deploy these smaller units has led to a greater adoption of microservices, because you're reducing some of that deployment friction that kept the monolith as sort of the predominant pattern for software engineering for a number of decades. And as you're starting down that path of going into microservices, you have an easy path to production with Kubernetes. I'm wondering how much of the networking developers need to know about as they are developing these componentized services. Do they need to know how it all functions, how the service mesh works, what the routing is at the API gateway, and just some of the incidental complexity that that brings in, along with trying to keep in your head the entire system of all these different services and how they talk together? Or is it something where, because it's a service mesh and because it's microservices, you can just scope down to the 1 service, and as long as you have your clearly defined contracts, you don't have to care about the rest of it? Exactly. So that's exactly what it's trying to solve. Right? I mean, before that, you know, before service mesh, there was a solution. Right? Somehow, these microservices did communicate.
[00:09:08] Unknown:
I think that the problem was that in a big organization, they basically create libraries, and they force their engineers to use those libraries for communication, for security, and so on. The problem with this, first of all, is that that's all in the code. Right? So just assume that you're running your application. Everything is great. Right? Your business logic is fantastic. You're deploying it. Then comes the IT person saying, hey, you have to upgrade your application. I need you to support that library. That's pretty annoying, first of all. Right? And, also, honestly, then you need to do a new deployment and so on, which is totally unnecessary. You didn't change anything in the business logic. So why do we need to do that? And I think that it's also risky, because what if the engineer, honestly, forgot to include that library? There is a lot of risk around that, and I think that's exactly what the service mesh is trying to address. Honestly, as an engineer, you shouldn't worry about this stuff. Just assume that it's happening.
So as an engineer, honestly, you don't have to be familiar with, you know, what the details are, how the communication should work, how the security works, and so on. I think that that's something that we can abstract away from the engineers if they are not interested in it. And, honestly, I think it's also giving some assurance to the CISOs of the world, or the IT people, who need to know that this is being enforced. Instead of them, like, physically enforcing it, and making sure that the engineer didn't forget, and having to go look at it if the engineer did forget, we basically abstract that. And I think this is really beautiful. We basically abstract that away from the business logic. So now you're only in charge of the business logic of the application, which is what probably everybody wants to do.
And then we put, basically, the proxy, the sidecar, next to it, and now everything is declarative. So all those policy enforcements, which is basically application networking, are right now being done in a declarative way. If for some reason I need to change it, I can change it very simply without influencing everything. And, honestly, it doesn't even have to be the engineer. Right? In most places, it's not the engineer. What we see in our customers is there are a lot of different people doing it, but the funny thing is that these things are so necessary. You know? Everybody needs an API gateway if they're using Kubernetes.
Or an ingress, at least. And what we see, basically, in our customers, and specifically in the open source, is there are a lot of startups using it. Right? And it's totally different how they're doing it. There, there is 1 person who's doing it all. It's the engineer. It's the person who's doing the service mesh. She's doing the API gateway. Probably, I don't know, right, probably doing also the marketing and everything too. But we see a lot of customers in the big companies, so we see quite a nice, kind of, like, snapshot of what's going on in the market. There are customers where, for instance, you know, there is an API gateway team, and there is a service mesh team. And by the way, they might be different. 1 will be the platform, and 1 will be the API. And then we see the IT team is, kind of, like, responsible, the DevOps or the platform owner of Kubernetes is taking care of it. But the beauty of it is that how much they're giving the engineer to do, and how much the engineer needs to know about the service mesh or the API gateway, is basically a real differentiator between industries or different companies. There are companies that really trust their people, because they're really strong, and then, honestly, they give them a lot of the functionality to work with. And there are companies where it's, kind of, like, totally hidden. No 1 even knows that it's running.
And I think that what we do in our product is basically allow both. You're basically creating what we call workspaces that you can delegate to an application team. The IT owner can dictate how much freedom is given to those teams and how much he needs to hide the concept of the service mesh, basically.
[00:12:48] Unknown:
Does that make sense? Yes. And 1 of the other interesting elements of the migration to microservices and service mesh is that a lot of teams are going to have an existing monolith that they need to incrementally pick apart and turn into these microservices, which introduces the need to be able to bridge across these different systems. And some of those systems might be running in Kubernetes for their microservices, but their monolith might be running on an EC2 box somewhere, or it might be running on prem. And I'm wondering how teams are approaching that sort of span of networking, being able to bridge across on premises, cloud, and cloud native technologies, and being able to stitch together these different systems so that, from the outside perspective, they are a single cohesive unit, but internally, they're actually going through rapid evolution and constant change?
[00:13:40] Unknown:
Yeah. I know. That's a great question. Honestly, this is also how it started. Like, the first idea was, you know, a product suite called Gloo. Right? And that's honestly because my English is not great, but that's exactly it. Bridge is probably a better word for it. Basically, when I tried to describe what we'd be doing, I said, okay, so people will try to migrate. At the same time, they will have monoliths, they will have microservices, and, honestly, some of our customers, a lot of them, actually, are also using Lambda or serverless. And the reason we called it Gloo is exactly that idea. Like, eventually, you need to glue those environments together. They need to look like 1. You need to make them work together, and it will help with the migration. And if you look at the announcement, that was probably the main use case. We're going to help you migrate.
So that's what our product does really, really well. I mean, you know, our customers, forget monolithic applications, they have mainframes. Right? The same customers, potentially big banks, and we have a lot of them, you know, they have mainframes. Right? And they have, you know, a lot of VMs, and a platform like Kubernetes. And a lot of them actually are also really excited about serverless. So they seriously have all of them. And the fact that we can basically unify all of them in network policy is pretty, pretty strong.
And, basically, the way it's working is that, you know, the request is coming to the proxy, right, which is Envoy Proxy, or Gloo, basically. Right? And that's it. Now, for all the rest, we know how to communicate with everybody when it's coming as north-south traffic, basically. But if you have east-west, the same thing. Right? We're basically capable of letting the microservices call out to a Lambda, or to an ECS instance, or whatever you want. The idea is that all the communication across the application pieces, let's call it, doesn't matter if it's monolithic, microservices, or serverless, is going to be basically seamless, and that's the work that we focus on. So for instance, you remember I told you that in Envoy, you can create some customization.
So basically, when the request comes to Envoy, it goes through a filter chain. And you can put whatever logic you want in the filter chain. There's a lot of default stuff, but there's also a place for customization, basically. Now it's not a simple 1. You need to write it in C++ and then rebuild the proxy, which by itself is not fun, using Bazel, if you know it. It's not simple. But the idea is that already there, from the beginning, we created, for instance, SOAP support, or we created a Lambda filter. So, basically, the filter itself is creating the integration with Lambda on AWS and basically spinning it up. And, recently, we did something really even cooler than that, in my opinion, which is, basically, we built GraphQL into Envoy. So, basically, we teach Envoy how to speak, let's call it, layer 8. Right? So it's not just HTTP. It can speak GraphQL. And when the request comes in with a GraphQL query, it is the GraphQL server. It can go fetch whatever data you want and basically do exactly what a GraphQL server is doing. So I think that that's exactly the idea. Look, it doesn't matter which API call you make. It's all going to go to the same place, you know, the same Gloo or the same Envoy.
And it doesn't matter, you know, what the type of the application that you're building is, or what language it is. We don't care. Right? I mean, it's all basically going to be very seamless for you. And it's all policy driven and declarative configuration, basically.
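As a rough illustration of the filter chain she describes, here is the shape of an Envoy HTTP listener configuration, built as a Python dictionary and dumped to the YAML Envoy consumes. The custom filter entry is a hypothetical placeholder standing in for extensions like the SOAP, Lambda, or GraphQL filters mentioned above; a real deployment also needs a typed_config for each filter and a route configuration.

```python
# Sketch of where custom logic slots into Envoy's HTTP filter chain.
# Requests traverse http_filters in order; the router filter must be last.
import yaml

http_connection_manager = {
    "name": "envoy.filters.network.http_connection_manager",
    "typed_config": {
        "@type": (
            "type.googleapis.com/envoy.extensions.filters.network."
            "http_connection_manager.v3.HttpConnectionManager"
        ),
        "stat_prefix": "ingress_http",
        "http_filters": [
            # Custom filters (auth, rate limiting, a Lambda or GraphQL
            # integration, ...) run here, before routing happens.
            {"name": "example.custom_filter"},  # hypothetical placeholder
            {
                "name": "envoy.filters.http.router",
                "typed_config": {
                    "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router"
                },
            },
        ],
    },
}

print(yaml.safe_dump(http_connection_manager, sort_keys=False))
```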
[00:16:57] Unknown:
Because of the fact that you do have this ability to incrementally and iteratively adopt these different architectural patterns and be able to stitch together these different system components, whether it is a Docker container running in a Kubernetes cluster, or a Lambda endpoint running in AWS, or, you know, Knative, or whatever it might be, I'm wondering how you've seen that influence the way that developer teams approach their architectural decision making, whether they're going from an existing monolith or an existing system, or if they're developing a greenfield application, and how that manifests in terms of the software they produce, but also maybe in terms of the way that they actually structure their development organization.
[00:17:38] Unknown:
So I think that, honestly, I wish I could take credit for it. I think that organizations changed dramatically because now everything is microservices. That by itself, I think, changed things dramatically. Because before that, we had 1 repository and everybody needed to contribute. It's really hard. You have to be in the same language. There are a lot of limitations to that. The move to microservices, that by itself is giving you autonomy for each team. Right? You can, kind of, like, cut your teams way smaller. Right? Basically, the two-pizza team. And now that's the responsibility. And, basically, honestly, most of the stuff in the organization is all, kind of, like, as a service, in a way. Right? You're basically consuming it. Right? Pretty simple, and the communication is API driven. I think that that's not something that I can take, you know, credit for, but I will say that we're making it way simpler.
Because as I said, the whole idea is basically making the team velocity go higher by cutting things into small pieces and giving autonomy to each of them. So now they're not depending on each other, but now their interface is an API interface. And the interface is basically going all over the wire. And we have to make sure that it's secure, and we have to make sure that it's, you know, easy and good to communicate and so on. For instance, you know, whether you rate limit it, or whether you wanna do a retry, and so on. All those pieces, I think, are very, very important, and I think that that's where we're helping a lot. Right? So, basically, we are kind of like an enabler, in a way. Right? I mean, that's where you wanna go. Eventually, you wanna, you know, give autonomy to the teams, and I think that the piece that's missing there is the network. That's the piece that we're putting on top of Kubernetes. And I think that completing Kubernetes like that is pretty, pretty good. And as I said, if everybody wants to get to a point where there is autonomy for the teams, and they can go very fast, we also need to take care of the piece of how they're going to communicate with each other. And I think that that's exactly the piece that we are focusing on. So I think it's enabled a lot of teams. But I think they already started to structure their teams a little bit differently before. It just works very nicely with our model. So we're lucky.
[00:19:42] Unknown:
1 of the potential downsides of having these systems broken apart across multiple services is that it does add some overhead in terms of the network communications where if it's all in process communication and you're just dealing with function calls in a language runtime, it can be much faster. But as you start to bridge across these different service boundaries, you pay the cost of TCP or UDP and encryption and all of the different sort of network protocol overhead. And I'm curious how you've seen teams make that decision about which components need to be in process and communicating as high speed as possible and which ones can afford to pay that network overhead.
[00:20:21] Unknown:
I think that, honestly, here's what we see. So first of all, yes, there are those workloads that are not going to run with a service mesh because they need to be extremely fast. We're working on this. We're actually going to do some enhancements that eventually, I think, will solve all those problems. Also, in highly critical, you know, environments, where you need the speed, it will be very, very cheap to run a service mesh. And as I said, we have a huge sample, in the sense that we have a lot of customers and a lot of open source users, so we can see a lot. Here's what we're hearing from most of our customers. They have some latency budget. Let's say that it's 5 milliseconds, something like that. That's what they're willing to pay for it. Now, as long as it's below that number, they honestly do not care. If it's above, it's a problem, and we need to solve it. The thing that they do care about, once it's under that threshold, is basically features. Right? They wanna do crazy stuff in the network, because they feel that now they're basically on the network, they can do a lot of crazy stuff. I don't know. Rate limiting, and getting those metrics that maybe they couldn't see before, and so on.
So I think that that's actually what we aim for. So, again, first of all, they start with: we have to be under that threshold. And, honestly, that's a reasonable 1 that a service mesh handles totally well, so it's not a problem. And then the next 1 is, you know, please give me more features. I wanna do this, and I wanna do that. And please give me this option and that option. It can be very specific. Go connect to this system, verify this, then go back. You know? Open Policy Agent. So it's very interesting to see how people are, kind of, like, leveraging it. But that's what we see. And as for whether service mesh should be used everywhere, no, it shouldn't. At the beginning, when service mesh started, there were basically those 2 modes. 1 mode is basically as a sidecar, and the other 1 was as, kind of, like, you know, a library.
And, honestly, the library mode just never caught on. I mean, ideally, people just want, you know, if they could have nothing, and the application could be influenced by nothing, that would be the best, kind of, like, outcome. As I said, it's just defeating the purpose. If you're putting a library inside, that's exactly what we want to prevent. So it just doesn't work really, really well. But again, we don't get a lot of pushback. And as I said, we have a lot of customers that are running a lot of stuff in production. It's worth the price, basically, because you're getting so much power, so much observability, so much, you know, features and security, mTLS, right, which is, you know, kind of like zero trust networking, which is really, really important to organizations.
Compared to all of that, the little overhead is relatively cheap.
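For a concrete picture of the declarative mTLS piece she mentions, here is a minimal sketch, assuming a cluster with Istio installed, that turns on strict mutual TLS for one namespace by applying Istio's PeerAuthentication resource from Python. The namespace name is a hypothetical example.

```python
# A minimal sketch of declarative zero-trust networking: require strict
# mTLS for every workload in a namespace via Istio's PeerAuthentication.
# Assumes Istio is installed; "orders" is a hypothetical namespace.
from kubernetes import client, config

config.load_kube_config()

peer_auth = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "PeerAuthentication",
    "metadata": {"name": "default", "namespace": "orders"},
    # STRICT: sidecars only accept mutually authenticated TLS traffic.
    "spec": {"mtls": {"mode": "STRICT"}},
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="orders",
    plural="peerauthentications",
    body=peer_auth,
)
```

The point from the conversation holds here: the application code never changes; the sidecars start requiring mutually authenticated TLS because a small declarative resource says so.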
[00:22:57] Unknown:
The other interesting element of these service based systems is being able to actually manage the development of a single component and then hook it into the rest of the running system during your development process to make sure that you can do integration and end to end testing. And I'm wondering how you've seen development teams approach that sort of mirroring of the network topologies for their local environments or being able to bridge their local environments into a running preproduction system to be able to verify that the feature changes that they're making are actually going to maintain consistency with the rest of the overall architecture?
[00:23:34] Unknown:
That's a great question. What we see with most of our customers, definitely in the big organizations, right, I mean, I don't know, think about the most used credit card, you know, we seriously have some very, very big accounts that, honestly, you're probably using every day. So, you know, you can't be down. I mean, it's very, very drastic. And, you know, even when you're just pushing configuration, honestly, you cannot be down. It's very, very critical, because we're basically taking over all the network. If the network is down, basically, everything that you wrote is down. Right? It's very, very dangerous. So the way that we see people doing it is that, first of all, the use of GitOps is very, very important. So, basically, everything is going through a pipeline. Like, you know, you're pushing configuration to your, basically, GitHub account, or whatever you're using.
It's basically running through a CI/CD pipeline that builds it and deploys it. But first, you start by deploying it in the development environment. Right? Then you do the same thing in testing, and then you do it in staging. And only then, after you really stress test it and make sure that it's dramatically good, do you move this configuration and, basically, apply it on the production environment. Usually, when you're doing this, everybody, kind of, like, sits there on edge and makes sure that it's not going to fall over. But that's something that people are doing. So, again, testing it a lot before it goes in. I will say that we also have younger customers, or, you know, startups and so on, and they sometimes, you know, can just push it from the laptop, which is very, very dangerous.
But they do. So in terms of the development life cycle, you know, we're trying to mimic production as much as we can. Like, you know, there is a lot of CI/CD here. Right? That basically is creating a cluster, and putting the application there, and the technology on top of it, and pushing configuration, and making sure that it's good. There's a huge suite, you know, kind of like a testing pipeline, that is very, very useful. That's what we see. And it's not always cheap. I know that we did a lot of optimization for our own pipeline, because if you wanted to actually test it all, honestly, sometimes it gets to the point that, you know, I think 1 month, we basically paid something like 100 k, and I said to them, what? Right? So it's pretty, pretty big. So you need to figure out how you optimize that. Do you, you know, parallelize the work?
And so on. So there's a lot of optimization that should be done. Yeah. But besides that, I mean, you know, there are some interesting tools that maybe can help with that in the future. I don't know if you saw it, like, there was just an announcement of Dagger, by the founder of Docker. And I think that what he's trying to do is make it basically as seamless as possible to run your pipeline on your local machine as well as in CI. It will be interesting to see. Absolutely.
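As a toy sketch of the promotion flow she describes, here is the gist in Python: the same declarative config moves from dev to staging to production, with each step gated on a test against that environment's gateway. The kubectl contexts, health endpoints, and manifest directory are all hypothetical, not a prescribed pipeline.

```python
# Toy GitOps-style promotion: apply the same routing config environment
# by environment, halting if any environment's smoke test fails.
import subprocess

import requests

ENVIRONMENTS = [
    ("dev", "https://dev-gw.example.com/healthz"),
    ("staging", "https://staging-gw.example.com/healthz"),
    ("prod", "https://gw.example.com/healthz"),
]

def smoke_test(url: str) -> bool:
    # Real pipelines run a much larger suite; a health probe stands in here.
    return requests.get(url, timeout=5).status_code == 200

for env, health_url in ENVIRONMENTS:
    # Apply the declarative routing config to this environment's cluster.
    subprocess.run(
        ["kubectl", "--context", env, "apply", "-f", "routing/"],
        check=True,  # abort the rollout if the apply itself fails
    )
    if not smoke_test(health_url):
        raise SystemExit(f"smoke test failed in {env}; halting promotion")
    print(f"{env}: config applied and verified")
```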
[00:26:16] Unknown:
Yeah. It's definitely an interesting space. And you mentioned that you were fairly early into the game of service networking and cloud native API gateways. And I'm wondering how you've seen the overall ecosystem evolve and develop, because I know that Kubernetes has added in some additional primitives to be able to support these systems more natively and make it easier to plug them together and be able to bring them in. So, like, I know that the Gateway API is a relatively new development. And I'm wondering if you can talk through some of the decision making or evaluation that engineering teams go through as they're working on selecting what their API gateway and service mesh are going to need to support, and how you think about the overall competitive landscape as you continue to build and evolve the Gloo platform.
[00:27:01] Unknown:
So in terms of the cloud native environment itself, I mean, it's funny. You know, we've been working on this project, specifically, like, the API gateway, we released it, what, probably 4 years ago. Right? And there's been so much change. There were so many players that were relevant there that are, honestly, not relevant anymore. They ran out of money or got acquired. So there was a lot of, kind of, like, development. I think that there are 2 things that are important. Let's talk first about the API gateway, because, honestly, not everybody needs a service mesh. If what you're running is 1 small application, you probably shouldn't get a full blown, kind of, like, service mesh. So let's think about the API gateway, and that's something that everybody needs. Or your own ingress, let's be more accurate. Right? You're basically using your Kubernetes. You need an ingress somehow.
Usually, what we saw in that development is that, in the beginning, there was the first ingress API, and the problem in general, when you're basically trying to support everything, is that you end up with the lowest common denominator. And I think that the first ingress controller for Kubernetes was, honestly, pretty much not useful. We supported it, but we also had an API of our own that was more robust. And what we saw is that almost, you know, everybody was basically leaning toward the API gateway, mainly because they needed more functionality than what the ingress could offer. It was very, very slim. So that's where it was in the beginning.
And, specifically, if we're talking about Solo, I mean, Gloo, I think 1 of the things that was strong for us, what Solo is always really good at, is choosing the right technology. If you look at the technology that we're actually based on, I mean, 5 years ago, we bet on Envoy. Right? Back then, it was so new. You know, NGINX and HAProxy were the leaders. So understanding that that technology is very solid and choosing it, I think that's something that helped us a lot. So today, Envoy Proxy is the de facto proxy. This is what everybody should run and wants to run. And for a good reason. Right? It's an open source project with a huge community around it. It's not based on 1 company. Right?
You know, we contributed to it, but Google too, and a lot of other vendors. And it also was built way differently than NGINX and HAProxy. With HAProxy, there is this notion that you need to bring the configuration to it and then do a hot restart. And that's not a big deal when configuration is not changing a lot, but that's only true in the old architecture. Now, when you're microservices and stuff is going up and down and you're pushing, you know, releases probably every few minutes, that's not going to work. Each of those changes would need to, you know, hot restart, basically restart, the proxy. That's just really, really expensive and doesn't make any sense. I think what Envoy is very good at is, first of all, the fact that it's API driven, so you don't need to, like, restart it. The other side, as I said, is that you can customize it. And it's great in performance.
It has a huge community, so a lot of innovation is happening there. For instance, we wanted to figure out how we could extend the proxy in a good way, while making sure that it's easier, because not everybody wants to write C++ and async and use Bazel. And we took WASM as a technology and basically added that support to Envoy. And now you can build, kind of like, a WASM module and basically dynamically load it into the proxy, and now you can write whatever logic you want. So I think all that innovation is only happening in Envoy. You know, it's like the question every time of why you should choose Mesos or Docker Swarm versus Kubernetes, because there, there is a real community.
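For a sense of what that dynamic loading looks like on the Envoy side, here is a hedged sketch, rendered from Python for readability, of the HTTP filter entry that pulls in a compiled WASM module. The module path, name, and root_id values are hypothetical; the module itself would be written against one of the proxy-wasm SDKs.

```python
# Sketch of Envoy's WASM extension point: an http filter that loads a
# compiled module at runtime, so no C++/Bazel rebuild of Envoy is needed.
import yaml

wasm_filter = {
    "name": "envoy.filters.http.wasm",
    "typed_config": {
        "@type": "type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm",
        "config": {
            "name": "example_filter",   # hypothetical plugin name
            "root_id": "example_root",  # hypothetical root context id
            "vm_config": {
                "runtime": "envoy.wasm.runtime.v8",
                # Load the compiled module from disk inside the proxy.
                "code": {"local": {"filename": "/etc/envoy/example_filter.wasm"}},
            },
        },
    },
}

print(yaml.safe_dump(wasm_filter, sort_keys=False))
```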
And I think that Envoy is the winner right now. So lucky us, we chose it. I think that in the beginning, there were 3 players that chose it. 1 of them went out of business. The other 1 totally lost to us, basically. That made us very, very successful, because our technology is really, really solid. So that's on the API gateway side. Now there is the new ingress API. They're calling it the Kubernetes Gateway API. I think that, again, there is always going to be that problem, which is that you really, really have to go to the lowest common denominator, because you can't put anything in the API that not everybody supports.
And, actually, you know, it was basically inspired a lot by our API, honestly, because we talked with the Kubernetes folks at Google and with the others. So it was actually inspired by our API. I think that when you're looking at the Gateway API, there are 2 weaknesses right now. First of all, as I said, you have to have the common denominator, so everything that is outside that scope needs an extension, which is not really friendly, let's call it, for the user. And the second 1 is the fact that it's not really multi cluster at all. And for us, that was very important, because we're running in huge environments with a lot of clusters, so it's very important to our customers and our users. So that's, kind of, like, that. I still think that it's done very well. And, you know, that's something that we will support as well. But yeah. So that's, kind of, like, where the innovation is. I think that right now, you know, from what we see in the market, there's no competition. I mean, if you need an API gateway that is more for the future, you know, native to Kubernetes, using CRDs and all this stuff, honestly, Gloo is the best by far. It's also very mature right now. It's running in huge, huge environments at huge scale. So on that point, it's really solid.
So now the next question is about service mesh. And service mesh specifically was built as an add-on to Kubernetes and not part of Kubernetes. Because, theoretically, Google was the 1 who started it there. Right? Specifically, Istio. Right? They didn't build it into Kubernetes. And the reason is because it was always intended not to be only for Kubernetes workloads. And if that's the case, it doesn't make any sense to build it in there, because it's also going to run, you know, VMs, and Lambda, and serverless, and everything else. And that's why it's kind of like its own entity.
Right? And I think it should stay like this. But it's a very great complement. Right? It's a really, really good, kind of, like, controller for that, and that's basically what it's doing. So that's in terms of the service mesh. Now, you asked about the competitive side of service mesh. Every time that there is, you know, a very interesting or exciting technology, there will be a lot of, you know... it's very hard for everybody to agree, and everybody wants to create, you know, their own, kind of, like, differentiator. And I think that in service mesh, it was even more aggressive.
I think that what happened is that first there was, like, Linkerd, and then Google came with a competitor, but it wasn't great. So then we actually helped HashiCorp to build their own competitor, which is the Consul service mesh. And then, I think, every API gateway in the world basically came out with their own service mesh. And more and more and more got out there, and it became basically not that different, if you remember, from the container orchestration war that came before. I think that the good news is that, eventually, with all that stuff, kind of, like, time will tell. Right? So I think that right now, if you're looking at all those service meshes that were announced, there are not a lot out there that actually survived. And I think this is the key. Right? The good 1 is the 1 that will survive.
And I think that right now, by far, the most mature and good service mesh is Istio. Again, lucky us for choosing it. It has the big community. You know, it's running, honestly, in probably most of the places that I know in the world. Like, everybody's running it. So, honestly, in the beginning, when we started with service mesh, we were, kind of, like, saying, well, we shouldn't have to, kind of, like, choose a leader. Right? But then, you know, it was very easy to see that the market was clearly choosing Istio. And then we said, okay, so Istio it is. Right? So I think by now, honestly, there's not a lot of competition. It's by far the biggest. There's a community there. It's proven technology that everybody is using today. So, yeah, lucky us. And between the providers, it's interesting how we differentiate ourselves. But, again, I think that there, we are very, very good, because we are probably 2 years ahead of the market, mainly because if you look at the blog post that I put out in 2018, you'll see that we already thought about all these problems in 2018. So we've been building it for a long, long, long time. Right? Which means that right now, we're just way more mature, and we're also a startup, moving way faster. Right?
So, seriously, like, our product is probably 2 years ahead of the market. So that helps.
[00:35:26] Unknown:
As you have been building this business and the technology stack to help support all of these organizations with managing their various networking topologies, and support their evolution from monoliths to microservices, and being able to stitch together all their various systems, what are some of the most interesting or innovative or unexpected ways that you've seen the Gloo platform used?
[00:35:47] Unknown:
Yeah. I mean, what we see a lot of the time is that it's very interesting how much, when you're giving the customer the ability to basically, you know, have that power to abstract the network, they wanna put a lot of logic on it. You know, eventually, it's all on the network and it's in the request path, so what you see is people putting a lot of logic there. We did see a lot of customers putting tons of logic there. Right? For instance, we had 1 customer that's in networking and telecommunications.
Their service was basically selling SMS, in a way, like, the ability to send an SMS. And all the logic of, you know, what the account entitlements were that the customer bought was basically managed in the gateway. There wasn't any logic in the application itself. It was all, basically, you know, authentication and rate limiting on the gateway, which is pretty unique. Right? So your code is not changing, it doesn't matter, and everything is being enforced at the gateway. So we saw a lot of interesting stuff, like, a lot of stuff related to data loss prevention. So for instance, let's assume that, you know, I'm running a doctor list, or, you know, a dentist list. Right? And I want to make sure that, you know, you can ask a question or get some data, but based on who you are, maybe I don't want you to see everything. Maybe I don't want you to see the number, you know, and license of the doctor, or something like that. So all of this is also filtered very nicely in the gateway.
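As one hedged illustration of enforcing that kind of policy at the gateway rather than in application code, here is the shape of Envoy's local rate limit filter, built as a Python dictionary and dumped to YAML. The stat prefix and bucket numbers are illustrative; Gloo Edge also exposes rate limiting through its own higher-level APIs.

```python
# Sketch of gateway-enforced rate limiting via Envoy's local rate limit
# filter: a token bucket caps requests without touching application code.
import yaml

local_ratelimit = {
    "name": "envoy.filters.http.local_ratelimit",
    "typed_config": {
        "@type": (
            "type.googleapis.com/envoy.extensions.filters."
            "http.local_ratelimit.v3.LocalRateLimit"
        ),
        "stat_prefix": "sms_api_rate_limit",  # illustrative
        "token_bucket": {
            "max_tokens": 100,       # burst size
            "tokens_per_fill": 100,
            "fill_interval": "60s",  # i.e. 100 requests per minute
        },
        # Enable and enforce the limit for 100% of requests.
        "filter_enabled": {
            "runtime_key": "local_rate_limit_enabled",
            "default_value": {"numerator": 100, "denominator": "HUNDRED"},
        },
        "filter_enforced": {
            "runtime_key": "local_rate_limit_enforced",
            "default_value": {"numerator": 100, "denominator": "HUNDRED"},
        },
    },
}

print(yaml.safe_dump(local_ratelimit, sort_keys=False))
```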
Yeah. There were a lot of very unique 1s. I saw a lot of people that basically used the API gateway like they use a service mesh, basically, kind of, like, creating internal gateways between their applications, kind of, like, mini pods. Yeah. We saw a lot of good stuff. I mean, honestly, it's so powerful that you can do everything you want. I mean, you can think about something extremely
[00:37:33] Unknown:
unique. And in your own work of building this company and growing the technology and helping to support this rapidly evolving ecosystem around cloud native technologies, what are some of the most interesting or unexpected or challenging lessons that you've learned in the process?
[00:37:48] Unknown:
Oh my god. I learned a lot. But I think, you know, honestly, when I started the company, I was pretty naive. Right? I said, look, I will come with a very good idea. I'm going to put it out there, open source, and that's it. People will not try to redo what I already did. No, they're not going to steal it. Everybody will agree that this is the best thing and will lean into it. Well, oh boy, I was wrong. You know, unfortunately, even in the open source market, there is a lot of self-interest from big companies. Right? There is a reason why they're doing stuff in the open source. There's a reason why they're giving. It's very interesting to discover that, honestly, because it will drive a lot of your decisions. There was a lot of that that we lived through. As I said, you know, we saw people basically, seriously, copy paste 1 to 1, or seriously announce what we did, only because they are bigger. And we're like, what?
You know? So there is a lot of that. But I will say that my big lesson on the open source side is that, you know, politics is everywhere, and interests are everywhere, in every company and, you know, every community. But the good thing that I learned from all of this is to just ignore it, put your head down, write code, and do amazing work persistently. And, eventually, you know, it will be very hard to stop you. And I think that that's the thing that, honestly, took us a lot of time. Just, like, every time that, you know, suddenly someone announced the same thing, or someone said that they have something that they don't have, or someone, seriously... right? It's very easy to copy paste. In the beginning, I was very frustrated and said, but it's not even true what they're saying. And how can we prove it? And how can we say that? And it's not good to say that. Right? And today, I think the lesson is just, like, just sit down, put your head down, and do work that is good and solid, and give it to the community, and work with them together to iterate on it. And, eventually, if you're doing it persistently, honestly, you know, you will be the only 1 who survives. And, honestly, that's what happened to us in the API gateway. It's happening to us in the service mesh. You know, just work hard, and, you know, eventually, it will be rewarded.
[00:39:50] Unknown:
And so for people who are in the process of either selecting an API gateway or ingress controller, or they're evaluating service mesh solutions, what are the cases where Gloo is the wrong choice, and they might be better suited with a different provider or a different technology stack, or foregoing them altogether?
[00:40:07] Unknown:
Yeah. So let's talk about the API gateway first. We are the best. I mean, I think that, you know, when we started, we were always very, very into the innovation, so, you know, more cloud native. If you're running Kubernetes, we are the only 1 that makes sense to run. Right? So, you know, what our competitors, the more legacy API gateways, had going for them is the fact that in a VM or in a legacy environment, they could probably do better. But, honestly, we fixed that. Right? I mean, you know, a lot of our customers are running VMs and running, as I said, mainframes, and we're really good at that as well right now. So I don't think that there is any reason not to take Gloo, to be very blunt. First of all, it's open source, so it's really, really useful.
There's a huge community of people using it, and, you know, you don't need to pay for it. So to me, it's, like, you know, a no brainer. Just go use it. And I think that this is a necessity right now. You know, it's Envoy based, so, you know, there's that by itself. The last thing that you want is to invest in an API gateway and then need to change it because the community doesn't move fast enough. So from that point, I think it needs to be Envoy based. And, you know, specifically, Gloo is by far the best Envoy based API gateway. So that's that. So, again, here I will say, you should just choose us. If you're talking about service mesh, I don't think that there's a case where, you know, you shouldn't use Gloo Mesh specifically. But I think the question in general is, do you need a service mesh at all? And I do think that you don't always need one. Right? I mean, it very much depends. It's how big your environment is, because, you know, when there are a lot of microservices, it makes sense. But sometimes, all you have is 1 application. Put the logic there. You know? It's not a big deal.
So I think that it depends on, again, you know, how many microservices you're running, how many clusters you're running, how complex your use case is, what features you need from the service mesh. Do you need security? Do you care about observability? Right? I mean, not everybody does. So I think that's quite the first question. Now, given that you do need a service mesh, as I said, at this point in the market, Istio is the right service mesh. Istio is the service mesh that is the most mature, the most community driven, open source, and the most used all over the world. So, like, that's number 1.
Again, it's a huge benefit. And the last thing that I will say is the fact that Gloo Mesh is, as I said, the most advanced distribution of Istio that exists in the market in terms of functionality. And, again, that's just because we started way before everybody else. So you will get a lot of benefit and ease of use of Istio by using Gloo Mesh. So, yeah, I mean, I think that if you're going with this logic, you say, well, I need Envoy first, so I want the service mesh that is Envoy based. I need the best service mesh. It's Istio. And then, what is the best provider that can give me this? Honestly, that will lead you to us and to Gloo.
[00:43:00] Unknown:
And as you continue to build out the business and the platform, what do you have planned for the future, or are there any areas of focus that you're particularly interested in digging into?
[00:43:17] Unknown:
Yeah. So there's a lot of stuff that we're doing. Honestly, the interesting stuff is how we build the product. And the way we're working is, first of all, as an open source community, that's very helpful, because we're getting a lot of features and requests from them. And we also have a lot of customers, and each of those customers has a Slack channel. And, basically, the product today is so robust because that's what the customers asked for. So when I'm looking at the stuff that we have right now on the list, I mean, again, it very much depends on the product. If we're looking at the gateway, it's really stuff related to, you know, maybe API monetization, that kind of stuff, you know, better performance.
You know, 1 of the things that is very important, as I said, is always stability and resilience. No matter what, it cannot go down. So that's stuff that we're putting a lot of focus on. You know, it's funny, because when you have a lot of customers using it, you will find some edge cases that you didn't even think about. So I think that's really, really critical for us. I think that in the service mesh, honestly, there is a lot more work that can be done. Right? I mean, first of all, it's less, you know, mature, because it's younger. And second of all, because, honestly, it's more powerful. So there is way more stuff we can do. So to me, think about the amount of stuff it can do. Like, for instance, we're abstracting all your networking. There's crazy stuff that we can do in terms of observability.
There's crazy stuff that we can do in terms of how to bring in support for different APIs, you know, and how to manage that in the organization. Right? I mean, why are we using all this software? Eventually, we're using all this infrastructure software for 1 reason and 1 reason only, which is to take care of a piece of the infrastructure and, basically, get it out of the way of the application teams and enable them. And I think that there, there is a lot more work to do to basically make that easy. You know, again, we're doing some work in Gloo. But I think in general, there's way more work that we can do to enable more services.
Extensibility is a big piece that is very important to us. We're also really focusing right now on... honestly, now that there is such adoption, right, there's no more question, you know, of whether people need it or which 1 they need. They need Istio. I think that now we can work a lot on making Istio better and better, specifically in terms of performance, and better in terms of, you know, the types of architecture that you can think about. You know? So we're leveraging eBPF below the service mesh, and I think that that can help us a lot to basically get better observability, even security, right, more on layer 4, as well as speed, and, you know, like, latency and so on, which I think a lot of our customers care about. So I think that's something that we can do with eBPF, but there is also stuff that we can do with the way we basically place the proxy, and, you know, kind of, like, the ability to maybe virtualize the sidecar and so on. So there is a lot of stuff that we're basically looking at and focusing on.
I think that... yeah. No. Listen. I mean, what's fun about open source is the fact that there's always new technology, and this ecosystem is so fast growing and interesting that, honestly, it's just fun. There is always something you need to do. Are there any other aspects of the work that you're doing at Solo and on the Gloo platform, or the overall ecosystem of API gateways and service mesh and application networking, that we didn't discuss yet that you'd like to cover before we close out the show? So 1 thing, if you remember, at the beginning of the podcast, what I said is basically that the reason I built this company, it wasn't about the idea. It was about the culture that I thought I could apply to the company. I personally was a very, very frustrated engineer. And the reason I was a frustrated engineer is because, honestly, what was really, really important to me as an engineer is a few things. And, honestly, it was in this order. Number 1 is, you know, to be challenged.
Number 2 is to be, you know, heard. Right? It was really important that people would listen to what I'm saying. I felt that I really wanted to make an impact. You know? Of course, I wanted to be paid well and, you know, have people appreciate what I'm doing. When I started Solo, 1 of the things that was very important to me relates to the fact that a lot of these things you sometimes can't do in a company, because, you know, they're not sharing the data with you. Basically, when I was looking at my leadership before, I always said, what are they doing? Right? Because it looked like they were totally doing the wrong thing, and I was very frustrated about the, you know, direction they were going. And, honestly, I think that I was totally wrong there. They did go in the right direction, and I think that the problem was that they never shared all the data with me.
And, therefore, based on the data that I had, it looked like a wrong decision. At Solo, it's really different. The way we're working is all about, like, you know, I'm hiring amazing people. The team is insanely good and growing so much. But it's also, you know, I will never come to someone and say to him, here's what we need to do. It's always, here's what I know, the best of what I know, and in detail what I know. Here's what I think we should do. What do you think? And we talk about it, and I think the culture is insane. And Solo specifically is a company where we're having a blast working. And, also, that's why people are not leaving. Right? I mean, no 1 is leaving Solo, which I think is fantastic.
[00:48:33] Unknown:
Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And with that, I'll move us into the picks. This week, I'm going to choose a show that I just started watching on Netflix called Shadow and Bone. It's a really interesting sort of fantasy steampunk style show, a lot of really interesting world and character building. So I've enjoyed watching the first few episodes of that, so I recommend giving it a shot if you're looking for something to stay entertained with. And with that, I'll pass it to you, Idit. Do you have any picks this week? Yeah. I know. So I saw an amazing documentary by HBO. It's not a new 1, but, honestly, it just shook my world, at least. It was about Elizabeth Holmes
[00:49:14] Unknown:
and about Theranos. I mean, honestly, I was in shock. As a woman founder, I was pretty much a fan of Steve Jobs. It was very hard to watch. But yeah. No. I hope that they will continue making the world better.
[00:49:28] Unknown:
Alright. Well, thank you very much for taking the time today to join me and share the work that you're doing at Solo and on the Gloo platform, and helping to accelerate the adoption and capabilities of networking for cloud and on premises technology. So I appreciate all of the time and energy that you and your team are putting into that, and I hope you enjoy the rest of your day. Thank you so much. And, again, thank you for having me.
[00:49:52] Unknown:
Thank you for listening. Don't forget to check out our other show, the Data Engineering Podcast at dataengineeringpodcast.com, for the latest on modern data management. And visit the site at pythonpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email hosts@podcastinit.com with your story. To help other people find the show, please leave a review on iTunes and tell your friends and coworkers.
Introduction and Guest Introduction
Idit Levine's Journey with Python
Founding Solo and the Gloo Project
Microservices and Kubernetes
Networking in Microservices
Bridging Monoliths and Microservices
Architectural Decision Making
Network Overhead in Microservices
Managing Development and Integration Testing
Evolution of Service Networking Ecosystem
Interesting Use Cases of the Gloo Platform
Lessons Learned in Building Solo
When Gloo is the Wrong Choice
Future Focus Areas for Solo
Company Culture at Solo