Summary
Cloud native architectures have been gaining prominence for the past few years due to the rising popularity of Kubernetes. This introduces new complications to development workflows, due to the need to integrate with multiple services as you build new components for your production systems. In order to reduce the friction involved in developing applications for cloud native environments, Michael Schilonka created Gefyra. In this episode he explains how it connects your local machine to a running Kubernetes environment so that you can rapidly iterate on your software in the context of the whole system. He also shares how the Django Hurricane plugin lets your applications work closely with the Kubernetes process model.
Announcements
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With the launch of their managed Kubernetes platform it’s easy to get started with the next generation of deployment and scaling, powered by the battle tested Linode platform, including simple pricing, node balancers, 40Gbit networking, dedicated CPU and GPU instances, and worldwide data centers. Go to pythonpodcast.com/linode and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
- So now your modern data stack is set up. How is everyone going to find the data they need, and understand it? Select Star is a data discovery platform that automatically analyzes & documents your data. For every table in Select Star, you can find out where the data originated, which dashboards are built on top of it, who’s using it in the company, and how they’re using it, all the way down to the SQL queries. Best of all, it’s simple to set up, and easy for both engineering and operations teams to use. With Select Star’s data catalog, a single source of truth for your data is built in minutes, even across thousands of datasets. Try it out for free and double the length of your free trial today at pythonpodcast.com/selectstar. You’ll also get a swag package when you continue on a paid plan.
- Your host as usual is Tobias Macey and today I’m interviewing Michael Schilonka about Gefyra and what is involved with developing applications for Kubernetes environments
Interview
- Introductions
- How did you get introduced to Python?
- Can you describe what Gefyra is and the story behind it?
- What are the challenges that Kubernetes introduces to the development process?
- What are some of the strategies that developers might use for developing and testing applications that are deployed to Kubernetes environments?
- What are the use cases that Gefyra is focused on enabling?
- What are some of the other tools or platforms that Gefyra might replace or supplement?
- What are the services that need to be present in the K8s cluster to enable Gefyra’s functionality?
- Can you describe how Gefyra is implemented?
- How have the design and goals of the project changed since you first started working on it?
- What is the process for getting Gefyra set up between a K8s cluster and a developer’s laptop?
- Can you describe what the developer’s workflow looks like when using Gefyra?
- How do you avoid collisions/resource contention among a team of developers who are working on the same project?
- What are some of the ways that developing for Kubernetes influences the architectural and design decisions for a project?
- What are some of the additional practices or systems that you have found to be beneficial for accelerating development in cloud-native environments?
- What are the most interesting, innovative, or unexpected ways that you have seen Gefyra used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Gefyra?
- When is Gefyra the wrong choice?
- What do you have planned for the future of Gefyra?
Keep In Touch
Picks
- Tobias
- kubernetes.el – Kubernetes interface for Emacs
- Michael
- It’s fermentation Friday, perfect for baking a sourdough bread or brewing beer
- Two of my favorite YouTube channels: Kurzgesagt – In a Nutshell and LockPickingLawyer
- For entrepreneurial spirits: Reddit community research with [GummySearch](https://gummysearch.com/)
Links
- Kopf framework
- PyOxidizer
- Tuna
- Wireguard-go
- https://k3d.io/
- kind
- Django Hurricane
- Blueshoe
- Django
- Kubernetes
- K3d
- Telepresence
- Unikube
- Sidecar Pattern
- Docker-compose
- Kubernetes Patterns book
- O’Reilly Platform
- Amazon (affiliate link)
- CodeZero
- CoreDNS
- Nginx
- Cookiecutter
- Tornado
- uWSGI
- 12 Factor App
- Pycloak
- Keycloak
- Kubernetes Operator
- Kubernetes CRD (Custom Resource Definition)
The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. When you're ready to launch your next app or want to try a project you hear about on the show, you'll need somewhere to deploy it. So take a look at our friends over at Linode. With the launch of their managed Kubernetes platform, it's easy to get started with the next generation of deployment and scaling powered by the battle tested Linode platform, including simple pricing, node balancers, 40 gigabit networking, dedicated CPU and GPU instances, and worldwide data centers.
Go to pythonpodcast.com/linode, that's L-I-N-O-D-E, today and get a $100 credit to try out a Kubernetes cluster of your own. And don't forget to thank them for their continued support of this show. So now your modern data stack is set up. How is everyone going to find the data they need and understand it? Select Star is a data discovery platform that automatically analyzes and documents your data. For every table in Select Star, you can find out where the data originated, which dashboards are built on top of it, who's using it in the company and how they're using it, all the way down to the SQL queries. Best of all, it's simple to set up and easy for both engineering and operations teams to use.
With Select Star's data catalog, a single source of truth for your data is built in minutes, even across thousands of datasets. Try it out for free and double the length of your free trial today at pythonpodcast.com/selectstar. You'll also get a swag package when you continue on a paid plan. Your host as usual is Tobias Macey. And today, I'm interviewing Michael Schilonka about Gefyra and what is involved with developing applications for Kubernetes environments. So, Michael, can you start by introducing yourself?
[00:01:57] Unknown:
Hi. I'm Michael. I'm the cofounder of Blueshoe, a Munich based software company with a focus on cloud native software development. I studied computer science in a cooperative study program together with IBM here in Germany. After that, I wanted to found a company myself. I think in 2014, I started Blueshoe, and, yeah, it turned out to be a very good decision. I've been working in the Python ecosystem, and particularly with Django, for the past 8 years now. And since, I think, 2018, I've been highly focused on cloud native software development.
Yeah. In my spare time, I have a few DIY projects running. For example, I'm converting
[00:02:47] Unknown:
a van into a camper van, stuff like that. Lots of fun. And do you remember how you first got introduced to Python?
[00:02:53] Unknown:
That was back in the days at university. I decided to switch from Windows to Linux, or better yet, Ubuntu. I did most of my academic projects with Python, and the first considerable project was the simulation of a magnetic pendulum from chaos theory.
[00:03:13] Unknown:
Given your focus on software development for cloud native environments, that brings us to the Gefyra project, which is what I came across and why I invited you on the show. So I'm wondering if you could start by describing a bit about what it is, some of the story behind how it came to be, why you created it, and some of the motivation there.
[00:03:31] Unknown:
We have been doing software development in all kinds of fashions, from bare metal deployments to Vagrant, Ansible scripts, and so on and so forth. And back in 2018, my team decided to do exclusively Kubernetes. We tried to move all of our projects to Kubernetes infrastructures, and all of our clients as well. We decided to hand projects that were not feasible, too far away, or too small over to other service providers, and to really focus on writing software that has Kubernetes as a target platform. Yeah. It was quite difficult for our development teams to adapt from one day to the next to a workflow that tries to involve Kubernetes as early as possible.
We've been researching the market, and there have indeed been some tools out there trying to make the development process a bit more convenient. But we haven't been really happy with the tools that were available. And so we decided to create a little prototype where we are running a local Kubernetes cluster on the machine of the developer, running alongside it a Docker container containing the application that is the subject of the development work, and trying to make it part of the Kubernetes cluster. So this is a bit more than just keeping a volume of code in sync and having a container in a Kubernetes cluster reloading it over and over again.
It was more about bringing up a unique development infrastructure, which allows us to easily have our applications debuggable, attaching to them with a debugger, overwriting environment variables, and code hot reloading, of course. You name it. Everything that has been useful to our developers in the past, that is what we have been trying to bring to all developers. And we had a little prototype which consisted of a k3d cluster and Telepresence, in version 1 back in those days. We rolled out that infrastructure to all developers and found that this is quite an approachable way for them to write their code while running it in Kubernetes at the same time.
I think this is something that the market still lacks at the moment. So we put a little more effort into this prototype, and we created a platform called Unikube. This is still in its early days. I think we published it in November last year. Part of the command line interface that developers are using is still Telepresence, version 2 nowadays. But we have been a little bit dissatisfied with Telepresence. We saw connection losses, and we haven't been able to work with advanced Kubernetes patterns, like, for example, the sidecar pattern, which is not supported so well by Telepresence.
And on the other side, Telepresence rewrites your workloads within the cluster, which causes problems, for instance that your workloads stay modified within the cluster. So we saw a few troubles. So I decided, I think in December last year, to rewrite this core principle of our development workflow and replace Telepresence with our own solution. And that is Gefyra.
[00:07:16] Unknown:
In terms of the overall complexities that come up when you're developing an application that is intended to be deployed onto Kubernetes, what are the points of friction that developers run into when using a quote, unquote more traditional workflow, where you're just developing on your local machine or maybe even using something like Docker Compose?
[00:07:37] Unknown:
So Kubernetes itself, having it on the operations side of things, is not really causing trouble in terms of development, especially if we're talking about traditional applications, applications that have been under development for a couple of years now. But if you want to introduce some advanced Kubernetes features, like those patterns which are very well described in the Kubernetes Patterns book, the foundational patterns, behavioral patterns, structural patterns, and so on and so forth, your developers will face the challenge that this is not available when developing software with Docker Compose or Vagrant or custom setups. That requires that you write your application right within Kubernetes.
Right? If you, for instance, want to leverage the sidecar pattern, that is something which would lead to a not-well-fitting dev/prod parity with Docker Compose, or developers create their own development infrastructure, which does not fit the production environment. Then you will see a lot of trouble, especially in the transition of bug fixes and new features going from development to an integration system. Like, for example, what I saw in the past was a developer using the native open function in Python to operate on files in a Docker Compose mounted volume, so a local volume, so to say. But the integration system employed an S3 backend, and Python's native open doesn't work with an S3 backend anymore. We are trying to bring such aspects of our entire platform as close as possible to developers, by providing them a Kubernetes environment which is as similar as possible, including aspects like storage backends.
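The open-versus-S3 mismatch described here is a classic dev/prod parity bug. In Django projects it is typically avoided by coding against a storage abstraction (for example, django-storages' S3 backend behind Django's `default_storage`) rather than calling `open` directly. A minimal stdlib-only sketch of the idea, with illustrative class names that are not part of any real library:

```python
from abc import ABC, abstractmethod


class Storage(ABC):
    """Narrow interface the application codes against, instead of open()."""

    @abstractmethod
    def read(self, name: str) -> bytes: ...

    @abstractmethod
    def write(self, name: str, data: bytes) -> None: ...


class LocalStorage(Storage):
    """Development backend: files on a mounted volume."""

    def __init__(self, root: str) -> None:
        self.root = root

    def read(self, name: str) -> bytes:
        with open(f"{self.root}/{name}", "rb") as fh:
            return fh.read()

    def write(self, name: str, data: bytes) -> None:
        with open(f"{self.root}/{name}", "wb") as fh:
            fh.write(data)


class InMemoryStorage(Storage):
    """Stand-in for a remote backend such as S3; same interface, no local files."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def read(self, name: str) -> bytes:
        return self._blobs[name]

    def write(self, name: str, data: bytes) -> None:
        self._blobs[name] = data


def save_report(storage: Storage, content: bytes) -> None:
    # Application code never calls open() directly, so swapping the
    # backend between development and integration does not break it.
    storage.write("report.txt", content)
```

In a real Django project, `default_storage` from `django.core.files.storage` plays the role of `Storage` here, which is why the same code runs against both a local volume and S3.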
[00:09:34] Unknown:
In terms of the modifications to system architectures and the ways that you might approach the structure and design of an application, how does the Kubernetes operating environment influence some of those early decisions that might affect the overall structure of the application and how you compose it together?
[00:09:54] Unknown:
I think employing Kubernetes in an early development stage promotes service oriented architecture a lot. If it's just a click, or a command in a CLI, to spin up a new service, this will really push the mindset of the developers to separate problems, leading to a more domain driven decision making in the overall architecture. And if they are familiar enough with advanced Kubernetes patterns, and those are right at their hands, they will probably use them.
[00:10:34] Unknown:
As far as the overall aspects of the workflow that Gefyra is focused on, you mentioned that it is intended as a bit of a replacement for Telepresence, being able to manage the communication from the developer's editor into the running Kubernetes cluster, wherever that might be. I'm wondering if you can just talk through the specific functionalities and features that it's focused on and the ways that it fits into the developer workflow.
[00:11:04] Unknown:
Yep. So, basically, I see two main use cases at the moment. One is that you are bringing in a completely new service which is not yet in a cluster. This is one thing that Gefyra allows you to do: run a local Docker container which behaves like it would run within Kubernetes. That is, for example, resolving services from the namespace you're running the service in, or being able to attach it to a database which is running within the cluster, while having the code executed on the local Docker host. So this is one thing. And the other thing is, if you're working on a service which is already part of the cluster and you want to add a new feature or fix a bug, Gefyra allows you to run the service locally on your local Docker host too, but intercepting all the traffic that is hitting its associated container running within the cluster and tunneling it to the local instance, which is quite useful for debugging purposes.
For instance, if you have two services talking to each other within the Kubernetes cluster and you want to debug them, you can just intercept both services at the same time, having local instances running with a debugger attached, and then observe how the request is created and executed. And on the other end, observe and debug how the request hits the second service and how the response is then generated, and, yeah, find bugs that might lie between the two services.
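Conceptually, the intercept described here behaves like a small TCP forwarder: traffic arriving at the in-cluster container is tunneled to the locally running instance. A toy stdlib sketch of just that forwarding step (the real Gefyra path runs over WireGuard and is far more involved):

```python
import socket
import threading


def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        dst.close()


def forward(listen_port: int, target_port: int) -> threading.Thread:
    """Accept one connection and tunnel it to the local target port."""
    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", listen_port))
    server.listen(1)

    def run() -> None:
        client, _ = server.accept()
        upstream = socket.create_connection(("127.0.0.1", target_port))
        # Copy each direction in its own thread until either side closes.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        pipe(upstream, client)
        server.close()

    t = threading.Thread(target=run, daemon=True)
    t.start()
    return t
```

With a forwarder like this in front of the "cluster" endpoint, a request sent to `listen_port` transparently reaches the instance on `target_port`, which is where a local debugger can be attached.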
[00:13:00] Unknown:
In terms of the other approaches that developers have come up with for being able to design and develop for Kubernetes environments, and trying to match their local environments to what it's actually going to look like in production, what are some of the alternative tools or development patterns that you've seen them use?
[00:13:20] Unknown:
Yeah. I've talked to quite a number of teams during the past 4 months. Very, very different. A few teams do not even try to make Kubernetes part of the developer side. I see this most often in Java environments, where they're running a local Java process and not even creating containers locally; that is part of a CI/CD process. But on the other hand, you find teams which create on-demand integration systems, so called preview systems, which roll out just one feature and one service and all adjacent services at the click of a button. So they are dedicating Kubernetes clusters to their developers, but I think at a later stage.
So you have written your feature or your bug fix, and now you are pushing your code. And then it's up to the developer to spin up a preview environment, which rolls out the commit, the code of the service, and they are now able to test everything within a Kubernetes environment. But this requires, I think, a bit of back and forth with such a preview environment in case you are seeing troubles, or services not behaving like you as a developer wanted them to. As for other tools, I think Telepresence is still a really important one. I found CodeZero as another alternative, and probably there are a few other tools available which are trying to somehow tackle that problem as well. But I think Gefyra has a really unique approach in trying to make Kubernetes a development platform.
[00:15:08] Unknown:
And can you talk through how the Gefyra project is implemented, and the architectural and design aspects of it?
[00:15:15] Unknown:
Gefyra uses quite a few popular open source technologies as a foundation. For instance, obviously, there's Docker, but for the connection between the local Docker host and the Kubernetes cluster, it employs a wireguard-go connection, which should be available in almost all Kubernetes environments, local Kubernetes as well as cloud based Kubernetes environments. Then there is CoreDNS, and I'm heavily using Nginx for all kinds of TCP reverse proxying, and rsync for syncing down files that might have changed in a Kubernetes pod. For instance, the service account token, which is automatically replicated from the target container in the cluster to my local Docker container instance in order to request the Kubernetes APIs, if you, for instance, want to create custom resources or have an operator pattern in place.
[00:16:17] Unknown:
As you have been going through the development and testing, and working with some of the early users of Gefyra, what are some of the ways that the approach and the overall goals of the project have changed or evolved over the past few months?
[00:16:32] Unknown:
I think they have not really changed during the past few months. Two months ago or so, I released a first version, which was feature complete for the first evolution, so to say. And now it's more or less observing how people are using it and seeing if everything works or if there are any bugs remaining. It's still the same, but there is, of course, a greater vision.
[00:17:00] Unknown:
In terms of the workflow of actually using Gefyra, is it something where you would typically run a local Kubernetes cluster on the developer machine, or is it something where you would have a Kubernetes environment running in a dev stage in a cloud provider, with Gefyra connecting up to that cluster? And in that scenario, what are some of the ways that you mitigate the possibility of resource contention if you have multiple people working on the same project?
[00:17:32] Unknown:
Both scenarios are intended to be covered by Gefyra. But in the first step, I was really focusing on the local Kubernetes cluster deployment, although I tested it with, for instance, Google Kubernetes Engine and other cloud based Kubernetes environments too. It's still at an early stage. Probably, resource contention would be a problem at the moment, so it's just one user in one cluster at a time. But at some point in the future, I will add the feature that makes it possible for more users to operate in one cluster at the same time. For the developers who are working on these projects, how much knowledge of Kubernetes and its different principles
[00:18:17] Unknown:
and features are necessary for them to be able to be effective? And what is the sort of collaboration between the developers and the operators who are managing the cluster?
[00:18:27] Unknown:
So this was one of our primary goals for Unikube: to hide as much complexity as possible coming from Kubernetes towards our developers. And I think if you have common use cases, you don't even have to touch kubectl if you're running it with Unikube. But Gefyra itself sits a bit closer to Kubernetes, meaning that you have to create your Kubernetes cluster by yourself as a developer, set the correct kubectl context, and connect Gefyra to that cluster. So this involves quite a bit of knowledge from the developer about all the aspects of Kubernetes.
And I would say that even if Unikube is trying to hide away complexity, it is still something that I would like to see in the future that developers are more used to working with Kubernetes. I'd like to see more of those patterns being adopted in software architectures. And that means that they probably can't reject that complexity and all things Kubernetes for too long anymore. But I do see that we have front end developers, or even back end developers, with a low affinity to infrastructure. For some of them, creating container images is still a thing.
So, yeah, hopefully, we can foster the adoption of Kubernetes and Kubernetes patterns in our architectures and in other people's architectures too. But at the moment, in order to answer your question, I'd say it's still a thing that a developer has to know a little bit about Kubernetes.
[00:20:09] Unknown:
To the point of putting the application into a Docker image and that being a potential point of friction for some engineers, I'm wondering if you can talk to some of the ways that as software engineers, we can create the applications in such a way that it is more conducive to being containerized and just some of the issues that developers experience as they are making the jump from, I have an idea. I've built an initial prototype. Now I need to actually put it in a container so I can run it somewhere.
[00:20:38] Unknown:
Yeah. I think what we're trying to do is create as much templating as possible. For instance, we are using Cookiecutter templates for our Django projects, which create a project layout for a Django project. And on the other hand, there is an associated template which creates Helm charts for running that specific project in a Kubernetes cluster. And if you're following the guide as a developer, you just create a project, then create the charts in order to run it in Kubernetes,
[00:21:12] Unknown:
apply the charts, and, yeah, you're all set. I know that one of the other projects that you've built at Blueshoe is Django Hurricane, to make the Django framework itself more conducive to running in a container environment. And I'm wondering if you can speak to some of the modifications that you've had to make, or some of the additional components that you've integrated, to make it a more seamless transition to go from "I have a Django project" to "this is running in at least a semi-optimized fashion in a Kubernetes environment."
[00:21:41] Unknown:
One of the foundational patterns of Kubernetes is, for instance, the health probe. We saw in our projects in the past that especially health probes have been implemented over and over again in our Django applications. One thing that really bothered me the most was that those probes are not really meaningful. Right? It's just an HTTP GET on the front page. That really doesn't tell you anything about the state of health of the application, other than that the front page is actually being served without a 500. And Django itself comes with a, I think, a bit underestimated smaller internal framework: the check framework.
And this is something that we integrated into Hurricane. So Django developers are able to add a checks.py, register a check, and those checks tagged with the hurricane tag are then executed upon every probe request. That way, you can easily integrate a check for, let's say, an adjacent service, or for the database, or even for the existence of something within your database which is crucial for your service to run properly. On the other hand, we use Tornado as our application server, which is included with Hurricane. Because prior to Hurricane, we'd been running our applications with uWSGI most of the time, and the operators of the applications, or, so to say, our DevOps coworkers, asked: okay, how many processes do you need running it in emperor mode? How many threads, and what's the capacity that is required? And then we found that this doesn't really match the process model coming with Kubernetes, because what we'd done was putting an internal process model of our application in a container, in a pod of Kubernetes, and Kubernetes doesn't really know anything about the number of processes we are running in uWSGI.
Hurricane, on the other hand, runs a single threaded Tornado server, which also serves the probes. So it's not a second process that does everything completely decoupled from the application. It's intended that it's just single threaded, one IO loop that serves probes as well as application requests. And everything is monitored by Kubernetes, meaning if we are hitting request backlogs of a certain length or CPU utilization is too high, we can easily leverage the Kubernetes horizontal pod autoscaler or vertical autoscaler in order to spawn new pods to serve our requests. This much better fits the process model of Kubernetes, laying the responsibility for scaling applications into the hands of Kubernetes. And these are just two of the goals that we're trying to achieve with Django Hurricane.
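Django's check framework, which Hurricane builds on here, is essentially a registry of tagged check functions that the probe endpoint executes on every request. A stdlib-only sketch of that registry-and-tag idea (the names and the `hurricane` tag convention are illustrative, not Hurricane's actual API):

```python
from typing import Callable

# Registry of health checks, keyed by tag. A dedicated tag means only the
# Kubernetes-relevant checks run on probe requests, not every Django check.
_checks: dict[str, list[Callable[[], list[str]]]] = {}


def register(tag: str):
    """Decorator registering a check under a tag; a check returns error strings."""
    def wrap(fn: Callable[[], list[str]]) -> Callable[[], list[str]]:
        _checks.setdefault(tag, []).append(fn)
        return fn
    return wrap


def run_probe(tag: str = "hurricane") -> tuple[int, list[str]]:
    """Run all checks for the tag; return 200 if clean, 503 otherwise."""
    errors = [e for check in _checks.get(tag, []) for e in check()]
    return (200 if not errors else 503), errors


@register("hurricane")
def database_reachable() -> list[str]:
    # In a real project this would issue e.g. ``SELECT 1`` against the
    # configured database; here it simply reports success.
    return []
```

In an actual Django project you would instead use `django.core.checks.register` in a `checks.py` and tag the checks so Hurricane picks them up on each probe request.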
[00:24:45] Unknown:
And for anybody who has an existing Django project, what are the steps involved in converting it to use Django Hurricane, and having Django Hurricane manage the execution in the container environment?
[00:24:58] Unknown:
Well, so this is quite simple. It's just a pip install, adding it to your installed apps, and it comes with a management command called serve. Serve is intended to be used during development as well as in production. Yep. That's basically it. If you want to have a deeper integration, of course, you add checks and do a few of those other things. For example, if you would like to have a webhook called once the application becomes unready, as declared by Kubernetes, or once startup is complete, then you can also add some webhooks and stuff like that. That is a kind of advanced integration, but it's quite simple to start with.
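The setup described here amounts to a settings change plus the management command. A sketch of the relevant `settings.py` fragment, as I understand the django-hurricane docs (the flags in the comments are assumptions; verify against the project README):

```python
# settings.py (fragment): enable Django Hurricane by adding its app.
INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    # ... your apps ...
    "hurricane",  # provides the ``serve`` management command
]

# The same command is then used in development and in production, e.g.:
#   python manage.py serve --autoreload    (development; flag assumed)
#   python manage.py serve                 (production, behind Kubernetes probes)
```

The point of a single `serve` command for both stages is exactly the dev/prod parity discussed earlier: the container runs the same entrypoint everywhere.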
[00:25:39] Unknown:
So now that we have a Django app that speaks natively to Kubernetes in terms of the process model that Kubernetes is looking for, you've got a way to do your development locally and connect it up to a Kubernetes cluster. What are some of the other challenges that you see developers run into as they're starting to build in a cloud native model and just some of the other concepts or challenges that they encounter?
[00:26:03] Unknown:
So once we've got our developers to follow the 12 factor app and, most importantly, dev/prod parity, I think we've taken a huge step forward. But then it comes to adopting advanced Kubernetes patterns. For instance, one of my favorites is the sidecar pattern. For example, the authorization handling in our applications is usually done with a sidecar, which handles the OpenID Connect flow in front of our Django applications, attached to Keycloak or some other identity provider. Right? That also requires developers to create a few additional functions in order to handle what happens next. I mean, in Django, we usually have a user model, and we have authorization and everything built in. But this is no longer required in the service architecture, I think, or not to that extent.
But we'd like to stay compatible with the ecosystem, and that usually requires having a user model locally present. I think adapting to those advanced patterns and making the best out of them is still a challenge for developers.
[00:27:16] Unknown:
In your experience of helping teams adopt cloud native development practices and building Gefyra to make it easier for developers to work natively with Kubernetes clusters, what are some of the most interesting or unexpected or challenging lessons that you've learned in the process?
[00:27:33] Unknown:
I think one of the biggest learnings I had implementing Gefyra was creating an operator. That is, the most important part of the cluster side components is managed with an operator. This was the first operator that I've written myself, and it was quite interesting to see how you can make use of custom resource definitions, which are sort of an extension model for the Kubernetes APIs, making Kubernetes aware of domain specific objects, but also of the process of making Gefyra work. This was quite interesting and helped me a lot to decouple the CLI of Gefyra from the cluster side components.
Having an operator running within the cluster, you communicate with it using custom resources. For instance, the intercept request is one custom resource definition, which contains all the data the operator needs in order to create the intercept route: from a container running in a pod, connecting to the wireguard-go endpoint within the cluster, tunneling the traffic down to the local Docker network, and then putting it into the application that is supposed to receive the traffic. And that was quite interesting.
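The decoupling described here can be pictured as the CLI creating a small spec object that the in-cluster operator consumes. A plain-Python sketch with hypothetical field names and API group (the actual InterceptRequest schema may differ):

```python
from dataclasses import dataclass


@dataclass
class InterceptRequest:
    """Hypothetical mirror of Gefyra's intercept custom resource."""
    namespace: str
    target_pod: str
    target_container: str
    port: int          # container port whose traffic should be tunneled
    destination: str   # local endpoint reached through the WireGuard tunnel


def to_manifest(req: InterceptRequest) -> dict:
    """Render the request as a Kubernetes custom-resource manifest dict."""
    return {
        "apiVersion": "gefyra.dev/v1",  # assumed group/version
        "kind": "InterceptRequest",
        "metadata": {
            "name": f"intercept-{req.target_pod}",
            "namespace": req.namespace,
        },
        "spec": {
            "targetPod": req.target_pod,
            "targetContainer": req.target_container,
            "port": req.port,
            "destination": req.destination,
        },
    }
```

An operator, for example one built with the Kopf framework mentioned in the links, would watch for objects of this kind and set up the tunnel route accordingly; the CLI never talks to the cluster internals directly.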
[00:29:00] Unknown:
As far as the overall design and user experience of Gefyra, going back to the question of how much Kubernetes developers need to know to be effective, I'm wondering how you thought about that interaction pattern, and how you think about getting Gefyra set up without having to overwhelm developers with too many Kubernetes specific concepts.
[00:29:25] Unknown:
Oh, so this is Gefyra internal. Right? I mean, you run the gefyra up command, which installs the operator into the connected cluster. The operator then itself schedules other resources, like services, the WireGuard endpoint, and everything. And from a developer perspective, the Gefyra user, they are not getting in touch with operators or those patterns.
[00:29:53] Unknown:
Yeah. I was also thinking as far as things like taking advantage of pod annotations or container annotations to let Gefyra know which specific pods to connect up to that are already running in the cluster, or what the overall interconnect is between "I'm developing on a feature branch, and now I want to connect this up to all of the other running services that are deployed in this Kubernetes environment to make sure that I can test the end to end flow of my app." Yeah. One of
[00:30:20] Unknown:
the basis premises that I did in designing Jifaira was to not get too much into the workloads that are coming from the application itself. So Gfira does not, for instance, rewrite workload manifest or modify them or, like, for instance, TelePresence, they are putting a so called traffic agent to your deployment, which then sits in front of your container within the port and intercepting the traffic, hitting a designated port. Right? That would cause telepresence once there is failure or anything that it stays as it is. And it's quite troublesome to get rid of the modifications done by telepresence.
Gefyra, on the other hand, is not involved in any way with the workloads that you are running. There are two fields of a running pod that you can modify, and one is important: you can exchange the running image. And this is the only part where Gefyra modifies the workload running within the cluster. It installs a component called Carrier. Carrier is an Nginx-based image, which is afterwards dynamically configured to reroute the traffic. So it really replaces the container that was originally running in the pod. If there is anything wrong in this process, you can simply restart the rollout or delete the pod, and Kubernetes will take care that you get a pod that exactly fits the specification the workload was originally created with.
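The image swap described here amounts to patching the workload's container image. As a rough sketch, this is what such a patch body could look like for the official Kubernetes Python client; the deployment name, container name, and carrier image tag are placeholders, not Gefyra's actual values:

```python
# Sketch of swapping a running container's image for a proxy image, the only
# workload modification described above. Names and image tags are placeholders.

def build_image_swap_patch(container_name, carrier_image):
    """Strategic-merge patch body that replaces one container's image in a
    Deployment's pod template. Containers are merged by name, so only the
    image changes; undoing the patch (or deleting the pod after reverting)
    lets Kubernetes restore the original specification."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": container_name, "image": carrier_image}
                    ]
                }
            }
        }
    }

patch = build_image_swap_patch("app", "example/carrier:latest")
# Against a real cluster this would be applied roughly like:
#   kubernetes.client.AppsV1Api().patch_namespaced_deployment(
#       name="backend", namespace="default", body=patch)
```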
[00:32:07] Unknown:
For engineers who are developing for a system where their application is going to be deployed to a Kubernetes environment, what are the cases where Gefyra is the wrong choice?
[00:32:18] Unknown:
So at the moment, Gefyra is the wrong choice if you want to work with more than one developer in a cluster. But apart from this, and maybe the bugs being around, I'd say for many, many different use cases, it'll be the right choice.
[00:32:38] Unknown:
And as you continue to develop and use Gefyra and introduce it to other engineering teams, what are some of the things you have planned for the near to medium term?
[00:32:48] Unknown:
One of the biggest challenges that I see developers still struggling with is setting correct resource boundaries for applications. Operations teams usually want a predictable, calculable environment, and they ask our developers to give resource limits and requests. And the developers usually do not really know what to tell them at this point. So one thing that I'd like to see with Gefyra in the future is a monitoring command that allows me to run my feature or my bug fix on my local Docker host while bringing up some workload in the cluster.
Ideally, it's not too artificial a workload that creates some load on my container as well. And while Gefyra monitors your local container, it can create recommendations about the upper and lower bounds that were observed during the workload testing, and then, yeah, advise you on the resource requests and limits.
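As a rough illustration of that idea, here is a tiny sketch of how observed usage samples could be turned into request and limit recommendations. The 90th-percentile request and the 20% headroom on the limit are arbitrary assumptions for illustration, not a planned Gefyra algorithm:

```python
# Toy sketch: derive resource requests/limits from usage samples collected
# while load-testing a local container. The 90th-percentile request and the
# 20% headroom on the limit are arbitrary choices for illustration only.
from statistics import quantiles

def recommend(samples_mib, headroom=1.2):
    """Return (request, limit) in MiB from memory usage samples."""
    if not samples_mib:
        raise ValueError("need at least one sample")
    if len(samples_mib) < 2:
        p90 = samples_mib[0]
    else:
        # last of the 9 decile cut points ~ 90th percentile
        p90 = quantiles(samples_mib, n=10, method="inclusive")[-1]
    request = round(p90)
    limit = round(max(samples_mib) * headroom)
    return request, limit

# e.g. memory samples in MiB observed during a load test
req, lim = recommend([120, 135, 150, 160, 180, 210])
```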
[00:33:58] Unknown:
Are there any other aspects of the Gefyra project itself, or the overall space of cloud native development, particularly for Python engineers, that we didn't discuss yet that you'd like to cover before we close out the show?
[00:34:10] Unknown:
One note on the path to cloud native, which is also well described in the book Kubernetes Patterns, is what cloud native actually means. I think this is quite an important point that I'm trying to get across to developers. It usually starts with well-crafted, clean code at the core of all applications, but then it comes to a mindset that uses domain-driven design. Right? And that means that we're trying to keep our applications as close as possible to the business logic. To that domain-driven design, we apply the principles of microservices, which means, basically, that we have services optimized for change.
I think there is a common misconception where developers think, okay, a microservice is only something that consists of a hundred lines of code or something like that. But if those hundred lines of code cannot be changed in the future, because they are too cryptic or because they are not maintainable, then this is, at least in my opinion, not part of the principle of microservices. And then, with those microservices being created in containers, we have all the automatable container orchestration at scale, and this is where Kubernetes comes in with advanced patterns and everything.
[00:35:44] Unknown:
Alright. Well, for anybody who wants to get in touch with you and follow along with the work that you're doing, I'll have you add your preferred contact information to the show notes. And with that, I'll move us into the picks. This week, I'm going to choose a tool that I just started using a couple of days ago as I'm starting on my own Kubernetes journey. It's an interface to Kubernetes, a wrapper for kubectl, that is built for Emacs. It gives you a nice interactive way to explore the different services and processes running in the cluster and easily view their manifests, so that, since you're already in your text editor environment, you can copy, paste, and modify them and update them back into the running Kubernetes cluster. So it's just a nice interactive way to explore and update your running environment. And with that, I'll pass it to you, Michael. Do you have any picks this week?
[00:36:37] Unknown:
Sure. So today, the date of recording, is Fermentation Friday, which is perfect for baking a sourdough bread. This is something that is quite trendy, at least in my bubble at the moment. Many people around me are baking their own bread, and I've linked a video describing how you can do this yourself. On the other hand, another hobby of mine is brewing my own beer. We have quite a huge craft beer scene here in Germany. I'd like to promote this because it's fun, because it's complex, and you need to be cautious about the process and everything.
[00:37:09] Unknown:
Alright. Well, thank you very much for taking the time today to join me and share the work that you're doing at BlueShoe and on Gefyra. It's definitely an interesting set of projects that you're building and supporting. So I appreciate all of the time and energy that you and your team are putting into making cloud native development easier, and I hope you enjoy the rest of your day.
[00:37:28] Unknown:
Thank you. Thank you for having me.
[00:37:32] Unknown:
Thank you for listening. Don't forget to check out our other show, the Data Engineering Podcast at dataengineeringpodcast.com, for the latest on modern data management. And visit the site at pythonpodcast.com to subscribe to the show, sign up for the mailing list, and read the show notes. And if you've learned something or tried out a project from the show, then tell us about it. Email host@podcastinit.com with your story. To help other people find the show, please leave a review on iTunes and tell your friends and coworkers.
Introduction and Guest Introduction
Michael Schilonka's Background
First Encounter with Python
Introduction to Gefyra
Challenges in Kubernetes Development
Service-Oriented Architecture with Kubernetes
Alternative Tools and Development Patterns
Implementation and Architecture of Gefyra
Local vs. Cloud-Based Kubernetes Clusters
Developer Knowledge and Collaboration
Challenges in Cloud Native Development
Lessons Learned from Developing Gefyra
When Gefyra is the Wrong Choice
Future Plans for Gefyra
Cloud Native Development for Python Engineers
Contact Information and Picks