Summary
A majority of projects will eventually need some way of managing periodic or long-running tasks outside of the context of the main application. This is where a distributed task queue becomes useful. For many in the Python community the standard option is Celery, though there are other projects to choose from. This week Bogdan Popa explains why he was dissatisfied with the current landscape of task queues and the features that he decided to focus on while building Dramatiq, a new, opinionated distributed task queue for Python 3. He also describes how it is designed, how you can start using it, and what he has planned for the future.
Preface
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- I would like to thank everyone who supports us on Patreon. Your contributions help to make the show sustainable.
- When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at podcastinit.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your awesome app. And now you can deliver your work to your users even faster with the newly upgraded 200 GBit network in all of their datacenters.
- If you’re tired of cobbling together your deployment pipeline then it’s time to try out GoCD, the open source continuous delivery platform built by the people at ThoughtWorks who wrote the book about it. With GoCD you get complete visibility into the life-cycle of your software from one location. To download it now go to podcastinit.com/gocd. Professional support and enterprise plugins are available for added peace of mind.
- Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email hosts@podcastinit.com.
- To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
- Your host as usual is Tobias Macey and today I’m interviewing Bogdan Popa about Dramatiq, a distributed task processing library for Python with a focus on simplicity, reliability, and performance.
Interview
- Introductions
- How did you get introduced to Python?
- What is Dramatiq and what was your motivation for creating it?
- How does Dramatiq compare to other task queues in Python such as Celery or RQ?
- How is Dramatiq implemented and how has the internal architecture evolved?
- What have been some of the most difficult aspects of building Dramatiq?
- What are some of the features that you are most proud of?
- For someone who is interested in integrating Dramatiq into an application, can you describe the steps involved and the API?
- Do you provide any form of migration path or compatibility layer for people who are currently using Celery or RQ?
- Can you describe the licensing structure for the project and your reasoning?
- How did you determine the price point for commercial licenses?
- Have you been successful in selling licenses for commercial use?
- What are some of the features that you have planned for future releases?
Keep In Touch
- Project Website
- Personal Website
- Bogdanp on GitHub
- @Bogdanp on Twitter
Picks
- Tobias
- The Anybodies by N.E. Bode
- Bogdan
Links
- Dramatiq
- LeadPages
- Lisp
- Celery
- RQ
- Billiard
- Kombu
- Google App Engine
- GAE Task Queue
- RabbitMQ
- APScheduler
- Redis
- Memcached
- LRU (Least Recently Used)
- Middleware
- Gevent
- Pika
- SQS (Amazon Simple Queue Service)
- Google Cloud PubSub
- Django
- API Star
- Bundler
- Cargo
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. I would like to thank everyone who supports the show on Patreon. Your contributions help to make the show sustainable. When you're ready to launch your next project, you'll need somewhere to deploy it, so you should check out Linode at podcastinit.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your app. And now you can deliver your work to your users even faster with the newly upgraded 200 gigabit network in all of their data centers. If you're tired of cobbling together your deployment pipeline, then it's time to try out GoCD, the open source continuous delivery platform built by the people at ThoughtWorks who wrote the book about it. With GoCD, you get complete visibility into the life cycle of your software from one location. To download it now, go to podcastinit.com/gocd. You can visit the site at podcastinit.com to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions, I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email me at hosts@podcastinit.com.
To help other people find the show, please leave a review on iTunes or Google Play Music. Tell your friends and coworkers and share it on social media. Your host as usual is Tobias Macey. And today, I'm interviewing Bogdan Popa about Dramatiq, a distributed task processing library for Python with a focus on simplicity, reliability, and performance. So Bogdan, if you could start by introducing yourself.
[00:01:35] Unknown:
Hi, Tobias. I'm Bogdan. I'm an independent contractor currently working for Leadpages, a marketing software company for medium to large companies. I've been doing web development for close to 8 years, and I've been using Python for about 10 years at this point.
[00:01:56] Unknown:
And do you remember how you first got introduced to Python?
[00:01:59] Unknown:
Yeah. I was a very early user of Reddit, way back when it used to be implemented in Lisp, and then they eventually switched to Python. And that's when I first heard of it. I was doing PHP at the time, and it just looked interesting, so I dove into it. I liked how readable it was, I liked the ecosystem, and I've been using it ever since.
[00:02:30] Unknown:
And you started building the Dramatiq project. So I'm wondering if you can just give a bit of an overview about what that project is and what was your motivation for creating it.
[00:02:41] Unknown:
Yeah. So Dramatiq is a distributed task processing library. So if you've used things like Celery or RQ, you're probably familiar with the concept. Basically, you write your functionality such that, say, you've got a web request that you're processing something in, you can split out some of the functionality of that web request and put it on a worker somewhere. So essentially, you write a function, you send a worker a message over the network to process that function, and Dramatiq helps you orchestrate that whole process.
[00:03:18] Unknown:
And as you mentioned, it sits in the same realm as projects such as Celery or RQ, which most people have probably heard of, one or both of those. So I'm wondering if you can just do a bit of compare and contrast between them and Dramatiq.
[00:03:33] Unknown:
Sure. So I built Dramatiq out of my frustration having used Celery for quite a long time professionally. Essentially, I tried to fix some of the things I found to be problematic with Celery. To start with, just having a much simpler, easier to understand code base. If you've ever had to dive into Celery's source code, you'll know that it's spread across three projects: Billiard, Kombu, and then Celery itself. And, as you would expect of any open source project that's grown organically over time, entropy took its toll, and so some things aren't as well designed as I would like them to be. I personally think it's a little bit spaghetti, which has prevented me from diving in and contributing in the past, and I'm sure it has others. So that was one reason: Dramatiq's code base is much more straightforward, much simpler.
Another big one is I've used Google App Engine a lot, and Google App Engine provides a task queuing system that is extremely easy to use and just very straightforward. It relies on convention more so than configuration, and that just meshed very well with me. You define a queue, and then you're able to process messages on it, and then you're obviously able to configure some things. But it's very straightforward. Any uncaught errors are just retried with backoff and all that stuff. By default, Celery doesn't do any of this. You're able to build that stuff around it, but really I think it makes more sense to have this be the default.
So when I write my tasks, I want them to be idempotent. I want my system to automatically handle any errors that I haven't thought about. I want it to retry them later, all that sort of stuff. And I don't want to think about implementing retries and things like that. And you mentioned RQ. I don't think RQ actually handles any of that, but I have to admit I haven't used it very much. And then there are other things like locks and rate limiting. In the projects I've had to work on, these have been very important. Celery has per-worker rate limiting. So if you have a task and you want to limit how often it runs, that limit only applies to the one worker process.
So in an autoscaled environment, for example, that isn't very useful. So Dramatiq provides global locks and global rate limiting, and other things like task prioritization. For example, Celery 4 now provides RabbitMQ users a way to prioritize messages within a queue, and that's good. The problem is it doesn't give you any way to prioritize messages across queues, which, again, for me is kind of a big use case. So Dramatiq solves this problem as well. And I guess there are another two large ones.
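To illustrate the per-worker versus global distinction, here is a rough stdlib-only sketch of the global-rate-limit idea: every worker consults one shared store with an atomic increment (the role a shared Memcached or Redis plays in practice), so the limit holds across the whole fleet rather than per worker process. All names here are invented for illustration; this is not Dramatiq's implementation.

```python
import threading
import time

class SharedCounterStore:
    """Stands in for Memcached/Redis: atomic increment with a time window."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}  # key -> (count, window_start)

    def incr(self, key, window):
        with self._lock:
            count, start = self._data.get(key, (0, time.monotonic()))
            if time.monotonic() - start >= window:
                # The window has elapsed; start counting fresh.
                count, start = 0, time.monotonic()
            count += 1
            self._data[key] = (count, start)
            return count

class GlobalRateLimiter:
    """Every worker process talks to the same store, so the limit
    applies across the whole fleet, not per worker."""
    def __init__(self, store, key, limit, window=1.0):
        self.store, self.key = store, key
        self.limit, self.window = limit, window

    def acquire(self):
        return self.store.incr(self.key, self.window) <= self.limit

store = SharedCounterStore()
limiter = GlobalRateLimiter(store, "send-email", limit=2, window=1.0)
grants = [limiter.acquire() for _ in range(3)]  # third call exceeds the limit
```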
One is automatic code reload. When your code changes, it's kind of nice in development for the workers to actually just reload automatically. Celery used to have this feature in Celery 3. In my experience, it never really worked, and they actually removed it in Celery 4. So that was a big one. And then the other one is just the fact that Celery's implementation of delayed tasks, that is, tasks that you schedule to run in the future, isn't that great. The way it works is essentially those messages are put on the same queue that normal, non-delayed messages are put on. And so what the workers have to do is pull all of those delayed messages into memory. But, of course, because memory is limited, there's actually a limit on the number of messages that they can pull. So if you have many messages that are far off in the future, for example, if you have a system that fails very often and is very high throughput, you can end up in a situation where you have a ton of messages that are all delayed and have been pulled into worker memory, and many new messages coming in, but those new messages aren't getting delivered to workers because those workers actually have to wait for those delayed messages to execute. So that's another thing I set out to solve, and the way Dramatiq solves it is it just has separate queues for normal and delayed messages. It does the same thing where it pulls delayed messages into memory, but those don't actually interfere with normal messages. And when a delayed message is ready to be delivered, it's actually enqueued on the normal message queue, if that makes sense.
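The separate-queue design for delayed messages can be sketched in a few lines of stdlib Python. This is a simplified illustration, not Dramatiq's actual code: delayed messages wait in their own structure, and once due they are re-enqueued onto the normal queue, so they never block normal delivery.

```python
import heapq
import queue
import time

normal_queue = queue.Queue()  # workers consume from here
delayed = []                  # min-heap of (eta, message), held in memory

def enqueue(message, delay=0.0):
    if delay > 0:
        # Delayed messages go on a separate structure so they never
        # block delivery of normal messages.
        heapq.heappush(delayed, (time.monotonic() + delay, message))
    else:
        normal_queue.put(message)

def move_due_messages():
    # Called periodically: re-enqueue delayed messages whose ETA has
    # passed onto the normal queue, where workers pick them up as usual.
    now = time.monotonic()
    while delayed and delayed[0][0] <= now:
        _, message = heapq.heappop(delayed)
        normal_queue.put(message)

enqueue("immediate")
enqueue("later", delay=0.05)
move_due_messages()  # "later" is not yet due, so only "immediate" is ready
```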
[00:08:55] Unknown:
Aside from delayed messages, one of the things that I've used in Celery is the idea of scheduled messages, somewhat similar to a cron interface, where you can say this task should be executed either every 4 hours or at 3 o'clock in the morning on Tuesdays. Is that something that Dramatiq supports as well?
[00:09:16] Unknown:
Dramatiq doesn't currently support that. I'm not sure it's ever going to. There is an open issue about it, a feature request, right now. I think what I'm going to encourage people to do is just use APScheduler with Dramatiq, because that's a pretty solid piece of software. There's not a lot of reason for me to duplicate that functionality, because the main message broker for Dramatiq is RabbitMQ, and there isn't a very efficient way to use RabbitMQ as that sort of a scheduler.
At least, I don't think you can do it without plugins. So I would end up just duplicating a lot of APScheduler's functionality,
[00:10:01] Unknown:
which I'd rather not do since it's a stable, good project. Yeah. It's one that I've used in the past as well, and I have found it to be, as you said, quite reliable. It was actually written by a past guest, Alex Grönholm, who I spoke to about his Asphalt project as well. So definitely a prolific engineer who's done some good work there. And some of the other features that Celery has that are sometimes useful are things like task chaining. I'm not sure if that's something that is possible in Dramatiq, or something that you haven't really found much use for.
[00:10:36] Unknown:
So I've used task chaining extensively, and sort of ran into issues at scale with it, partly because of the backend we were using. Essentially, in Redis, task chaining depends on, or if you want efficient task chaining, then you must have a result backend that has support for atomic increment and decrement. Otherwise, it falls back to polling, and it can sometimes drop those chains. And even when you do use a backend such as Redis or Memcached, and this is getting into ops problems, those two can drop messages just by virtue of the LRU pushing keys out of the cache. More so Memcached, I've noticed. But, getting back to Dramatiq, it doesn't have that built in right now, but it's fairly easy to add. The main extension point with Dramatiq is the notion of middleware.
So, essentially, you would write a little chain middleware that is able to hook into the message life cycle and handle all of that. Of course, you would need some shared storage system, so you would still need something like Redis or Memcached. But, yeah, it's not in there quite yet. If there is a lot of demand for something like that, I'll probably add it. But for now, I think it's probably best to stick with simple tasks and simple flows, because I've found those are also just a lot more debuggable. Celery's API for doing chords and groups and all that stuff is very nice, and it's elegant and everything, but at the end of the day, it's not nearly as simple as just having your task call another task at the very end of its run. Of course, there are more complex things that that wouldn't solve.
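The chain-middleware idea, hooking into the message life cycle, can be sketched roughly like this. It is a generic pattern with invented names, not Dramatiq's real `Middleware` class, which has a richer set of hooks.

```python
class Middleware:
    """Base class: subclasses override the life-cycle hooks they need."""
    def before_process(self, message):
        pass

    def after_process(self, message, result, exception):
        pass

class ChainMiddleware(Middleware):
    """After a task succeeds, enqueue its designated follow-up task."""
    def __init__(self, enqueue):
        self.enqueue = enqueue

    def after_process(self, message, result, exception):
        next_task = message.get("on_success")
        if exception is None and next_task:
            self.enqueue({"task": next_task, "args": [result]})

def process(message, handlers, middleware):
    # A worker runs every message through each middleware hook.
    for m in middleware:
        m.before_process(message)
    result, exception = None, None
    try:
        result = handlers[message["task"]](*message.get("args", []))
    except Exception as e:
        exception = e
    for m in middleware:
        m.after_process(message, result, exception)
    return result

pending = []
handlers = {"fetch": lambda: "payload", "store": lambda data: f"stored {data}"}
mw = [ChainMiddleware(pending.append)]
process({"task": "fetch", "on_success": "store"}, handlers, mw)
```

In a distributed setup the `enqueue` callable would write to shared storage (Redis or Memcached, as mentioned above) rather than an in-process list.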
[00:12:44] Unknown:
Yeah. Having tried to use some of the task chaining capabilities and getting bogged down in trying to figure out how it all flows together, it's definitely something that, while conceptually very nice, is sometimes difficult to actually have work properly and reliably. And you mentioned that Dramatiq has the capability of being extended via middleware. So I'm wondering what are some of the other use cases for that middleware, or examples of middleware that you've written or used?
[00:13:12] Unknown:
Well, a lot of its functionality is actually built on middleware. So, for example, the automatic retry functionality is built on middleware. The task runtime limits, which is a middleware that lets you kill tasks that have been running for too long, is also built on middleware. There's a middleware that exposes worker metrics over Prometheus. Most of the functionality that's not just the core of running the worker, pulling messages, and executing those messages is actually built using middleware.
[00:13:58] Unknown:
And digging deeper into the project, I'm wondering if you can discuss how Dramatiq is implemented and how the internal architecture has evolved over the time that you've been working on it?
[00:14:10] Unknown:
Sure. So, when I started on it, having used systems like this for quite a long time, I already knew how I would like it to work. So the architecture is pretty simple. You have your worker processes. Each worker process has a shared work queue, which is just a Python priority queue. The worker process spawns multiple worker threads, and you're able to configure this; by default, it's eight worker threads per process. You can switch to one worker thread per process depending on what your use case is, or more if you're using something like gevent. And alongside those worker threads, it spins up consumer threads for each queue that you have defined. And, essentially, the consumer threads are the only things in the system that interact with the broker.
So the consumer threads just sit there and wait for messages from the broker. When they get a message, they put it on that shared priority queue, and then the worker threads simply consume messages off of that shared priority queue and then send an acknowledgment back to the consumer thread. So it's kind of queues all the way down, because you've got your broker and then you've got in-memory queues and everything. But, yeah, that's sort of the design at a high level. Initially, I had it where... well, I should take a step back. Dramatiq has at-least-once delivery semantics, and so it relies on RabbitMQ's message acknowledgment pattern.
So essentially, the consumer takes a message from RabbitMQ, which moves it from ready to unacked, then the worker processes the message, and then eventually that message is acknowledged back to the broker. And at that point, the broker removes the message from the queue. If the worker process dies at any point during this, then the message will simply get redelivered later. Initially, the way I had it set up was the consumer thread would pull a message, then send it to a worker thread, and then the worker thread would acknowledge the message. But I quickly ran into the problem that Pika, or "pike-ah", I actually don't know how to pronounce it, which is the client library for RabbitMQ that I'm using, is not thread safe.
So the worker threads would sometimes clash with each other and with the consumer thread. And so that's when I changed the design from both the worker and the consumer communicating with Rabbit, or any other broker, to just the consumer doing the communication. So essentially, there's a queue for incoming work, and then there's a queue from workers to consumers for message acknowledgments, if that makes sense. But, yeah, for the most part, the design is the same as I had thought about it prior to actually starting to work on it, because I've honestly been dreaming of something like this for a long time, which is kind of sad if you think about it.
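The thread architecture described here, with consumer threads as the only broker clients, a shared in-process priority queue, and acknowledgments flowing back over a second queue, can be sketched with the stdlib. The names and the in-memory "rabbit" queue are stand-ins for illustration only.

```python
import queue
import threading

STOP = "zzz-stop"                    # shutdown sentinel; sorts after real work
rabbit = queue.Queue()               # stands in for the RabbitMQ connection
work_queue = queue.PriorityQueue()   # shared (priority, message) queue
ack_queue = queue.Queue()            # worker threads -> consumer thread
processed, acked = [], []

def consumer(n):
    # Only the consumer talks to the broker: it pulls messages in
    # and collects acknowledgments to send back.
    for _ in range(n):
        work_queue.put(rabbit.get())
    for _ in range(n):
        acked.append(ack_queue.get())

def worker():
    while True:
        priority, message = work_queue.get()
        if message == STOP:
            break
        processed.append(message)
        ack_queue.put(message)  # ack via the consumer, never the broker

for item in [(1, "high"), (9, "low"), (5, "mid")]:
    rabbit.put(item)

workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()
c = threading.Thread(target=consumer, args=(3,))
c.start()
c.join()
for _ in workers:
    work_queue.put((99, STOP))  # shut the worker threads down
for t in workers:
    t.join()
```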
[00:17:35] Unknown:
And what have been some of the most difficult or challenging aspects of building and maintaining the project?
[00:17:41] Unknown:
Well, one thing that was a bit of a challenge: if you're familiar with RabbitMQ, you might know there's this quality of service setting that you have per queue. You're able to specify, for each consumer, the total number of messages it should process at any one time. And the point of this is for consumers not to steal work from each other when they don't have to. So say you have eight worker threads; you probably don't want any more than, say, 8 or 16 messages to be consumed for that process at a time. And that's the general idea. The issue I ran into with this design is the consumer would block waiting for messages from Rabbit with an idle timeout. So it would get, say, a chunk of 16 messages.
It would pass those to the workers. The workers would process them and then send those acknowledgments back to the consumer. The problem is the consumer would probably still be blocking on that network call, still waiting for new messages, and so acknowledgments would be delayed. And because acknowledgments were delayed, the broker wasn't going to send any new messages down, because the consumer had already accepted 16 messages, or however many it had. So this turned out to be a small problem because, out of the box, Pika doesn't let you interrupt that blocking call. So I had to work around that. Fortunately, it's kind of easy to work around just because of how Pika itself is architected.
I did kind of have to depend on some of its internals, but I have opened an issue on their GitHub page asking whether or not they would like me to add this functionality to Pika, because it's kind of nice. So, essentially, the idea is we still do the blocking call. We still efficiently wait for either new messages to come in over the network, or for acknowledgments to come into the consumer. The difference is the workers are able to actually tell the consumer thread to wake up when a message is ready to be acknowledged. And, yeah, the CPU usage of this turns out to be very good.
It's as if you had blocking system calls everywhere, because there are actual blocking system calls everywhere. And the throughput is also very good. In fact, it's a little bit faster than it already was at processing many, many messages. That was kind of long winded, but I hope it makes sense.
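The workaround described is essentially the classic "self-pipe" trick. Here's a stdlib-only sketch of the general technique (not Pika's actual internals): the consumer blocks in select() on both the broker socket and a wakeup socket, and a worker thread writes a byte to the wakeup socket to interrupt the wait early.

```python
import select
import socket
import threading
import time

# The consumer blocks in select() on two sockets at once: the broker
# connection (simulated here) and a wakeup socket that worker threads
# write to when an acknowledgment is ready to be sent.
broker_sock, _broker_peer = socket.socketpair()
wake_recv, wake_send = socket.socketpair()

def consumer_wait(timeout=5.0):
    start = time.monotonic()
    readable, _, _ = select.select([broker_sock, wake_recv], [], [], timeout)
    if wake_recv in readable:
        wake_recv.recv(1)  # drain the wakeup byte
        return "ack-ready", time.monotonic() - start
    return "timeout", time.monotonic() - start

def worker_finishes_a_task():
    time.sleep(0.05)
    wake_send.send(b"\x00")  # interrupt the consumer's blocking wait

threading.Thread(target=worker_finishes_a_task).start()
reason, waited = consumer_wait()
```

The consumer stays fully blocked (no polling, so low CPU usage), yet it returns as soon as either socket becomes readable rather than waiting out the full idle timeout.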
[00:20:48] Unknown:
That was great. Thank you. And you've mentioned that RabbitMQ is the main implementation target for the message broker, but it also has support for Redis. I'm wondering how complicated it would be to add support for additional message brokers in the future.
[00:21:05] Unknown:
It depends on how different they are from RabbitMQ. Like I said, RabbitMQ is my main target. I've used it for years, and it's very solid. But, yeah, adding new brokers is actually fairly easy. The API is quite small. You really only need to define methods for declaring a queue, and you need to define what is essentially just an iterator that grabs messages from that queue. So the API surface is very small. I've looked into adding support for things like SQS and Google Cloud PubSub.
They're doable. I haven't added them to the core because I don't really want to support them, but as external packages, they should be really simple to set up. The main reason behind that is just that I'm not likely to use them, and I'd rather not have to support them myself.
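The broker surface described, declaring a queue plus an iterator that yields messages, is small enough to sketch with an in-memory stand-in. The interface names below are invented for illustration; see Dramatiq's source for the real abstract broker.

```python
import queue

class InMemoryBroker:
    """Sketch of the small broker surface described: declare a queue,
    then consume it as an iterator of messages."""
    def __init__(self):
        self.queues = {}

    def declare_queue(self, name):
        self.queues.setdefault(name, queue.Queue())

    def enqueue(self, queue_name, message):
        self.queues[queue_name].put(message)

    def consume(self, queue_name, timeout=0.1):
        # The iterator a worker's consumer thread would loop over.
        q = self.queues[queue_name]
        while True:
            try:
                yield q.get(timeout=timeout)
            except queue.Empty:
                return  # give the caller a chance to shut down

broker = InMemoryBroker()
broker.declare_queue("default")
broker.enqueue("default", "task-1")
broker.enqueue("default", "task-2")
received = list(broker.consume("default"))
```

Backing this interface with SQS or Cloud Pub/Sub would mean mapping `declare_queue` and `consume` onto those services' client libraries, which is why it makes sense as an external package.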
[00:22:10] Unknown:
No, that's totally fair. That's the sort of perennial dilemma of any open source developer or contributor: as soon as you say yes to any feature, then you're stuck supporting it for life, and sometimes you may end up regretting that in the future. So it pays to be judicious in the feature set that you implement, because otherwise it just sort of compounds the amount of work and effort that's necessary to keep the project healthy. Whereas if you push some of the other functionality into third party projects, then it distributes the workload more and allows you to call on other people to help in that maintenance.
[00:22:47] Unknown:
Exactly. And it means people who are actually experts in those domains get to build that stuff, because I'm not an expert in SQS by any means.
[00:22:58] Unknown:
And are there any features of the project that we haven't called out yet that you are particularly proud of or think deserve a special mention?
[00:23:08] Unknown:
Well, one thing I'm quite happy with, that's different from Celery at least (RQ, I think, has something similar), is how it handles unit testing. So, generally, when you want to unit test Celery code, you just turn these flags on that let your tasks run eagerly. It works out okay. Say you have integration tests where you're actually testing what a call to an HTTP handler does; that ends up working okay. My problem with it is it's not close enough to the real thing. That is, if your task always runs eagerly, then it's as if that handler was fully synchronous, which in the real world isn't true. So instead, what Dramatiq does is it gives you a way to run your tasks asynchronously. It essentially gives you a way to run a worker thread and an in-memory broker to process your tasks in unit tests. The API isn't quite as nice as I would like it to be yet, but it works out well, and all of Dramatiq's own tests rely on that. In fact, that leads me into another thing that I think is worth calling out.
Dramatiq has over 95% test coverage right now. It's actually pretty well tested. And another big thing for me is Celery has tons of documentation, but it's, I don't even know how to describe it, it's just so large, and it's not very cohesive in my opinion, and it's kind of hard to get into. You'll find yourself constantly referencing it and constantly searching for things. That's another thing I try to do better with Dramatiq. It has a simple, straightforward user guide and then very good reference documentation that's easy to find. So that's one thing I'm particularly proud of.
[00:25:17] Unknown:
And for somebody who's interested in integrating Dramatiq into an existing application or into a new application, can you describe the steps that are involved in that process and the user facing API for how they would add those tasks?
[00:25:34] Unknown:
Mhmm. Essentially, it's plug and play. So you just pip install it, import it, and then you use this decorator called actor: you do @dramatiq.actor and decorate some function with that. And that's really all you need to do. All actors are just normal Python functions, and so you can take any function in your system and turn it into an actor. Actors are what Dramatiq calls tasks. And then there's also an API for declaring sort of class-based actors, but I prefer the functional API. And that's really kind of the extent of it. By default, it assumes you're using RabbitMQ and that it's running locally.
If you want to configure that, then all you have to do is just instantiate a broker and then set it as the default. I should mention there are integrations for Django and API Star to make this a little easier.
[00:26:37] Unknown:
And in the documentation, it mentions that in order for Dramatiq to properly register the functions that are decorated, the broker needs to be instantiated fairly early in the start up process of the application as well.
[00:26:52] Unknown:
Yep, that's true. So, when you declare your own broker, you should do it as early as possible in the start up process. It's also possible to not depend on that. The actor decorator actually takes a broker parameter, so you can actually have multiple brokers in your system, and you can specify what broker to use on a per-actor level. But, yeah, the default is to have a global broker, and if you want to use that, then you should instantiate it as early as possible.
[00:27:30] Unknown:
And if someone is already using Celery or RQ in their system, is there any sort of easy migration path or compatibility layer for people to be able to switch those implementations over to using Dramatiq? Or is that just too complicated and potentially fraught with errors to be a realistic migration?
[00:27:52] Unknown:
I think it would be doable. I just haven't really spent any time on it. I think, really, the process of switching is simple either way. There are only a few things you have to grep for in your code base and then change. Essentially, you're just switching calls to @celery.task to @dramatiq.actor, and then calls to task.delay to task.send, which is what Dramatiq uses to send actor messages. Obviously, that's oversimplifying things a bit. If you're using very advanced features, that might not work as well. But, no, there isn't a shim right now.
[00:28:39] Unknown:
And another issue that could potentially crop up is with the at least once delivery semantics where you want your tasks to be idempotent. So if somebody isn't already designing their Celery tasks or RQ tasks to work in that way, they may end up having some issues with maybe repeat tasks because of a failed worker that didn't acknowledge the task before it completed.
[00:29:01] Unknown:
Right. There are ways to work around that. Essentially, in Dramatiq, you can't actually get around the late acknowledgment thing. So regardless of your retry settings, if your worker process dies in the middle of processing a task, that task will definitely get re-enqueued later. If that's not what you want to happen, if you have unhandled errors and you don't want the task to be retried, then you can just specify that as an option when you declare your actor; you can just set retries to none. And, additionally, you can leverage middleware here. You can have a sort of unique task middleware, for example, that just uses something like Memcached to ensure that any task is only processed once over some period of time, based on its parameters.
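The unique-task idea can be sketched like this: a shared cache's atomic add() (the operation Memcached provides) claims a key derived from the task's name and parameters, so a redelivered duplicate within the window is skipped. The cache class below is an in-process stand-in, and all names are invented for illustration.

```python
import time

class StubCache:
    """Stands in for Memcached: add() succeeds only if the key is absent."""
    def __init__(self):
        self._data = {}  # key -> expiry time

    def add(self, key, ttl):
        now = time.monotonic()
        expires = self._data.get(key)
        if expires is not None and expires > now:
            return False  # someone already claimed this key
        self._data[key] = now + ttl
        return True

def run_once(cache, task_name, args, fn, ttl=60):
    # Key the task by name and arguments; at-least-once delivery can
    # hand us the same message twice, but only the first claim runs it.
    key = f"{task_name}:{args!r}"
    if cache.add(key, ttl):
        return fn(*args)
    return None  # duplicate delivery: skip

calls = []
def charge(user_id):
    calls.append(user_id)
    return "charged"

cache = StubCache()
outcomes = [run_once(cache, "charge", (42,), charge) for _ in range(2)]
```

This makes a non-idempotent task safe under redelivery without changing the task's own logic, at the cost of trusting the shared store to hold the claim for the TTL.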
[00:29:51] Unknown:
And are there any particular situations or use cases where you would advise against using Dramatiq?
[00:29:57] Unknown:
The one big one is Dramatiq is Python 3 only. So if you're stuck on Python 2, then, well, for one, I'm sorry. For two, your options there are Celery or RQ or one of the other task processing libraries that are out there. Since it's such a new library, I didn't feel the need to add support for Python 2, but I know people depend on that to some extent. And another one is if you really do depend on extremely complex workflows, for example, if you're using Celery and you depend on super complicated chord and group combinations, then that might be a better option right now, at least until I add some sort of nice middleware to the core for doing that kind of stuff. But otherwise, I don't think so.
Personally, I wouldn't use anything else, but I'm biased.
[00:30:53] Unknown:
And going back to the idea of making the project sustainable, I noticed that you have a dual licensing structure: for people who want to use it as open source, it's GPLv3, I believe, and for commercial use, where somebody doesn't want to release their modifications, you offer a paid commercial license. I'm wondering if you can just describe the licensing structure there and your reasoning for taking that approach.
[00:31:21] Unknown:
Sure. Well, I've been building open source software for a while now, and something I've noticed, both in my own projects and in other people's projects, is essentially you go out, you try to solve a problem that you have, and then other people start using it, which is good. But they start depending on it to some extent, and you're stuck supporting it. In the open, that ends up being you get issues, pull requests, all that stuff, which again is a good thing. But you also get companies who start to depend on your work, and they start to make a profit on your work. And I think it's only fair to ask those companies to contribute back in some way. So for me, that's either they have to contribute source code, either their own or whatever it is they're doing, or they have to pay me to actually support the project. And that materializes itself in a $2,000-a-year commercial license.
And that said, I do I do also offer just free versions of that license for companies that are just starting out because I understand that not everyone is currently making a profit. Oh, and I I I should mention you you said, GPL v 3. It's actually a GPL. So it's the a FARO
[00:32:43] Unknown:
license. Thank you for that clarification. And from a practical standpoint, given that particular license, what are the situations in which somebody would need to release the source code of the project that they're working on? What are the boundaries of code changes that would need to be released? Is it purely changes to Dramatiq itself, or does it somehow extend to the application that they're building as well? It's any
[00:33:11] Unknown:
source code that interfaces with the Dramatiq API. So if you import Dramatiq and you use it, then you need to open source your code. That's the extent of the Affero license. It's sort of like the GPL for network code. That's how I think about it. And how did you
[00:33:29] Unknown:
choose the price point for the commercial license? And have you had any success in selling any of those licenses?
[00:33:38] Unknown:
I chose what I thought was a fair amount of money. To be completely honest, it's somewhat arbitrary. It just seemed fair to me, and that's what I chose. I have not had any companies purchase it yet. I have had companies express interest, but no purchases yet.
[00:33:57] Unknown:
And about how long has the project been around? When I was looking at it, it seemed to be a fairly recent project.
[00:34:04] Unknown:
Yep. I started working on it in, I believe, March of this year, just slowly working on stuff in my free time, building out the things I thought it needed. Then I started showing it to people in November of this year, so about a month ago, and it's had some uptake. But yes, it's a fairly new project.
[00:34:29] Unknown:
And what are some of the features that you have planned for future releases?
[00:34:34] Unknown:
Right now, I'm mostly focused on just using it, making sure that the things that are there are as solid as they can be. The one feature I am thinking of working on is adding support for pluggable encoders. Right now, when you send a message using Dramatiq, that message is encoded as JSON, and you have no hooks into that process. So, for example, if you're sending a UUID object, there's no way for you to automagically convert that UUID to a string so it can be sent as JSON. I'm planning on adding something like that, but that's the biggest feature I'm currently thinking about. It has most of the things that I've wanted professionally.
So, yeah, that's kind of it.
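To illustrate the UUID problem described above: Python's standard `json` module raises a `TypeError` on UUID objects unless you supply a custom encoder. A hypothetical pluggable encoder of the kind Bogdan describes might look like this (the class name and usage here are illustrative, not Dramatiq's actual API):

```python
import json
import uuid

class UUIDEncoder(json.JSONEncoder):
    # Fall back to a string representation for UUIDs instead of
    # raising TypeError, the way the stock JSON encoder would.
    def default(self, obj):
        if isinstance(obj, uuid.UUID):
            return str(obj)
        return super().default(obj)

# A message payload containing a UUID, as in the example from the show.
message = {"task": "send_email", "user_id": uuid.uuid4()}
payload = json.dumps(message, cls=UUIDEncoder)
decoded = json.loads(payload)
print(decoded["user_id"])  # the UUID, round-tripped as a string
```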
[00:35:29] Unknown:
Alright. So for anybody who wants to follow the work that you're up to or get in touch with you in the future, I'll have you add your preferred contact information to the show notes. And with that, I'll move us into the picks. For this week, I'm going to choose the book The Anybodies by N.E. Bode, which is targeted at probably about 8 to 10 year olds. I've been reading it with my son and it's a lot of fun. It's very tongue in cheek, has a lot of references to other literary works, and the author breaks the fourth wall a lot of the time, talking directly to the reader in very humorous fashion. So it's been a lot of fun. It's the first of a series of books, and I've been enjoying it, so I definitely recommend it for other people. And with that, I'll pass it to you. Bogdan, do you have any picks for us?
[00:36:16] Unknown:
My pick would be Pipenv. I've been using it for, let's say, maybe 2 or 3 months now, and, honestly, it's great. It's what I've wanted out of Python packaging, or Python package management, for a very, very long time, and I'm happy that someone finally took that up. Pipenv is sort of the Python analog of something like Bundler or Cargo from Ruby or Rust. It wraps pip, and, yeah, I would just recommend people look it up and use it, because it's great.
[00:36:55] Unknown:
Yeah. I've been using it for a while as well. And one of the other nice things is that it has the option of consuming a requirements.txt file to convert it into the Pipfile format, as well as producing a requirements.txt file for deployment systems that aren't yet able to use the Pipfile format.
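For reference, the two conversions mentioned here map to Pipenv subcommands roughly like the following (flags as of Pipenv at the time of this recording; newer releases moved the export to a `pipenv requirements` subcommand):

```shell
# Import an existing requirements.txt into a Pipfile
pipenv install -r requirements.txt

# Export the locked dependency set back out in requirements format
pipenv lock -r > requirements.txt
```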
[00:37:14] Unknown:
Yep. That's a great feature. And the actual verification of package hashes, and the Pipfile.lock file. Yeah, I've liked everything about it so far. I'm really happy it exists.
[00:37:29] Unknown:
Alright. Well, I want to thank you for taking the time out of your day to join me and talk about the work you've done on Dramatiq. It definitely seems like an interesting project that fills a valuable use case. So I appreciate your time and I hope you enjoy the rest of your day. Thank you for having me. Have a good one.
Introduction and Sponsor Messages
Interview with Bogdan Popa
Bogdan Popa's Background and Introduction to Python
Overview of Dramatiq
Comparing Dramatiq with Celery and RQ
Challenges with Celery and Solutions in Dramatiq
Scheduled Messages and Task Chaining
Dramatiq's Middleware and Extensions
Internal Architecture and Design of Dramatiq
Testing and Documentation
Integrating Dramatiq into Applications
Migration from Celery or RQ to Dramatiq
Use Cases and Limitations
Licensing and Sustainability
Future Features and Development
Picks and Recommendations