Visit our site to listen to past episodes, support the show, join our community, and sign up for our mailing list.
Summary
In an attempt to improve the performance characteristics of the CPython implementation, Dino Viehland began work on a patch to allow for a pluggable interface to a JIT (Just In Time) compiler. His employer, Microsoft, decided to sponsor his efforts and the result is the Pyjion project. In this episode we spoke with Dino Viehland and Brett Cannon about the goals of the project, the progress they have made so far, and the issues they have encountered along the way. We also made an interesting detour to discuss the general state of performance in the Python ecosystem and why the GIL isn’t the bogeyman it’s made out to be.
Brief Introduction
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- Subscribe on iTunes, Stitcher, TuneIn or RSS
- Follow us on Twitter or Google+
- Give us feedback! Leave a review on iTunes, Tweet to us, send us an email or leave us a message on Google+
- Join our community! Visit discourse.pythonpodcast.com for your opportunity to find out about upcoming guests, suggest questions, and propose show ideas.
- I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable. For details on how to support the show you can visit our site at pythonpodcast.com
- Linode is sponsoring us this week. Check them out at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for your next project
- I would also like to thank Hired, a job marketplace for developers and designers, for sponsoring this episode of Podcast.__init__. Use the link hired.com/podcastinit to double your signing bonus.
- Your hosts as usual are Tobias Macey and Chris Patti
- Open Data Science Conference, Boston MA May 21st – 22nd, use the discount code EP at registration for 20% off
- Today we are interviewing Brett Cannon and Dino Viehland about their work on Pyjion, a CPython extension that provides an API to allow for plugging a JIT compilation engine into the CPython runtime.
Interview with Brett Cannon and Dino Viehland
- Introductions
- How did you get introduced to Python? – Chris
- What was the inspiration for the Pyjion project and what are its goals? – Tobias
- The FAQ mentions that Pyjion could easily be made cross platform, but this being a Microsoft project it was bootstrapped on Windows. Have any of the discrete tasks required to get Pyjion running under OS X or Linux been laid out even in outline form? – Chris
- Given that this is a Microsoft backed project it makes sense that the first JIT engine to be implemented is for the CoreCLR. What would an alternative implementation provide and in what ways can a JIT framework be tuned for particular workloads? – Tobias
- What kinds of use cases and problem domains that were previously impractical will be enabled by this? – Tobias
- Does Microsoft’s recent acquisition of Xamarin and the Mono project change things for the Pyjion project at all? – Chris
- What are the challenges associated with your work on Pyjion? Are there certain aspects of the Python language and the CPython implementation that make the work more difficult than it might be otherwise? – Tobias
- When I think of Microsoft and programming languages I generally think of C++ and C#. Did your team have to go through an approval process in order to utilize Python, and further to open source your work on Pyjion? – Chris
- How does Pyjion hook into the CPython runtime and what kinds of primitives does it expose to JIT engines for them to be able to work with? – Tobias
- Would an entire project be run through the JIT engine during runtime or is it possible to target a subset of the code being executed? – Tobias
- In what ways can a JIT compiler implementation be purpose-built for a given workload and how would someone go about creating one? – Tobias
- Could a JIT plugin be designed with different trade-offs, like no C API compatibility, but that worked around the GIL to provide real concurrency in Python? – Chris
- One of the most notable benefits of having a JIT implementation for the CPython runtime is the fact that modules with C extensions can be used, such as NumPy. Does that pose any difficulties in the compilation methods used for optimizing the Python portion of the code? – Tobias
- What kinds of performance improvements have you seen in your experimentation? – Tobias
- Which release of Python do you hope to have Pyjion incorporated into? – Tobias
- Has any thought been given to making Python a first class citizen in Visual Studio Code? – Chris
- What areas of the project could use some help from our listeners? – Chris
Keep In Touch
- Dino
- Brett
Picks
- Tobias
- Chris
- Brett
- Dino
The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. You can subscribe to our show on iTunes, Stitcher, TuneIn Radio, or add our RSS feed to your podcatcher of choice. You can also follow us on Twitter or Google Plus, and please give us feedback. You can leave a review on iTunes to help other people find the show, send us a tweet or an email, leave us a message over Google Plus, or leave a comment in our show notes. And you can also join our community. Visit discourse.pythonpodcast.com for your opportunity to find out about upcoming guests, suggest questions, propose show ideas, and follow up with guests of past episodes.
I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable. For details on how to support the show, you can visit our site at pythonpodcast.com. Linode is sponsoring us this week. Check them out at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for your next project. I would also like to thank Hired, a job marketplace for developers and designers, for sponsoring this episode of Podcast.__init__. Use the link hired.com/podcastinit to double your signing bonus.
Your hosts, as usual, are Tobias Macey and Chris Patti. A couple of announcements before we start the show. There's the Open Data Science Conference that's happening in Boston, Massachusetts on May 21st to 22nd. And I've actually been given 2 free tickets to give to our audience at random. So if you sign up for our newsletter, then I will pick 2 lucky winners for those tickets. And also, I'm going to be at that conference. So if you happen to be going on your own or get some of those tickets, then I'll see you there. So today, we are interviewing Brett Cannon and Dino Viehland about their work on Pyjion, a CPython extension that provides an API to allow for plugging a JIT compilation engine into the CPython runtime. So could you guys introduce yourselves, please? Dino, how about you go first?
[00:01:59] Unknown:
So I have been working at Microsoft on Python for a pretty long time now. I've worked on several different projects: IronPython, Python Tools for Visual Studio. At 1 point in time, I did a tiny amount of work on our Azure SDK and our Azure ML SDKs. So I may be the longest employed person working on Python at Microsoft in general at this point. And so last year, I thought the idea of starting a JIT for CPython based upon .NET, and maybe opening the world to have a generalized JIT, was an awesome idea after the language summit, or the runtime summit, at PyCon.
[00:02:48] Unknown:
And how about you, Brett? Yeah. So I've been on the Python team with Dino since July, so nowhere near as long as Dino. Although I've been a core contributor to Python for, oh, god, going on 13 years next month. And after Dino started this little project of his, I got brought on shortly after I started work to kind of act, I guess, as program manager while Dino's tech lead, to help keep moving this forward in kind of our spare time, part time, and just trying to see if we can get it to go anywhere.
[00:03:22] Unknown:
So how did you folks get introduced to Python?
[00:03:25] Unknown:
So I actually, you know, I'd heard about Python a little bit, but my real introduction to Python is quite strange in that I started learning Python by working on an implementation of Python, which was working on IronPython. I'd not really written any Python code before I started doing that. And so I came at it from a very odd direction and learned kind of all the horrible intricate language things that most people probably don't know, while not learning all the great standard library things that people know and love. So I had quite a strange introduction.
[00:04:07] Unknown:
Yeah. So I got introduced to Python in the fall of 2000 when I was looking for a programming language to learn object oriented programming from. Where I did my undergrad, the intro course had an entrance exam. I already knew C at the time, and I figured I should know some object oriented language, and Perl and Python kept popping up at the time. But everyone kept saying, oh, yeah, Perl, it should be like your 5th or 6th language. Well, everything I kept reading said, like, Python was a great introductory language. So I decided to go with that, loved it, and haven't looked back since.
[00:04:41] Unknown:
Did you ever get around to learning Perl?
[00:04:44] Unknown:
Actually, I did. I went to UC Berkeley, and they allowed students to actually teach courses for pass/fail for 1 or 2 units, basically, to pad out your unit count so that you were considered a full time student. And some friends of mine at the Open Computing Facility, which was a student run computer lab, taught an intro Perl class. So I actually took that. So I actually did learn it. Great. So,
[00:05:08] Unknown:
what was the inspiration for the Pyjion project, and what are its goals?
[00:05:12] Unknown:
So I was inspired when Larry Hastings got up last year at PyCon and was talking about what it would take to help move people to Python 3. And he had a lot of interesting ideas. But the 1 that stood out in my mind was just making Python faster. And you look at the JavaScript community, and you have a lot of competition going on amongst JavaScript implementations. And I think that has really spurred them on to be competitive and make better, faster versions of JavaScript. And so I'd love to see the same thing happen with Python. But really, it's in a world where you have CPython, and you have a couple of other distributions which are pretty significant, but their goal is really to run on different platforms, and then you have PyPy. And PyPy throws out a lot of the compatibility in order to achieve its speed. And so I thought there could be a way forward to add a JIT to CPython and potentially open that up to other people to add their own JITs and get sort of that same sort of competitive landscape going on.
[00:06:25] Unknown:
What was your reasoning for going with a JIT implementation as your approach for speeding up the CPython runtime, or providing a faster Python experience, versus any other approach that might be taken?
[00:06:38] Unknown:
I think, you know, at the end of the day, getting out of the interpreter loop is going to be required to speed up CPython. And also being able to take runtime type information and flow that back in and make more optimal decisions. So kind of all those things require having a JIT. There's a lot of other things that can be done to speed up CPython as well, but this can be done without changing anything else about CPython, which maintains the compatibility, and I think it should have a lot of room to grow in it as well. So those are the main
[00:07:17] Unknown:
2 reasons. So the FAQ mentions that Pyjion could easily be made cross platform, but this being a Microsoft project, it was bootstrapped on Windows first. Have any of the discrete tasks required to get Pyjion running under OS X or Linux been laid out even in sort of, you know, vague outline form?
[00:07:34] Unknown:
Yeah. Actually, if you go to our GitHub page at github.com/Microsoft/Pyjion, there's actually an issue there to port our build file, which is a Visual Studio project file, to CMake, which is the same build system that the CoreCLR JIT that we use uses as well. So it's literally just been laziness on our part of just not learning CMake to port the build file over and then verifying that our C++ is fully cross platform.
[00:08:05] Unknown:
I gotta be honest with you. Having locked horns with CMake on a couple of occasions, I don't blame you. It's a really powerful tool, but it does not exactly present a pleasantly shallow learning curve.
[00:08:20] Unknown:
Yeah. Exactly. And, I mean, we could always do a custom Makefile just for Linux and OS X or any UNIX platform, but then we've bifurcated between Visual Studio build files and UNIX build files, and that's just too much of a hassle. So the exact reason we haven't gotten around to it yet is just buckling down and doing the work.
[00:08:43] Unknown:
So if any of our listeners out there is a CMake expert, or even a CMake journeyman, and would be willing to pitch in on the project, this is your opportunity.
[00:08:54] Unknown:
So given that this is a Microsoft backed project, it makes sense that the first JIT engine to be implemented is for the CoreCLR. What would an alternative implementation provide, and in what ways can a JIT framework be tuned for particular workloads? And also, I realize we're using the acronym JIT quite a bit. So maybe you could also take a moment to just describe what JIT actually means.
[00:09:17] Unknown:
So a JIT is a just in time compiler, and that is a compiler that gets invoked at run time, typically within your process, and takes some intermediate representation and turns it into native code and then turns around and executes that immediately. You can contrast that with your normal program, which, you know, would be compiled ahead of time, and you have native code and you run that binary, and there's no extra translation steps. So we chose CoreCLR mainly because I, long, long ago, was actually on the CLR team and kind of knew how the CLR JIT worked and thought it would be a good thing that we could come and just drop in place. And Microsoft had recently open sourced it, so that made it just super ideal. We've already talked about 1 other JIT engine that we could use, which is Chakra Core.
And so it's easy to kind of contrast some of the big differences there, in that the CLR JIT is very much meant for static languages, whereas Chakra Core was designed for JavaScript. And so it has things like it's designed to bail out to an interpreter loop when optimizations are no longer known to be true. It has facilities for pulling out local values of optimized code and various things like that that have been tuned for dynamic languages. So that's kind of 1 axis where things can vary a lot. Another axis might be using a more powerful compiler. So you have things like LLVM, which has tons of optimizations, which typically makes it not very good for a JIT because it takes a long time to compile and go through all those optimizations.
But if you had some code that you knew you wanted to get as optimal as possible, something that's based off of LLVM might be useful. And that's, for example, what Numba, which isn't a generic JIT solution but lets you write Python code and have it be jitted, is using, and that's what Unladen Swallow used in the past. Other things might be just raw compilation throughput. So LuaJIT is known to be a very efficient JIT at producing code and doing so quickly. And so that might be yet another axis that people could look at.
And then, you know, there's also, you could imagine some sort of meta JIT which tried to pick which 1 was right and use any of the various JIT frameworks. So lots of spaces to play around in.
[00:12:02] Unknown:
It also occurred to me as I was hearing you describe these various axes, CoreCLR has been around for a long time at this point, hasn't it? I mean, .NET has been under active development for, it's gotta be, at least a decade now. Right? Yeah.
[00:12:19] Unknown:
Maybe 20 years. I joined Microsoft almost 15 years ago, and I joined the CLR team. And they were close to shipping and had been under development for several years at that point.
[00:12:35] Unknown:
Now I feel old, because I remember when that whole Sun, Microsoft, Java war, which led, you know, directly or indirectly to the development of .NET. So that has to be an advantage when you're talking about using the CLR as the basis for something like this. You know, it kinda reminds me of the HotSpot JIT in the Java world. It's like, you get these code bases that have been lovingly optimized and evolved for decades, and you end up with these really incredibly performant tools. It's gotta be great to be able to leverage all that.
[00:13:15] Unknown:
Yes. It is so awesome that it's open source now and that we can just pick it up. And, like, 1 of the big things I really didn't wanna do is go off and write my own code generator. Right? Like, that is just going to be a gigantic rabbit hole, and it's not going to be that great. And, you know, it's probably gonna be buggy. And there's now a choice of JITs from Microsoft alone, let alone all the other JITs in the world that are also open source. And so there's a lot of possibilities here, and so that is 1 of the inspiring things.
[00:13:51] Unknown:
So you've listed off quite a number of different JIT implementations. Are there any aspects of those that would either lend themselves particularly well to running against Python, or perhaps make them less ideal for running against Python? And do you think that there is room for somebody to plug in a different engine that is more well suited to Python, perhaps even, you know, something built with RPython or PyPy itself?
[00:14:17] Unknown:
So actually, there are basically 3 goals we have for Pyjion, in descending order of importance. The most important 1 is this C API we've designed for CPython that we're hoping to get upstreamed if we can convince the Python development team to go along with it. And then, because that's the key goal, basically, using the CoreCLR JIT is more of a proof of concept and an idea. So we've designed the C API that we're using from the beginning to be fully pluggable. Right? So as Dino said, JavaScript has had a lot of work, and it's an extremely dynamic language and much closer to operating the way Python works, in terms of being able to toss an attribute on a random object, compared to, like, C# or Java, which is what the HotSpot VM or CoreCLR typically targets.
So, honestly, we don't know what the best JIT is gonna be, and that's 1 of the reasons we've made sure this is designed in such a way that anyone who wants to experiment and try that out and put the time and effort in has the opportunity to actually make this all work. So we don't know, and that's kind of the exciting bit, and why we're hoping people, if we can get this C API put in upstream, can give it a shot and see what comes out of it.
[00:15:36] Unknown:
And what kinds of use cases and problem domains that were previously impractical or even intractable do you think will be enabled by having this new capability with the CPython implementation?
[00:15:48] Unknown:
I think that really remains to be seen. At this point, we have barely scratched the surface of doing any optimizations. My hope is that the big use case is that people don't have to go to C like they think they do sometimes now, or how they simply sometimes have to do that today. But beyond that, I think first we need to prove that this is actually going to work, and then we can actually start seeing where it helps and where it doesn't.
[00:16:21] Unknown:
I'm assuming that the answer to this is no. But would having your code compiled via a JIT provide a way of sidestepping the GIL in some respects?
[00:16:30] Unknown:
No. Okay. That's what I assumed. Yeah. Well, the issue is, it kind of honestly depends on what kind of compatibility you want, but we designed this from the beginning such that the C API from CPython is in no way a hindrance. It's just part of the system. So that immediately requires that the GIL be operational and in effect, because otherwise you're gonna end up with reference counting issues, and you're going to leak memory or,
[00:16:58] Unknown:
free memory that's not ready to be freed, and you're just gonna have massive issues. Right. So this has absolutely nothing to do with the GIL. Yeah. And that would just introduce a whole slew of new security concerns. And Python being well known for having such a well maintained and well security audited code base, that would definitely be a pretty large black mark that nobody wants to introduce.
[00:17:21] Unknown:
Yeah. I mean, we can have a discussion about the GIL if you want, but trying to get rid of it is its whole own problem that we don't even wanna attempt to tackle on top of trying to prove that a JIT plugin system for CPython is tractable.
[00:17:37] Unknown:
Yeah. We actually spoke with Trent Nelson, I believe his name is, about his work on PyParallel in another episode. So that was 1 approach of trying to sidestep the GIL. So we'll cut this line of inquiry short and move on with the other questions.
[00:17:52] Unknown:
Does Microsoft's recent acquisition of Xamarin and the Mono project change things for the Pyjion project at all? I don't think it really does. I think even in the .NET space, what you see happening with CoreCLR is that Microsoft is taking its CLR implementation and making it available elsewhere. And I think the places where Mono really adds value in the CLR space are things like its ahead-of-time compilation, although the CLR has something for that now, although it's, you know, obviously much younger than Mono's implementation is.
And we're really focused on the JIT space, and I think CoreCLR generally offers a better JIT than what Mono does. So for that reason, I don't see that being a huge delta.
[00:18:44] Unknown:
And what are the challenges associated with your work on Pyjion? Are there certain aspects of the Python language and the CPython implementation that make the work more difficult than it might be otherwise?
[00:18:54] Unknown:
Yeah. You gotta be able to stay calm while you answer this question, Dino.
[00:18:59] Unknown:
So when I started this, you know, my kind of thought process was, you know, going to IL, which is what the JIT consumes, would be a fairly easy process, because CPython's bytecode is based around a stack based VM and so is IL. So, you know, just move 1 over and translate, and we'll be done. It turns out that CPython has a lot fewer restrictions on its bytecode than what the CLR has on IL. And a couple of those are, the CLR doesn't let you transition to the same bytecode with varying stack depth. That's because it needs to figure out, like, what register each value of the stack is going to correspond with so it can generate code. There's no real stack there. Whereas Python has an array, which is its stack, and it's just pushing things on and popping things off. And so it can pop off a value and be like, what's this value? And then it can be like, okay. Because this value was whatever, I need to pop off 2 more values now.
And another thing that the CLR doesn't like is significant flow control with values still on the stack. So you can't return from a function with values still on the stack. So these little differences have actually proven to be pretty challenging in coming up with a way that we can actually transform the bytecode into IL. And then exception handling has just been super interesting as well.
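For reference, the stack machine Dino describes is easy to see with the standard library's dis module. A minimal illustration, not part of Pyjion; the opcode names mentioned in the comments are from the Python 3.5 era:

```python
# Inspecting CPython's stack-based bytecode with the dis module. Each
# opcode pushes or pops the evaluation stack; this is the representation
# Pyjion has to translate into CLR IL.
import dis

def example(a, b):
    return a + b * 2

dis.dis(example)
# On Python 3.5 this prints LOAD_FAST a, LOAD_FAST b, LOAD_CONST 2, then
# BINARY_MULTIPLY and BINARY_ADD, each popping two values and pushing one
# result back onto the stack.
```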
[00:20:45] Unknown:
That's a nice way of putting it.
[00:20:49] Unknown:
You know, the CLR obviously has its own exception handling mechanisms, but we can't really use those. And the big reason why we can't use them actually goes back to reference counting, in that we can't just throw an exception and unwind the stack and not decrement the ref counts on all of those objects that are sitting on the stack still. And so that works great in a garbage collected, non reference counted environment. But in a reference counted environment, all those decrefs have to happen. So luckily, the test cases are very good for exception handling, and so we were able to beat it into submission.
But, getting all the nuances there correctly, getting them all working correctly has been,
[00:21:34] Unknown:
a long bug tale. But we did solve them in the end, which is the important thing. Yeah.
[00:21:39] Unknown:
So when I think of Microsoft and programming languages, I generally think of C++ and C#. Did your team have to go through an approval process in order to utilize Python, and further to open source your work on Pyjion?
[00:21:51] Unknown:
So I've been doing Python stuff at Microsoft for the past decade, basically. So dealing with LCA and Python,
[00:22:00] Unknown:
it used to be difficult. I should mention, by the way, real quick: LCA is basically the legal department at Microsoft. Okay. Thank you.
[00:22:09] Unknown:
Like, initially, they were fine with us using it. Right? And with IronPython, they were fine with us distributing IronPython, but initially they didn't even want us distributing the CPython standard library along with IronPython. So IronPython was Python without the batteries included, and it has been a long time. But finally, you know, and especially in the past couple of years, you really see Microsoft coming around. Getting Pyjion out was really, really easy. It was a couple weeks before we were ready to announce. We looped in our legal contact. They looked over the code. They're like, yeah, go for it. And so it's night and day from when I first started. And now, I don't think they blink at the language
[00:22:57] Unknown:
really at all. Yeah. And having just joined Microsoft coming from Google, I mean, the approval process is no worse or different. And honestly, I found Microsoft easier for doing open source moonlighting work in my spare time than how I found it at Google. So people's old perspective of Microsoft just does not hold anymore in terms of the new, current Microsoft and its view of how to deal with open source. It must feel really great for those of you folks who've been there for a long time to be able to see that
[00:23:29] Unknown:
that culture change happening because, I mean, developers are developers. Right? We all wanna build cool things, and we all wanna share them with everybody. And so it's gotta be nice to have Microsoft embrace this idea that if it's not a core competency and it can generate some goodwill and perhaps get more eyeballs and more mindshare on Microsoft products, then open sourcing is a perfectly viable approach. It must feel awfully good.
[00:23:56] Unknown:
It certainly does. Especially after, you know, 10 years ago, it was so different. And I've been doing open source at Microsoft this entire time. And so for me, I'm I'm really excited about it.
[00:24:09] Unknown:
So how does Pyjion hook into the CPython runtime, and what kinds of primitives does it expose to JIT engines for them to be able to work with? If you look at our repository, we actually have a patches directory that's literally just a set of diffs that we apply to CPython
[00:24:23] Unknown:
3.5.1. And essentially, what we've done is we've extended code objects, which represent code anywhere that gets executed. So even modules, for instance, are code objects, but mainly most people think functions and methods. And we added a couple attributes on there. The key 1 is a pointer to a struct that we created to store JIT information: mainly a C function that can be set that acts as a trampoline, and kind of a scratch space, which is just a void pointer. And what happens is, in the eval loop of the interpreter, when you come across a code object, we check if it's been executed enough to warrant compilation. Although, currently for compatibility testing, all code is considered hot and ready to go to be compiled.
We just check to see if a JIT compiler function has been set, and if it has, we pass the code object to it and then get back the appropriate struct to store into the code object. And then when execution starts for that code object in the current frame, we pass in the frame, and basically that includes the code object and the information that was stored by the JIT compiler in there. So more or less, basically, JITs compile against code objects such that they can execute against a frame object. And that's basically it. It's honestly not terribly complicated from an API perspective. The complication obviously is just being able to take Python bytecode and know how to emit the proper intermediate representation or language to actually make the semantics
[00:26:00] Unknown:
fall out. And so when you actually do execute the compilation, does that then substitute the bytecode code path with a compiled C module?
[00:26:11] Unknown:
So, it's basically, if you look in Python in the ceval.c file, which has the eval loop, there's basically a function, I believe PyEval_EvalFrameEx is the technical name, that the loop calls to start the execution of every frame object, basically every function call you make. And in there, we literally just have an if statement that just says, alright, for this frame, get the code object for the frame that we're gonna be executing. And does this code object have the struct that we created to store the JIT information, which is the function with the trampoline and its scratch space and the void pointer? And if it's set, then just call that instead of going into the actual eval loop.
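As a rough sketch of that dispatch, here is a pure-Python model of the idea; the real mechanism is C code patched into ceval.c, and every name below is illustrative rather than Pyjion's actual identifiers:

```python
# Pure-Python model of the frame-evaluation hook described above.
# In Pyjion the equivalent lives in C inside PyEval_EvalFrameEx, and the
# "struct" (trampoline plus scratch space) hangs off the code object.

HOT_THRESHOLD = 0   # all code is currently treated as hot for testing

jit_compile = None  # the pluggable JIT compiler function; None = no JIT
jit_cache = {}      # code object -> compiled entry point
call_counts = {}    # code object -> times entered

def eval_frame(func, *args):
    """Stand-in for the eval loop's per-frame entry point."""
    code = func.__code__
    call_counts[code] = call_counts.get(code, 0) + 1
    if (jit_compile is not None and code not in jit_cache
            and call_counts[code] > HOT_THRESHOLD):
        jit_cache[code] = jit_compile(func)  # hand the code to the JIT
    entry = jit_cache.get(code)
    if entry is not None:
        return entry(*args)   # compiled path
    return func(*args)        # fall back to ordinary interpretation

# Demo "JIT" that just wraps the function, standing in for real codegen.
jit_compile = lambda f: (lambda *a: f(*a))

def add(a, b):
    return a + b

print(eval_frame(add, 1, 2))  # 3, via the "compiled" entry point
```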
[00:26:54] Unknown:
1 question that I actually asked Maciej Fijalkowski of the RPython and PyPy projects when we were speaking with him: I was curious whether it was possible to save the results of JIT compilations between code executions. And in his particular case, it gets cleared out when the program ceases executing. With Pyjion, and I'm assuming it's probably somewhat specific to the JIT engine being used, is it actually possible to cache or somehow store the compiled code so that the next time the program gets run, you can skip the runtime compilation and just leverage the work that's already been done?
[00:27:35] Unknown:
We don't currently do that. In theory, it would be possible, just because CoreCLR has this NGen feature for precompiling code. There'd be a bunch of work of kind of doing some saving and then doing the precompilation stage and writing that all out to a module somewhere. So it would be nontrivial to go off and implement, but theoretically it's not impossible.
[00:28:00] Unknown:
And would it be a worthwhile endeavor to actually do that, or do you think it would just be a lot of work without much practical gain?
[00:28:09] Unknown:
It may be worthwhile. Back on IronPython, 1 of our biggest challenges was always startup time. And a lot of our startup time was actually dominated by time spent in the JIT. And so 1 way we got around that was using .NET's NGen feature, where we could save all the IL for your Python code, and we could NGen all of that and have everything be executable code ready to go. And that helped with our startup time a lot. I don't expect it to be as much of a problem here, because Python already starts up so fast, because the eval loop is very tuned, and Python's been optimized and optimized and optimized.
And so our main goal is to really fall back on the eval loop and let that be used for fast startup, and then, hopefully at some point, we can start jitting in the background even. And so it doesn't even really affect your performance at all. I think that might start to get interesting if someone had something that they wanted to be both high performance and fast startup. And maybe we'll cross that point, but it's probably a long ways out.
[00:29:25] Unknown:
Another part of the project that you guys have worked on, and I believe is already ready to use, is an actual C++ framework for making it easier for other people to plug in their own JIT implementation. So I'm just curious what kinds of capabilities that will allow for, and how that will ease the implementation for other people who wanna do that? So,
[00:29:49] Unknown:
in general, that starts to change the bytecode into higher level concepts, and there's a lot of higher level concepts. There's a lot of helper calls that happen. But figuring out, for example, exactly how you write that tricky exception handling code, that can be figured out once, and then you can deal with actually turning that into the raw native code that needs to be created, instead of figuring out exception handling again. We also hope to start doing type inference. We do a little bit of type inference now, but we want to start applying that more deeply, and then doing type specialization.
And so, hopefully, JIT implementers won't have to think about that stuff so much and can instead focus on how they emit the code for adding 2 integers.
[00:30:43] Unknown:
And would the typing module in Python 3.5, as long as it was leveraged appropriately, provide any advantages or benefits to a JIT implementation, to maybe simplify the work that needs to be done there?
[00:30:58] Unknown:
It's theoretically possible. We just personally haven't done it yet. That kind of information is stored at the function object level, not the code object level. So, technically, the JIT, the way the API is structured, does not have access to that information. But it honestly would not be difficult to have a decorator, say, much like Numba's, that was at the C level, through, like, a C extension, able to grab the type hints and type annotations off the functions and methods and then burrow that into the code object for access by the JIT. And then the JIT could potentially take that information and use it for its own needs for doing JIT compilation. So theoretically possible, but nothing yet. And there is a question of whether or not people
[00:31:47] Unknown:
would really want to see that happen, in that you would then have code that could potentially execute under the JIT differently than how it executes in CPython normally, just because those type annotations are usually not enforced at all at runtime.
[00:32:03] Unknown:
Right. And then there's also the risk that the type annotations are incorrect or not as complete as they could potentially be and might actually end up being more harm than good. Yep. Exactly.
[00:32:15] Unknown:
Yeah. This is why anytime we've discussed this, it's been, yeah, this might be interesting. But if we did it, it'd be a decorator, much as I said, like Numba's, where it's basically an opt in feature where you basically promise. And if you crash your system, that's your own fault. Right.
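A minimal sketch of that opt-in idea, with entirely hypothetical names; no such decorator ships with Pyjion:

```python
# Hypothetical decorator that copies a function's type annotations to a
# place a JIT could find them. A real version would burrow the hints into
# the code object from C; stashing them on the function is just for show.
def trust_annotations(func):
    func.__jit_type_hints__ = dict(func.__annotations__)
    return func

@trust_annotations
def scale(x: int, factor: int) -> int:
    return x * factor

print(scale.__jit_type_hints__)
# {'x': <class 'int'>, 'factor': <class 'int'>, 'return': <class 'int'>}
```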
[00:32:32] Unknown:
So for somebody who's actually using Python with the Pyjion project and a JIT implementation plugged in, would the entire project be run through the JIT engine during runtime, or is it possible to target a subset of the code being executed?
[00:32:48] Unknown:
Ideally, we'd only target the code that really matters. Before, when Brett was talking about how this plugs in, he mentioned the fact that we have a counter and we currently JIT every single method. Once we fall back to the counter mechanism, we'll only JIT methods that are hot. And that way, if you have something that just runs during startup once or whatever, it's never gonna see the JIT at all. And that will need a bunch of tuning and smarts, and, hopefully, over time, we can get that to the ideal spot where we're jitting just the stuff that matters.
[00:33:26] Unknown:
Would that be an exposed parameter that somebody can tweak to specify what that threshold is for how many times through a loop the code needs to execute before you actually kick in the JIT?
[00:33:38] Unknown:
Absolutely. It could be exposed, and it probably will be as 1 of the -X command line options. And, you know, that is a bit of a heavy hammer. So you could also even imagine just having some sort of jit module that you can import that has some decorators, say jit.always_compile or jit.never_compile and things like that. And if people can agree on some common sort of annotations that they want across several different JITs, inlining is frequently something people care about and wanna tune, you could imagine some standard module that lets people opt in at the method level.
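Such a module might look something like this; the names are hypothetical and nothing like it exists today, it only sketches the opt-in idea being described:

```python
# Hypothetical jit policy decorators: mark functions a JIT should always
# or never compile, leaving everything else to the hot-code counter.
def always_compile(func):
    func.__jit_policy__ = "always"  # marker a JIT plugin could inspect
    return func

def never_compile(func):
    func.__jit_policy__ = "never"
    return func

@always_compile
def hot_inner_loop(data):
    return sum(x * x for x in data)

@never_compile
def run_once_at_startup():
    print("configuration loaded")
```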
In what ways can a JIT compiler implementation be purpose built for a given workload, and how would somebody go about creating 1? I think the biggest example of that is if you look at Numba, where you have a JIT compiler which is very much designed for doing numerical code and generating that and making that efficient. Going out and creating 1, that is a lot of work, no matter what you're trying to do. So I don't know that there's an easy how-to guide. Yeah. I mean, there's a reason why there aren't very many JITs floating around in the world, and
[00:34:53] Unknown:
those that are are usually because some corporation has decided it's really necessary to exist, such as all the JavaScript VMs and that whole speed war that the browsers have gone through. But to answer Tobias's question about specificity, and to really kind of hammer on how Numba does it: for instance, Numba has support for GPUs. So they're able, because they focus on numerics so much, to actually JIT their code using LLVM down to the GPU. And so they're able to get very specific performance, and great performance increases, in their specific use cases. But once again, they're relying on LLVM, because once again, JITs are not necessarily an easy thing to do. Because if you really think about it, what you're trying to do is write a compiler, more or less. Right? Because what these things do in the end is emit assembly for the CPU to execute directly. So you're basically writing a compiler.
[00:35:46] Unknown:
So could a JIT plugin be designed with different trade offs, like no C API compatibility, but that worked around the GIL to provide real parallelization in Python?
[00:35:57] Unknown:
So if you look at something like PyPy, I think the answer in some ways is obviously yes. I mean, PyPy in many ways is a JIT. But in order to really break C API compatibility, you really need a lot of other things changing than just the JIT. Working around the GIL, I think Numba's GPU based math is going to probably be a good example of that, in that GPUs run things in parallel. As long as you know kind of that your inputs aren't random Python objects that you have to interact with while holding the GIL, you could release the GIL and do whatever you want with whatever your inputs are. Like, if it was just a bunch of numbers, you could have something that ran without the GIL, did all the math without the GIL held, and then returned back to Python and produced the actual number object with the GIL held again. So there is some possibility there, but I think you're quickly going down the path of creating a whole new runtime, or you would have to do a bunch of deep understanding of types in order to understand when you can release the GIL. And there's the question of, are you actually doing enough work where it makes sense to release the GIL?
So it's complicated, but certainly possible.
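The GIL-releasing numeric kernel being described is roughly what Numba's nogil flag does today (assuming Numba and NumPy are installed); whether it pays off depends on doing enough work per call:

```python
# Numba kernel compiled in nopython mode with the GIL released. No Python
# objects are touched inside the compiled loop, so several threads can run
# it concurrently; the result is boxed back into a Python float on return.
import numpy as np
from numba import jit

@jit(nopython=True, nogil=True)
def total(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

print(total(np.arange(1000000, dtype=np.float64)))
```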
[00:37:22] Unknown:
I guess my reason for asking is, a number of episodes ago, we had a guest on, Jessica, and I can't remember her last name. Tobias, you might know. McKellar. McKellar, thank you so much, who is a former director of the Python Software Foundation, and she talked about a lot of work that they're doing to make sure that Python is not just a popular programming language, you know, for education and other uses today, but for 20 years from now. And I just wonder, like, you know, is the GIL going to hold Python back in that capacity? Because I see so much happening now with languages like Go that have some form of parallelization built in out of the box.
I kinda feel like I understand why Python was designed the way that it was. But I also can't help but feel that, you know, maybe there's nothing stopping Python, the language, as opposed to Python, the implementation from having really good parallelization as well.
[00:38:23] Unknown:
Well, another language that you could look at for an example of something that might be more feasible for the current way that Python is run is Erlang and the actor model, where rather than relying on a threaded model, you rely more on message passing and use that as your method for scaling out. And JavaScript is an example of another. You know, you don't have threads in JavaScript. Right? Right. When you do have any parallelization,
[00:38:47] Unknown:
it is done through more of a message passing way. Right. Yeah. So
[00:38:52] Unknown:
having, at least for me, been a core dev for 13 years, like, this question's come up a lot. And, I mean, there's a couple things to answer what Chris said. So for instance, Go: the Go creators really like to harp on concurrency versus parallelism, and that Go is concurrent and not necessarily parallel. For instance, for the longest time, the default setting on Go was to use a single CPU and not actually parallelize anything using threads. So that has not always necessarily been a thing for Go. The thing with the GIL is it only is an issue if you are CPU bound. If you're IO bound, it really shouldn't matter. Right? Everything that does networking is gonna release it properly. And if you're doing something like using the concurrent.futures module, it'll take care of that for you perfectly fine. If you are using Python 3.5 or even 3.4, you can use any of the new asynchronous event loops we have, and it's dead simple now. Well, not dead simple, but much simpler to write asynchronous networking code.
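For instance, the concurrent.futures pattern mentioned here looks like this; the URLs are just placeholders:

```python
# IO-bound work on a thread pool: the GIL is released while each thread
# waits on the network, so the downloads overlap despite the lock.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = ["https://www.python.org", "https://pypi.org"]  # example URLs

def fetch(url):
    with urlopen(url) as resp:
        return url, len(resp.read())

with ThreadPoolExecutor(max_workers=4) as pool:
    for url, size in pool.map(fetch, URLS):
        print(url, size, "bytes")
```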
Once again, IO bound stuff is not difficult to deal with. It's purely the CPU bound stuff, which some people obviously have, but some people don't, honestly. And you have to be running for a decent amount of time for that to really kick in. The other thing that really becomes an issue with all this is the GIL is baked into the C API. So the only way, honestly, at least for CPython, to get rid of it is to break the C API, more or less, which would break every C extension module out there. So the Python community, if they want to see CPython get rid of the GIL, would have to make a decision to no longer use the C API as is. Right. We would either have to design a new API that worked, or, honestly, some people have talked about completely not even having a C API and only having a foreign function interface, as most languages do.
So it really comes down to what kind of pains people are willing to put up with in a transition. And we've done the Python 2 to 3 transition, and I don't know if the community really wants to go through something similar with C extensions. And I should also mention, by the way, PyPy has a GIL itself. So it's not like the fastest Python implementation out there doesn't also have a GIL. So I sometimes think the GIL is kind of a bugaboo for why isn't Python as fast as anything else out there. Sometimes it's algorithms, sometimes it is an issue, but I wonder sometimes how much it's truly an issue versus
[00:41:26] Unknown:
just something people like to point at and say, this is holding me up, without fully understanding why. I think you may very well be right. And first of all, I just wanted to clarify that what I actually meant here is concurrency, not true parallelism, because you're right, Go for a long time was concurrent, not truly parallel. And secondly, I think you're entirely correct that this is mostly a mindshare issue. Right? Like, what I was addressing was I'm seeing people choosing, rightly or wrongly, languages like Go because they perceive that Go has better support for concurrency out of the box than Python does.
And so maybe the answer is we already have rockin' concurrency support.
[00:42:10] Unknown:
We just have to market it better. Do you know what I'm saying? Oh, yes. And, I mean, Go is the 1 everyone always brings up, because at least a year or 2 ago, it was the 1 that seemed to be stealing people who were saying, oh, I'm not gonna port my code to Python 3. I'm just gonna rewrite my entire stack in Go instead and get some benefit. But for instance, Ben Bangert at Mozilla, who used to be big in the Python community because he helped start Pylons, ended up doing a lot of Go work server side for Mozilla. And they hit all the common issues that people have with Go. For instance, testing is a real pain in the rear in the language, and I speak from experience on that. What ended up happening was Ben decided, okay, why don't we try rewriting this service in Python 2.7 with Twisted?
And in fact, he actually used PyPy with Twisted, and he did it. And what happened was he actually got less memory usage than Go, because Go actually had more memory overhead per goroutine than Python did with Twisted. And then performance wise, they were totally able to meet their performance needs with just CPython 2.7, but with PyPy they were completely way ahead of any perceived or planned capacity needs. So they actually found out that while Go might technically be faster, it was completely unnecessary, and they were able to completely blow past, like, test coverage, rewrite the whole thing in 2 weeks, and make the whole thing much more productive in terms of management and maintenance than they ever had with Go. So once again, it's 1 of these things where we all love fast, which is why Dino started this project. Right? It's trying to give people a way to get faster, but the key thing here has always been Python's other strengths, such as ease of use and ease of learning and ease of maintenance and everything else, and having to balance that with what the potential real or perceived performance issues actually are. So we're doing our best to try to deal with that boogeyman of, let's see if we can make it so you can plug in any JIT that people have adapted to use with CPython, assuming we can get this in, of course.
But, honestly, I wanna get to the point where people just stop bringing up the GIL as a thing and just go, is Python fast enough for my use case, period? Don't worry about the technology behind it. Just can I make it work with the tools I have?
[00:44:30] Unknown:
Yeah. I think that's definitely sort of a good mindset change that we should be working to get the community into, because I think simply the way people are communicating about things is hurting the language a little bit. And just also, another thing you mentioned, sort of the testability and performance. I think another thing that, in addition to ease of learning with regards to Python, another thing it brings to the table is a very high level of abstraction. That's 1 of the things I love about working in Python. I've gone through a number of the Go tutorials a little while ago, but I haven't really written it in production. But 1 thing, and I know Go's adherents love this, Go forces you to think at a much lower level of abstraction than Python does when you're actually coding. And I think for a lot of people, and for, you know, developer velocity in a lot of regards, that's a trade off that needs to be taken into account, and we should be marketing accordingly.
[00:45:26] Unknown:
Yeah. I mean, I think the real key point here is the global interpreter lock is an implementation detail of 1 or 2 implementations of Python. That's it. Like, people aren't coming to us complaining about other implementation details of CPython, which they could potentially. It's just, for some reason, people have really latched onto this 1 implementation detail and just made it the boogeyman of Python performance. Instead of going, like, maybe we should use a register based VM instead of a stack based VM, or we should be caching more, or all this other stuff, such as making it so it's easier to do AST transformations. And by the way, all this stuff I just mentioned, people are considering actually testing out instead of putting all this time and effort into worrying about the GIL, where it would have massive breakage issues with C extensions. So as I think you said, Chris, as a community, I think we kind of need to just stop worrying about the GIL behind the curtain and just worry about the language overall itself, and think of maybe other things that could still get us performance boosts, and not this 1 thing that's extremely difficult to tweak that may or may not even get you a boost. Right? Because, honestly, people have actually tried removing it in the past, and it was much slower. Yes. Yeah. You have to scale up to several cores in order to break even. Right? Like, I believe in previous attempts, it took, like, at least 2, if not 3, cores to see any benefit. And while it's great that we all run on multi core machines, you still are also competing with everything else on these machines that are multi core and trying to use the CPU.
And there's no guarantee you're gonna get scheduled the way you want and all that. So there's no guarantee that even making it fully parallel is even gonna be that great of a benefit.
[00:47:09] Unknown:
And then, too, you're still running into the issue of shared memory space and all of the difficulties that that brings up, whereas going more along the message passing model greatly reduces the shared state that needs to be managed, which simplifies the way that you can think about your problems. It also decreases, and in some cases eliminates, that whole class of bugs. So we should be focusing more on improving our conceptions of message passing among multiple processes, and perhaps decreasing some of the runtime overhead of the multiprocessing module, so that we can scale across multiple CPUs in that regard if we wanted to.
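A minimal example of that message-passing style with the standard library's multiprocessing module:

```python
# Each worker is a separate process with its own interpreter (and its own
# GIL); inputs and results travel over queues rather than shared state.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        print(pool.map(square, range(10)))
```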
[00:47:48] Unknown:
Yeah. I mean, or making, like, multi interpreter, single process work better or something. I mean, basically, if someone wants to get rid of the GIL, they have to not only make it work in Python, but then also be willing to go out and convince the entire community that they need to scrap most of their C code. So that's why it's not just been done at the drop of a hat.
[00:48:11] Unknown:
And, also not to dwell too long on this, but there have also been a number of performance improvements in most of the recent releases of the Python 3 series in various areas. I know that there were some drastic speed improvements in some of the standard library algorithms. The 1 that's coming to mind most readily is walking the directory tree and the algorithm that's used for that. I think that's something like a 20 or 30% speed improvement. So for those particular use cases, you can definitely see some movement happening. But anyway, so speaking of performance a bit and,
[00:48:47] Unknown:
Honestly, to touch on that real quick, a lot of work's actually going into Python 3.6 that I don't think people are aware of yet, because it hasn't landed in the Mercurial repository. But for instance, Victor Stinner of Red Hat, as part of his OpenStack work, has 3 PEPs open right now, all doing different things to try to improve performance. Yury Selivanov has taken 1 of Victor's PEPs and added some caching at the bytecode level, and it's seen 5 to 10%, in some cases up to 20%, speed improvements. So once again, it's 1 of these things where performance has also continued to increase, and it's worth, as always, checking out the latest version of Python to see if your workload is actually improved by some change we've made.
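The directory-walking speedup Tobias mentioned a moment ago comes from os.scandir (PEP 471), which landed in Python 3.5 and is what os.walk now uses internally:

```python
# os.scandir yields DirEntry objects whose type information is cached from
# the directory read, avoiding an extra stat() call per entry on most
# platforms; that is where the os.walk speedup comes from.
import os

def count_files(path):
    files = 0
    for entry in os.scandir(path):
        if entry.is_file():
            files += 1
    return files

print(count_files("."))
```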
[00:49:33] Unknown:
And, bringing it back around to the topic at hand, what kinds of performance improvements have you seen in your experimentation with the Pyjion and CoreCLR implementation?
[00:49:45] Unknown:
Not very many so far. Right now, startup time is way, way worse. We have a couple benchmarks where we get small improvements, a bunch where we are the same, and some where we are worse off. But we haven't really even started doing much tuning on it yet whatsoever. We just finished up going through the entire test suite and making sure that we can run and pass all the tests, modulo a few exceptions that we're going to address at some point in the future. But we wanted to make sure that the JIT was working properly before we really moved on and started conquering performance. So from here on out, we're going to start looking at some performance improvements and see if we can get some real world gains for at least some test cases.
[00:50:43] Unknown:
And do you guys have a target release of Python that you're hoping to have Pyjion incorporated into?
[00:50:50] Unknown:
So we have a talk at PyCon US this year on Pyjion that we'll be presenting. Our hope is to have a Python enhancement proposal written and ready to be presented to the Python development team at the language summit at PyCon this year, to get a yes slash maybe slash no feel from them of whether the API changes we wanna make have any chance of either going in, going in provisionally, or no chance in hell, don't bother. And then, based on that, it's gonna kind of depend on where it goes. But since we already have a working implementation and have the patch ready to go, it could very easily end up going into Python 3.6.
[00:51:32] Unknown:
That would definitely be a pretty exciting step forward. And with the other PEPs and work that's happening on 3.6 that you mentioned, that sounds to be definitely a release for everybody to keep their eyes on. There's even cool new features coming in too, like the new format string stuff. I don't know if you guys saw that. Yes. I did see that. That actually is 1 that I'm very interested in. And actually, I believe it was you who I heard mentioning recently that that actually provides some additional performance improvements over the standard, either percent formatting or the str.format methods.
[00:52:04] Unknown:
Yeah. For those of you who don't know, Eric Smith, the implementer of str.format, has actually implemented string interpolation where you don't have to call .format anymore. Basically, you use the same syntax, and you can specify variable names in the curly braces, and it will directly substitute them. But it's syntax now, because you use the f prefix, much like the u prefix from Python 2 for marking Unicode. The compiler can notice that you're doing that, and it's able to tease out the string literal, realize where the code is, and it basically becomes kind of a big str.join call with proper calls to the format builtin. And because all that's being done at the bytecode level, it's able to actually operate faster than using str.format or the old modulo, percent way of doing string interpolation.
So once again, another really nice feature coming out in 3.6.
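For reference, the PEP 498 syntax being described:

```python
# Formatted string literals (Python 3.6+): expressions inside the braces
# are evaluated where the literal appears, and the compiler turns the
# pieces into bytecode rather than a runtime str.format call.
name = "Pyjion"
speedup = 0.25
print(f"{name} gave a {speedup:.0%} speedup")  # Pyjion gave a 25% speedup
```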
[00:52:57] Unknown:
So 1 of the most notable benefits of having a JIT implementation for the CPython runtime is the fact that modules with C extensions can be used, such as NumPy. And I'm wondering if that poses any difficulties in the compilation methods used for optimizing the Python portion of the code, or if there's any special handling that needs to be done to make sure that you're not recompiling already compiled code?
[00:53:19] Unknown:
There's no special handling that has to happen. From our perspective, we are just going to invoke the helper function to do a call or to access a member or whatever. And so from that standpoint, we don't have to do anything special. And that sort of fallback code is always going to exist. As we get into optimizing things, we wanna stop going through the helper method to look attributes up, and instead know what attributes are there ahead of time, know what method we're calling ahead of time, things like that. So instead of being a challenge in any way or a difficulty, it will be an opportunity.
And that would be somewhere where we'd probably want to figure out how a JIT can interface with C extensions in a generic way. You know, can a C extension expose an API that says: here's a function that's really only gonna take ints, and I have a version that'll work on 32-bit ints natively, so you can call this instead of calling the version that works on boxed ints, for example. So there are some opportunities there, but we can ignore it as long as we're just focused on optimizing pure Python code.
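Pyjion hasn't defined any such C extension interface yet, but purely as a hypothetical sketch in Python, the contract Dino describes might look something like this (every name and convention below is invented for illustration):

```python
# Hypothetical sketch only -- Pyjion exposes no such API today.
# Idea: a callable advertises a specialized variant for known argument
# types, and a JIT that has proven its operands match can dispatch to
# it instead of the generic boxed-object helper.

def add_boxed(a, b):
    # Generic fallback path: works on arbitrary Python objects.
    return a + b

def add_native(a, b):
    # Stand-in for a C-level version operating on raw 32-bit ints
    # with no boxing overhead.
    return a + b

# Invented registration convention: map a type signature to the
# specialized entry point.
add_boxed.__jit_specializations__ = {(int, int): add_native}

def jit_call(func, *args):
    # A JIT that has traced argument types could consult the table
    # and skip the generic helper entirely.
    table = getattr(func, "__jit_specializations__", {})
    fast = table.get(tuple(type(a) for a in args))
    return (fast or func)(*args)

print(jit_call(add_boxed, 2, 3))      # hits the specialized path
print(jit_call(add_boxed, "a", "b"))  # falls back to the generic path
```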
[00:54:38] Unknown:
Has any thought been given to making Python a first class citizen in Visual Studio Code?
[00:54:43] Unknown:
There has been thought, and we would like to do it. My time is spent on a lot of different things, but most recently I've moved back to Python Tools for Visual Studio, and I've been working there. Some of the work that we're doing there is really in preparation for taking a lot of the services that PTVS has and making sure that we can expose them to VS Code. Because one of the issues there is that we don't get to run C# inside of VS Code, and PTVS is all C# code, but it has a lot of great features around the analysis of your code and things like that. So we are making some progress in preparing for that, but there's still a bunch of work that needs to be done on the VS Code side of things to integrate with whatever we end up with. So we'd love to make it happen. It's just a matter of priorities and timing.
[00:55:43] Unknown:
I totally understand, and I guess you really answered my question. I just discovered VS Code the other day as a result of an episode of the JavaScript Jabber podcast where they showcased it. I don't do a lot of JavaScript development, but with what little I do, I was very impressed with VS Code. I mean, I'm a Mac user, I'm just not a Windows user, and it's so nice. They've done such a great job with the JavaScript integration that I think if it could ever get to the point where VS Code had that level of Python integration and support,
[00:56:20] Unknown:
I think that would give PyCharm a run for its money for sure. Yeah. I actually personally use VS Code now instead of Atom, and I've been loving it. So, as Dino said, it's planned to happen. It's just a matter of time and resources and being able to get around to it. But we definitely want to do it.
[00:56:39] Unknown:
So what areas of the project could use some help from our listeners, other than the CMake contribution that we said would be really nice earlier, with regards to making it cross-platform?
[00:56:50] Unknown:
Well, basically, at this point, because we've more or less reached compatibility, there are some things that could probably stand to be optimized, and that's really where we're focusing now. If you look at our issue tracker, we have a couple of things lined out in terms of simpler tasks, although I don't wanna mislead people into thinking anyone could just walk in off the street and try them. But if you're willing to get into MSIL and learn how the CoreCLR's intermediate language works, there are some things you can do to try to help us optimize the emission of that stuff. And then there's some higher-end stuff. Like, we aren't tracing types at all right now, and the way we've structured our C API, it wouldn't be hard to put a trampoline into the code to actually trace the types through and then, sometime later, use that information to actually do some compilation. So it varies from simple, like making the ROT_TWO or ROT_THREE bytecodes work better, all the way up to seeing if we can actually trace types through and use that. So if people are into optimizations, that's probably where we can currently use the most help.
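To make the type-tracing trampoline idea a bit more concrete, here's a rough Python-level sketch; the real thing would live in Pyjion's C++ layer, and everything here (the names, the threshold, the printout standing in for recompilation) is invented for illustration:

```python
import functools

def tracing_trampoline(func, threshold=100):
    """Count argument-type signatures per call. Once a signature has
    been seen `threshold` times, a real JIT would kick off a compile
    specialized for those types; here we just print instead."""
    seen = {}

    @functools.wraps(func)
    def wrapper(*args):
        sig = tuple(type(a).__name__ for a in args)
        seen[sig] = seen.get(sig, 0) + 1
        if seen[sig] == threshold:
            print(f"would specialize {func.__name__} for {sig}")
        return func(*args)

    return wrapper

@tracing_trampoline
def mul(a, b):
    return a * b

for _ in range(100):
    mul(2, 3)  # 100th call: "would specialize mul for ('int', 'int')"
```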
[00:57:50] Unknown:
And another area might just be trying it out. We don't make that very easy right now because we don't have binaries that you can download. But, you know, if you have some workload and are willing to build it yourself, then you could grab the source, build it, run it, and report back any issues.
[00:58:09] Unknown:
See, that's why we need some enterprising listener to contribute that CMake file, so that people like me who don't run on Windows boxes can help with that. I'm partially blind, and until Windows 10, Windows meant eyestrain for me, so that matters a lot to me. Hopefully somebody, or you guys, will contribute that CMake file at some point so more of us can dive in and give it a try.
[00:58:35] Unknown:
Is there anything that we didn't ask that you think we should have or anything else that you wanna bring up before we move on? I think that was pretty good. Yeah. No. I don't have anything specific. Okay. So for anybody who wants to keep in touch with you guys and follow what you're up to, what would be the best way for them to do that? Dino, how about you first?
[00:58:53] Unknown:
You can see what I'm doing on GitHub all the time. You know, there's our Pyjion repository, there's the PTVS repository. In theory, I am on Twitter and have a blog, but I never post anything. Okay.
[00:59:09] Unknown:
And, Brett, how about you?
[00:59:11] Unknown:
Yeah. So, unlike Dino, I am on Twitter and somewhat active. You can find me at brettsky, b-r-e-t-t-s-k-y. I also blog at snarky.ca. I should also mention that our team has actually started a Python Engineering at Microsoft blog, so if you just Google "Microsoft Python engineering blog", you should come across it. We're starting to try to post something new about every week or two, and that's probably another good way to keep up with what we're all doing. Excellent. Such as the recent blog post about PyPI releases being more favorable to Python 3 than Python 2 starting in May this year, in case you guys didn't see that post.
[01:00:01] Unknown:
No, I did not. I'll have to take a look. Yeah, it's pretty exciting. Alright, so with that, we'll move on to the picks. For my first pick today, I'm going to choose the Logitech Wave keyboard and mouse combo. I started a new job recently, and at first I just had a standard-issue flat keyboard, and it was not very much fun to use, particularly since I've gotten used to the Logitech one. The Logitech Wave has a very nice contoured shape that makes it much more comfortable for typing. I've been using it for a while now, and it's been great. I definitely recommend checking it out for anybody who has an uncomfortable keyboard.
My next two picks are tools that I've been able to leverage more recently with my new job. SaltStack is a very advanced, flexible, and versatile piece of technology. We actually had Thomas Hatch on as, I believe, our first interview for this podcast. It was. Go back and take a listen to that. It's just amazing what you can do with it if you approach it with a proper architectural design sense. It's essentially a way to make your entire infrastructure event-driven, reactive, and fully automated, if you have the vision to see it through.
Along with that, I've been using the testinfra library, which is a pytest plugin that provides some primitives and makes it easy to run unit tests against your infrastructure. I've been using it in my development cycles for developing SaltStack formulas in Vagrant machines, to make sure that what I'm writing and executing is actually producing the outputs I expect. I have a cookiecutter for that up on GitHub; if you go to github.com/mitodl/saltstackformulacookiecutter, I think is what it is, you can check it out there. I plan to write a blog post about it at some point, but I'm still working on finding the time for that. So with that, I will hand it to you, Chris. Thanks, Tobias.
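For a flavor of what testinfra tests look like, here's a minimal sketch; the package and service names below are placeholders, not anything from the episode:

```python
# test_minion.py -- testinfra is a pytest plugin, so plain test
# functions receive a `host` fixture for inspecting the target
# machine. "salt-minion" here is just a placeholder name.

def test_salt_minion_installed(host):
    assert host.package("salt-minion").is_installed

def test_salt_minion_running_and_enabled(host):
    service = host.service("salt-minion")
    assert service.is_running
    assert service.is_enabled
```

Pointed at a Vagrant box, something like `py.test --connection=ssh --hosts=default test_minion.py` should exercise the real machine, assuming the box is reachable over SSH (for example via an exported vagrant ssh-config).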
[01:02:20] Unknown:
My first pick is an iOS app. I hope they do an Android version, because a lot of Android people have been asking for it, but there isn't one yet. It's called Anchor, and the tagline is "public radio for the people". As I mentioned a couple of minutes ago, I'm somewhat visually impaired, and so this app has been really, really awesome for me. It's a new kind of social network that is strictly audio. You literally have conversations with other people: you post what they call waves, and you can use keywords and hashtags and the like to signal what's in them, and then people can post replies. So you end up with these really interesting, topical yet somewhat meandering conversations, actually had using people's real voices. For me, anyway, that lends a whole different mindset to social media interaction. It is fundamentally different, at least to me, and others have said this as well, when you can actually hear another human being's voice, which conveys so much depth and so many details about them, on the other end of the Internet connection, as opposed to just seeing text on a screen. I've been having great fun with it, and I recommend it highly. It's great.
My next pick is another sci-fi series, called The Magicians. It's not quite as high-rent in terms of effects and writing as the other one I picked recently, but it's very entertaining, and if you like magic and fantasy, that kind of fiction, you might enjoy this. My final pick is an episode of a YouTube show that I really like, Game/Show from PBS Digital Studios. It's a recent episode called "Portal Is A Feminist Masterpiece", and it's a discussion of how the Portal games from Valve were written by an ardent feminist and are in fact feminist masterpieces.
It covers really interesting things, like how your character in Portal is actually GLaDOS's daughter, and GLaDOS is actually a really driven woman scientist who ended up imprisoned in this AI. It was just a really interesting analysis, really well done, and definitely worth watching. And that's it for me. Brett, what do you have for us for picks?
[01:04:57] Unknown:
So in honor of my wife Andrea, who is quite the tea nut and has gotten me into tea, I've got a couple of tea-themed ones. First of all, I have something called the Breville Tea Maker. It's a little pricey; I think it retails for maybe 250 or 300 dollars in the States (I happen to be in Canada, by the way). But it's basically a fully self-contained tea-making device, and it's pretty awesome. You fill the jug with water, you put your tea in this middle basket that sits on a magnetic vertical rail, and you choose what type of tea you're brewing, whether it's oolong or black or white or whatever. Then you click brew, and because it has a temperature sensor, it heats the water until it gets to exactly the right temperature.
Once it hits that, the magnetic rail lowers the metal basket with the tea down into the water, times how long it brews, raises the basket back out of the water, and then beeps to say that your tea is ready. So you have perfectly brewed tea every single time. There's no more having to pour it and worry about how long it's been sitting, and there's even a keep-warm option. It's fantastic, and I can't recommend it highly enough for anyone who likes loose-leaf tea. To go along with that, my second pick is Bodum mugs. They're glass, but the nice thing is that they're double-walled.
You can't feel the heat from the tea, so when you pour it, you don't have to worry about whether the handle's going to be too hot, because the double wall keeps it completely cool. We actually have to warn people when we hand them one: don't assume the tea is cool just because the mug feels cool, because it insulates so well. So those are the two big tea-themed ones. And then as a third pick, I'm going to recommend a little game I've been playing on my Android that just came out recently, called Alto's Adventure. It was out on iOS originally, and it recently got released on Android on a freemium model, and they've done a really good job. It's basically a little infinite runner where you're a guy in the Alps on a snowboard trying to collect runaway llamas.
It sounds silly, but it's actually a lot of fun. They've done a really good job with the leveling, where they've made the goals just hard enough that you have to try a few times to get them, but not so difficult that there's no way to get past them. And honestly, if you really get stuck, you can choose to skip them. The mechanics are great, the graphics are nice, the music's good, and the progression seems to have been balanced out
[01:07:34] Unknown:
really well. I think I picked that one myself a couple of weeks ago, and it really has surprised me, because I'm terrible at runner games; they're all twitch-coordination based, and I don't have any. But the thing is just so beautiful and so well put together that I find myself still going back and playing it anyway, just because I want to see the landscape go by, and the game design is so nicely done that I really want to try to get to that next section.
[01:08:04] Unknown:
So definitely a really exceptional game. I'm glad it came to Android. Yeah, it honestly kinda feels to me almost like the infinite runner version of Monument Valley in terms of
[01:08:14] Unknown:
Alright. How about you, Dino?
[01:08:16] Unknown:
Alright. I only have one pick. It's a British reality TV show called Come Dine With Me. I caught some episodes when I was in London over Thanksgiving this year, and it is just a hilarious, great fun show. Four contestants come to each other's houses night after night; each night, one of them hosts and prepares a meal. Some of these people have no idea how to cook, and some of them are actually decent cooks. They sit around, they drink some wine or some champagne or some beer, and just hilarious antics ensue every single time. Then at the end of each night, the three who aren't cooking get to rate their counterpart. It's just a lot of fun, and since November I've been watching it. There are 37 seasons, and just about all of them appear to be on YouTube, so you can probably watch this show for the rest of your life.
[01:09:31] Unknown:
Alright, I'll have to take a look at that. So, I definitely appreciate both of you taking time out of your day to come join us and tell us about your work on the Pyjion project. It's been a great conversation and definitely very interesting work, and I look forward to seeing it become part of the standard CPython runtime. So thank you very much. Thank you, Tobias and Chris. Thanks, guys. Alright, have a good night. Bye.
Introduction and Announcements
Upcoming Events and Ticket Giveaway
Interview with Brett Cannon and Dino Viehland
Dino Viehland's Background
Brett Cannon's Background
Inspiration and Goals for the Pyjion Project
Explanation of JIT and CoreCLR
Challenges and Cross-Platform Considerations
Microsoft's Acquisition of Xamarin and Mono
Legal and Open Source at Microsoft
Technical Details of Pyjion Integration
Optimization and Performance
Concurrency and the GIL
Future of Python and Performance Improvements
Visual Studio Code and Python
Community Contributions and Help Needed
Closing Remarks and Contact Information
Picks