Visit our site to listen to past episodes, learn more about the show and sign up for our mailing list.
Summary
In this episode we talked to Holger Krekel about the py.test library. We discussed the various styles of testing that it supports, the plugin system and how it compares to the unittest library. We also reviewed some of the challenges around packaging and releasing Python software and our thoughts on some ways that they can be improved.
Brief Introduction
- Welcome to Podcast.__init__ the podcast about Python and the people who make it great
- Date of recording – July 8th, 2015
- Hosts Tobias Macey and Chris Patti
- Follow us on iTunes, Stitcher or TuneIn
- Give us feedback on iTunes, Twitter, email or Disqus
- We donate our time to you because we love Python and its community. If you would like to return the favor you can send us a donation. Everything that we don’t spend on producing the show will be donated to the PSF to keep the community alive.
- Overview – Interview with Holger Krekel about his work on Pytest
Interview with Holger Krekel
- Introductions
- Programming for 25 years
- Runs a consultancy
- Been to almost every EuroPython and PyCon US
- How did you get introduced to Python? – Chris
- Wanted to write an HTTP proxy and Java I/O was too confusing. Got it working in Python in less than a day, after 2-3 days of trying with Java.
- What inspired you to create Pytest, and how did the existing unittest framework play into the story? – Chris
- Introduced to agile methods through the Zope community
- Zope used unittest – didn’t like the boilerplate
- Not in the spirit of Python
- Only took ~200 lines of code to get a testing tool working
- Original name was ‘utest’ – 2003
- Pytest name came in 2004 on the PyPy project
- Huge number of tests on that project (~20,000) – the xdist distributed test runner helped solve this
- There are many different styles of testing, such as BDD, unit testing, integration testing, functional testing, what attributes of py.test make it suitable or unsuitable for these different approaches? – Tobias
- What are your views on black box testing and how would someone use py.test to implement this approach? – Tobias
- Pytest’s plugin architecture enables you to hook into the various phases of test execution enabling you to extend Pytest in all kinds of ways beyond the original design.
- I have been hearing a lot about property based testing which was popularized by the Quickcheck module in Haskell. Does py.test support anything like that? – Tobias
- hypothesis-pytest
- Do you think the characteristics and nature of the unit testing framework being used have any effect on the number and quality of the tests developers write? – Chris
- Developers find writing tests in Pytest to be fun compared to unittest
- Which will help people write better tests
- Encourages refactoring
- Is there ever a time when you would advise against writing tests? – Tobias
- When exploring a problem, writing tests first doesn’t make sense
- When getting feedback on a potential approach, writing tests first can be a waste of time
- What are some signs that you watch out for when writing tests that tell you that a particular feature needs to be refactored? – Tobias
- When the test code is fragile it should be refactored
- Requires experience to really understand when to refactor
- When it’s not fun anymore or the tests are repetitive
- For someone who is converting their existing unit tests from UnitTest/Nose style to use py.test in an idiomatic manner, what are some of the biggest differences to be aware of? – Tobias
- Generator/yield based testing should move to parametrized testing
- If py.test can’t run a UnitTest/Nose style test it is considered a bug and gets fixed
- Has the strict backwards compatibility policy presented any interesting technical challenges thus far? – Chris
- Yes it definitely makes more work
- However breaking the API in a large project like this will cause too many problems for users
- py.test supports execution of tests written with other frameworks, how much ongoing maintenance does this feature require as changes are made to the other implementations? – Tobias
- The web page says that Pytest is designed to work with domain specific and non Python tests, and in fact a coworker is using it to test a node.js project – how did Pytest’s design enable this? – Chris
- Pytest uses a collection tree model to represent your project
- This is not Python specific
- All classes and functions are just mapped into this tree, not directly on the Python function
- There are few Python specific hooks for fixtures etc.
- People have written plugins so they can express their tests in YAML or Microsoft Excel
- Tests are represented as items
- All plugins are written in Python
- What are some of the most interesting applications of py.test that you have seen? – Tobias
- Plugins!
- Pytest-BDD
- Pytest-C++
- Pytest-sugar
- Py.test plugin list
- Speaking about adoption, do you have any sense of the relative adoption of Pytest versus unittest or other tools? – Tobias
- Very hard to actually know
- Download numbers are not a clear indicator due to robots, CI systems, etc.
- Quantifying market share is hard to do
- Popularity is not a useful heuristic in determining a good fit for technology adoption
- But popularity is an indicator for the level of support you might receive
- Tech can be popular but very poorly maintained
- Are there any features of py.test that would make it suitable for use with configuration management tools and infrastructure testing? – Tobias
- Example driven testing
- Run py.test from a blackbox approach
- Largest benefit would be from having one testing tool used across the organization
- Where do you see Pytest and more generally test frameworks headed in the future? – Chris
- No big changes for Pytest – lots of incremental things
- Plugins will add functionality
- Holger is also the author of Tox
- Integration testing and testing in more complex environments are a direction that test management tools will likely go
- Tools like Jenkins can be a real headache in trying to have a good testing story for your company
- https://devpi.net/hpk/dev/devpi-server/2.2.0/+toxresults/devpi-server-2.2.0.tar.gz
- Any questions we didn’t ask?
- Pytest is a very healthy project! There are 10 regular contributors – this is exceptional among OSS projects
Picks
- Tobias
- Chris
- Holger
- The Utopia of Rules
- IPFS.io – The interplanetary file system
- A New Way to Look at Networking
Keep In Touch
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. We're recording today on July 8, 2015, and your hosts as usual are Tobias Macey and Chris Patti. You can follow us on iTunes, Stitcher, or TuneIn Radio, and please give us feedback. You can leave us a review on iTunes, contact us on Twitter, that's podcast underscore init, send us an email at post at podcastinit.com, or leave a comment on our show notes. We donate our time to you because we love Python and its community. If you would like to return the favor, you can send us a donation. There's a link to our site in the show notes. Everything that we don't spend on producing the show will be donated to the PSF to keep the community alive. Today, we're interviewing Holger Krekel about his work on pytest. Holger, could you please introduce yourself?
[00:01:03] Unknown:
Yes. Hello Tobias. Hello Chris. Great talking to you. I've been involved with Python since, like, I don't know, 15 years or so, and besides that I've been heavily into programming for about 25 years or even longer. I'm also a father, so I spend a lot of time with my child, and I run a small consultancy and give some trainings on pytest and other tools. Otherwise, feel free to ask some more background questions. I could, of course, go on for a while. I'm also giving talks at conferences a lot. I've been to almost every EuroPython and
[00:01:50] Unknown:
PyCon in the US. And you're keynoting this year for EuroPython. Very exciting.
[00:01:55] Unknown:
Yes. I'm keynoting, but I I think I already did this, like, 2 or 3 years ago in, for EuroPython and a couple of other conferences. So it's not completely new to me, although I'm looking forward to this year's edition.
[00:02:09] Unknown:
Great. So in the vein of background, I know you alluded to it just a moment ago, but how did you get started in Python? How were you introduced to the language?
[00:02:19] Unknown:
It was around 2000. I was doing C++ and Java and wanted to program an HTTP proxy, and got completely lost in the Java documentation on how you do IO. And then, for the fun of it, because I had heard about Python from somewhere, I just tried Python and tried to use that. And to my surprise, it took me only, I don't know, a day maybe, or even less than a day, to get more or less the HTTP proxy I wanted to have running and working, and I had spent, like, 2 or 3 days with Java, which I knew at the time, unsuccessfully getting it to work the way I wanted it.
So that kind of kicked me off, because I liked the language very much in terms of how it worked and the ease of programming that it provided to me.
[00:03:15] Unknown:
So what inspired you to create pytest? And how did the existing unit test framework play into the story?
[00:03:22] Unknown:
I got introduced to testing and also agile programming around 2001, I think, when I joined the Zope community, which already had a tradition of sprinting and of writing tests first. And at that time, I was only briefly involved there, but we used unittest. And indeed, I didn't like the boilerplate. And I thought, well, if there are so many things in Python that are so easy and straightforward to use, why is unittest so cumbersome? It didn't appear to me to be in the spirit of how Python actually works and felt to me.
So that's when I got into the idea of writing something that doesn't require boilerplate and allows you to write tests in a very straightforward manner. And I found it only took a couple of hundred lines of code to actually get a nice first testing tool working. It wasn't called pytest at the time. It was called utest or something. It got renamed, like, twice, I think, since then. So that was around 2003, I guess. The name pytest actually, I think, started in 2004, if I'm not mistaken, in the context of PyPy, which is a project I also cofounded, the just in time compiler for Python.
And we used testing from the start, and we used pytest from the start. And, actually, many of the improvements that followed in the upcoming years were inspired by the needs we had from PyPy, which nowadays has something like 20,000 tests.
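To give a rough sense of the boilerplate difference Holger describes, here is a minimal sketch comparing the two styles (the `add` function under test is made up for illustration):

```python
# unittest style: a class, inheritance, and camelCase assertion methods are required.
import unittest

def add(a, b):          # hypothetical function under test
    return a + b

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

# pytest style: a plain function and a plain assert statement are enough;
# pytest rewrites the assert so a failure still shows the values involved.
def test_add():
    assert add(2, 3) == 5
```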
[00:05:09] Unknown:
Wow. That's definitely a lot of tests. And keeping tests fast particularly when you have that many is very difficult. Are there any aspects of pytest that lend themselves to keeping tests fast?
[00:05:21] Unknown:
I think in 2006, if I'm not mistaken, or maybe even at the end of 2005, we introduced distributed testing. That means that you can distribute the running of tests to multiple processes, either on your own machine or to remote machines. So that's how we sped up running of tests that each might only take, like, 0.something seconds. But if you have, like, hundreds or thousands of them, there's not really much that the test framework can do to speed them up other than this kind of distribution. So that has been with pytest, the so called xdist distribution plugin has been there since that time and got improved all the time.
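For reference, distributing a run with xdist is typically just a matter of installing the plugin and passing `-n`; a minimal sketch (the `tests/` path is only a placeholder):

```python
# Requires the pytest-xdist plugin (pip install pytest-xdist).
# Equivalent to running "py.test -n 4 tests/" on the command line.
import pytest

# Spread the test run across 4 worker processes; "tests/" is a placeholder path.
exit_code = pytest.main(["-n", "4", "tests/"])
```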
[00:06:08] Unknown:
Yeah. I've definitely been seeing a lot of, particularly hosted, continuous integration systems that support spreading your tests among multiple processes or multiple instances in order to get them to complete faster. So it's interesting that xdist has been part of the pytest framework for so long. There are many different styles of testing, such as behavior driven development, unit testing, integration testing, functional testing. What attributes of py.test make it suitable or unsuitable for these different approaches?
[00:06:42] Unknown:
Well, as far as I can tell, it's used for all of these different kinds of testing. Myself, I use it for, like, very small unit tests, but also very large and slow functional testing. And certainly, people are using it for behavior driven testing as well, and all kinds of testing approaches, actually. Pytest, maybe a bit unlike unittest, does not mandate writing fast running tests or small scale unit tests.
[00:07:20] Unknown:
What are your views on black box testing, and how would someone use Pytest to implement this approach?
[00:07:25] Unknown:
I'm using it a lot. I mean, black box testing refers to the idea that you don't actually test particular fine grained code paths in your system, but you just invoke the whole system and, for example, perform HTTP requests, and you don't care how the code satisfying your request is actually internally structured. So you make assertions on the whole behavior, and I'm doing this a lot actually, because I think there's a lot of value in functional testing, because it allows you to express assertions about the behavior that you actually want to have towards the outside world.
And the way you use pytest for that is that you typically create so called fixtures. And fixture functions can be scoped: you can create them each time you use them, or you can reuse them throughout the whole testing process. So, for example, if you have some kind of heavy database object with some initialized tables you need in order to test your application, you can put this into a fixture and then tell pytest to use the same database for all of your tests. That's called a session scope for your fixture, and pytest makes this very easy. So the main support that pytest has for functional black box testing is to have a very flexible fixture system that allows you to cache your fixtures across different scopes, up to the whole test run scope.
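A minimal sketch of the session-scoped fixture pattern described here, using an in-memory sqlite database purely for illustration (yield-style teardown assumes a reasonably recent pytest):

```python
import sqlite3
import pytest

@pytest.fixture(scope="session")
def database():
    # Expensive setup runs once per test session; every test that asks for
    # the "database" fixture receives this same connection.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    yield conn
    conn.close()  # teardown after the last test in the session

def test_user_lookup(database):
    # Black-box flavor: drive the system through its public interface
    # and assert only on observable behavior.
    database.execute("INSERT INTO users VALUES ('holger')")
    rows = database.execute(
        "SELECT name FROM users WHERE name = 'holger'"
    ).fetchall()
    assert rows == [("holger",)]
```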
[00:09:16] Unknown:
And you mentioned using a database as an example for potentially loading your fixture data into. Does pytest have the ability to reset that database for every test? Because I know that sometimes there are issues with isolating the scope of your tests that can lead to unpredictable outcomes in your tests, particularly if they're run in different orders.
[00:09:38] Unknown:
Sure. Well, isolation is certainly something that is of paramount importance for any kind of test, especially functional tests. And as far as I know, pytest-django, the plugin for the Django framework, does that. So it offers options to actually have your tests isolated and your database state reset for each test. And it's basically a matter of how you program your fixtures. Because in the fixture, you can not only provide an object, but you can also say: every time a test actually uses it, please call me when the test has finished so I can clean up and reset the state.
But this resetting of the state is application specific, and that's why you need to write this kind of finalizer or reset function yourself.
[00:10:37] Unknown:
So it sounds like what you're saying is that the fixture framework for pytest has callbacks supported so that you can hook into different events in the life cycle of the test and the fixture?
[00:10:50] Unknown:
Exactly. So you can say, I have this database object that I'm going to reuse across the whole test suite, and then for each test function, I'm going to provide a database object that is actually the session scoped one. I mean, the one that I'm reusing throughout the whole session, but then I'm also adding a callback and saying each time a test finishes using this fixture, I'm going to reset the state. And you can actually express this very briefly within the fixture system that is part of the pytest core.
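A rough sketch of that layering: a function-scoped fixture hands out the session-scoped object but registers a finalizer that resets state after each test (the dictionary standing in for a database is purely illustrative):

```python
import pytest

@pytest.fixture(scope="session")
def session_db():
    # Created once and shared across the whole test run.
    return {"users": []}          # stand-in for a real database object

@pytest.fixture
def db(session_db, request):
    # Hand each test the shared object, but register a callback that
    # pytest invokes after the test finishes, so its state is reset.
    def reset_state():
        session_db["users"].clear()
    request.addfinalizer(reset_state)
    return session_db

def test_add_user(db):
    db["users"].append("holger")
    assert db["users"] == ["holger"]

def test_starts_clean(db):
    # The finalizer from the previous test ran, so the state was reset.
    assert db["users"] == []
```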
[00:11:24] Unknown:
And are there other aspects of Pytest that support that event based callback?
[00:11:30] Unknown:
There's lots of events you can react to. The whole plugin system of pytest offers something like 40 different hooks, and these are basically events that you can subscribe to, that you can get called back on. And with this, all of the, I don't know, 150 plugins or so that we currently have, they all implement these handlers. They're not only event subscriptions; it's not just a pure event in the sense of a read only thing where you just do something, do some additional logging or something. But actually these hooks are called by py.test to perform, for example, the execution of a test function, or the setup of the fixtures and so on.
And you can actually take this over, or you can do additional things in front of it. So it's a bit more than just an event system. Pytest is really decomposed into lots of hook functions that are called and that are implemented then by the plugins to provide the specific additions and extra behaviors that you want to see.
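As a small illustration of that hook style, a `conftest.py` can implement any of the documented hooks; this sketch skips tests marked "slow" before they run (the `slow` marker and the environment variable are invented for the example, and hook names assume a reasonably recent pytest):

```python
# conftest.py -- pytest discovers hook implementations defined here.
import os
import pytest

def pytest_runtest_setup(item):
    # Called by pytest before every test; "item" is the collected test node.
    # Skip tests marked "slow" unless an (illustrative) env var opts in.
    if item.get_closest_marker("slow") and not os.environ.get("RUN_SLOW"):
        pytest.skip("slow test skipped; set RUN_SLOW=1 to run it")

def pytest_report_header(config):
    # Another hook: contribute an extra line to the terminal report header.
    return "example plugin hooks loaded"
```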
[00:12:44] Unknown:
That definitely sounds like a very powerful architecture for Pytest and sounds like it would give you a lot of capabilities that aren't necessarily offered by, for instance, unit test having the ability to plug into all those different points in the execution of the code and being able to add in or remove different components that you may or may not need.
[00:13:05] Unknown:
Yeah. I think that's one of the advantages compared to unittest. I mean, nose, which is another popular testing framework, also has plugins. I'd say that we kind of managed to have a very good compatibility story in terms of plugins. So the way the hook system is designed makes it somewhat easy to maintain compatibility over the years for all of the existing plugins. So there's a number of plugins that are 5 years old or 4 years old, and they still work.
[00:13:34] Unknown:
That's very impressive. So I've been hearing a lot about property based testing, which was popularized by the quick check module in Haskell. Does pytest support anything like that?
[00:13:44] Unknown:
Yes. There's a number of people who use it. There's a so called pytest-quickcheck plugin, which mimics, I think, most of what you can do with the Haskell parallel. I'm not using it very much myself, but I've played with it, and it's there if you want to go for this kind of, like, sending random data to your API.
[00:14:11] Unknown:
That's very interesting. I know that there's another library called hypothesis that is supposed to mirror the functionality of quick check within Python. What would be involved in getting that plugged into Pytest?
[00:14:24] Unknown:
I'm not quite sure. I think that I think there is some integration. I can't remember the details currently on on hypothesis and and pytest. At least there was some communication between the author of that and some pytest contributors, but you would have to check on that. I mean, generally, people using pytest, they prefer to have 1 framework and then plug in lots of other things, and, I think that's would be the way to go, actually. If hypothesis offers interesting functionality, then the way to integrate this would be to write, a plugin.
[00:15:02] Unknown:
And in fact, someone already has. I I checked, really quick while you folks were talking, and in fact, there is a a hypothesis pytest plugin. So that problem is already solved.
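For listeners curious what property-based tests look like in practice, here is a minimal sketch using the hypothesis library, which plugs into pytest as an ordinary package (the properties tested are arbitrary examples):

```python
# Requires the hypothesis package (pip install hypothesis); it integrates
# with pytest automatically, so this runs under a plain "py.test" invocation.
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    # A property that must hold for any generated list of integers:
    # sorting an already-sorted list changes nothing.
    once = sorted(xs)
    assert sorted(once) == once

@given(st.integers(), st.integers())
def test_addition_commutes(a, b):
    assert a + b == b + a
```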
[00:15:14] Unknown:
Probably. Yes. I mean, you always have to know that some of the plugins are heavily maintained, and they get released often. Some of the plugins for pytest, and I think it's also true for nose, have been released, but they're not really maintained very well. So you always have to check a bit if there's actual activity, and if it works nicely enough for you.
[00:15:37] Unknown:
I think that's sort of par for the course with any sort of open source technology that you might wanna adopt. Right? You always have to kick the tires and build it, play with it, make sure it it serves your needs. So do you think the characteristics and nature of the unit testing framework being used have any effect on the number and quality of tests developers write?
[00:15:59] Unknown:
Well, there are some quotes on the pytest.org website, and there have been many examples where people say that they find it a lot more fun to write tests with pytest. Also because of the reporting, and also because of the reduction in boilerplate compared to unittest. And I think that if that's true and that's what many people feel like, then I would guess that indeed it will help people to actually write more and better tests. So the quality of a testing tool certainly has influence on how much you like to write tests and how much you modify your tests and advance your tests.
And that's one of the ideas of pytest, especially the fixture system and some other aspects: we want to encourage people to actually reflect on their tests to make them easier to maintain and to extend. So it's a bit like with applications and libraries themselves. You also want applications and libraries to be easy to refactor, and the tests shouldn't stand in the way, but should also support restructuring your application. And that's something that many people writing tests find easier to do with pytest than with unittest.
[00:17:25] Unknown:
So is there ever a time when you advise against writing unit tests? Are there any situations where writing tests might just be a waste of time or might cause potential issues?
[00:17:35] Unknown:
Well, yes. I mean, I can tell when I don't write tests, which is usually when I'm encountering a problem, and I'm not really sure how to solve it. And then I typically first play around. I don't write a test, but actually play around with the application. I do manual testing, basically, to explore. So basically, when I'm exploring something, exploring a solution, whether something actually might work, I'm not necessarily writing a test first, but I'm rather trying to see what the solution actually looks like overall, especially if it involves changing, like, 5 different places in the application. And once I get this to work, then I go back, and then I actually know how to fix it, and then I go back and try to write a test for it, and then, like, implement the proper solution. And sometimes I do it with partners in projects: I just do a pull request without any tests and just ask for feedback. Like, do you think this is, like, the right way to go about this problem?
And when the when the answer basically we discussed that a bit, and once we actually agree on let's do it this way, then we actually go for writing the tests and the actual implementation. So I would say for exploration of solutions, it's often if you don't really know how to do something and want to understand the behavior of the system, then, writing tests first can be very cumbersome and and slow you down unnecessarily.
[00:19:11] Unknown:
What are some signs that you watch out for when writing tests that tell you that a particular feature needs to be refactored?
[00:19:17] Unknown:
That's usually when I find that my test code is fragile. So I change little bits in my application or library, and suddenly I have to change lots of places in my tests. That's usually something where I step back and try to refactor the tests to the point where it's not necessary anymore to make, like, many changes in the tests, but just, like, 2 or 3 places. So it's basically the idea of when you go from 1 to n, or 1 to 2, basically: you have something for 1 particular situation, and then you find you need to write a test for a different situation which is very similar. It has only maybe, like, 1 or 2 things that it does differently in the setup and so on.
Then I try to refactor the tests so that I can factor out the common parts and have the tests focus only on the different parts. And that's basically something that arises out of experience. It's not something that you can put into an easy recipe. But I think it's, like, once you repeat yourself, once you actually see that you're doing the same things, like, you're basically copy pasting a test to another test, and just renaming the test and then changing, like, 2 or 3 lines. When you need to do something like this, it's a sign that you could spend some time on refactoring, especially when you do it, like, a 3rd and a 4th time. Then, at the latest, it makes sense to factor it out. Because if the way you construct your tests then changes because your application structure changes, you will have many tests that you need to change, with possibly very subtle differences, and that's not fun anymore.
So repetition in tests is something to be avoided, just as much as in a regular application.
[00:21:18] Unknown:
For someone who's converting their existing unit tests from the unittest/nose style to use pytest in an idiomatic manner, what are some of the biggest differences to be aware of?
[00:21:28] Unknown:
I think for most people, when I give trainings and they actually run their test suites, there's usually not much prohibiting them from just using pytest there. Sometimes there's an old concept, which pytest actually introduced way back, called generator based testing or yield based testing, where you can have a test function that actually yields other test functions. And that's something where pytest, actually a couple of years ago, said, you know, it has all kinds of complexities that evolve from it in the internal pytest core, and it also has limitations on how you can actually use it. And in pytest, we introduced parametrized testing, and the old style is still supported, but it's not documented anymore.
And I think that in nose, for example, the yield based testing, along with very particular fixture semantics or setup/teardown semantics, is still something that is used there, and I think that pytest is not trying very much to mimic every detail there. But nose 2, for example, which Jason Pellerin, the author of nose, started, I think, 2 years ago, also got rid of this yield based testing, because it is actually a somewhat difficult concept. So that's one area. If you have lots of these tests, then there's a bit of work involved, but you actually want to stay with the current way of doing things. Although, even then, mostly it just works. We're not going to drop this support very soon, although we might drop it at some point, like, I don't know, in a couple of years or so. So when you do new test projects, then definitely don't use it. Well, other things I'm not actually aware of. Usually, when you use regular features of nose or unittest and you find that pytest cannot run it, we usually treat it as a bug in pytest to fix.
So if people report it and it's not, like, super arcane or something, then it usually gets fixed in pytest, even to this day. So pytest, I would say, still aims to be very compatible there.
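A minimal sketch of the parametrized style that replaced yield-based test generation (the values are arbitrary):

```python
import pytest

# Old yield/generator-based style (no longer recommended): a test function
# that yields other tests. Shown commented out only for contrast.
#
# def test_evens():
#     for n in (2, 4, 6):
#         yield check_even, n

# Current style: pytest.mark.parametrize generates one test per value,
# each reported and selectable individually.
@pytest.mark.parametrize("n", [2, 4, 6])
def test_even(n):
    assert n % 2 == 0
```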
[00:23:46] Unknown:
So has the strict backwards compatibility policy presented any interesting technical challenges thus far?
[00:23:53] Unknown:
Oh, certainly. Being backward compatible, including to the plugins, is quite some effort. And it requires careful thinking when you introduce new features, so that you don't disrupt and introduce errors for existing test suites written against previous pytest versions. The thing is that the testing tool is really such a core part of infrastructure, unlike a particular application or even a library, that you don't really want the test tool to become incompatible if that can be avoided at all. Because, I mean, pytest certainly has something like tens of thousands of test suites that exist and can use the current pytest version.
So even if you introduce incompatibility for just 1% of people, it's going to be a lot of projects that have problems. It is, of course, much more complicated for the pytest contributors to keep this kind of compatibility, and there have been lots of discussions to actually drop it here and there, but usually we manage to maintain it and fix any regressions. And I think, again, that the internal hook architecture and the fixture system, how it's done, actually make this easier. Although I wouldn't mind actually being able to drop some long deprecated features.
But whenever you do that for a large scale project, it's of course going to cause some pain. I mean, Python 3 being the case everybody knows about, but even with smaller changes, you're always going to lose some people, and we try to avoid that even if it means that it's more effort for maintaining it.
[00:25:47] Unknown:
Pytest supports execution of tests written with other frameworks. How much ongoing maintenance does this feature require as changes are made to the other implementations?
[00:25:55] Unknown:
You mean, the fact that you can run non Python tests with with Pytest?
[00:26:00] Unknown:
Well, that and also if there are changes in how unit test or nose executes or adds or removes features, what additional work does it make for you and the other maintainers of pytest?
[00:26:13] Unknown:
Usually not that much, because even nose and unittest have exactly the same compatibility pressure, basically. People don't enjoy nose changing its behavior, or unittest for that matter. And that's why the changes that we have to track from unittest and nose are usually not hard to keep up with.
[00:26:32] Unknown:
The web page says that pytest is designed to work with domain specific and non Python tests. And in fact, a coworker is using it to test a Node.js project. How did pytest's design enable this?
[00:26:45] Unknown:
The general thing in pytest is that there's a collection tree model that represents your test suite, and that is not Python specific. So basically you have a normal tree, like a typical tree used in programming data structures. It means that, for example, all the test functions and also the test classes that you have in Python are just mapped into this tree. And the running of tests then operates on this tree, not directly on the Python function. We call these test nodes, test collectors, and test items; that's basically the internal model that is also used by the plugins.
And people have been writing, for example, the pytest C and C++ plugins, where they run their C and C++ tests using pytest. So this kind of internal model of how the tests are represented is independent from Python, from testing Python functions. There are a few hooks that are specific to just Python tests, and also the fixture system. But in general, you can express your tests, and some people do this, in YAML and all kinds of data formats. I even know there's one company that actually uses pytest in conjunction with Excel. You provide some kind of file that contains tables in Microsoft Excel, and then it goes there, gets the data from there, and uses the pytest reporting and so on to actually run the tests.
So that's something that just dropped out of the model: internally, in the data structure that we have in pytest, we don't talk too much about Python functions, we actually talk about test items. And a test item has a runtest method. So in your plugin, you just provide a different kind of item with its own runtest method, and then you're good to go and integrate non Python test items into your test run.
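To make the collection-tree idea concrete, here is a rough sketch of a `conftest.py` that collects plain text files as test items, loosely modeled on the YAML example in the pytest documentation. The `.check` file format is invented for illustration, and the constructor and hook signatures shown assume a recent pytest (7 or newer):

```python
# conftest.py -- collect "*.check" files as test items; each non-empty line
# of such a file is an invented "expression == expected" check.
import pytest

def pytest_collect_file(parent, file_path):
    # Collection hook: offer a custom File node for matching paths.
    if file_path.suffix == ".check":
        return CheckFile.from_parent(parent, path=file_path)

class CheckFile(pytest.File):
    def collect(self):
        # Every non-empty line becomes one item in the collection tree.
        for lineno, line in enumerate(self.path.read_text().splitlines(), 1):
            if line.strip():
                yield CheckItem.from_parent(self, name=f"line{lineno}", spec=line)

class CheckItem(pytest.Item):
    def __init__(self, *, spec, **kwargs):
        super().__init__(**kwargs)
        self.spec = spec

    def runtest(self):
        # pytest calls runtest() on every item; no Python test function involved.
        left, right = (part.strip() for part in self.spec.split("=="))
        assert eval(left) == eval(right)

    def reportinfo(self):
        return self.path, 0, f"check: {self.spec}"
```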
[00:29:04] Unknown:
Having the data structure be the unifying layer between different plugins or different interfaces with other languages seems to be a pretty popular paradigm that I've seen used in SaltStack and also most recently in Neovim, which allows people to use whatever language they feel comfortable with to write plugins and interface with a given tool without necessarily having to work within the same language. So it's definitely very interesting that pytest adopted that approach as well, and seeing the capabilities that it has enabled other people to take advantage of.
[00:29:38] Unknown:
Yes. I think so as well. Of course, when you write plugins, unlike with SaltStack or other things, the plugins are written in Python. So even the person who implemented the C++ testing plugin had to write the plugin in Python. He couldn't just use C++. So in that sense it's limited. It is still a Python framework, and all the plugins are implemented in Python. In that sense, it's a bit different from maybe more general tools like Salt.
[00:30:08] Unknown:
What are some of the most interesting applications of Pytest that you have seen?
[00:30:12] Unknown:
Well, I think this C++ one got me by surprise, actually. I'm not sure I can quickly find the link now, but you should be able to if you look for the pytest C++ test runner or so. That was surprising. Another thing that was surprising at the time was being able to run JavaScript tests. There's a plugin called oejskit, which is kind of an old plugin. I'm not sure if it's maintained very much anymore, but I found that surprising, in that you can just write JavaScript files and tests, and then have them talk to your WSGI application.
And these tests actually run in the browser, but everything is reported back to the command line, to pytest. So that I found quite an interesting approach at the time. Otherwise, generally, I'm impressed by what kind of plugin ideas people come up with. I think the main advantage of a flexible plugin system is that people come up with usages that you haven't thought about before. I mentioned these 2 plugins. There are also several other plugins where I read what they actually implemented, and I was a bit surprised that they could do that by using a certain combination of hooks. So I think that's really the thing about a good plugin system: it allows usages that go beyond what the original hooks offer or what the core people actually envisioned.
[00:31:38] Unknown:
Can you give us some examples of some plug ins that are particularly cool or made novel use of those hooks?
[00:31:44] Unknown:
Well, I think pytest-bdd, which is a well maintained plugin, is one good example. Like I mentioned, the pytest C++ plugin is a very funny plugin. There are all kinds of reporting enhancement plugins, for example pytest-sugar, which introduces all kinds of colors and whatnot for reporting. And it's a bit hard to actually pinpoint. It depends a bit: if I talk about reporting, there are many plugins that modify the reporting of pytest, and then there's the integration of other languages, and it really depends on what I'm aiming at when I talk about what I find interesting. So I think I mentioned the plugins that surprised me. I would need to look again through all the plugins, because there are really too many to just keep in my head. Even if I had been surprised by something 2 years ago, I might have forgotten it by now. Those are some really good examples that our listeners can go check out if they're interested in seeing some of the depth and breadth of what pytest can offer them in their testing work.
Yes. And just for the record, if you go to pytest.org and then the plugins page, there are lots of plugins that are listed and even tested in conjunction with new pytest releases. That's some work that Bruno Oliveira from Brazil has been doing. He basically takes all of the plugins from PyPI, from the package index, and tries to run them against the release that we are preparing, so that people can see if it works on Python 2 and Python 3, with short descriptions and so on. So we actually try to get some kind of minimum quality there. But, of course, it also depends on the plugin actually providing tests itself that we can run.
[00:33:39] Unknown:
That's a very nice dedication to your users and community of people who take advantage of Pytest to make sure that there is that visibility as to how well supported in the newest version a given plug in is because I'm sure there are lots of people who rely on a number of different plug ins in their day to day use of Pytest and being able to have that quick verification that a new version coming out isn't going to break their tests all of a sudden or cause them to have any issues with the different plugins that they're using, or if there is an incompatibility being able to fix that ahead of time.
[00:34:11] Unknown:
Yeah, exactly. It's basically the the contributors of Pytest, I mean the main people actually doing prs and and maintaining pytest nowadays, they all of course, they each use all kinds of plugins. Myself, I would say I'm regularly using something like 10 different plugins. So if I'm developing if I'm working on the new version of pytest, I will instantly recognize it anyway if there's some kind of breakage. And I think it's very similar for the other contributors. So I guess we have probably a closure of maybe 30 plugins that are used by the actual people preparing a release.
So, so that they are kind of guaranteed to work. But yet, of course, lots of other plugins we don't know about or we don't use for whatever reason. Yeah, we try to actually get some some kind of quality testing in there. Although, I think that it's really I mean, that that requires to make this, like, really good, requires, I think, 1 or 2 week of dedicated work to just focus on this particular area of of testing and reporting and maybe filing filing bugs. So it's always has to be balanced with the actual time that people can make for for doing Pytest improvements.
[00:35:22] Unknown:
I also think there's a benefit there in terms of the psychology of adoption. Right? You know, it benefits your project as well because if I'm someone who uses a particular plugin and I noticed that in your new release that plugin is broken and it clearly says that on the dashboard, then I might very well say, well, gee, I could probably dive in and fix that. I think that surprises hurt adoption. It's really tough when you download the new release and you use your plugin that you use every day and it blows up in your face. Whereas if you see a thing on a dashboard that says, hey, this isn't working, it's like your whole outlook towards that situation changes, I think. Exactly. Yes. And it's like there's a saying that for every,
[00:36:04] Unknown:
10 or even a 100 people who experience a problem using your tool or version of the plugin, there's only going to be 1 person who actually reports it. So for every person who reports a certain problem or incompatibility, you can usually assume there are lots more who just stopped trying right there. They just went there and tried it, and it didn't work, and then they moved on to something else. But it's, of course, yeah, it's effort to get this kind of smooth beginning experience, because a testing tool is really used in all kinds of different situations and projects and directory layouts and all kinds of different ways. So there's a lot actually to be done in terms of avoiding surprises. And it's an ongoing effort, actually. We are still trying to reduce the number of surprises you can possibly get.
[00:36:55] Unknown:
Is that dashboard visible to the public at this point or is it a work in progress?
[00:37:00] Unknown:
It should be visible. Yes. Let me see, it might not be prominently placed, but there's a link called third party plugins on the pytest.org website. If you go there, then you will see the plugins index.
[00:37:21] Unknown:
Continuing the conversation of adoption, do you have any sense as to the relative use of pytest versus unit test or nose? And what do you, as the project creator and maintainer and also the broader Pytest community, do to evangelize and make other people aware of Pytest's existence and why you might wanna use it versus some of the other options?
[00:37:45] Unknown:
Well, first of all, I'm personally pretty happy with the usage. And, of course, there can always be more users, but it's not like I'm trying very much to get more users to pytest. That being said, Brianna Laugher from Australia was instrumental in doing something called Adopt Pytest Month, which we did in April 2015, where we paired volunteers, we had something like 15 volunteers, with open source projects that wanted to migrate to pytest. And that was, I think, quite a success, and there's a talk at EuroPython 2015 now in 2 weeks about how this went. So that was, like, a major effort to get some more publicity and so on. And otherwise, it's, I think, word-of-mouth.
And hearing from others, like, also this podcast, and other bits about the possibility to use pytest and to give it a try. In terms of popularity, that's really hard, because the problem is that the download numbers in the package index are, at most, a rough indication, and it's really hard to make big deductions from these numbers. I mean, sometimes there are packages that cannot even be installed, and they got downloaded, like, 3,000 times. So how does this happen? It happens because there are all kinds of robots, and some people have set up their continuous integration framework in a broken way, so they download everything all the time, like, every hour or something.
This and other effects, especially robots, make it very hard to deduce anything from the download numbers. I mean, pytest is available in all the Linux distributions, and there are certainly many companies using it, but exactly quantifying kind of like the market share is something I would like to do, but I don't really know how to do, actually. I can only use some indications, and so that remains. Maybe there could be some kind of poll or so that the Python Software Foundation could do in terms of usage of testing tools. I don't know. But to my knowledge, it's not really easily quantifiable. You can certainly say that unittest, and nose, and pytest are all very popular.
But I don't really know how many more people use unittest than pytest, or how much more or less nose is used. It's really hard to know.
[00:40:17] Unknown:
I think in general, adoption is one of those things that's virtually impossible to track. And the reason I say that is that every year or so, we see an article showing somebody who compiled statistics on language popularity by virtue of some index, like, as a for instance, projects on GitHub or check ins on GitHub. And then it's like, okay, except that not everybody uses GitHub. As a matter of fact, I can't think of his name, but one of the Microsoft evangelists basically coined the term "dark matter programmers". And it's because there's this huge universe of people out there who are using technology and tools that just don't participate. They make no noise. It's, you know, it's like you said about the people who, for every x people who use your tool, only 1 reports it. It's exactly the same kind of thing. You absolutely have no way of knowing. There are people in government, there are people in circumstances where it's a completely closed environment, and you're never gonna know. So I think it's almost impossible to answer that question.
[00:41:28] Unknown:
At least precisely. I mean, you can get some ballpark ideas or so. But let me let me add something else about popularity because I encounter this, quite a lot. Nowadays, there's so many projects, software projects, and then when you actually want to determine what kind of web framework do I want to use, what kind of testing tool do I want to use, what kind of database do I want to use, and so on. I mean, there's all these kinds of questions that software developers have all the time. And 1 of the main strategies people are going about this complexity is looking at popularity.
Because I think the reasoning is basically: if something is more popular and I'm also using it, I cannot be completely wrong about it, right? I mean, there must be something to it. If I'm using the most popular solution and it doesn't work, it can't really be my fault, right? It's basically: I took the most popular solution, it doesn't work, so it's not really my problem. So there's this psychological thing about navigating the complexity of software choices by looking at popularity, and I think it's not actually a very good strategy.
I mean, it certainly makes sense to look at a project. Is it maintained? Does it are there some people? Is there some kind of community? What are the values that, this project actually tries to to adhere to? But the real thing, I mean, once you have basically, you have this baseline, and you have the impression, okay, this project is kind of alive, and people are, and I like the way how they're doing it, then you should not actually spend too much time thinking about the popularity thing, but spend the time is it the right thing for us to use, right? So rather, that's basically my recommendation in general for any kind of software choice.
Do make some some some kind of general minimum things you want to see, but then don't decide by popularity, but decide if you think it really fits what you want to do. So that's that's why I don't that's also the reason why I don't care too much about popularity and including in any way, very hard to measure popularity numbers.
[00:43:44] Unknown:
I also think that popularity is a poor indicator for what makes a good technology fit. As just an example, in the Java world, people used Struts forever and hated it. At least a lot of people hated it. And I apologize if there are any Struts project maintainers listening. But there was a lot of discontent in the user community. But because it was seen as an industry standard, it got widely used and adopted. And a lot of people probably would have been better off going with a different technology choice, but didn't, because it was seen as such an industry standard.
[00:44:17] Unknown:
1 thing I will add about popularity as a means of choosing a given project though is that can be a potential indicator as to the level of support that you're likely to be able to achieve whether it's through forums or Stack Overflow or whether it's through having a company that will provide paid support for a given project because that can definitely be a consideration when making those technology choices as well, and popularity can sometimes be a proxy for that.
[00:44:45] Unknown:
It can. Yes. But sometimes you could also have the problem that something is very popular, but there are not actually too many people behind it. And that means the few maintainers, and that's also, I think, an increasing problem in open source, they have very popular packages but very few maintainers, sometimes even just one. And then they get swamped with all kinds of issues. The likelihood that you get some kind of fix or reaction to your problem thus decreases with popularity. So it's certainly not just a matter of popularity, but that's what I meant by basically looking at the community, looking at how many people are actually doing something.
That's probably better. I mean, if for a library you have, like, 3 or 4 active maintainers, that's very, very good, you know. That's a very good situation to be in, because most open source stuff has, like, one, or at most two, maintainers. So it's again not about having the most committers or something, but it's about having something healthy. And once you have, like, this minimal baseline of judging something as healthy, then you can check if it actually suits you, and not just compare, say, the number of committers. One project might have 20 committers and the other a 100, but that doesn't really matter that much. I mean, if from these 20 you have, like, 3 or 4 very dedicated people, whereas in the other case from the 100 people you only have, like, 2 very dedicated and the rest is just, like, minor PRs, then comparing them and saying the other one is, like, 5 times more interesting.
[00:46:23] Unknown:
Doesn't really make much sense. Absolutely. I totally agree with what you're saying right there. Are there any features of Pytest that would make it suitable for use with configuration management tools and infrastructure testing?
[00:46:33] Unknown:
Well, I guess so. I haven't used it for that myself, but that's also because I'm not myself very much in the cloud business and DevOps things. Where I think you can use it is to have kind of data and example driven testing, where you, for example, describe your tests in YAML files, so in some kind of markup language, and then you run pytest to actually set up, whatever, a virtual machine, configure some application, and then run some standard tests against it. You can use pytest for that. Although, truth be told, I think that many of the features that pytest has, you wouldn't use at that point in time. The only benefit there is that you can use the same tool for many different things.
So it might be worth the effort to come up with some kind of scheme, like the person who did that with C++, to be able to have one tool that you use for all kinds of different testing activities. But for these kind of configuration management situations, I wouldn't say that pytest has, like, super distinctive features why you should use it there. But I might be wrong. I mean, maybe some people who actually use it see this differently, but I'm not the person to judge.
[00:47:58] Unknown:
So where do you see pytest, and more generally, test frameworks headed in the future?
[00:48:03] Unknown:
I think testing tools in general, they have evolved quite a bit. Pytest certainly has, in the way we are evolving things and making it easier and things more consistent and allowing certain usage patterns that haven't been possible before, but I don't expect any big changes. I think at this point, it's really lots of incremental things. And I expect new plugins to actually add functionality, but not from the core. The one thing where I think testing tools in general need to evolve, and you might know that I'm also the author of tox, which is a testing tool that is independent of pytest. With it you can run nose and pytest and all kinds of test runners.
And it manages the creation of virtual environments and installing dependencies and then running things against different interpreters. And I think this kind of direction is where more things are bound to happen. One thing that I find interesting is that in no language that I know of, certainly not in Python, is it possible to express a testing need in the following terms: when I actually want to release a new version of my library, I want all of the packages that depend on my library to rerun their tests using my release candidate, right? I mean, for example, if you have a certain parser for some kind of file format, and other people are using it, like a thousand projects are using your parser, then how do you actually say, oh, I have this new release candidate.
How can I now verify that it doesn't break lots of projects that actually worked successfully with the older version? And these kinds of questions, the whole bigger picture of integration and release testing, and testing basically in a more complex world of dependencies, this is something where I expect improvements. And I think Travis and some other testing facilities, they don't really go there. They don't really know about, basically, the Python dependencies between packages and stuff like this. That's something that I'm personally interested in working on. Also because I maintain a number of tools, and it becomes kind of like a lot of effort, we talked about this in terms of the plugins for pytest itself, to have some kind of quality guarantee that when I release something, I'm not involuntarily, accidentally breaking lots of stuff. So it's more these higher level questions of integration and testing of different packages against each other where we expect more innovation possibilities and more new things to happen.
Not so much in the running of tests itself. That being said, in pytest there's certainly room for optimizing, for example, distributing tests and making it quicker to get very useful feedback when you're refactoring things, and there are some improvements there. But I would say, compared to the other area I just mentioned, it's more incremental improvements that we're going for there. There's no paradigm shift or any big new thing. Whereas in the other area, I think there's room for better and more tooling that would dramatically improve the usefulness of tests in terms of the whole software ecosystem.
[00:51:42] Unknown:
So integration testing frameworks in particular is something that I've been interested in recently as well, having tried to install and use Jenkins for managing the test for my company and having a lot of headaches associated with that. So it's been on my mind for a little while to try and find the time to create a new integration testing server using something like Python or possibly even having a kernel written in Rust that does the execution and then having a web app built in Flask or Django built on top of that to provide an interface to users. And, again, having that plug in architecture so people can plug in whatever runners or language frameworks that they wanna use to do that. And 1 of the things that I find most particularly useful about integration testing servers is the ability to track code metrics over time, and I haven't really found any services or tools or frameworks that really support that as a first class citizen and a primary consideration.
They generally seem to be bolted on after the fact. Whereas I think that that's actually 1 of the more useful aspects of it even beyond just being able to verify that the test is complete and that you can use a given tool alongside another given tool because if you really want to, you can just do that on your own machine. But having a server with continuity and persistence is where the data story really becomes powerful and particularly with the recent resurgence and growth of data science and data analytics as a discipline, having access to that data can really give you a lot of insights that are just locked up and you may have some potential intuition about, but you don't have any hard answers when trying to make decisions about what is the best way to improve your code quality or changes that you might need to make in your overall software architecture to improve maintainability.
So that's definitely a project that I would love to get started and potentially collaborate on.
[00:53:45] Unknown:
Yes. It's a somewhat complex project. I'm actually doing something in this area, the last couple of 2 or 3 years, on and off. There's devpi. devpi is a packaging server that actually also stores test results. So when you run devpi test, that's a subcommand, it actually takes a package, runs the tests, stores all kinds of meta information, and then stores back the test results. So what you get, basically, is, if you look at the, I can show you, you get a page like this, I'm pasting this, where you can see, oh, here's my package, and I got all kinds of tests passing on Windows for Python 2.6, Python 3.4, for Linux, and so on. And, basically, from that side, we actually want to enrich the data model that is behind that, so that we also have the whole dependency information and can actually trigger tests.
I think it's a somewhat complex project. Also, especially in Python, there's the problem that it's not trivial to actually know all of the dependencies. You usually only get that at install time. You can only see it when you actually try to install a project; there's no declarative metadata describing what kind of dependencies you have, and that makes it harder to have this dependency tree and to know which package version depends on which other package version and so on. That's part of the problem. There are some improvements going on in Python packaging that allow this, but these days it's still the case that most packages use setup.py for describing the package, and that is code.
That is not a declaration of your dependencies, and that makes it harder for tools to process it, because you need to actually run the code and then figure out what happened. And that makes it harder to go for the goal you mentioned, and that I want to go for as well. I think in other languages, like JavaScript, it's a bit easier because from the start they had declarative data formats that describe dependencies. So it's very easy to just parse all of these data formats, and you have a more or less complete picture of how everything relates to each other. That being said, Python is more complex than JavaScript in terms of the platforms and configurations you deploy on. In JavaScript, you basically have Node.js on Unix machines, and you have the browser, and that's about it.
Whereas in Python, you have all kinds of different interpreters, you have C extensions, and all kinds of stuff like that, and also 15 years' worth of packages that exist already. So it's a bit of a different situation to work from. But this podcast is not about packaging, so I'm going to stop here. It's a whole different thing, of course, to talk about the issues of packaging. But I think there's a relation between testing and packaging, because if you can't automatically test your packages and their relation to other packages, everything suffers. You cannot really test what you want to test, and the quality of the packages, in terms of whether they actually pass their tests, suffers.
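To make the setup.py point above concrete, here is a minimal, hypothetical sketch (the package name and dependencies are made up for illustration) of why a tool has to execute the file rather than simply read it: the final dependency list only exists after the code has run.

```python
# Hypothetical setup.py: the dependency list is computed at runtime,
# so a tool cannot discover it just by parsing the file.
import sys
from setuptools import setup

install_requires = ["requests"]
if sys.version_info < (3, 3):
    # Backport only needed on older interpreters; which branch runs
    # depends on the machine executing this file.
    install_requires.append("mock")

setup(
    name="example-package",
    version="0.1",
    packages=["example_package"],
    install_requires=install_requires,
)
```

A declarative metadata file, by contrast, can be read without running anything, which is what makes building the kind of cross-package dependency graph discussed here tractable.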
[00:57:03] Unknown:
I think it is related and important. And I will also say two things. I think a number of people are coming to the same conclusions that both of you are with regard to the fact that the current generation of test running and management tools, a la Jenkins, etcetera, are not up to the task of today's test management environment. That's one thing, and a lot of people are working on it. It'll be interesting to see what comes out of that and what might garner a lot of adoption and market share. The other thing is that I think a lot of people are aware that Python's packaging story is kind of rough. As a for instance, languages like Go are gaining an awful lot of adoption simply by virtue of the fact that you can compile your Go program into a binary that's completely standalone. You literally drop the binary on your system and away you go. There are no virtualenvs, no packages that you need to install.
You can just compile the thing into one binary blob, dump it where it needs to go, and have it run. And I think there's a lot of value to that. Right. Two things to that. Packaging has improved a lot, and I think it's still improving. So the experience you get today
[00:58:17] Unknown:
is, I think, much better than what you got 4 or 5 years ago. And second, when it comes to standalone binaries, you can use cx_Freeze or py2exe. I think cx_Freeze is also compatible with Python 3, and you can get a similar experience. It's just not advertised as the primary way to do things, like from python.org, but you can get the same thing, and people actually are using it. Dropbox, for example: in the first couple of years, Dropbox was written in Python, and they just distributed a binary. You didn't have to install any kind of virtualenv or anything. So of course you can do it, but it's not a first class thing as it is in Go. But still it's very possible, and I've done it myself. So I think in terms of packaging, compared to other languages, from what I hear Python is actually not that bad anymore. Also, when it comes to Go and even Node, I've had friends of mine who are very deep in these languages.
They have a lot of complaints about the packaging situation. So I think the problem with packaging is that packages are not created by the same people all the time, but by thousands, tens of thousands of people across the world in all kinds of different situations. And there are so many things that people do that it's very hard to enforce any kind of strict, normalized way to do something. But again, I think we are going a bit too much into packaging. I think we can probably agree that, to come back to the original question, integration aspects and better integration into higher level testing approaches are something that is needed.
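As a concrete illustration of the cx_Freeze route mentioned above, here is a minimal, hedged sketch of a freeze script; the application name and entry script are hypothetical, and real projects usually need extra options for data files and hidden imports.

```python
# setup_freeze.py -- minimal cx_Freeze sketch (hypothetical app name and script).
# Running "python setup_freeze.py build" produces a build/ directory containing
# an executable plus the interpreter and libraries it depends on.
from cx_Freeze import setup, Executable

setup(
    name="myapp",
    version="0.1",
    description="Example application frozen into a standalone build directory",
    executables=[Executable("app.py")],
)
```

The result is a self-contained directory you can copy to another machine rather than a single Go-style binary, but it spares end users from setting up an interpreter or virtualenv themselves.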
[01:00:09] Unknown:
That's very interesting. So is there anything that we didn't ask you that we should have or anything that you would like to add or say to our listeners before we move to the picks?
[01:00:18] Unknown:
Maybe I just want to say that it's also important, I think, to see that pytest currently has, I would say, around 10 very active people doing things, contributors and core committers. And that's something I'm very happy about, because like I said, in open source these days it's not so easy to get that many people involved. For the very popular packages, maybe, but for many projects it's very hard to even go beyond one person as the main contributor. I'm certainly still a person who's doing a lot on pytest, but there are really a number of other people who are also doing a lot. That is something I appreciate very much, and I think it's also important when you discuss a project to talk about this. So that's maybe the one thing that was a bit missing from the questions so far: the other people, and what the situation is in terms of maintenance and who is involved.
[01:01:24] Unknown:
With that, we will move on to the picks. My first pick, along the lines of packaging, is a project called Bundler, which is a play on Bundler from Ruby. It's a library that allows you to define a Gemfile-style file in your project so that you can install all of your dependencies that way and freeze them. So it's a potential alternative to the requirements.txt style format, and definitely worth taking a look at. My next pick is python-future, which is a library that makes it easier to convert a Python 2 code base to Python 3, building on the 2to3 tool as well as adding some other niceties on top of that. I recently converted a library that I rely on for my work to Python 3. It was almost there, but there were still some things that needed to be modernized.
And that helped me a lot with the manual drudgery of converting imports and print statements, etcetera. So definitely worth taking a look there, and in conjunction with that the six library, because that was very helpful as well. My next pick is the movie The Way Back, which is about some prisoners in a Siberian prison camp in World War II who manage to escape and walk about 4000 miles from Northern Siberia all the way to India, crossing the Gobi Desert and the Himalayas. It was a very well done movie with a very interesting story, so well worth a watch. I'll also pick pipdeptree, which is a utility that prints out your installed packages in a dependency hierarchy, because sometimes when you type pip freeze you get a whole bunch of packages and say, well, I didn't install all of those manually. I want to know which ones are actually top level and which ones were installed as dependencies.
And that lets you do exactly that, so you can end up with a much cleaner requirements.txt file without necessarily having to list everything that you install, because some of that may change over time. My last pick is the Rosewill BK-500, which is a Bluetooth keyboard for use with Android, and potentially also iPhone or iPad, that I recently picked up for use with my NVIDIA Shield tablet so that I can work on blog posts. I also have a contained chrooted Linux environment on my tablet, so having a physical keyboard is really useful for being able to interact with that, write blog posts, or just do some other things without necessarily having to haul out my laptop. I tried a couple of other Bluetooth keyboards before I landed on this one, and I'm really happy with it. It's got a couple of little pop-out pieces that let you use it as a stand for your tablet as well. So definitely worth checking out. And that is it for me. So, Chris, go ahead.
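Since python-future and six came up in the picks above, here is a small, hedged illustration (not taken from any particular project; the function and data are made up) of the kind of compatibility idioms those libraries provide for code that has to run on both Python 2 and Python 3.

```python
# Illustrative module using Python 2/3 compatibility idioms from the
# "future" and "six" libraries mentioned above.
from __future__ import absolute_import, division, print_function

from builtins import range  # provided by python-future on Python 2; resolves to stdlib builtins on Python 3
import six


def summarize(values):
    # six.string_types papers over the str/unicode split between Python 2 and 3.
    labels = [v for v in values if isinstance(v, six.string_types)]
    numbers = [v for v in values if not isinstance(v, six.string_types)]
    for i in range(len(labels)):
        print("label:", labels[i])
    print("mean:", sum(numbers) / len(numbers))  # true division on both versions


summarize(["a", "b", 1, 2, 3])
```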
[01:04:24] Unknown:
My first pick is, as usual, a beer pick. We were in Vermont this last weekend in the Stowe area, and there's a really great brewery there, Crop Brew Pub, and they make a Bavarian Weizen that I actually think is better than a lot of the ratings I'm seeing on a number of the beer rating sites would suggest. It's a little bit fruity, very crisp, really delicious. I like it. The next pick that I have, and I'm not going to try to pronounce the actual name for these because it sounds vaguely rude in English and I'm sure I would horribly screw it up, is Dutch pancakes. These are something I had never encountered before until, again, the Stowe area. There's a restaurant called the Dutch Pancake Cafe, and they are pancakes not at all like the traditional sort of American things that you put maple syrup on and nothing else.
These things are whole plate sized crepes, for lack of a better way to describe them, and they come savory as well as sweet. You can put various vegetables and bacon and all kinds of really good stuff in them, and they are delicious. My wife and I need to investigate making these; I'm told it's not hard. They are really a tasty treat and more Americans should try them. My last pick is a graphic novel series called Prophet. I hesitate to use the word comic because, once again, this is not men in tights romping around saving the world; there are some fairly serious themes being dealt with here. This is just a really unique series from stem to stern. The artwork is incredibly creative and beautifully drawn. The story writing is very interesting and postulates a very far future in which, I won't spoil anything, humanity has undergone some really incredible changes. Unlike a lot of science fiction I have found, where aliens are really quite a lot like us with only minor modifications, the world that Prophet inhabits is very, very different. The writing is very good, it's a really great story arc, and there's a lot of it. I've only been through the first 2 volumes, and I definitely look forward to reading more. If you like science fiction, you should definitely be reading it, whether you think of yourself as someone who reads comics or not. It's top notch. That's it for me. Holger, what do you have for us?
[01:06:51] Unknown:
Well, I wasn't actually aware. I didn't actually think about any kind of picks, because the picks are basically recommendations, like general recommendations on something? Yeah. It can be anything that you like and you think listeners might enjoy. It can be food, beer, a book you've read, a movie you've watched,
[01:07:11] Unknown:
a technology that you've used.
[01:07:14] Unknown:
I see. Then I can say that there's a book called The Utopia of Rules by David Graeber, which is also about technology and bureaucracy topics, and I enjoyed it very much. David Graeber is an anthropologist who discusses more or less a couple of thousand years of human history and comes up with a criticism of our current times, which I find very interesting, especially in terms of what we're doing with respect to bureaucracies and technology. So that's something I can recommend. The other thing which I think is very interesting, and that's something I'm also going to talk about in my keynote at EuroPython, is the InterPlanetary File System, ipfs.io.
It's kind of rethinking a bit how we are doing networking, because we are still using the old telephone model, TCP/IP, that was developed 45 years ago, at the beginning of the seventies. I think there are some very good ideas in IPFS, but you can also watch my keynote about this. So I'll just leave it at that, because like I said, I wasn't really prepared. Those are two great picks, Holger. No worries there. It's interesting that you bring up IPFS. I had not actually heard about that, but I definitely agree. We are definitely suffering
[01:08:42] Unknown:
with some really, really early decisions in networking. They just don't make any sense anymore. For instance, even IPv4 versus IPv6, right? I'm seeing people scramble, and I'm seeing various projects to deal with the fact that, because of devices with their IP stacks baked into firmware, people are just not adopting it.
[01:09:13] Unknown:
Yes. Well, one of the people who has explored this for a couple of years, and who has been very deep in TCP/IP for two or three decades, is Van Jacobson. If you search for Van Jacobson, I would have to look for the right slides, and the keyword named data networking, you will find some very interesting information on how TCP/IP actually came to exist, what it was modeled on, and why it doesn't fit anymore. And HTTP, being based on TCP/IP, suffers from similar problems. But it's a wide ranging discussion, and that's something I also want to get involved in, and I think it would be interesting for Python people to be more involved in as well.
[01:10:01] Unknown:
Well, we'd like to thank you very much, Holger, for taking the time to speak with us today. It has been a very interesting and informative discussion, and I'm sure our listeners will really appreciate listening to it.
[01:10:18] Unknown:
I guess my Twitter and blog. My Twitter is hpk42, and my blog is my full name, holgerkrekel.net.
[01:10:33] Unknown:
Well, again, thank you very much.
[01:10:35] Unknown:
Thank you, Tobias and Chris.
Introduction and Host Welcome
Interview with Holger Krekel
Holger's Background and Introduction to Python
Inspiration and Development of Pytest
Keeping Tests Fast with Pytest
Different Testing Approaches with Pytest
Black Box Testing with Pytest
Fixture System and Test Isolation
Event-Based Callbacks and Plugin System
Impact of Testing Frameworks on Test Quality
When Not to Write Unit Tests
Signs for Refactoring Tests
Converting Tests to Pytest
Maintaining Backward Compatibility
Running Non-Python Tests with Pytest
Interesting Applications and Plugins for Pytest
Ensuring Plugin Compatibility
Popularity and Adoption of Pytest
Pytest for Configuration Management and Infrastructure Testing
Future of Pytest and Testing Frameworks
Challenges in Packaging and Integration Testing
Final Thoughts and Picks