Summary
We’re delving into the complex workings of your mind this week on Podcast.__init__ with Jonathan Peirce. He tells us how he started the PsychoPy project and how it has grown in utility and popularity over the years. We discuss the ways it has been put to use in myriad psychological experiments, the inner workings of designing and executing those experiments, and what is in store for its future.
Brief Introduction
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable.
- Hired is sponsoring us this week. If you’re looking for a job as a developer or designer then Hired will bring the opportunities to you. Sign up at hired.com/podcastinit to double your signing bonus.
- Once you land a job you can check out our other sponsor Linode for running your awesome new Python apps. Check them out at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for your next project.
- You want to make sure your apps are error-free so give our last sponsor, Rollbar, a look. Rollbar is a service for tracking and aggregating your application errors so that you can find and fix the bugs in your application before your users notice they exist. Use the link rollbar.com/podcastinit to get 90 days and 300,000 errors for free on their bootstrap plan.
- Visit our site to subscribe to our show, sign up for our newsletter, read the show notes, and get in touch.
- Leaving a review on iTunes or Google Play Music makes it easier for other people to find us.
- Join our community! Visit discourse.pythonpodcast.com to help us grow and connect our wonderful audience.
- Your hosts as usual are Tobias Macey and Chris Patti.
- Today we’re interviewing Jonathan Peirce about PsychoPy, an open source application for the presentation and collection of stimuli for psychological experimentation.
Interview with Jonathan Peirce
- Introductions
- How did you get introduced to Python? – Chris
- Can you start by telling us what PsychoPy is and how the project got started? – Tobias
- How does PsychoPy compare feature wise against some of the proprietary alternatives? – Chris
- In the documentation you mention that this project is useful for the fields of psychophysics, cognitive neuroscience and experimental psychology. Can you provide some insight into how those disciplines differ and what constitutes an experiment? – Tobias
- Do you find that your users who have no previous formal programming training come up to speed with PsychoPy quickly? What are some of the challenges there? -Chris
- Can you describe the internal architecture of PsychoPy and how you approached the design? – Tobias
- How easy is it to extend PsychoPy with new types of stimulus? – Chris
- What are some interesting challenges you faced when implementing PsychoPy? – Chris
- I noticed that you support a number of output data formats, including pickle. What are some of the most popular analysis tools for users of PsychoPy? – Tobias
- Have you investigated the use of the new Feather library? – Tobias
- How is data input typically managed? Does PsychoPy support automated readings from test equipment or is that the responsibility of those conducting the experiment? – Tobias
- What are some of the most interesting experiments that you are aware of having been conducted using PsychoPy? – Chris
- While reading the docs I found the page describing the integration with the OSF (Open Science Framework) for sharing and validating an experiment and the collected data with other members of the field. Can you explain why that is beneficial to the researchers and compare it with other options such as GitHub for use within the sciences? – Tobias
- Do you have a roadmap of features that you would like to add to PsychoPy or is it largely driven by contributions from practitioners who are extending it to suit their needs? – Tobias
Keep In Touch
Picks
- Tobias
- Hackers: Heroes of the Computer Revolution by Steven Levy
- Chris
- Jon
Links
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable. Hired is sponsoring us this week. If you're looking for a job as a developer or designer, then Hired will bring the opportunities to you. Sign up at hired.com/podcastinit to double your signing bonus. Once you land a job, you can check out our other sponsor, Linode, for running your awesome new Python apps. Check them out at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for your next project. You want to make sure your apps are error free, so give our last sponsor, Rollbar, a look. Rollbar is a service for tracking and aggregating your application errors so that you can find and fix the bugs in your application before your users notice they exist.
Use the link rollbar.com/podcastinit to get 90 days and 300,000 errors tracked for free on their Bootstrap plan. You can visit our site to subscribe to our show, sign up for our newsletter, read the show notes, and get in touch. And by leaving a review on iTunes or Google Play Music, you make it easier for other people to find us. You can also join our community: visit discourse.pythonpodcast.com to help us grow and connect our wonderful audience. Your hosts as usual are Tobias Macey and Chris Patti. And today, we're interviewing Jonathan Peirce about PsychoPy, an open source application for the presentation and collection of stimuli for psychological experimentation. So, Jon, could you please introduce yourself?
[00:01:35] Unknown:
Hi. Yeah. I'm Jon Peirce. I'm a lecturer in psychology (an associate professor, I guess, in American terms) at the University of Nottingham in the UK.
[00:01:45] Unknown:
How did you get introduced to Python?
[00:01:49] Unknown:
Way back in 2002, I was a postdoc in a neuroscience lab, and I was using MATLAB to present stimuli and collect responses from participants. And I was not very sure about the direction MATLAB was going. In those days, it wasn't clear whether they were going to keep supporting Apple going forwards. In the end they did, but at the time there was real doubt about whether they would continue supporting Macs. And I started hearing about Python as this other alternative that was free and that, you know, the cool kids were having a play with. But these were really early days for Python. Right? There was no matplotlib. There was no pandas.
NumPy didn't exist, so there was a debate about whether we should use numarray or Numeric, and later on NumPy sort of came out of the burning embers of those two. So it was really early days for Python. But what it had that was really useful to me was OpenGL bindings. I could write my OpenGL code and get hardware-accelerated graphics right there in a Python script. Whereas in MATLAB, I had to write MEX files, which are like a compiled bridge from MATLAB to C. A bit like... I can't remember the name; it wasn't ctypes, anyway. Compiled files that you had to write for MATLAB, and I didn't want to write those. MATLAB actually improved in many ways, and it did support Apple Macs, and they did get wrappers for various different things. But by the time MATLAB was getting better, I was already in love with the Python syntax, and I had moved on. So, yeah, it was early days, basically, and I just kind of fell in love with the syntax of Python and decided this was the way to go.
[00:03:52] Unknown:
And can you start off by telling us what PsychoPy is and how the project got started?
[00:03:58] Unknown:
So in 2003, I moved to the UK and got my own faculty position, and I was starting up my own lab. And that's when PsychoPy began. It started off as a proof of concept where I was trying to show other people, look, this is what I could do, trying to get other people to write it, to be honest. But when I moved to the UK and started my own lab, this became the library that my lab used for its Python scripts. We would write code in other editors, and then we would use the PsychoPy library to provide some convenience functions for saving data, checking keyboards, and presenting stimuli in a window.
And gradually, that changed. I put it online; it was on SourceForge as of 2002. But people kept sending me messages saying, hi, I saw your thing, I downloaded it, it was cool, but I don't understand: where is it? I installed it, and it's not in my start menu. Right? Because it was a library, and people didn't know what that meant exactly. People were also struggling to get all the dependencies for Python and that sort of thing. So I turned it into an app, which was basically just a code editor with all of its dependencies built in, including its own copy of Python. People could install an app, write their PsychoPy scripts using the library, and get all of the dependencies built in. And then it was in their start menu, and they got it. Right? So gradually, people started using that, around 2005, 2006. We had a couple hundred users maybe, Python enthusiasts.
And then the thing that started happening was that I was getting frustrated because I was teaching undergraduates who didn't know how to program and wouldn't want to learn Python. I was teaching them how to use other software packages that had a graphical interface, and I started getting frustrated by that, and the students were frustrated by that too. So around 2008, 2009, I decided I was going to write a graphical interface on top of this library that would write your script for you. You would build your experiment graphically, or you would write the script yourself if you wanted to. But you could construct it graphically in the builder, and it would compile the Python script for you from your visual representation of the experiment. And that was the thing that really changed PsychoPy's user numbers, quite drastically.
That enabled it to be used by an awful lot of people who didn't want to learn a new language.
[00:06:36] Unknown:
And can the output of that builder interface then be modified further?
[00:06:43] Unknown:
So you can either just press run, or you can compile it to a script and hack the script. But, also, you can insert custom code of your own that gets executed at various points: on every screen refresh, or just at the beginning of the experiment, or wherever you like. So you can customize it that way as well, because once you've compiled the script and hacked the script, you can't go back to the graphical interface from that. Right? So the way of providing something where you can edit code within the graphical interface was these inline code components. And all of this was possible because Python does great text handling. Right? We could build a wxWidgets application that would run anywhere, and it could just create the appropriate text because of the great text handling of Python. So that part took a lot of thinking, I guess, but now when I look back at it, it all kind of just hangs together. It makes sense.
[00:07:56] Unknown:
So how does PsychoPy compare feature wise against some of the proprietary alternatives?
[00:08:01] Unknown:
So I think the key there is that Python, interpreted languages, and the open source world have meant that nowadays software can be written by the people who use the software. Right? I'm not a software engineer, but I know how an experiment should run. I know how a scientist will think of their experiment. I know even how an undergraduate, a very junior scientist, will think about their experiment. So in terms of ease of use, that's where we win. That's where the open source community, where enthusiasts and users develop the software themselves, beats the traditional model of trying to communicate all that to a professional software engineer who's never personally run an experiment and doesn't know what that means. Right? I can actually build the thing the way that I want it to run and the way that I think about my experiment. And I think that's where we win in terms of features. And in terms of high-end features, yeah, we can do a lot of the fancy stuff as well. Again, because of the nature of open source software and the fact that you've got lots of enthusiasts contributing, we've now got 50 or 60 people who've added some level of code, and three or four of us who've contributed a lot. Those are various different enthusiasts who will each add some high-end feature of their own. Whereas commercial software can't just suddenly add a new thing really quickly and have it out there. So in terms of features, we do really well, I think.
And in terms of usability, that's where I think we're doing extremely well, to be honest. Where we don't do so well is in bugs and, you know, reacting to that sort of thing. We're all busy, and we're not professional computer scientists. So sometimes we've added features, but we've not been fixing the bugs for which we've got workarounds. Right? I know there's this annoying little glitch, but I know how to avoid it, and so I just do. And sometimes that's the downside of these sorts of projects versus a commercial enterprise.
We understand the users better, but we also don't have as much time to fix bugs, and we don't spend hours and hours testing things. So I guess those are the two main differences.
[00:10:30] Unknown:
In the documentation, you mention that your project is useful for the fields of psychophysics, cognitive neuroscience, and experimental psychology. And I'm wondering if you can provide some background on how those disciplines differ and what constitutes an experiment for each of those fields respectively.
[00:10:48] Unknown:
Sure. Sure. So my own work is in vision science, and psychophysics is this area of psychology where we treat the brain basically like an engineering or physics project: we map its inputs and its outputs and that sort of thing. It's a very dull area of psychology to many people, but that's what I do. And that's where PsychoPy has its origins: presenting low-level stimuli very precisely, visual ones in my case. So measuring your ability to detect something that's barely visible, for instance, that sort of experiment. Then cognitive psychology would have an experiment where, say, you would test someone's memory for a set of objects, and we'd measure what sorts of things impair your memory and what things make you, I don't know, respond faster.
And linguistics blends into this as well. Something like the Stroop task is very well known in this sort of field, where you present the word red written in green text versus the word red written in red text. Participants have to report the color of the letters, but they're slower to do that if the letters are spelling out the wrong word. Okay? So you're slower to respond that you've got red letters in front of you if the red letters spell the word green. That would be a sort of cognitive psychology or linguistics type of study. And, you know, again, it revolves around present a stimulus, measure the response, measure your reaction time to that stimulus.
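To make the Stroop design concrete, here is a minimal sketch of the kind of trial list such an experiment needs: every word/ink pairing, tagged congruent or incongruent, with the ink color as the correct response. The function name and dictionary keys are invented for illustration; this is not PsychoPy's actual data format.

```python
import itertools
import random

WORDS = ["red", "green", "blue"]


def make_stroop_trials(repeats=2, seed=0):
    """Build a shuffled list of Stroop trials.

    Each trial pairs a word with an ink colour; the correct response
    is always the ink colour, and a trial is tagged congruent when
    the word and the ink match.
    """
    trials = []
    for word, ink in itertools.product(WORDS, WORDS):
        trials.append({
            "word": word,
            "ink": ink,
            "correct_response": ink,
            "condition": "congruent" if word == ink else "incongruent",
        })
    trials = trials * repeats          # repeat the full design
    random.Random(seed).shuffle(trials)  # randomise presentation order
    return trials


trials = make_stroop_trials()
print(len(trials))  # 3 words x 3 inks x 2 repeats = 18
```

An experiment loop would then present each trial's word in its ink color and compare the keypress against `correct_response`, expecting slower reaction times on the incongruent trials.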
Neuroscience might be similar sorts of experiments, but now you're doing them in an MR scanner, measuring the fMRI BOLD signal of a participant while they see different stimuli. So there are various different fields within what we'd overall call the behavioral sciences, and they all tend to lead to the same sort of experiment, where you present a stimulus and you get a response. There's also social psychology, where maybe you present images and then ask people to rate them, or you ask people's opinions in some sort of way. And, again, you can use the same software to run all of those different sorts of studies.
[00:13:12] Unknown:
So do you find that your users who have no previous formal programming training come up to speed with PsychoPy quickly? And what are some of the challenges there?
[00:13:21] Unknown:
Yeah. For some of our users, that's a real challenge. Those in more neuroscience-based areas are probably fairly used to programming, because they'll need to do it for their analysis anyway, and they've been used to other things like MATLAB. So we've got a huge spread. People in psychology and people in linguistics are typically less likely to be programmers beforehand. So we've got this quite broad church of people from all different backgrounds, and that's where having both the graphical interface and the scripting interface really helps. If you prefer to code and want control over absolutely every aspect of your experiment, then you should write your script yourself, because the builder is trying to do things generally and can't implement every feature that you can write in code.
But then for those less technical users, they seem to find the builder interface very easy to use. And I think one of the nice things, actually, is that along the way they learn a little bit of the logic of programming as well, and maybe they didn't even realize they were doing that. The builder is kind of set up that way. You know, it has objects and loops, and objects have parameters, and those can be repeated and changed at various different times. They're essentially learning the basics of programming, but they're doing it in a graphical interface rather than line by line of code. Also, as they start to make those experiments more advanced, they'll want to add in little snippets of code. And it might be the odd line or two that just, I don't know, makes the next stimulus depend on the previous response.
And that's something that requires a line or two of Python code. So they're comfortable with adding a line or two, and they don't consider that programming; there's nothing to be frightened of. But, you know, I think with a lot of people, if you tell them that they're going to learn programming, that's just too terrifying a task. Or if you tell them they're going to write a script, it's too big a thing for them to imagine. Whereas if they start off just writing the odd line or two here and there, and learning to hack a little bit on something that's already done, then they get sort of eased in. I think we do both things. With the builder, we manage to support people who don't really program and who possibly never will. But we also ease people into programming a little bit as well.
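The "line or two" that makes the next stimulus depend on the previous response is often just a simple staircase rule. A toy sketch (the function and variable names are hypothetical, not PsychoPy's API):

```python
def update_difficulty(level, was_correct, lo=1, hi=10):
    """One-up/one-down staircase: harder after a correct response,
    easier after an error, clamped to the range [lo, hi]."""
    level += 1 if was_correct else -1
    return max(lo, min(hi, level))


# Simulate a short run of responses starting at difficulty 5.
level = 5
for was_correct in [True, True, False, True, False, False]:
    level = update_difficulty(level, was_correct)
print(level)  # 5 +1 +1 -1 +1 -1 -1 = 5
```

Dropped into a builder code component, a rule like this would run once per trial, with `was_correct` coming from the keyboard component's stored response.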
I guess also, I mean, I now run quite a lot of workshops and that sort of thing, and we spend a lot of time training people. I've taught people right the way from undergraduates to PhD students to professors how to program in Python, to some level. And I guess it comes back again to the fact that I'm not a hardcore, computer-science-trained programmer. You know, my background is a degree in psychology. And I can empathize a bit more with a novice programmer, because I remember what it was like to be one. Maybe it's a bit harder for a more sophisticated programmer to train people who don't have that sort of background. You know? They can't imagine what it's like not to understand the difference between a variable and a string and that sort of thing. So maybe it's my lack of computer science background that means we can do this a little bit better.
[00:16:45] Unknown:
That's great. I think it really speaks to one of Python's core strengths. It really seems like, for whatever reason, whatever its attributes, the language allows itself to be picked up comparatively easily, at least compared to other programming languages, by folks whose primary occupation or focus is not programming. So that's fantastic.
[00:17:09] Unknown:
My experience is that, you know, at the very starting point, it's actually easier with an absolute beginner to teach them MATLAB for the first few hours, where they don't have to worry about things like whether something is an integer or a float. Right? It just does the right thing, and they don't have to think about different types. They don't have to import their libraries and that sort of thing. But very quickly, I think you're right: after that first couple of hours, they get it. And if you then try to show them more of MATLAB, they get very frustrated that it can't do all of the things that they found so easy in Python. So, yeah, I've had a great experience with teaching people Python.
And if they've not learned a language before, they they get into Python usually fairly quickly.
[00:18:03] Unknown:
Yeah. And I think that your comment about having the ability to work against a known good piece of software and then just add some modifications here and there is definitely a great way to introduce people to some of the different concepts, because they don't have to start off staring at a blank page asking, okay, well, where do I begin? It's already begun; they just need to continue it along, and that will give them greater confidence when they do have to start with a blank slate.
[00:18:33] Unknown:
So the other thing that we do within the application, as I say, is provide this code editor. You know, it's not as sophisticated as, say, Spyder, or lots of the other more professional dedicated editors, but it has a menu that says demos. Users new to programming can look at the demos menu and see a whole range of relatively small, digestible demos that each show one aspect of the software. And there's a big green running man that says run. You know? So although a lot of users will want to go on, or more experienced users coming from other programming languages will want a more feature-rich editor, a new user gets something that's very simple with a green running man, and it just works. That's the aim. So it provides them, again, with a very simple inroad. And lots of other users, the sophisticated programmers, will just use PsychoPy as a library, with their own Python installation and their own editors, and that's great too.
[00:19:44] Unknown:
Yeah. And I'm sure, too, that having demos which tie into a discipline they are familiar with makes it easier for them to understand what's actually happening in the Python code itself.
[00:19:57] Unknown:
Yeah. Yeah. So we've got a range, from very small unitary things like how do I present a sound, up to full-fledged experiments. We've got quite a range of levels there.
[00:20:12] Unknown:
And can you describe the internal architecture of PsychoPy and how you approach the design and how it's evolved over time?
[00:20:20] Unknown:
Yeah. So evolved over time is pretty much the key phrase there. There have been moments where we've really designed the architecture, but at the core, it's this library. That's the basic thing. It's a library with stimuli, which might be visual or auditory, and there are classes for handling your trials and saving your data and that sort of thing. In terms of architecture, the key step was the development of the builder and how that was going to work. There was an abstraction layer that I started, and that was very much designed. I started thinking about what the key aspects of an experiment are and how that should look in terms of a Python dictionary, essentially.
And then how do we save that dictionary to a file, and how do we represent that dictionary visually? What we've got is that an experiment can be built in terms of a number of components, like a text object or a keyboard or an image, and those get combined into routines. Those routines are basically just a list of components, but each component knows when within that routine it should start, when it should stop, and what its parameters are. Then we've got a flow that combines those routines in various different ways, and that's essentially the whole thing. The flow is a list of routines that happen in a certain order, and that's all there is to it. It calls each routine and says, write your code. The routine then calls each of its components and says, write your code, and then goes on to the next routine in the flow list.
So by having these units that can be combined in various different ways, it turns out that you can make quite a rich structure of experiments. You can have loops on the flow that cause something to repeat, and, of course, in terms of the Python code, that's just adding an indent level, essentially. The end of the loop on your flow just dedents the code again, and we carry on to the next piece of code. So in terms of the graphical interface writing the code, it was all relatively straightforward. There's this abstraction layer, which is a dictionary containing some lists.
And that gets saved to a file. We created an open-standard experiment file, which is an XML-formatted file that essentially just stores these lists and dictionaries with their various parameter types. It's agnostic about what software is using it; it could potentially be loaded into another piece of software, although I don't think anyone has written anything that does that yet. It literally just defines what sequences of stimuli, keyboards, etcetera, get used to create an experiment and how those iterate to create a full study. It's really a graphical interface that writes the code. That's the key part: the builder writes your script for you, and it literally writes a Python script. Going forwards, and I think we'll come on to this later on, we're actually now having it output a JavaScript script instead. This builder interface that defines your experiment doesn't need to care what language the script it writes is in. It just has these chunks of code, and those chunks of code could be in any syntax you choose.
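The architecture described above, a flow that asks each routine to write its code, routines that ask their components, and loops that are just an extra indent level, can be sketched in a few lines. The class names and the code each element emits are invented for illustration; PsychoPy's real builder is far richer.

```python
class Component:
    """One element of a routine (e.g. a text or keyboard component).
    It only knows how to emit its own lines of code."""
    def __init__(self, name):
        self.name = name

    def write_code(self, indent):
        return ["    " * indent + f"{self.name}.draw()"]


class Routine:
    """An ordered list of components; asks each one to write its code."""
    def __init__(self, name, components):
        self.name = name
        self.components = components

    def write_code(self, indent):
        lines = ["    " * indent + f"# routine: {self.name}"]
        for comp in self.components:
            lines.extend(comp.write_code(indent))
        return lines


class Loop:
    """A repeat around part of the flow; in the generated script it is
    just a for-loop, i.e. one extra indent level for its body."""
    def __init__(self, n_reps, body):
        self.n_reps = n_reps
        self.body = body

    def write_code(self, indent):
        lines = ["    " * indent + f"for trial in range({self.n_reps}):"]
        for item in self.body:
            lines.extend(item.write_code(indent + 1))
        return lines


def compile_flow(flow):
    """The flow is a list of routines/loops executed in order."""
    lines = []
    for item in flow:
        lines.extend(item.write_code(0))
    return "\n".join(lines)


flow = [
    Routine("instructions", [Component("instr_text")]),
    Loop(10, [Routine("trial", [Component("stimulus"),
                                Component("keyboard")])]),
]
print(compile_flow(flow))
```

Because each element only emits text, swapping the emitted strings for JavaScript snippets (as discussed later in the episode) would leave the whole structure unchanged, which is exactly the point of the abstraction.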
[00:24:06] Unknown:
So how easy is it to extend PsychoPy with new types of stimulus?
[00:24:11] Unknown:
Super easy. That's straightforward. In the library, you can add your own Python classes or subclass existing ones; that's all very straightforward. The rendering code is all pure OpenGL, so some people have, I don't know, done some transforms. For instance, they've got some particularly unusual display, like a huge curved screen rather than a regular computer display, so they've had to subclass the window in order to add an alternative spatial transform. All of those sorts of things are really easy to do. Or just adding a new stimulus: someone recently contributed a new stimulus that is essentially a visual button people can click on within the regular OpenGL screen. So all of those things can be added very easily in the library. But, also, the builder is completely modular. You can add your own components, and all a component needs to know is what its pieces of code are that need to get executed at the beginning of the experiment, at the beginning of a trial, on every frame, at the end of a trial, etcetera. So it has six pieces of code. And if it knows what its initialization code is, then the builder will call that initialization code for each of the components in turn.
And so all you've got to write for your new component is how your object should be initialized, and so on, and you're done. So, yeah, it's really very extensible. And some people then go on and share the extensions they've made with the rest of the community. I'm sure loads of other users are kind of shy about sharing their code and will keep their component just in their own lab, and that's fine too. You know? So it's being widely extended.
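A builder component along the lines he describes might look like this: a base class with one hook per phase of the experiment, where a new component only overrides the hooks it cares about. The hook and class names here are illustrative guesses, not PsychoPy's real API.

```python
class BaseComponent:
    """A builder component declares the code it contributes at each
    phase; the builder calls every hook on every component in turn.
    A component that doesn't care about a phase contributes nothing."""
    def write_init_code(self):
        return []

    def write_routine_start_code(self):
        return []

    def write_frame_code(self):
        return []

    def write_routine_end_code(self):
        return []

    def write_experiment_end_code(self):
        return []


class SoundComponent(BaseComponent):
    """Hypothetical new component: play a sound when its routine
    starts and stop it when the routine ends."""
    def __init__(self, name, wav):
        self.name, self.wav = name, wav

    def write_init_code(self):
        return [f"{self.name} = Sound({self.wav!r})"]

    def write_routine_start_code(self):
        return [f"{self.name}.play()"]

    def write_routine_end_code(self):
        return [f"{self.name}.stop()"]


beep = SoundComponent("beep", "beep.wav")
print(beep.write_init_code()[0])  # beep = Sound('beep.wav')
```

The builder can then treat every component uniformly: collect the init code from all of them, then the per-frame code, and so on, without knowing anything about what each component does.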
[00:26:01] Unknown:
That's great. So what are some interesting challenges you faced when implementing PsychoPy?
[00:26:08] Unknown:
The interesting intellectual challenge has been how to create this sort of abstraction of an experiment that isn't about lines of code: working out what that should look like in a way that's really general, and that would be easy enough that a first-year undergraduate in psychology who's a technophobe can still create their study, but also powerful enough that a professor in neuroscience is quite happy to use the software to create their fancy experiment as well. So, generating something sufficiently general that it could run a real scientific study, but also sufficiently easy that a new user wasn't terrified. That's been the big interesting challenge, and it's been a lot of fun trying to get those things right. The more frustrating challenges have been trying to support multiple platforms and trying to keep track of it all. We have lots of dependencies. You know, I didn't want to write my own library for loading videos and another one for presenting, I don't know, some other sound stimulus or something like that. So we used a lot of dependencies, and those keep changing, and that's quite frustrating; I spend a lot of time on it.
Each time we put out a new distribution, the new large application, I have to go and dig through what's changed. I don't know if pandas has changed one of its calls or something like that, and then something else has changed, or something has become 64-bit only. So maintaining those has been quite hard, and I'm afraid we do have releases where we push out the package and then someone comes and says, oh, but this library has changed, or one of my dependencies has disappeared out of the package, and it's because I've had to rebuild everything from scratch at some point. That's been the hardest part and the most frustrating part as a developer: keeping on top of many different platforms and trying to test on all of them. We do have a test suite, but a test suite only really works for the library. It's not very good for an application, because testing a sequence of button clicks on arbitrary display shapes is a really hard thing to do. So a lot of the application side isn't covered by the test suite. And that means that, I'm afraid, sometimes we release things where I haven't gone and pressed all of the buttons on Linux and on Windows 7 and on Windows 10 and on 64-bit Windows 10. You know what I mean? So sometimes we break things that way, because I haven't been able to test all those combinations. That's quite frustrating for me and for the users.
But on the whole, I think the ship is gradually moving forwards, not backwards, and that's great. I should point out that I sometimes say "I", but the other users and the other contributors are huge in this. A lot of people will submit quick fixes. Users will notice that something's not working, and rather than just saying it's broken, they'll come back and say, hey, I've discovered that this doesn't work anymore, I've dug around, and it's because pandas now needs this new argument.
So here's the change on GitHub, and I can just merge their pull request. GitHub has made my life so much easier in that respect. It means it's genuinely a community now, so I should stop saying "I" altogether. It was just me for about seven years, but now, when you look at the contributions, some of the releases don't have any contributions from me at all. I merely put it all together and release it. So, yeah, the community has grown really strong around it. Well, that's a testament to your hard work on the project, that you made it a success that made people want to contribute.
[00:30:11] Unknown:
One thing, real quick: you mentioned previously that in future releases PsychoPy may emit JavaScript. Is that at least in part an attempt to deal with some of these difficulties in integrating third-party multimedia libraries?
[00:30:26] Unknown:
That wasn't the aim, but it has crossed my mind that increasingly some of those problems will get solved that way. Running a study online in a browser is something people have been asking for as a feature for a very long time, and I'd always just written back and said no. PsychoPy is about hardware-accelerated graphics, and it needs that so that an interpreted language can do things in real time. Timing precision is really important to us. We need to know, and most pieces of software wouldn't know, exactly when your screen refreshed, for instance, and whether I managed to get my stimulus drawn before the screen refresh occurred, because if not, then our timing went wrong. So I always pushed back on the users requesting a web interface, because the browser couldn't do that. It didn't have access to the graphics hardware to perform at that level of precision. So I always just said no. And then relatively recently WebGL has changed all of that, and even the HTML5 canvas to some extent. With WebGL you can now actually use the OpenGL shading language in a browser; it's crazy what you can do. Suddenly all of the performance issues that I thought we would have are disappearing. The performance will probably never be quite as good in terms of how quickly you can get responses, and we'll never be able to connect to hardware like parallel ports from a browser. But for an awful lot of studies, the web suddenly has the performance that we might be looking for. It was that change that was really driving this potential of generating JavaScript.
As a side issue, I've then started looking at the way those things are done, and particularly at a library called Pixi.js that I've been looking into a lot lately. Wow, it makes all of that media stuff really easy, and a lot of the things that have sometimes caused me a lot of pain might get a lot easier. But it will require quite a lot of testing as well. When you ask a sound to be played in your browser, how long does it actually take for noise to come out of the speakers? It will take a lot of testing for us to know exactly how good that is, and it may be that the performance is still not that great. We'll have to look into that a lot.
[00:33:01] Unknown:
And I imagine that being able to target browsers using the same Builder interface would potentially drastically increase the scale of experiments that can be conducted, because you can just publish them to a website and have people interact with the experiment without necessarily having to be in a lab environment. Exactly. Yeah, it's a game changer for two reasons, actually. One is that
[00:33:27] Unknown:
you can send an email around and get people to participate that you've never met, that you didn't have to bring into a lab, individually get to sign the forms, sit down, and explain things to. You just send them an email. Technologies like Mechanical Turk have been quite big recently for getting people enrolled in a study and paid by Amazon, via Mechanical Turk, to be a participant, and some of these studies are having tens of thousands of participants. Whereas if you were going to run those participants yourself in your lab, you'd have maybe a hundred if you had enough time. So, yeah, it's a game changer from that perspective. But it also means that if we're running a study in a web browser, we can run it on any device. Right now, PsychoPy needs something that will support Python and Pyglet and OpenGL and all of those dependent libraries.
And that means it won't run on an iPad, and it will only run on a tablet if you want to go and install Linux or Windows on your tablet, or use a Windows Surface, I guess. So suddenly, move it to a browser, move it to a website, and your existing experiment that you created in Builder can now be compiled to a website and run on any device, whether it's a phone, Android, or iOS. All it needs is something that supports JavaScript and a web browser. So, yeah, it's a game changer in many ways, and it'll be very exciting to see how that pans out.
[00:35:01] Unknown:
Yeah. And I think another thing worth mentioning is that with that greater reach and potentially much larger audience, I imagine it's easier to demonstrate statistical significance, given that you have such a larger corpus of data to work with.
[00:35:16] Unknown:
You can dig around and find things more with very large numbers. I'm actually a little bit concerned on that score, because you could also find yourself reporting a lot of noise: tiny, tiny effects that wouldn't have been significant when you looked at a few hundred people become significant in tens of thousands of people. But if an effect is so small that you needed tens of thousands of people to measure it, maybe it's not big enough that we should be caring about it. That's a personal worry; it's a scientific issue.
[00:35:48] Unknown:
In looking through the documentation, I noticed that you support a number of different formats for the output data, including pickle. I'm wondering what some of the most popular analysis tools are for users of PsychoPy.
[00:35:59] Unknown:
Yeah, sure. So the pickle file is something that we save as a sort of safety net, and it contains an awful lot of information. It contains a copy of your experiment: it's literally the Python object for the study that you ran, as the data were collected, just pickled. That's a pretty useful thing, because it can, for instance, be used later on to save out some of the other formats. If you accidentally ruined your CSV file, which I think is what most people would use for their analysis, you can go back to the pickle file and recreate the CSV from it, because it's just a copy of your experiment. Or if you forgot one of the parameters of your study and accidentally didn't save it in the CSV file, you can go back and dig out what the actual parameter was from the script that has been saved within your pickle. So that's a sort of belt-and-braces data format that I suspect most people aren't using for their real analysis.
Although you can load up that pickle file and use NumPy or pandas to go and analyze it; the data are saved in NumPy arrays within that file, so you could certainly work that way if you like. We also output a CSV file, a comma-separated-value file, that most people would use: a trial-by-trial record of what happened. But you have to tell PsychoPy what to put into the CSV file, which variables are of interest to you, and it's going to put those as headers in the file. Builder will try to do that for you automatically, but maybe there's something you wanted to study within your experiment that you need to tell Builder about manually.
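To make that trial-by-trial CSV idea concrete, here is a minimal sketch of loading such a file with pandas and summarizing reaction times by condition. The column names (`condition`, `response.rt`) are illustrative assumptions, not guaranteed to match what Builder writes for your experiment:

```python
import io
import pandas as pd

# Hypothetical trial-by-trial CSV, in the spirit of what Builder saves;
# the exact column names depend on how you named things in your experiment.
csv_text = """condition,response.keys,response.rt
congruent,left,0.412
congruent,right,0.388
incongruent,left,0.603
incongruent,right,0.571
"""

# In real use you would pass a filename instead of a StringIO buffer.
trials = pd.read_csv(io.StringIO(csv_text))
mean_rt = trials.groupby("condition")["response.rt"].mean()
print(mean_rt)
```

The same file loads just as easily in R with `read.csv`, which is exactly the design goal mentioned later in the conversation.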
Occasionally people will get that wrong, and that's where the log file comes in. We also output a log file, which is just an automated stream of the events happening in your study as they occur: I've just changed the stimulus to this, I've just done that, we've just started a new trial, the subject has just pressed this button. Now, that's not always easy to analyze, because it's a stream of way too much information in chronological order, and it doesn't know the semantics of your study. But it's very useful if you've forgotten to do something, because you can go back and check all of the details, and in most cases you could write a script that parsed this nasty log file and dug out the data that were critical to you. There's also an Excel file, which we basically don't recommend anyone use, but you can save your data that way if you so desire.
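A sketch of the kind of script described here, pulling key presses back out of a chronological log. The three-column "time, level, message" tab-separated layout is an assumption about the log format; adjust the parsing for your actual files:

```python
# Toy log in an assumed "time<TAB>level<TAB>message" layout.
log_text = """\
1.2503\tEXP\tNew trial (rep=0, index=0)
1.7421\tDATA\tKeypress: left
3.0180\tEXP\tNew trial (rep=1, index=1)
3.5522\tDATA\tKeypress: right
"""

keypresses = []
for line in log_text.splitlines():
    # Split into at most three fields so messages may contain tabs.
    t, level, message = line.split("\t", 2)
    if message.startswith("Keypress:"):
        keypresses.append((float(t), message.split(":", 1)[1].strip()))

print(keypresses)
```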
In terms of what people are using for analysis, I guess a lot of our users are in psychology, and they'd be using SPSS. We would naturally try to encourage people to use either Python or R, so they might load their data in pandas, or they might load their data as a table in R. The CSV file is designed so that either of those packages would let you load your data in one very easy step and start doing analysis. That's what I would naturally imagine most people do. I used to do my analysis on the pickle file using NumPy arrays, because PsychoPy predates pandas.
So pandas was a new thing to me, and I subsequently learned it's really useful.
[00:39:41] Unknown:
Yeah. Pandas is a somewhat recent entry on the scene, though it seems to have gained a lot of following because it's fairly easy to use, two-dimensional arrays are a fairly large portion of the problem space that people were using NumPy for, and it adds a lot of niceties on top of that in terms of the dataframe format. I was going to ask whether you've looked into the use of the Feather library for adding that interoperability between Python and R, for people who want to do their analysis in R?
[00:40:12] Unknown:
I haven't, actually. I haven't personally worked with the Feather library at all, and I haven't really used R myself. I'm still using Python, because I've now got all of my scripts that will do various scientific analyses, like bootstrapping or simulations, in Python, so I've never had the need to go and learn R. But a number of our other users are certainly using R a great deal, and I suspect they will be digging around in Feather as well. I just haven't got that far myself.
[00:40:47] Unknown:
And how is data input typically managed? I'm wondering if PsychoPy supports automated readings from test equipment, or if it's the responsibility of the people conducting the experiment to control the data input.
[00:41:00] Unknown:
Typically it comes from the experimenter writing the code to handle the inputs. A lot of it is obviously very straightforward: most of the inputs are coming from things like keyboards and microphones and mice, so that's all easy enough. If you've got some hardware, like the MRI I mentioned before, then for an fMRI study you probably need to monitor the trigger pulses that the device is sending out. A scanner will send out a pulse every, say, two seconds to announce that it's just about to start creating a new image of the brain, and you need to detect when those occur. But those are relatively straightforward things: they're either a serial input or a parallel port input, or sometimes the scanner will simulate pressing a key on the keyboard.
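Waiting for a scanner trigger over a serial line might look like the sketch below. It uses a duck-typed port object so it runs without hardware; with pyserial you would pass `serial.Serial("/dev/ttyUSB0", 9600)` instead. The port name and the `b"5"` trigger byte are assumptions about a particular scanner setup:

```python
def wait_for_trigger(port, trigger=b"5"):
    """Block until the trigger byte arrives; return how many bytes were skipped."""
    skipped = 0
    while True:
        byte = port.read(1)
        if byte == trigger:
            return skipped
        skipped += 1

class FakePort:
    """Stands in for serial.Serial in this sketch."""
    def __init__(self, data):
        self.data = list(data)
    def read(self, n=1):
        return bytes([self.data.pop(0)])

port = FakePort(b"xx5")
print(wait_for_trigger(port))  # prints 2: two non-trigger bytes were skipped
```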
So those are all relatively straightforward, and it does come down to the user writing their code to interface with them. Depending on the sort of hardware, we might have it built in, and for some you might need to add your own. Eye tracking is another good example: we've got support for, I don't know, six or seven different eye trackers, and if you're using one of those, you can plug into a sort of unified API and say, on each frame or on each trial, find out what the position of the eyes was at this point, using one set of function calls. So we've got built-in support for quite a few. But if you've got your own eye tracker that's different from one of those, then you might need to write your own custom Python code. Hopefully your hardware supplier will support Python as well. So it depends on what particular devices you're using.
[00:42:59] Unknown:
And I imagine that because timing is of such critical importance in these experiments, it's certainly necessary to make sure that those data inputs are being fed through the PsychoPy program, so that you can correlate the input times with the stimulus times?
[00:43:16] Unknown:
Yeah. So we've got essentially two different ways of doing this. One is just using Pyglet, the traditional graphical library that we use; it's like pygame, for those who don't know it. We might just record key presses using that, which is really easy to do but isn't actually super precise. Typically you'd be updating your stimulus on every frame, which means you're tied to a refresh rate of, say, 60 Hz. So if you're checking the keyboard once per frame as part of that flow of code, then of course you're only checking the keyboard every 16 milliseconds or so.
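A toy illustration (not PsychoPy code) of why polling once per frame limits precision: at 60 Hz, the reported response time can only ever be a multiple of the roughly 16.7 ms frame period, so a key press is not seen until the next poll after it happens:

```python
import math

FRAME = 1 / 60  # seconds between keyboard polls on a 60 Hz display

def reported_rt(true_rt, frame=FRAME):
    """Time of the first poll at or after the true key press."""
    return math.ceil(true_rt / frame) * frame

# A true 105 ms press isn't seen until the 7th frame, ~116.7 ms.
print(round(reported_rt(0.105) * 1000, 1))  # prints 116.7
```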
There's an alternative system that we have, a library called ioHub, which is now very much integrated into PsychoPy; it's really a part of PsychoPy. That was written by a contributor who comes from the eye-tracking world, Sol Simpson. He wrote it to monitor hardware in a separate process and pass events back to the main PsychoPy process (it's a Python process as well). This avoids the global interpreter lock and allows you to run your hardware acquisition, your data acquisition, and your outputs on a separate process, on a separate core, assuming you've got a multi-core machine. Now, that has methods for synchronizing in both directions. It can save its own data file; it supports a data format of its own based on HDF5, and it can stream data at a kilohertz, say, which is what some eye trackers will produce, down to an HDF5 file. But you can also poll it from the main PsychoPy process to get a snapshot of what's going on, for instance.
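The ioHub idea in miniature: acquisition runs in a second process, so the main process's global interpreter lock can't stall it, and events stream back over a queue. This is a generic sketch of the pattern, not ioHub's actual API:

```python
import multiprocessing as mp
import time

def monitor(queue, n_samples):
    """Stand-in hardware poller: emits timestamped samples, then a sentinel."""
    for i in range(n_samples):
        queue.put((time.time(), "sample", i))
    queue.put(None)  # sentinel: acquisition finished

def collect(n_samples=5):
    """Main-process side: start the monitor and drain its event queue."""
    queue = mp.Queue()
    proc = mp.Process(target=monitor, args=(queue, n_samples))
    proc.start()
    events = []
    while (item := queue.get()) is not None:
        events.append(item)
    proc.join()
    return events

if __name__ == "__main__":
    print(len(collect()))  # prints 5
```

In the real system the worker would also write its own HDF5 file at the device's native rate while the main process takes occasional snapshots.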
So we have ways of doing that. But using ioHub is a lot more intensive: the user has to work a little bit harder to get it all working. It's more complicated, running a separate process and sending events backwards and forwards. So for simple studies, I think people still tend to use the basic getKeys kind of calls, even though those aren't quite as temporally precise as using ioHub.
[00:45:58] Unknown:
It strikes me that this kind of framework, with the eye tracking and millisecond-scale input and rich visual display capabilities, could also be really useful outside the field of psychology, for things like UX researchers testing new input paradigms and the like?
[00:46:20] Unknown:
Yeah. I don't have statistics on where this is being used outside psychology, which I'd really like to know about, actually. Anyone who's using PsychoPy outside the world of academia, I'd really like to hear from you about how you're using it, because it helps me persuade people that this is a useful endeavor outside academia. Certainly some people are using it in other things, like clinical research and, I believe, in developing hardware. But I get to track how many people are using PsychoPy, not in what way, so I just don't have very much information about that. Certainly I've been amazed at the variety of experiments and the variety of ways that people have used it. I had in mind a very limited set of uses, particularly for the Builder, and people have gone way beyond what I ever imagined, creating quite elaborate studies that I hadn't really pictured being done.
[00:47:22] Unknown:
What are some of the most interesting experiments that you're aware of having been conducted using PsychoPy?
[00:47:28] Unknown:
Oh, goodness. That's a really hard question. Thousands of studies have been run, but I don't follow what they all are, so it's a very hard question to answer. One of the demos that we actually supply, which I think is really fun, is called the balloon analogue risk task. It's a way of measuring your risk-taking behavior by allowing you to blow up a virtual balloon that will pop at some point, and the question is how far you keep blowing it before it pops. You get more score the bigger the balloon gets, but if it pops, you get nothing. So it measures your risk taking by investigating how far you blow up this balloon. That's just one of the demos in PsychoPy, and it's actually kind of a neat way of showing the level of flexibility that we have: it's an interactive study, not a traditional "I present a stimulus, I collect a response".
You're growing this balloon. So that demos a little of the range of things that can be done, and maybe one of the more interesting types of studies. There are also things like the implicit association test, or IAT. PsychoPy is used a fair bit for that in various different labs. It's a sort of test where someone has an overt opinion of a particular stimulus, they say that they like this thing or dislike that thing, and you're trying to work out if that's true. So you present images of various different stimuli and ask people to respond about likes and dislikes, and you essentially measure whether, when they're wanting to respond that they like this thing, they actually don't. You measure the reaction time of their response in a speeded task, in a rapid stream of stimuli.
When you intend to respond one way but actually feel something different, it delays your response. So that would be an implicit association test, and that's one of the myriad things that PsychoPy is used for. But it's also being used in so many different brain scanning studies and brain-computer interface studies as well. People are using it while measuring EEG, electrical signals on the scalp, and trying to control the stimulus using the electrical activity on the scalp. That's called a brain-computer interface, or BCI.
Some people are certainly using PsychoPy for that sort of thing as well, because it can present stimuli very rapidly based on inputs. So there's quite a range of different things.
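The scoring logic of the balloon analogue risk task described above can be sketched in a few lines. This is illustrative, not PsychoPy's actual demo code: each pump adds provisional points, the balloon pops at a hidden threshold, and a pop forfeits everything for that balloon:

```python
def play_balloon(pumps_attempted, pop_point, points_per_pump=5):
    """Return the score banked for one balloon.

    pop_point is the hidden pump count at which the balloon bursts;
    points_per_pump is an arbitrary illustrative payoff.
    """
    if pumps_attempted >= pop_point:
        return 0  # popped: all provisional points are lost
    return pumps_attempted * points_per_pump

print(play_balloon(8, pop_point=12))   # cautious player banks 40 points
print(play_balloon(15, pop_point=12))  # greedy player pops it, banks 0
```

Averaging pumps over many balloons gives the risk-taking measure the task is after.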
[00:50:08] Unknown:
While I was reading through the docs, I found the page describing the integration with the Open Science Framework for sharing and validating an experiment and collected data with other members of the field. I'm wondering if you can explain why that's beneficial to the researchers, and compare it with other options, such as GitHub, for use within the sciences?
[00:50:25] Unknown:
Sure. So this is a new development for us, and an interesting new direction. The idea is that when I run my study: a, I want to share it with my collaborators in other departments, say; b, I want to share it with other computers in my lab; and c, afterwards, I potentially want to share it with the rest of the world in a convenient way, so that other people can go and download my data. The Open Science Framework allows those sorts of facilities. It lets you store data that can be private to you, but can also be exposed to the rest of the world if you want to share it and make it public. Now, I didn't find their interface all that intuitive, but accessibility is what it has over GitHub. For a lot of scientists, GitHub is really written for coders; they don't know what Git is. The idea of going and getting a GitHub account and uploading something using a terminal tool (although I guess there are graphical interfaces for that now as well) would put off quite a few scientists who are less technically competent. So the Open Science Framework, which came in particular from the field of psychology, is a way to allow people to upload, make public, and share with collaborators their experiments and their data.
Now, as I say, I didn't actually find their user interface super easy for a new user. So they've actually provided some funding for us, from the Center for Open Science, who wrote the Open Science Framework, to allow us to synchronize files directly from within PsychoPy. As of the new version that was just released yesterday, 1.84.1, you can go to the Projects menu, take a folder from your local machine, create a project on the Open Science Framework based on that folder, and just press sync, and it will keep those projects in sync. You can also search for it from another computer and download the project, and, again, you can just sync from one machine to another within your lab. Then at some point, if you want to, you can go to the Open Science Framework and click a button to make it public. So it's about trying to enable users who are a bit less tech-aware, or who don't want to have to learn lots of different interfaces and lots of different technologies.
They've got the hang of PsychoPy; there's a menu that says Projects; they can go to that menu and search for an existing experiment. What I'm hoping in the longer term is that people will make their experiments available online, and other users will be able to go and search for, say, an existing implementation of the implicit association test, then download that and build their work on it. It will speed up science. That's the idea.
[00:53:40] Unknown:
And I imagine that it will also increase the transparency and repeatability of some of those experiments, while at the same time creating provenance for the data, so that it's easier to go back and understand where the statistics come from when you're publishing the results of your experiment.
[00:54:00] Unknown:
That's definitely a great goal scientifically for the Center for Open Science, and for me; I would love to see that happen. I don't think that's what's driving the individual scientist who wants to upload their data, though. Right now, having people examine your data and look for flaws isn't an incentive to most scientists. But in reality, getting other people to see your experiment and your original code and build their study on it improves the visibility of your work and gets more people to base their study on your work, which means you get more citations for your paper. So that is an incentive that scientists should be aware of: they should be trying to increase spin-off projects from their work.
Publicly sharing your code, as well as your data, allows people to create spin-off projects more easily, and that is something scientists should care about, whether or not they do it for the altruistic reason that they want science to progress well. And, you know, it's all very good of you to feel that you want people to be able to examine your data and look for flaws, but in reality it's often a lot of effort to make your data available in a format that other people like, and all that sort of thing. So people are often quite shy of doing it unless there's a clear incentive. So I try to focus a bit more on why it's good for the individual scientist, as well as the fact that, of course, I do believe it's good for science as a field.
[00:55:39] Unknown:
Do you have a road map of features that you'd like to add to PsychoPy, or is it largely driven by contributions from practitioners who are extending it to suit their needs?
[00:55:48] Unknown:
Both. Very much both. There are lots of things that I would love to add and haven't had time to. PsychoPy's development has been driven by either things that my lab currently needs, which I want to get in there, or things that a contributor's lab needs, so they go and add that feature because they need it for their study right now. That's the best way, in many cases, for things to be added. There are lots of other things that I would love to add, and if I had time, I would, but my priority has to be with my PhD students and that sort of thing. So there's a mixture going on. There's also an issue that although I would love to add a certain feature... right now, for instance, I'm in discussion with someone who wants to contribute stereo displays, to enable PsychoPy to better support shutter goggles and that sort of thing without the user having to write particularly hard code. How do we make that easy for the user, and how do we support eight different forms of stereo output from a graphics card? Now, I'm not the right guy to write that code, because I don't have any of that stereo kit. I don't have a TV with shutter goggles that would allow me to work on it, and my lab isn't doing that sort of study. So it's never going to be at the top of my priority list, and it would be very hard for me to test it rigorously.
What we need is a user who runs those sorts of studies, who knows more about that technology, and who needs it in their next study. That's the way the code gets best written. And that's exactly what we've got going on right now: there's someone coming to us who's been using this sort of technology for a long time and knows how to implement it. Well, maybe with a little bit of support from me about how the PsychoPy engine currently renders things, so we need to talk a little about how to interface his module with mine. That's fine.
But that's the way that most of the code has evolved. Take, for instance, the rating scale. That came from Jeremy Gray. I've never done a study that involved a rating scale, but he does them all the time, so he wanted to have that stimulus available to make it easier to run his studies, and that's how it got contributed. There is a roadmap as well, though, and there are things that I very much intend to get done, either me or someone else. Or, ideally, as we get bigger, we will hopefully have funding options and ways that we can pay people. So far we've had very, very small amounts of money from a few places that are willing to contribute to our open science drive.
One of those things is absolutely the development of web-based experiments and the building of the JavaScript library. Again, I'm not the right guy to write that code; it's not something I've known much about, and I would have to learn a great deal about JavaScript to write it myself. But we can get other people, probably hire other people. So we've been working with a company called Ilixa, and they've been writing the JavaScript library that's the equivalent of the PsychoPy Python library.
So that is something that will for sure be happening, hopefully over the next year or so. We've got it in a proof of concept right now, and over the next year I'm sure we'll be supporting full experiments in a browser, using the Open Science Framework to save your data and using Builder to generate the web files that you need to do that. One of the things that I'm really keen on is making Builder better at detecting when a user is about to do something unwise. Right now, you could ask Builder to draw your stimulus smaller than a pixel, or four meters to the right of your screen, or with an invisible texture, and PsychoPy won't complain.
It will just fail to draw anything, and to the user who gets no stimulus appearing on their screen, that's confusing. So one of the things PsychoPy needs to do is an integrity check of your experiment: detect the problem and say, it looks like your stimulus is about to be drawn smaller than a pixel, is that really what you wanted to do? Just provide little warnings like that. That sort of code, again, is hard to make a priority, because my lab doesn't need it: experienced users know what's going wrong and won't make those mistakes. But it's something I feel is really important, because it will prevent people from making silly errors, including me. I make silly errors in my own experiments as well, and I would like it to warn me: oh, you're about to do that thing again that you've done a million times before, and you still haven't learned that it's wrong.
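The kind of pre-flight integrity check described here could look something like the sketch below. This is a hypothetical illustration, not Builder's actual API; the centered-coordinate convention, pixel units, and check thresholds are all assumptions:

```python
def check_stimulus(name, size_px, pos_px, screen_px=(1920, 1080), opacity=1.0):
    """Return a list of warnings for a stimulus that would be invisible."""
    warnings = []
    if min(size_px) < 1:
        warnings.append(f"{name}: smaller than one pixel, nothing will be drawn")
    # Assume positions are centered on the screen, so the visible range
    # runs from -width/2 to +width/2 (and likewise for height).
    if abs(pos_px[0]) > screen_px[0] / 2 or abs(pos_px[1]) > screen_px[1] / 2:
        warnings.append(f"{name}: positioned entirely off screen")
    if opacity <= 0:
        warnings.append(f"{name}: fully transparent")
    return warnings

# A sub-pixel stimulus placed four meters to the right triggers two warnings.
print(check_stimulus("probe", size_px=(0.5, 0.5), pos_px=(4000, 0)))
```

Running checks like these before the experiment starts, and surfacing them as gentle questions rather than errors, is exactly the behavior being proposed.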
So that's another thing that's important for the future. Improving precision is one more: we're always working to get faster playing of sounds and faster rendering of large movies, especially as we move increasingly to HD movies. We've got to improve the precision and performance of a lot of aspects of PsychoPy, and that's just a constant thing that we're always trying to improve.
[01:01:45] Unknown:
So are there any questions that we didn't ask that you think we should have or any other topics that you'd like to cover before we close out?
[01:01:54] Unknown:
No. I think one of the things that I find interesting is this question about open source development versus the traditional commercial world, and software engineers versus academic engineers, but we ended up talking about that anyway. One of the things that PsychoPy does that is maybe unusual is this tracking of users. It's hard for an open source project to persuade people to fund it. In my case, I've got a job as an academic, but I have to persuade people that this is a useful use of my time.
I'm not always making discoveries, but I'm making software tools that other people can use to make discoveries. So sometimes it can be hard to express, and to persuade people, that these projects are important, and that's something that's fairly close to my heart as well. Sometimes when you try to get people to fund a project, they say, but why would we fund it? It's not going to make any money. Or in academia, my seniors would say, why are you doing this and not just paying a software engineer to write the software for you? Or why are you not using something off the shelf?
So those are topics that are close to my heart, and that's why I ended up writing it. As PsychoPy gets launched, it pings a website, and that's what allows us to collect usage data. We track how many unique IP addresses launch the application every month, and that helps us a little bit to go to potential funders and to my superiors and say, look, this is an important thing: we've got 12,000 users. So that's kind of an interesting topic in a way, but I don't know how to frame it as a question that you would want to include.
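The "unique IP addresses per month" figure Jonathan mentions can be derived from a plain log of launch pings. Here is a hedged sketch; the data shape, field names, and function name are illustrative assumptions, not how PsychoPy's server actually works.

```python
# Illustrative sketch: count distinct IP addresses per calendar month
# from a log of application-launch pings.
from collections import defaultdict

def monthly_unique_ips(pings):
    """pings: iterable of (iso_date, ip) pairs, e.g. ("2017-06-03", "10.0.0.1")."""
    months = defaultdict(set)
    for date, ip in pings:
        months[date[:7]].add(ip)  # bucket by "YYYY-MM"; a set dedupes repeats
    return {month: len(ips) for month, ips in months.items()}

# A repeated launch from the same address counts once within its month.
pings = [
    ("2017-06-01", "10.0.0.1"),
    ("2017-06-02", "10.0.0.1"),
    ("2017-06-03", "10.0.0.2"),
    ("2017-07-01", "10.0.0.3"),
]
print(monthly_unique_ips(pings))  # {'2017-06': 2, '2017-07': 1}
```

Counting unique IPs undercounts labs behind a shared address and overcounts laptops that roam between networks, so it is a rough proxy for users, which is presumably why it is quoted as an order-of-magnitude figure.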
[01:03:58] Unknown:
You did a perfect job there. And I think that brings out a good point: as software becomes a more integral piece of academic endeavors, we are probably going to start to see the paradigm shift a little bit away from publish-or-perish, or at least toward recognizing that developing software is a form of publishing, as is evidenced by the fact that a lot of these scientific open source projects are actually cited in academic works. I know that you've got a citation reference in the documentation for PsychoPy, and similarly for things like SunPy and AstroPy.
[01:04:35] Unknown:
Yeah, but that's one of the interesting things about trying to track those users: the citation count on that paper is, I don't know, something like five or six hundred, but I know that monthly there are 12,000 users. So there's an order of magnitude discrepancy there that is interesting. Presumably either the citations will follow on gradually later, or people are using the software for all sorts of other endeavors. And when I mentioned earlier on about wanting to know about nonacademic uses, it's because in the UK this, by many classifications, doesn't actually count as having any impact.
The UK counts the impact of an academic endeavor as when it affects society outside academia or when it raises money. And because I don't sell it and because the people using it are academics, tens of thousands of users don't count as any impact. Having one person using it in a clinic, for nonacademic use with patients, counts a lot more, it turns out, than tens of thousands of people teaching their students. So we've got these broken incentives in many ways, and those are frustrating for me. Those are topics that I find interesting, let's put it that way. This comes back to a recurring theme that we've seen on our podcast.
[01:06:05] Unknown:
If you're using a piece of open source software and you love it, it's making your life a better place and enhancing your job performance, then tell the people who are in charge of that project about it. Drop them an email. It takes five minutes to say, hey, thank you very much, I'm using it for blah blah blah. And it means so much to people like Jonathan here, slaving away building this incredible stuff.
[01:06:31] Unknown:
Yeah, very much. And certainly, comparing writing papers, like journal articles, versus writing software: I found it interesting that when I go to a conference, I do get people coming up and saying, oh, you're the guy who writes PsychoPy. That's great, I love that, that's brilliant. And it's really nice. No one's ever come up to me and said, hey, you're the guy who wrote that paper on adaptation to plaid visual stimuli, and I really love that paper. So, yeah, it is motivating having individuals come up and say that they like your work. And that, actually, for a lot of open source developers, is I think what drives them. They like to feel appreciated, even though they're not being recognized financially or in other, more tangible ways. Hearing people say they like your software is definitely what gets a lot of us out of bed.
[01:07:25] Unknown:
And on that note, what are some of the best ways for people to get in touch with you, let you know about their use of PsychoPy, or follow what you're up to and how things are going with the project?
[01:07:35] Unknown:
We've actually just recently switched from a Google group to Discourse. Discourse is an open source forum product, and we've now got a forum at discourse.psychopy.org. That's been brilliant. I hadn't realized how much we needed a forum rather than just a Google email list. So that's the best place to get in touch with the group. And if you're writing your own open source projects, or just wanting an online discussion, I would also recommend having a look at Discourse, because it's a pretty awesome package.
[01:08:17] Unknown:
Yeah. That's what we're using for our forum for the podcast.
[01:08:21] Unknown:
Okay, yeah, it's really neat. It does everything right.
[01:08:25] Unknown:
Yeah.
[01:08:26] Unknown:
And from our perspective, it does things like this: if a user starts typing a new post, but it looks similar to a question that's already been answered, then they get a very clear flag saying, have you looked at this post? Because it's quite similar to what you've just written. Which hopefully means that we have to answer fewer questions, because people find existing answers instead. Things like that are really neat.
[01:08:50] Unknown:
So with that, I'll move us on into the picks. My pick today is the book Hackers: Heroes of the Computer Revolution by Steven Levy. I listened to it a week or two back and thoroughly enjoyed it. It's an interesting way to get some of the history of early computing, and how the people who were working outside of the intended frameworks of the hardware pushed the industry forward and introduced a lot of new innovations that might not otherwise have happened. So it's definitely worth a look for people who weren't there to experience it firsthand. I definitely recommend giving it a read or a listen. And with that, I'll pass it to you, Chris.
[01:09:35] Unknown:
That's a great book. I read it decades ago. Man, I'm old. My pick this week is an iOS application; unfortunately, there's no Android version right now. It's called Castro, and it has really made my podcast listening life a much better place. Rather than using the model of "you subscribe to these podcasts, and therefore you will listen to every episode that they produce, in order", it treats it almost like an RSS reader such as Feedly, where you have an inbox and new episodes are presented to you. You can tell it: this podcast is just so good, just queue it, don't even bother asking. But for others you can say, put it into my inbox and let me choose. Then you can say, oh, that episode is interesting, add it to the front of the queue, or the back of the queue, or archive it. And episodes that you archive don't just go away forever; you can go back and leaf through your archives to see if there's something you actually do want to listen to. It's just great. The interface is phenomenal, and I really love it. And that's my pick. John, what do you have for us for picks today, if anything?
[01:10:43] Unknown:
I think I'm just going to go back to Discourse. My pick of the week would be Discourse for forums. Discourse is a great forum package: open source, really well written, and it does everything right. They're also very responsive developers; when we found some issues and had questions, they were really good at getting back to us as well. So for any other open source developers, or people just wanting a forum in general, I would really recommend you check out Discourse.
[01:11:19] Unknown:
Yeah. And they've also made it really easy to get it installed, upgraded, and self-hosted, as well as having a hosted offering if you are not inclined to run your own server for it. So, we really appreciate you taking the time out of your day to tell us about your work on PsychoPy. It definitely sounds like a really interesting project, and I hope you enjoy the rest of your day. Thank you very much.
Introduction and Sponsors
Interview with Jonathan Peirce
Jonathan Peirce's Background
Introduction to PsychoPy
PsychoPy vs Proprietary Alternatives
Fields Benefiting from PsychoPy
User Experience and Learning Curve
Teaching Python with PsychoPy
Internal Architecture of PsychoPy
Extending PsychoPy
Challenges in Implementing PsychoPy
Future of PsychoPy: Web Experiments
Data Output and Analysis Tools
Data Input and Timing Precision
Applications Beyond Psychology
Interesting Experiments with PsychoPy
Integration with Open Science Framework
Roadmap and Future Features
Open Source Development vs Commercial
User Feedback and Community Support
Contact and Community Engagement
Picks and Recommendations