Visit our site for past episodes and extra content.
Summary
In this episode we had a nice long conversation with Glyph Lefkowitz of Twisted fame about his views on the need for an established code of ethics in the software industry. Some of the main points that were covered include the need for maintaining a proper scope in the ongoing discussion, the responsibilities of individuals and corporations, and how any such code might compare with those employed by other professions. This is something that every engineer should be thinking about and the material that we cover will give you a good starting point when talking to your compatriots.
Brief Introduction
- Welcome to Podcast.__init__ the podcast about Python and the people who make it great
- Date of recording – July 21, 2015
- Hosts Tobias Macey and Chris Patti
- Follow us on iTunes, Stitcher, TuneIn, Google+ and Twitter
- Give us feedback! (iTunes, Twitter, email, Disqus comments)
- We donate our time to you because we love Python and its community. If you would like to return the favor you can send us a donation. Everything that we don’t spend on producing the show will be donated to the PSF to keep the community alive.
- Overview – Interview with Glyph Lefkowitz about ethics in software engineering
Interview with Glyph
- Introductions
- How did you get introduced to Python? – Chris
- 2000 – large scale collaborative gaming system in Java
- Asynchronous IO
- Twisted
- Let’s start with the bad news: what are some of the potential widespread implications of less-than-ethical software that you were referring to in your PyCon talk? – Chris
- Robot Apocalypse (Not really)
- Much of the discussion around this derails into unrealistic nightmare scenarios
- Therac-25 radiation therapy machine
- Toyota unintended acceleration scandal
- Real worry – gradual erosion of trust in programmers and computers
- First requirement for a code of ethics – a clear understanding of the reality you’re trying to litigate
- The search for ethics will likely begin in academia where this aspect of software dev is more like psychology.
- In your talk you commented on the training courses that lawyers are required to take as part of their certification. Do you think the fact that there is no standardized certification body for software development contributes to a lack of widely held ethical principles in software engineering? – Tobias
- Do you think that it is necessary to form such a certification mechanism for developers as part of the effort to establish a recognized ethical code? – Tobias
- If we were to create a certification to indicate proper training in the software engineers code of ethics, how do you think that would affect the rate at which people enter the industry? – Tobias
- Assuming we can all agree on a set of relatively strict professional ethics that would prevent the above from happening, how would we enforce those ethics? Or do you advocate an honor system? – Chris
- Ethics are by definition an honor system
- Enforcement would be straightforward – professional organizations would maintain a record and track deviations from it
- Need better laws & better jurisprudence
- We need an Underwriters Laboratories–style seal for software development ethics
- Code of software ethics will not and should not tell you how to be a decent human being.
- Devs/companies that create software that could be used for evil – “We are merchants of death and these are lethal weapons” – could conceivably earn the ethical software developer’s seal of approval if they represent their products honestly.
- Where does accessibility of the software we make fit into a code of ethics? Do you think there should be a minimum level of support for technologies such as screen readers or captioning for audio content in the software that we build? – Tobias
- Minimum levels of knowledge required
- Minimum levels of content in curriculum
- In your talk you mentioned how Rackspace’s stance on user support matches the ideals you’d previously laid out, can you flesh that out a bit for us? What does that mean to individual Rackers in their day to day work lives? – Chris
- In your talk you mentioned that availability of the software source should be mandatory for compliance with a properly defined ethical framework. What mechanisms for providing that access do you think would be acceptable? Should there be a central repository for housing and providing access to that source? – Tobias
- Would the list of acceptable mechanisms change according to the intended audience of the software? – Tobias
- What responsibility do you think producers of software should have to maintain an archive of the source for past versions? – Tobias
- How should we define what level of access is provided? In the case of commercial software should the source only be available to paying customers, perhaps delivered along with the product? This also poses an interesting quandary for SaaS providers. Should they provide the source to their systems only to paying customers, or to potential customers as well? – Tobias
- This question of transparency and availability of source is especially interesting in the light of a number of stories that have come out recently about patients who have been provided with prostheses and other medical devices. In a number of cases, shortly after receiving the device, the company who made it, which are increasingly startups, goes out of business, leaving the patient with no way of obtaining support for something that they are dependent on for their health and well-being. Having the source for those devices available would help mitigate the impact of such a situation. – Tobias
- You brought up an interesting aspect of the trust equation and its relevance to the need for an ethical code. Because what we do as software engineers is effectively viewed as sorcery by the vast majority of the public, they must wholly place their trust in us when using the products that we create. As you mentioned with the demise of the scribe following the rise of literacy, increasing the overall awareness of how software works at a basic level partially reduces that dependency of trust. At what level of aptitude do you think our relationship with our users becomes more equitable? How does the concept of source availability play into this topic of general education? – Tobias
- What can the Python community in particular do to start the ball rolling towards defining a set of professional ethics, and what has it already done in this area? – Chris
- PSF Code of Conduct is a starting point
- PSF is an organization of individuals
- Corporations are cagey about getting involved for fear of it becoming a legally binding contract
- Django Code of Conduct more specific
Picks
- Tobias
- Philips SHP9500
- keybase.io – Tweet us with your favorite thing about the show to get an invite
- Paul Blart: Mall Cop 2
- Chris
- Glyph
Keep In Touch
- Keybase.io
- Glyph everywhere on the internet
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. We are recording today on July 20, 2015. Your hosts as usual are Tobias Macey and Chris Patti, and you can follow us on iTunes, Stitcher, or TuneIn Radio. You can also follow us on Twitter and Google+ with links in the show notes. And please give us feedback. You can leave us a review on iTunes, contact us on Twitter, send us an email at hosts@podcastinit.com, or leave a comment in our show notes. You can also leave a comment on our Google+ page. We donate our time to you because we love Python and its community. If you'd like to return the favor, you can send us a donation with details on our site.
Everything that we don't spend on producing the show will be donated to the PSF to keep the community alive. Today, we're interviewing Glyph Lefkowitz about software and ethics. Glyph, could you please introduce yourself?
[00:01:10] Unknown:
I'm Glyph. I'm a software developer at Rackspace, although most famously, the founder of the Twisted Project. I like writing programs, and apparently, I like talking about it.
[00:01:21] Unknown:
In that vein, how did you get introduced to Python?
[00:01:25] Unknown:
So my introduction to Python was in the year 2000, around Python version 1.5.2. I was working on a sprawling multi-user gaming server for collaborative text-based gaming in Java. And, long story short, it wasn't working so well. Python introduced me to a whole new and vastly simpler way to think about the problem. Python actually introduced me to asynchronous IO via the select module, because it was a much more low-level and direct mapping to what the operating system was doing. It gave me a much better understanding of how my computer was actually doing the IO, which allowed me to write much less buggy software, and I pretty much haven't looked back since.
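(Editor's note: not from the conversation, but for listeners unfamiliar with the style of IO Glyph describes, here is a minimal sketch of select-based readiness checking in modern Python. Rather than blocking on each socket in turn, you ask the operating system which sockets are ready; the socketpair here just stands in for real network connections.)

```python
# A minimal sketch of select-style IO: the OS tells us which sockets
# are ready to read, instead of us blocking on any one of them.
import select
import socket

def echo_once() -> bytes:
    """Push one message through a socketpair, waiting via select()."""
    left, right = socket.socketpair()
    left.setblocking(False)
    right.setblocking(False)

    left.sendall(b"hello")
    # select() returns the subset of watched sockets that are readable;
    # we wait up to 1 second for `right` to have data available.
    readable, _, _ = select.select([right], [], [], 1.0)
    data = b""
    if right in readable:
        data = right.recv(1024)
    left.close()
    right.close()
    return data

if __name__ == "__main__":
    print(echo_once())  # b'hello'
```

This readiness-based loop is the primitive that event-driven frameworks in the Twisted lineage are built on top of; today the stdlib's higher-level `selectors` module wraps the same idea.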
[00:02:08] Unknown:
That's great. And that certainly leads into the question that we're not gonna ask, which is what led you to develop Twisted?
[00:02:13] Unknown:
Cool. Well, there are definitely lots of other, podcasts and interviews and blog posts where you can go read that story of how Twisted came to be. Yeah. It's a refreshing change to not have people ask about it right off.
[00:02:25] Unknown:
So let's start with the bad news. What are some of the potential widespread implications of less than ethical software that you were referring to in your PyCon talk?
[00:02:33] Unknown:
Well, I think there's one obvious one which I did mention in the talk, which is Robot Apocalypse. I think we need to be watching out for that one. But more seriously, I think that a lot of discussion around this tends to get derailed. The reason I brought up the robot apocalypse kind of as a joke in the talk was that a lot of the discussion around this derails into sort of nightmare scenarios where you have bridges collapsing and cities catching fire and, like, that kind of stuff. There are dramatic examples of software causing really serious issues, like the Therac-25 radiation therapy machine. That's the oft-cited example. Or more recently; I mean, the Therac-25 is from the mid-eighties.
But more recently, the Toyota unintended acceleration scandal. And, again, I think earlier this week even, Toyota was in the news again for a half-million-vehicle recall relating to a less dramatic software bug that just turned off the hybrid system sometimes. Those happened on teams that were engineering physical products and that had existing codes of medical and engineering ethics, so there are always lapses. Right? Like, it's not really unique to software. And a lot of the really dramatic examples are things where, although the flaw was due to software, it was more due to incompetence, or just being too early in the field, rather than a lack of ethical judgment. The real consequence that I'm practically worried about is the sort of gradual erosion of trust in computers and programmers.
Much of the world today does all of their computing via mobile devices already, and you need, like, a professional programmer to write the software for your mobile device. On a general purpose computer, it's sort of hypothetically possible that you could write all your own apps. Of course, it's prohibitively expensive, but it's at least conceivable. Whereas on a mobile device, you are dependent on someone with a real computer to write that software for you, for both licensing and practical, sort of, form factor reasons.
And given that we have that increasing level of trust, where there are multiple classes of people in their relationship with technology, there are people who really depend on technology to live their lives. I think it was the European Union that famously said recently that to be a productive member of modern society you need Internet access, and Internet access should be a human right. If that access is mediated by basically untrustworthy people who are just trying to sell more ads, I think that's going to produce a pretty bad outcome for society generally. And really, the end game of that scenario is that there will be lots and lots of things which could make many people's lives better, which we could do with computers, and people just won't do them, because they won't believe that it would be trustworthy. So we will be living in an intentionally backward society, because the people who could be pushing us to the frontiers of progress are not trustworthy. So I wanna avoid that.
[00:05:33] Unknown:
Absolutely. It just occurs to me, a really recent example of that is the Office of Personnel Management breach that happened with the federal government. And I was just listening to a podcast this morning where all the people who are in charge are now saying things like, oh, maybe we should go back to paper. It's that kind of thing. Right? Like, the people who use this system that the OPM utilized to vet all these employees, suddenly they're gonna have to wait probably months longer to get the job done and to get that job they're looking for, because we're no longer using a nice, automated, web-based system. We now have to deal with paper being lost, found, buried in peat moss, and exhumed.
[00:06:15] Unknown:
Right. And I think you might wanna talk to some federal employees about the software they're using before you use terms like nice, slick, and automated to describe it. But, yes, I think that you're pointing out another sort of dramatic but more practical area where these types of lapses happen, and that is security. And again, I don't think that the OPM breach was an explicit ethical breach. There was no mustache-twirling villain. Of course, the people who broke in stole all the stuff; that was potentially, at least, nominally evil. But the people who developed the software that was vulnerable probably just didn't know that it was vulnerable. And moreover, we don't have good industry standards to indicate what vulnerability is and what an acceptable set of practices to mitigate those vulnerabilities is.
So, the place where the discussion about ethics needs to start: in the session that we had after the talk at PyCon, a sort of breakout roundtable discussion, this is the very first place that we went to. The first thing you need to have a good code of ethics is a clear understanding of the realities that you are trying to litigate, basically. You need to understand what actually does make secure software. That's where we are right now. We don't know. There are lots of people who think that they know. And lots of snake oil salespeople, and lots of very well-intentioned people. Like, I certainly have my own opinions about what makes software secure, and I will happily tell you what they are. But we don't have a general social understanding of, like, okay, there's established science; we know that if you do x, y, and z, you will get a good result: you will get secure software, you will get ethical software that faithfully executes the user's intent. And on the flip side, if you do things a, b, and c, you will get bad results: you will get insecure software, and so on. And, you know, even the few things that we do know about software development at this point. I can't remember who said this; it might have been my dad, honestly.
But the only reproducible result in software engineering research is that you have to give programmers offices with doors that close. If you look over the history of software engineering research, you will see that result over and over and over again. Distractions and the lack of uninterrupted periods of time to get work done are very serious contributors to defect rate and all kinds of other things. But if you look at the modern state of the art in industry, we're not even paying attention to those very basic results. As I mentioned, I work at Rackspace. I'm favorably inclined towards my employer. They're generously donating the time for me to be on this podcast. They like the fact that I'm talking about ethics, and it's the kind of culture that we have at Rackspace. But we don't have offices with doors that close. And in a way, that's a competitive reality. Right? Like, you have an industry where people expect a certain retail price per square foot in real estate, and then do the math and say, well, it's going to cost 4 times as much if you put people in offices. I made that number up; I don't know if it's 4 times as much. But you do that kind of analysis, and you say, there's a very concrete cost we can tie to this aspect of development. We know that it is going to cost $X more to give people the interior walls, the construction, the square footage.
We don't really know. Even with all those lovely experimental results from all the different studies that have shown that doors that close help, we don't know how much they help. We don't know how well that's gonna, you know, push along our development practices. So given this concrete cost on the one hand and an unknown benefit on the other, we make what looks like the only responsible decision available to us, which, according to software engineering research, is the mistake, because the research isn't thorough enough. It's not well understood enough. So I think, really, the place where a solid founding for ethics will begin is in academia, in research, and particularly in treating software development more like psychology than, like, computer science.
[00:10:25] Unknown:
In your talk, you commented on the training courses that lawyers are required to take as part of their certification. Do you think the fact that there is no standardized certification body for software development contributes to a lack of widely held ethical principles in software engineering?
[00:10:38] Unknown:
I think the lack of ethical principles and the lack of a standard certification body are both indications or symptoms of the same root cause, which is that software is a very, very new field. And like I was just saying about the scientific understanding of the field, that novelty, that newness, makes it so that we have a whole bunch of different approaches all going on at the same time, and nobody really knows what the best one is. These standard certification authorities, the kind of standard curricula in various other fields, tend to arise over very long periods of time. The canon gets established.
And then once it's established, and there have been a couple of notable examples of people not learning it and screwing up, then you get the codes of ethics in the professional organizations. So I feel like the question is a little bit leading. It's like, should we have that kind of, you know, certification authority? Should we have that central establishment for gating who can become a software developer? And I think it's too early for that, because if we try to do that, we don't know enough yet. We need to be having this very robust discussion about what is really required, and that's gonna take some time. And the weird thing about this discussion is, of course, that novelty and that newness; software developers, you know, are attracted to the field because of it. We all love moving fast and, you know, doing things quickly and coming to consensus on as little data as we can, just so we can move forward. And this is one of those places where we really, really can't do that. It might take longer than, you know, our lifetimes to really nail this down. I hope not. I hope that I will live to see a standard ethical code for software developers. But I think a decade would be optimistic at this point. One of the reasons I'm so interested in this topic, and I'm trying to talk about it as much as I can now, is that if we're gonna be done within a decade, this needs to be a broad discussion, with the industry getting a lot of different voices involved and hearing, you know, what a lot of different people think.
[00:12:54] Unknown:
Yeah. So to your point about whether or not there needs to be a standardized certification board or something of that type, there's definitely a risk that if we were to instate such an entity, it could drive people away from software engineering, or it could keep people from wanting to get involved in it. Because, as you said, it is a new and very popular field, and a lot of people are flocking to it. People are even leaving jobs of other types to start software engineering, whether they go through a boot camp, are entirely self-taught, or go back to school for it. Having that barrier could potentially hinder adoption of software as a profession. One of the things about software is that there is the potential to get started at it with fairly minimal investment, depending on how you go about it. And if we were to add any gateways, it could potentially exclude people who would otherwise get involved, who may not have the resources or the time or the inclination to submit themselves to that standardization.
[00:14:03] Unknown:
Oh, absolutely. I mean, I think it would be an absolute death knell for the industry if we were to try to institute some kind of requirement or credential that was standard. Most of the good software developers that I know, most of the best, dropped out of college, or even dropped out of high school in many cases. And so whatever strategy we develop for dealing with the ethical obligations that we have in the field really needs to figure out some way to take that into account. Like, I don't think traditional schooling works super well for software development. Arguably, it doesn't work super well for anything, but that's probably a different podcast. The software developers who are getting into the field now need to be able to keep getting into the field, and we need to figure out a way to promote a culture of ethical responsibility to those folks without having to put them through a central gatekeeping process. And that may mean that, unlike the fields that I allude to and that I compare us to, like medicine and engineering and so on; while I think there are useful things to emulate there, they're also very old fields. Like, the idea of engineering certification goes back to, I think, the Roman Empire. And this is something we might wanna consider for software developers, just to think about. When you wanted to become a certified engineer in the Roman Empire, you would build an arch, and then you would stand under it for 10 minutes.
And, you know, if you survived, then you were an engineer. That was a pretty lightweight process, but it had an obvious consequence. Even in the days of the Roman Empire, engineering was an older profession than software development is right now. So I think we could develop something along the lines of an ethical certification process that was opt-in, that was something that you could, like, agree to and use to market yourself as an individual or as a company. I think that one of the problems with the way that we have the dialogue around this is that, in the case of doctors and lawyers, the ethical responsibility, although there are sometimes institutional problems, very clearly lies with the individual. The penalties of, like, disbarment and, you know, having your medical license revoked are individual punishments, because it really comes down to the decisions of a single person. Whereas in software, I think most of the really challenging ethical problems that we're facing today, at least, are not the result of an individual rogue software developer going off and doing some shady thing. It's mostly, like, teams that have incentives which cause them to do things which their employer has explicitly asked them to do. And of course, you could, in dramatic fashion, say, oh, well, I'm not going to do that thing, it's unethical.
In reality, that wouldn't be that helpful, because there are still, like, 6 more people on your team, and so somebody is probably going to do it. We have to think, in the software industry, how are we going to make these ethical obligations stick: how do we make software engineering managers, make executives, beholden to these ethical principles as well? You do have to have the individual engineers caring about it, but you also can't be punishing the individuals too harshly for systemic things, especially because we're not talking about black-and-white, war-crimes-type moral decisions. We're talking about really subtle things, like, you know, how do you present a user interface choice, or how big does the ad get to be in your software, and what kinds of UIs are misleading, what are dark patterns and what are not. Those are really complicated, subtle questions that end up getting debated by a group. So the whole group has to have a shared understanding of, if not the right answer, at least the right way to go about getting to a good ethical answer.
[00:18:02] Unknown:
assuming that we can all agree on a set of relatively strict professional ethics, how would we enforce those ethics? Do you advocate an honor system?
[00:18:10] Unknown:
Well, ethics are almost by definition an honor system, especially professional ethics. I think enforcing them is reasonably straightforward. You would need a professional organization to maintain some kind of record, and then you'd need the public to pay attention to that record. In the case of individuals, you could have lawyers pay attention to it, but like I said, most infractions are going to be committed by organizations, not by individuals. So I think that actually the easiest way that we can kind of get the ball rolling early, and by early I mean in the next 5 years maybe, is to have some kind of ethical standards trademark, where you can say, like, well, you know, we've got this proposed code of ethics.
This organization claims to adhere to it. They can put the banner on their website, and then you use the legal enforcement mechanisms behind trademark law: like, you can't display this banner unless we have a clean record for you. And I think you also need very careful maintenance of that record. It could obviously be used, especially against individuals, to, like, destroy their prospects for employment. That said, I think we do need better laws around software too. I don't think I've seen a single legislature anywhere in the world doing a good job regulating software. Mostly, though, I think it needs to be better jurisprudence.
Like, judges need to do a better job of just interpreting existing law in the context of computers. Quite often the decisions that come out of cases relate to things like harassment. Harassment's illegal; it's always been illegal, basically. Threatening people, there are various punishments for that in various contexts. But you put it on the internet and suddenly it's cyber, and you need to have a whole new discussion about how this is gonna work. And one of the things I wanna stay away from in the discussion of, you know, software development ethics is the idea that you need a whole new ethics of making stuff or medicine or any of these other fields. We already have pretty well established codes of ethics there. To the extent that you're making medical devices with software, you should mostly be following existing medical device engineering codes of ethics for how that should operate.
So it's not that something being in software makes it magical. It's that software has a couple of other properties that we need to take into consideration, like the fact that someone can have a device, tell the device to do something, and that device can kind of countermand their wishes. You can't have a lever that if you push down on it it actually pushes backwards, right? Like that that's not a machine that could exist in the real world, but computers can kind of do stuff you didn't want them to. And that that's the big ethical challenge there.
[00:20:55] Unknown:
Absolutely. And it also seems to me that, with regards to the honor system versus professional organization enforcement, I really like your idea of an Underwriters Labs of ethical software dev. Because you're never gonna prevent people like, to use another infosec example, Hacking Team from selling really just downright evil, malicious software to, let's say, governments like Sudan, because they're gonna be able to make money hand over fist doing it, and no amount of professional organization demerits and violations are gonna stop them.
[00:21:29] Unknown:
Right. And that is a great example for the Underwriters Labs idea. The thing about somebody like Hacking Team is, while they had a lot of problems, I think there are kind of 2 separate things, and you have to look at them separately and say, like, which one do I care about more? Because you have to make trade-offs sometimes. And one is being a basically decent human individual. And a software engineering code of ethics will not tell you how to do that, and really shouldn't tell you how to do that. In my mind, and there's a lot of debating back and forth over this, and I'm not part of any organization charged with enforcement of ethics, so this is more of a personal opinion than a generally accepted thing. But in my mind, professional ethics are all about the counterintuitive parts of your relationship to society in a particular profession, and not necessarily so much about good and evil. So, like, the examples: you know, I mentioned these in my talk, and they're often brought up in lots of other contexts. A doctor saving a patient who's gonna go on and kill people, or a lawyer who's, you know, a defense attorney for a murderer.
In Hacking Team's case, like, sometimes you actually need people to go and, you know, do quote-unquote bad stuff with computers, because they're making a weapon. I mean, Alan Turing was basically a hacker. Right? He cracked encryption and broke into people's, you know, communications, but I think we're all pretty glad he did. The thing there is, you know, if you have an unethical application of software development, you're using it to do something malicious; but there are also circumstances where you might wanna use a weapon.
And, like, if you're willing to posit that there are cases where you might wanna have, like, a military, who's gonna use actual physical weapons to shoot at people, then that ethical discussion moves into a different realm. At that point you can evaluate, like, okay, we have software developers working for the military. For them to be ethical, they have to, you know, make the software that targets the guns shoot at the right targets, and yes, the outcome of that software is going to be, you know, human death and suffering. Right. But that's not really a "was this software ethically developed" kind of question. That's a "should we be doing this at all" kind of question. In a sense, it's almost like
[00:23:48] Unknown:
Hacking Team or somebody like them could conceivably earn the, you know, ethical software developer underwriter's seal if they were totally clear about the weapons that they were building and what they did, and represented them honestly
[00:24:04] Unknown:
to their users. Right. Like, we are merchants of death, and that's what we do. Right? Like, if you buy this software, it will kill some people. And maybe that's a terrible thing. Maybe it's a good thing. Maybe that's really called for. And, of course, you know, Hacking Team had other issues. Like, if you're a company called Hacking Team, maybe you wanna make sure you don't get hacked. That kind of stuff, I think, might put them into a grayer area with the, you know, ethical development standards. Whenever we start talking about ethical software development, it immediately goes to security.
And there's also kind of an operational security aspect, which has nothing to do with the software that you've developed and more to do with how it's deployed, how it's run, how it's monitored. So there's a big gray spectrum that we need to make crisper and say: whose responsibility is this, really? Because you can have breaches on totally secure systems. Right? Social engineering will get you into a system more easily than technical exploits, nine times out of ten. So at that point, it becomes a much trickier proposition to say, well, is the software at fault? Well, it probably should have afforded some better password reset mechanism or something.
It's not straightforward that it's always the software's fault when there are security issues.
[00:25:25] Unknown:
Yeah. That goes back to your point about separating the intent or purpose of the software from the way in which it was created. That's a very important delineation to make in any conversation around ethics: the purpose and intent of what is being created isn't necessarily inextricably linked with how it was created. The particular subject matter that we're discussing here is making sure that our process of creation is ethical and adheres to a particular set of standards. The actual purpose of that software is entirely dependent on the business case, and that goes into a whole other realm of ethics, one that depends on making sure that businesses and governments are producing things that are ethical in the broader scope of humanity and society. Trying to encompass all of that within a code that is specifically targeting software just goes completely out of scope.
[00:26:37] Unknown:
Right. And this is where, as I mentioned in my talk, we have existing professional organizations with existing codes of ethics, but most of the audience hadn't even heard of them. Some people very vociferously came up to me after the talk, like: did you know this code of ethics is already accepted by so many people? The thing is that the IEEE and ACM codes of ethics are both basically: you should develop your software in accordance with the user's interests, and in accordance with the public interest, and in accordance with the law, and you should be a good person, and you should do what your employer wants. It's like, okay, we have ten constraints here, and you can't satisfy all of them. So do they have a priority order? If I have to sacrifice one, which is it? Or if I have to sacrifice any one of them, am I morally obligated to just walk away? If every project required that every ethical developer on it walk away if it was not in the maximal public interest and their employer's interest and the user's interest and their own interest, if you try to maximize all of those simultaneously, you're obviously not going to be able to do it. So that's why I think a lot of these are insufficient: they tried hard to just say, well, software is confusing, you should be a good person.
And one of the things that we also have to remember is that we have, as a civilization, very different ideas of what it means to be a good person. And if we try to cram all of those into this code of ethics, we will never have any
[00:28:07] Unknown:
consensus. Exactly. There are different cultural norms and different value systems of people, depending on their nationality or their belief system or whatever might be the case. As you said, it's just too large a scope to even begin trying to encompass within an ethical code that is targeting a professional activity.
[00:28:30] Unknown:
Right. And we have to be looking at, to the extent that we need an ethical code to target that professional activity, what's special and different about that professional activity as opposed to everyday life. And really, the moral problem that I'm interested in preventing is using our position of privilege, in terms of superior knowledge of these systems and the ability to control computer systems, to gain a kind of social advantage. One example I could give you there is the entire way the economy of the Internet works: advertisements.
The entire notion of an ad-driven software economy means that the way software is paid for is that you have a tiny piece of the software, always visible, that is not under the user's control. They're not allowed to hide it. They can't make it go away. And so their computer, which they paid a lot of good money for, is constantly doing stuff to annoy them and to distract them and to try to get them to spend their money. In a good advertising ecosystem, maybe there's a net benefit. Maybe the ads are showing them things that they want to buy and that will improve their lives. But we've all used the Internet. We know that's not true. Right now, I'm not trying to call out advertisers and say they're evil or this is unethical behavior. It's more like: long term, if that's the way computers work, we've accepted the practice that you have the piece of code running on the user's computer that they want there, and then you have the tiny piece of malware which is actually facilitating the business model.
Is it okay to escalate that from advertising to Bitcoin mining on people's computers? If it's okay to escalate to Bitcoin mining, is it okay to escalate beyond that and try to exploit their computers and just steal their credit card directly? At some point it's obvious you've crossed the line, but I think the line is fuzzier than a lot of people think, especially because you can definitely see the continuum in the advertising world. On one end you have people like Project Wonderful, who say: we're targeting a specific market, we only wanna show you ads that, because you're already on this type of site, you probably want to see anyway, and we're going to make sure they're unobtrusive and fit with the theme of the site we're advertising on. On the other end you have those weird around-the-web type ads, which look like they're part of the site but are actually paid advertisements that take you to some other thing, with fake buttons pretending to be dialog boxes. So there is this continuum already. And one of the things the whole advertising economy raises, which is a thing I like from other industries or other professions, is that if you become a professional's client, say you become a lawyer's client, when money changes hands, that creates a relationship which is inviolate.
I wish we had something like that for software.
[00:31:44] Unknown:
Yeah. There's definitely a lot of discussion around how we can improve the de facto business model from being advertising based to something that is more sustainable from both a user and a business perspective, because as you said, it really puts the user in a position where they are no longer the direct customer. And so the business and the engineering team behind it isn't directly working in their interest for the software that they're using. Instead, they are trying to commoditize the user's information in order to run the business and provide the user with something that they actually want. There are certainly businesses that have a model where they take payment in exchange for a service, and in those cases they generally do away with the advertising revenue or the advertisements.
But there is a wealth of systems where a consumer isn't necessarily going to be interested in paying the company directly. And so we need to figure out a better middle ground for those situations.
[00:32:58] Unknown:
Yeah. And I think it's important to keep the focus on what exactly is okay and what is not okay, where the ethical line is, and not so much on the advertising business model. I love to take examples from other industries wherever we can, because most of the problems we're facing are actually not all that novel. In the case of magazines, for example, when you buy a magazine in print, quite often there are ads in it. And that's fine. The person who's buying it knows that there are gonna be ads in it. They're still paying money for that subscription. And that's an acceptable negotiation between the person paying for it and the magazine putting it out.
I think we're probably still gonna have some ads. Even in my perfect ideal ethical world, we would totally have some ad-driven Internet businesses. I think that's a fine way to run things. We would just have a clear idea of what the trade-off was.
[00:33:53] Unknown:
Yeah. That absolutely makes sense, because there's definitely a lot of denial on the part of users about exactly what "either you pay for the product or you are the product" means, I think.
[00:34:05] Unknown:
Right. Yeah.
[00:34:07] Unknown:
So where does accessibility of the software we make fit into a code of ethics? And do you think there should be a minimum level of support for technology such as screen readers or audio captioning in the software that we build?
[00:34:19] Unknown:
So I think that is definitely an interesting area for exploration. With some of these questions, I really think there need to be some fixed parameters around how we discuss them. For example, let's not bring all of being a good human being into this discussion. But in this case, I'm not really sure. I think certainly the accessibility argument does not generalize. What I mean by that is, some software is visual art, right? Many games are not playable by the blind. Many games are not playable by the hard of hearing. And I don't think we need to say that those are unethical pieces of software. I think it's fine to have things that are sensory experiences for people with that particular sense. Similarly, there are games that are not playable by people with a physical disability and require you to be fully able-bodied. They're basically exercise.
And so I think a general blanket statement that everything must be accessible to everyone, incorporating specific technologies like screen readers, is probably an overreach. At the same time, I do think that we have some kind of responsibility to make things accessible to the broadest audience within their scope. For example, there are already regulations about websites that provide government forms. I forget the exact section in law; there's an abbreviation which is frequently used in the accessibility business that I'm forgetting at the moment. But if you make a government website that people need to fill out forms on, you have to make it accessible. If you're a contractor in certain capacities, or you run certain types of public infrastructure, there are already regulations about that. I think in many cases they're sufficient, or at least the process that we have in place to put those requirements on software is sufficient. I'm not saying that all of the sites that need to be accessible already are, but we've clearly made some progress towards making things accessible and requiring companies to spend the money to do the accessibility work.
So this is one of those places where there's a trade-off, like with security. Right? It's possible to pour a pretty much infinite amount of money and time into making your system secure, even if it does almost nothing. And while accessibility is a little bit less infinite, there is a point at which all of your functionality is accessible to everyone it possibly could be and you're pretty much done, it is still an expensive process that you have to go and do. So the question is: are there other things that your software needs to do which are more important, and can you justify not doing that work? Say you're making a system for a specific company. You're doing inventory on trucks, and the people who have to evaluate the inventory need to be sighted, because part of their job is looking over all of the packages and making sure they look right. It's hypothetically possible to make that an accessible system. You could incorporate support for screen readers. You could make it possible for a blind person to do part of that job, but at the same time, you probably have much more pressing concerns around making sure the software is accurate and making sure you don't have any bugs. And at that point, you probably need to invest more effort in things that are not accessibility.
So, I don't know. This is a very complicated series of trade-offs. I think that where we are right now in the industry, we should generally have a higher level of investment in accessibility. I think we're generally excluding people from our products that we don't need to be, but I think that's much more of a case-by-case kind of decision than a general ethical
[00:38:24] Unknown:
rule. Absolutely. You know, I'm partially blind myself, and I definitely see your point that you can't make visual art totally accessible for blind users. You just can't do it, because of the nature of the content. But I also feel like there is room in an ethical standard for saying that if you are designing a website for, say, a restaurant with online ordering, it's reasonable to say: don't make the whole site so dependent upon Flash or JavaScript that if someone turns it off, or if someone's using a screen reader, they can't use your site at all. They can't order food. That seems to me to be ethically questionable.
[00:39:06] Unknown:
Not that you have a specific example in mind right here. I'm sure all of our websites are super accessible and not giant blobs of Flash. And actually, that brings up another interesting thing, which is that I think there are a lot of problems with the way restaurant websites are developed. They're actually kind of an interesting case study in how to do everything wrong. I would almost say this is more of a curriculum issue, that we should be striving as much as possible to educate developers about accessibility, rather than a requirement issue, trying to say that it needs to be done. Because one of the things you learn when you start to learn about accessibility mechanisms is that they're actually useful for everyone. Right? The ability for a screen reader to accurately capture a restaurant's page also lets everyone else just do text searches on the page. But if you bake it all into a blob of inscrutable binary assets, then nobody can find the directions to the restaurant. Nobody can place an order.
And it gets into a whole breakdown of all these different audiences that we didn't think about. All of the young hipster software developers out here where I am in San Francisco are, stereotypically, able-bodied white men in their early 20s, which means they don't realize things like: for a large, large population of people, it's kind of hard to move a mouse, and particularly hard to accurately target your Web 2.0 designer widgets that have tiny, tiny little click targets. When you start thinking about accessibility and you learn things about it, you realize, oh, you know what? It's kind of a pain for me to click on that too. I can do it, but it'd be much nicer if it were a hotkey, or a much bigger click target, or if I could do something other than just clicking to bring this up. I think that learning about accessibility makes you a much better developer.
But again, and we should mention this, all of this stuff is in UI. This is a particular discipline within software development. There are many developers who don't really do UI in their jobs much at all, so it does depend on what you do. But if you do user interfaces at all, I think not knowing about accessibility is a serious flaw that inhibits your ability to do your job well. And similarly, there are lots of other things which I think we ought to be able to put into some kind of curriculum, not an officially sanctioned accreditation for university programs, but like: you should know a thing or two about accessibility, you should know a thing or two about security. Actually having a list of what those things are, and what an acceptable minimum level of knowledge is, might be another interesting kind of ethical code thing. Like, you need to familiarize yourself with at least this level of technology.
There is a consensus level of knowledge that we have established in the industry; here is what it is. And having clear pointers, and obviously that is not a fixed, immutable ethical principle for all time. It would have to be updated every couple of years, in much the same way that a doctor who passes their boards has to do that again periodically in order to stay current, and you lose your license if you can't recertify every so often. I think we might wanna have something similar, where software developers have to maintain some minimal current body of knowledge, and not get a degree or a certification or an accreditation necessarily, but be able to recertify periodically and say: I am still an ethical, competent software developer.
[00:43:05] Unknown:
Absolutely. So in your talk, you mentioned how Rackspace's stance on user support matches the ideals you previously laid out. Can you flesh that out a bit for us? What does that mean to individual Rackers in their day to day work lives?
[00:43:18] Unknown:
Okay. Here's the part of the podcast where I say that I do not speak for my employer. I am not authorized to do so. I think it's pretty cool that they let me talk about this at all, but I just wanna make sure that this is qualified. Absolutely. So it varies a huge amount, because Rackspace is a big, diverse company, and relevant to the specific question, it's diverse specifically in terms of roles. The point I was trying to make in my talk was really that at Rackspace, those of us in software development are still kind of working on an interpretation of what fanatical support means for software development, where you have relatively little interaction with customers and you're mostly working on just regular software projects.
So how does that translate? The interesting thing about Rackspace is not so much that there's one specific thing it means, but that in day-to-day life this is a conversation happening around you all the time: how can we be fanatical about this, and what does that mean? At every company I've worked at, there's a conversation that's happening all the time. And in some companies, luckily not many that I've worked at, but certainly in startups, that conversation can often decay into: how are we going to make a lot of money?
And at Rackspace, that conversation is: how can we support our customers fanatically? For support Rackers, it means something quite specific, and there are established cultural norms around what it means to support a customer fanatically. The interesting thing about it, from the perspective of the discussion of ethics, is that looking at what support folks who are really good at their jobs do, how they support customers and care about customers and put the customer first, is a good way for us to think about users. And again, I have to draw that distinction because we don't have that magical money-changing-hands way of establishing a relationship in the software industry to anchor our relationship to our user. Our user may or may not be the customer, but we still have to care about the user. Making the user the customer is a really easy way to do that. I don't totally buy the whole anti-advertising argument that, you know, you are the product rather than the customer. I think that people who make advertising-based software often do care about the user. One way to make them care about the user more would be to make the user the customer, but another way is really just to formalize that relationship and say: this is exactly how much you need to care about the user. So at Rackspace specifically, it's really the constant question of how we can support our users fanatically that makes it a different kind of place. And that means there's a lot of variation in terms of how that gets interpreted and what exactly happens as the outcome. As I said in my talk, I didn't want to turn this discussion into an ad for Rackspace, and so I don't want to make any broad, sweeping claims about the fantastic results that we achieve, because, again, I can't speak for my employer and I don't want to be seen as advertising, but that is the good thing about this.
Just by constantly refocusing on that question, I think you get a really healthy culture of: what are we trying to do here? We're trying to make this experience as good as we can.
[00:46:55] Unknown:
In your talk, you mentioned that availability of the software's source should be mandatory for compliance with a properly defined ethical framework. What mechanisms for providing that access do you think would be acceptable, and should there be a central repository for housing and providing access to that source? One place where that particularly comes into play is when a company or individual is no longer maintaining the source for a project: making sure that access to that source is maintained for somebody who is still using the product.
[00:47:27] Unknown:
Well, I mean, we already have a government-mandated place where all source code has to be stored: GitHub. I'm kidding, I hope. Speaking specifically for the US, it might make sense for the federal government to operate one, just like they operate the Library of Congress and the patent registry. But that's specific to one area and one potential instantiation of this, and I think it's probably actually premature. If we're talking about universal codes of ethics, then the mechanism of providing that availability is actually kind of irrelevant. It would only be relevant if it were a really serious technical challenge, if we had to invent some new way of copying code around.
But given the practical reality we face, which is that on the Internet there are hundreds and hundreds of places you can host code and revision control, and there are tons of cheap or free ways to do it, I think it would just be getting bogged down to think about the exact mechanics of it. What's more important is to think about what access is, practically. This is how most things work that require you to provide access to something. I'm blanking on a specific example, but when the government says you have to provide a thing, like you have to provide a warranty to your customers, you have to reimburse them, the exact mechanism of that is enforced at the level of case law. You have some mechanism for getting support or getting a warranty or getting a replacement, and if you fail to do it in your customer's eyes, then they sue you. And then they kind of figure out: okay, well, I guess that level of support wasn't adequate.
And so at the level of providing source to customers, the retrieval mechanism, the new GPL actually sort of outlines this a bit. It also refrains from making a specific requirement about how exactly you have to get the code, but in their licenses they say you can send a postal letter to the Free Software Foundation and they will send you the source code. So I think the way we would adjudicate that, in my hypothetical ideal reality, is you have some kind of organization that enforces this stuff, that provides that registry, that knows who's been bad and good.
They could just have a form you could fill out where a customer or a user could say: I tried to get source code from this company, and they say they're hosting it somewhere, but the link didn't work, or I couldn't find it, or I don't think this is sufficiently accessible to qualify. You should revoke your seal of approval from them. And then maybe the company fixes the problem and it's not an issue. Does that sort of answer your question?
[00:50:28] Unknown:
Yes. Another interesting aspect of providing access to source is what the requirements should be around maintaining previous versions of the source and maintaining access to them, particularly in cases where somebody is running an older version of a particular piece of software and is either unable or unwilling to update it, whether for financial or technical reasons.
[00:51:02] Unknown:
Yeah. I mean, updates are a tricky area. I think that if we could just eliminate the financial and technical barriers, the rest of that becomes more or less irrelevant. And supporting people on old versions of things forever, I feel, is potentially an infinite cost, because you have lots and lots of old versions of things. And also, as a software developer, there is literally nothing you can do to support an old version of software, right? What you do, if you are developing software, is develop new software. Now, you could become something else, a support person, an operations person, who could potentially do something to support the old version, but that's a different role, if not a different person. So I kind of feel that the software developer's obligation is to consider deployment and updates and everything as a core part of the software, and to make sure that it's easy and cheap. In that vein, I would recommend folks follow the work of Timothy Fitz and his continuous deployment crusade. He's trying to get everybody to deploy all code to all computers instantly.
He has some really useful thoughts in that area.
[00:52:26] Unknown:
Yeah. The question of transparency and availability of source is especially interesting in light of a number of stories that have come out recently about patients who have been provided with prostheses and other medical devices. In a number of cases, shortly after they receive the device, the company who made it abruptly goes out of business, and they are left with no way to support the device or receive any sort of updates or bug fixes. Having availability and access to that source could potentially provide the patient with a way to either take the source to someone else and have it updated or to
[00:53:02] Unknown:
update it themselves and continue to actually take advantage of that medical device, which could be potentially life saving, or at least enable a higher quality of life? I think that having the source available is an important way to legally allow for that, but I do think that we have to be careful taking that as a catch-all, like: oh, well, they have the source, so problem solved. This is a big failure, I think, of the Free Software Foundation and their whole philosophy of how operating systems should work, because, technically speaking, you do have the code to Linux that you could go and modify if you wanted to, but you probably can't. You probably can't for economic reasons, you probably can't for technical reasons, you probably can't for social reasons. If you go and modify your operating system, you are now off on a branch where you're living by yourself. And if you can't afford to put in the effort to socialize that change and get it maintained and integrated upstream and tested along with everything else, then you ultimately end up on your own unsupported branch. Similarly, there are dozens of companies that have produced critical pieces of infrastructure for various free software desktops and then disappeared, and that stuff has just languished or stopped working.
And especially for people with a medical need for software to keep working, I think that touches on the scope of ethics outside of software development that we were talking about before. My own personal political viewpoint is that we need government to come in and pay for that stuff, because it's going to cost some money to have somebody develop that software. And we can't expect someone whose health is potentially already compromised by the fact that software is not getting updated and not getting maintained to go and fix the whole thing themselves, or to spend millions of dollars to develop fixes and updates. So, yes, I think that the availability of source is necessary.
I don't think that it's sufficient, though. In fact, I think it's necessary but insufficient to an extreme degree, where we have to be very careful about thinking it even gets close to solving the problem. Also, I should step back a little and say that what I described in my talk was a hypothetical proposal for how one might start to enforce some type of ethical standards. Where I would really like to see this happen, where I think it might not be completely impossible, is the patent system. Right now, the way the patent system is supposed to work is that you invent a thing and submit a specification that would allow someone else to build it accurately, and then you only get patent protection if your diagrams are accurate and work and are novel.
Something strange about the way that translated to software means that we submit these strange, nontechnical descriptions of how a piece of software might work to the patent office. What I think would make sense instead, and the truly radical part of this proposal would be to just get rid of copyright protection for software, but that's not how it works and that part's probably not gonna fly, is that to the extent that you want patent protection for software, and certainly lots of companies do, a workable patent system for software would say: sure, submit a working Git repository. This is not open source; this is source availability. You put your code out there, you could still have copyright protection on it, you'd still have licensing issues with it, but if you want patent protection, you have to submit a working build. And then I also have some thinking around mandatory licensing, because since the Patent Office is granting you a monopoly anyway, they're already destroying the ability of the market to generate an accurate price, so they should be able to mandate pricing in exchange for that monopoly. I think that having mandatory licensing combined with mandatory source disclosure would mean that the Patent Office would then become a useful resource for software developers instead of this boogeyman that nobody wants to get near. I don't know if you're familiar with this, but one of my favorite things about Rackspace's work is that my colleague Van Lindberg, who's a lawyer for Rackspace, has killed numerous patents, and patent trolls kinda know not to come after us now, because we've had this stance of aggressive defense where we try to invalidate patents rather than pay any kind of extortion fees around them. And this is not at all specific to Rackspace. Right?
Patent extortion is the business model for patents and software right now.
And if we're talking about ethics, making extortion into your business model is probably on the low end of that scale, just generally. So, yeah, I think that mandatory source availability and that sort of thing would be a great replacement for the patent system. I'm still not a huge fan of government-mandated monopolies on things, but at least that would have some semblance of fairness. At least it would perform the purported role of the patent system, which is: in exchange for not keeping your stuff secret, you get the benefit of some level of protection to be able to make money off of it. Effectively, most of the software that gets patented nowadays is still secret. They sort of vaguely describe that they have a secret, but they still won't tell you what it is.
So I think that's the first place where this kind of mandatory source availability might actually be politically tractable. I still think it's a kind of ridiculous long shot. But
[00:59:17] Unknown:
Yeah. And your point about how the patent system doesn't seem to adhere as tightly to its ideals when it comes to software points again to the failing of general software literacy within government and governmental bodies. That brings up an interesting aspect of the trust equation that you talked about in your talk and its relevance to the need for a code of ethics, because what we do as software engineers is effectively viewed as sorcery by the vast majority of the public, and they must therefore wholly place their trust in us as part of using the products that we create. And as you mentioned with the demise of the scribe with the rise of literacy, increasing the overall awareness of how software works at a basic level partially reduces that dependency of trust. So at what level of aptitude do you think our relationship with our users becomes more equitable?
And how does the concept of source availability play into that topic of general education?
[01:00:13] Unknown:
Well, we've touched on a bunch of those points already, but I think the main question you've got there is: how literate does the general public need to be before we really stop needing specialized ethics around this? And I think the standard, the litmus test, if you will, for whether or not we need special ethics has to do with whether or not we have special access. So, right now, in order to, for example, develop an iOS application, you have to be kind of a licensed and bonded developer with Apple, and your app has to be approved. And similarly with Google Play and other, less popular app stores, right?
So for those platforms, there is a stark division between what constitutes a user and what constitutes a developer. That's at the far end of the spectrum, where you really, really need to have some strict ethical standards, because users have no control whatsoever. And oddly enough, the cloud is sort of the same way, in that users of a cloud-based service can't reach into the back end and change the way that it works. They can't even see the way that it works. Even if you release source, there's no way that you can know that the source compiles down to the binary that you've got on your iPhone. There's no way that you could know that that source is really what's running on the back end. And given the way that deployments and configuration files work, it probably isn't: you're probably running some slightly modified version even if you're trying to stick as close as possible to upstream.
Now, of course, your question is about literacy and how people's understanding of the code plays into that. I think the reason that the systems have grown up the way they have, with that stark division, is that people don't have an interest in learning and don't have the ability to write the code themselves, so there's really no point in preserving a kind of false equivalence between developers and users, because developers really are the audience. So I think the point at which it becomes viable to say that users are literate enough that we don't need a strict code of ethics is the point at which users are sufficiently literate at programming that the lack of ability to program your system becomes a business concern.
So, like, you're running a cloud service and your users are sending you patches that you can't integrate, because all of your users are programmers and they want your system to behave differently. We start to get to that point once users practically understand systems well enough to start trying to modify them, even if they don't have source, right, even if they're hacking it through web API requests or Greasemonkey scripts or whatever it is. If there's a large population of your users who understand your system well enough to write code against it, then for whoever's audience that is, they've achieved that level of parity. And for society at large, I think we get there once we see some real tension on that front: not the hypothetical tension that free software folks like to posit might exist, where we might have lots of programmers who might want to modify their operating system, but where everybody is actually trying to modify the applications that they use on a day-to-day basis.
I think we should be aiming for that. We should be trying to educate people to the point where they can do that, which means that when we do volunteer learn-to-code type events and things, we really need to focus on the practical aspect of: how are you gonna use this? If you're not gonna change careers and become a software developer, if this isn't a boot camp to get you into the industry, how can people use software to do things in their day-to-day lives, and how can they affect their computing environment, whether it's JavaScript to script web page front ends, or API requests to hit API back ends, or just scripts thrown on their local computers? What does that look like? Getting people to understand what that looks like and to do it for themselves is what I think literacy would practically mean for the public.
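The kind of everyday, practical scripting Glyph describes here can be genuinely tiny. As a hypothetical sketch in Python, suppose some service you use (but don't control) hands back JSON; the field names below are made up for the example:

```python
import json

# Pretend this string came back from a web API you use but don't
# control; the "tasks" structure here is invented for illustration.
response_body = (
    '{"tasks": ['
    '{"title": "pay rent", "done": false},'
    '{"title": "ship release", "done": true}'
    ']}'
)

data = json.loads(response_body)

# A small personal automation: keep only the unfinished tasks.
todo = [task["title"] for task in data["tasks"] if not task["done"]]
print(todo)  # ['pay rent']
```

A user who can write ten lines like this is already "writing code against" a system without ever seeing its source, which is exactly the kind of parity being discussed.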
[01:04:54] Unknown:
What can the Python community in particular do to start the ball rolling towards defining a set of professional ethics and what has it already done in this area?
[01:05:03] Unknown:
Well, one thing it's done that's been great, which is tangentially related but still important, is the code of conduct in the Python community. Now, I think the PSF code of conduct actually does not go far enough, but other parts of the Python community, like the Django code of conduct, are a bit more specific, and I think that moves in a good direction. The fact that the community is actively discussing these things and having these demonstrably very hard conversations, which a lot of people have been very upset about, because change is difficult, and the fact that we've come through that conversation and are starting to achieve some consensus around it, means that we could potentially do the same thing around some kind of Python programmer's code of ethics.
And like I said, this is gonna be a slow process. It's gonna take some time. One of the problems with the PSF as an organ for this is that the PSF is a membership organization mostly for individuals. Now, we do have corporate members, and it would be great if we can get some of them involved in the discussion, but I think companies are rightly somewhat cagey about getting involved in something that might turn into a legally binding agreement at some point. So we need to find a way for them to get involved without fear, without shutting out the possibility of discussion, which is tough, and I recognize that. I think the PSF is kind of the right organization, but probably it's going to come out of smaller projects, perhaps an interest group, a SIG, that starts discussing this somehow. I guess I really should have put this closer to the top of the show, but I am not really an expert in ethical reasoning. I've been thinking about this for a long time. I don't think I mentioned this earlier, but this talk came out of a blog post I wrote in 2005; I actually had to go look it up to remember when it came from. So I've been thinking about it for a long time, but I don't have any particular credential or special understanding in this area. My main interest is in getting the discussion started and getting other people thinking about how they disagree with me, really. So I don't think I'm the right person to lead that discussion group, and I don't necessarily know that the Python community specifically is the right place to have that discussion, but the PSF has demonstrated that it's at least possible to have that kind of discussion under its aegis, and that might be a good place to try.
[01:07:43] Unknown:
I was gonna ask if there is anything that we didn't ask you that you think we should have asked, or anything that you wanted to bring up and share with our listeners before we move to the picks? Well, I guess the question that I sort of just
[01:07:56] Unknown:
half answered was how I got into this question, which I don't think you got to earlier, because I was actually thinking about that quite a lot before we talked. Although the talk was a sequel to a blog post I wrote in 2005, I guess I probably started thinking about this when I was a kid, because I grew up in a Star Trek: The Next Generation household. The foundation of my ethical awareness was when Captain Picard said, "The first duty of every Starfleet officer is to the truth, whether it's scientific truth or historical truth or personal truth. And if you can't find it within yourself to stand up and tell the truth about what happened, you don't deserve to wear that uniform." That's kind of how I feel about software developers, although we don't have a uniform. It threads through all of my thinking about this, and what I really hope is that the main guiding principle other folks take forward into other discussions, which I will subsequently disagree with, I'm sure, is this notion that the important aspect of being an outstanding professional is truth: representing yourself accurately, in a way that your customers or your users or your clients understand.
And I think there are a whole bunch of ways that we've done that very poorly in the software community. One of the reasons I think the Python community is a great place to start this conversation is that Python has always been, technologically, very much in favor of giving the user all of the source code, almost even more so than something like JavaScript. Not in the sense of being a customer of a web service, but if you're running some software written in Python, it's always sort of been possible to ship just bytecode, yet nobody really does that. A few binaries have come out over the years that have done it; certainly people have confidentiality concerns or intellectual property issues where they don't wanna be shipping source. But for the most part, the cultural norm in the Python community has always been to just ship source to your users. And the reason I say more so than in JavaScript is that in JavaScript you run up against technical limitations where you really want to minify your source, to crush it down for production so that it doesn't take up more room on the wire than it needs to. Python didn't have that concern, and so, yeah, we just share source freely. There's been talk about computer programming for everyone since the very beginning of the language's history. That's a good mindset to have.
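That "you always get the source" norm is visible in the language itself: the standard library can hand you the source of most pure-Python code at runtime. A small sketch using only the standard library:

```python
import inspect
import json

# json.dumps is ordinary Python shipped as source, so the running
# interpreter can show you exactly the code it is executing.
source = inspect.getsource(json.dumps)
print(source.splitlines()[0])  # the "def dumps(...)" signature line
```

The same trick works on most third-party packages you install, which is one concrete way the Python ecosystem keeps developers and users closer together than bytecode-only or minified distribution would.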
[01:10:47] Unknown:
It's a good starting point. Well, I think that was a very interesting conversation about ethics, and I'm sure it gives our listeners a lot to think about and a lot to comment on. So with that, we'll move into the picks. For my first pick, I'm gonna choose the Philips SHP9500 headphones. I recently had the headphones that I was using start to malfunction and stop working properly, so I went ahead and got myself a new set. Rather than just go with the 5 or 10 dollar pair that I usually get, I decided to get some decent ones, did a little bit of research, and came across these, and I have been very pleased. They've got a really wide frequency range, from 12 hertz up to 35 kilohertz, which is pretty wild. And they're an open-back design, so when you're wearing them you can actually still hear everything that's going on around you, rather than most headphones that just shut everything out, and I really enjoy that. It also makes it a lot more pleasant when I'm wearing them and talking to somebody on the phone, because I can actually hear myself talk instead of having it be really muffled and weird like you get with noise-cancelling or earbud headphones.
For my next pick, I'm gonna choose Keybase.io, which is a website and sort of a social network around GPG keys, offering a different way of establishing trust than the traditional "I'll sign your key, you sign my key" approach of having to meet in person to validate that trust. It gives you a way of socially validating your identity through Twitter, DNS records, and whatever other networks you might wanna employ. I've been enjoying using it, and for anybody who's interested, I've actually got some invites that I can give out. So to any listener who wants an invite, send a tweet to podcastinit on Twitter and let us know what your favorite part of the show is.
And I will choose some people to give an invite to. My last pick is Paul Blart: Mall Cop 2, a movie with Kevin James that I watched recently. It is one of the funniest movies I have watched in a long time, so I'm just gonna leave it at that. Definitely recommended, so check it out if you're so inclined. Chris, what do you have?
[01:13:12] Unknown:
So my first pick is an indie game that has been around on the PC for a long time, but I don't own Windows and I don't generally play Windows games. So when it came to the iPad, I was really excited and gave it a shot. It's called Don't Starve, and it's this really demented game with a Lovecraftian horror twist where you're trying to survive out in the wilds. It has a very simple mechanic, it's very hard, you will die a lot, and it's a lot of fun. The translation to a tablet is brilliant; the controls are perfect, and it really works. It's a great example of how the iPad is a first-class gaming platform for those of us who do most of our gaming on the go, on the train, or whatever the case may be.
My next pick, shockingly enough for me, is a Python pick. I don't usually pick a lot of Python picks because I'm still a Python noob, and most of the time other people have more interesting things to say. But Reuven Lerner, a previous guest of our show, published a blog post this last week on list comprehensions. If you're like me and you know how to use list comprehensions but you're still not entirely sure about them, or you could use more depth, this is the blog post for you. It uses the analogy of a SQL query or an Excel spreadsheet, and I think it's just beautifully done. You should definitely check it out.
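For listeners who haven't seen the SQL analogy Chris mentions, it maps SELECT / FROM / WHERE onto the three clauses of a comprehension. A quick illustration with made-up data:

```python
# Roughly: SELECT name FROM people WHERE age >= 18
people = [("Alice", 31), ("Bob", 12), ("Carol", 45)]

adults = [name                      # SELECT: the output expression
          for name, age in people   # FROM:   the iterable
          if age >= 18]             # WHERE:  the filter

print(adults)  # ['Alice', 'Carol']
```

Reading comprehensions clause by clause like this is the core idea of the analogy, and it generalizes to set and dict comprehensions as well.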
And I know I learned a lot from reading it. My last pick is Barr Hill Gin. It's a Vermont-made gin that's made with honey, so it's very unusual: a little sweeter, with a little different flavor character than a lot of gins that I've encountered. It's really delicious for certain kinds of mixed drinks. I wouldn't use it for a gin and tonic; it's great just straight, just a finger or two in a glass over some ice. Very refreshing. It's great stuff.
[01:15:11] Unknown:
And that's it for me. I'll kick it over to Glyph. What do you have for us? I'll just interrupt quickly and say that I definitely support your pick of Barr Hill Gin; I've had their gin, and they also do a vodka that is made entirely from honey, and one called Tom Cat, which is a modern version of Old Tom gin, you know, from old England. They are all very excellent, so I recommend that anybody who is within their distribution range check them out.
[01:15:41] Unknown:
Okay, I guess this is new to me. For my first pick, I'll stick with the theme of a Python pick, which is Py2app, and more generally the ecosystem around Py2app and PyObjC. I actually wrote a big blog post about this recently, so you can read about it in more detail at glyph.twistedmatrix.com, but I find that a lot of Python developers do a lot of back end stuff and don't know that, even though we all have Macs, and Macs are super popular, you can just kinda hack together a Mac application really easily. It's a little obscure and not quite as well documented as it could be, but that's what my little intro on my blog was about: to help people get kick-started with how to get PyObjC going inside a GUI.
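For anyone curious what the Py2app side of that looks like, the canonical entry point is a small setup.py. This is a sketch based on py2app's documented usage; `MyApp.py` is a stand-in name for your own GUI script:

```python
# setup.py -- a minimal py2app build script (sketch).
# Build a standalone .app bundle with: python setup.py py2app
from setuptools import setup

setup(
    app=["MyApp.py"],  # hypothetical entry-point script for your GUI
    setup_requires=["py2app"],
    options={"py2app": {"argv_emulation": True}},
)
```

From there, the GUI code inside `MyApp.py` would typically use PyObjC's AppKit bindings, which is the part Glyph's blog post walks through.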
But, yeah, there has been a small team of folks thanklessly working for years making sure that it's possible to make and distribute Mac apps, customize the Mac GUI, and basically call any API on the whole platform. So check out Py2app, that's "py," the number 2, "app," and PyObjC, that's o-b-j-c, both great projects that can let you do all kinds of fun hacks on your own development machine while you're working on your web back end stuff. And my second pick is the Sansaire sous vide immersion circulator, which is just a fantastic way to prepare any kind of meat or protein that you want.
It is surprisingly inexpensive: immersion circulators used to be like $5,000, and this is only, I think, $199. You might be able to get it cheaper than that. You just take a big hunk of meat, put it into a vacuum-sealed bag, put it in the immersion circulator for as little as 2 or as many as 72 hours, and it comes out unbelievably tender and juicy and delicious. You can basically take a chunk of chuck roast and end up with something that's like prime rib, so it's kind of amazing. And those 2 picks really kind of sum up the universe for me, so I'll leave it at that.
[01:18:17] Unknown:
Great. Well, we really appreciate you taking the time to join us, and we appreciate your employer for letting you take the time. For anybody who is interested in keeping in touch with you or contacting you to talk further about this issue, what would be the best way for them to reach you or follow what you're up to? You can reach me on Twitter, I'm glyph.
[01:18:36] Unknown:
You can reach me on Keybase, I'm glyph. I'm at glyph.twistedmatrix.com. I am glyph on freenode.net. Basically, just type "glyph" into the Internet and I'll come up.
[01:18:50] Unknown:
Well, again, thank you very much. We really appreciate it, and I'm sure our listeners do too.
Introduction and Host Welcome
Interview with Glyph Lefkowitz
Implications of Unethical Software
Ethics in Software Development
Accessibility and Ethical Standards
Source Code Availability and Ethics
Python Community and Professional Ethics
Conclusion and Final Thoughts