Summary
Do you wish that you had a self-driving car of your own? With Donkey you can make that dream a reality. This week Will Roscoe shares the story of how he got involved in the arena of self-driving car hobbyists and ended up building a Python library to act as his pilot. We talked about the hardware involved, how he has evolved the code to meet unexpected challenges, and how he plans to improve it in the future. So go build your own self-driving car and take it for a spin!
Preface
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- I would like to thank everyone who supports us on Patreon. Your contributions help to make the show sustainable.
- When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at podcastinit.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your awesome app. And now you can deliver your work to your users even faster with the newly upgraded 200 Gbit network in all of their datacenters.
- If you’re tired of cobbling together your deployment pipeline then it’s time to try out GoCD, the open source continuous delivery platform built by the people at ThoughtWorks who wrote the book about it. With GoCD you get complete visibility into the life-cycle of your software from one location. To download it now go to podcastinit.com/gocd. Professional support and enterprise plugins are available for added peace of mind.
- Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email hosts@podcastinit.com.
- To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
- Your host as usual is Tobias Macey and today I’m interviewing Will Roscoe about Donkey, a Python library for building DIY self-driving cars.
Interview
- Introductions
- How did you get introduced to Python?
- What is Donkey and what was your reason for creating it?
- What is the story behind the name?
- What was your reason for choosing Python as the language for implementing Donkey and if you were to start over today would you make the same choice?
- How is Donkey implemented and how has its software architecture evolved?
- Is the library built in a way that you can process inputs from additional sensor types, such as proximity detectors or LIDAR?
- For training the autopilot, what are the input features that the model is trained on from the input data, and is it possible to change the features that it will try to detect?
- Do you have plans to incorporate any negative reinforcement techniques for training the pilot models so that errors in data collection can be identified as undesirable outcomes?
- What have been some of the most interesting or humorous successes and failures while testing your cars?
- What are some of the challenges involved with getting such a sophisticated stack of software running on a Raspberry Pi?
- What are some of the improvements or new features that you have planned for the future of Donkey?
Media
Keep In Touch
Picks
- Tobias
- Will
Links
- Donkey Car
- DIY Robocars
- Tornado
- [Tornado on Podcast.__init__](https://www.pythonpodcast.com/episode-40-ben-darnell-on-tornado/)
- Raspberry Pi
- TensorFlow
- Convolutional Neural Network
- Adafruit
- LIDAR
- ROS (Robot Operating System)
- Unity
- Udacity Self-Driving Car Nanodegree
- SparkFun
- Beagleboard
- Adam Conway
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. I would like to thank everyone who supports the show on Patreon. Your contributions help to make the show sustainable. When you're ready to launch your next project, you'll need somewhere to deploy it, so you should check out Linode at podcastinit.com/linode and get a $20 credit to try out their fast and reliable Linux virtual servers for running your app. Now you can deliver your work to your users even faster with the newly upgraded 200 gigabit network in all of their data centers. If you're tired of cobbling together your deployment pipeline, then it's time to try out GoCD, the open source continuous delivery platform built by the people at ThoughtWorks who wrote the book about it. With GoCD, you get complete visibility into the life cycle of your software from one location. To download it now, go to podcastinit.com/gocd. Professional support and enterprise plug-ins are available for added peace of mind. You can visit the site at podcastinit.com to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions, I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email me at hosts@podcastinit.com.
To help other people find the show, please leave a review on iTunes or Google Play Music, tell your friends and coworkers, and share it on social media. Your host as usual is Tobias Macey, and today I'm interviewing Will Roscoe about Donkey, a library for building self-driving cars with Python. So, Will, could you please introduce yourself?
[00:01:30] Unknown:
Sure. Yeah. My name is Will Roscoe. I live in Oakland, and I am an engineer at a precision agriculture company, helping farmers.
[00:01:40] Unknown:
And do you remember how you first got introduced to Python?
[00:01:43] Unknown:
I think I was looking for a modern framework to build a website maybe 10 years ago.
[00:01:51] Unknown:
So, I'm wondering if you can briefly give an overview of what the Donkey project is and what your reason was for creating it in the first place?
[00:02:02] Unknown:
So it started about a year ago. I actually ran for a local subway director seat; the San Francisco subway system is the BART system, and I thought it was really crummy, and so I wanted to modernize it and replace it with self-driving buses, which I think would increase the capacity and let people get places faster. What I learned was that people didn't believe the technology existed to make it happen, and I didn't win the race, so I wanted to prove, just on a small scale, that the technology actually exists. So I pitched racing small-scale self-driving cars to a couple of people, and there was some interest.
And Chris Anderson from DIY Drones had started organizing these races on a monthly basis, you know, the first one being in November. I went there without a car and sat down next to Adam Conway, who had the simplest car of all of them. Everyone else had these cars with wires plugging into Arduinos, and Adam just had a single Raspberry Pi plugged into a motor controller. So I sat down with him and started working on the software, and from that point, we just built a software package that lets you drive the car through the web, as well as train these neural network autopilots to drive the car by itself.
So it's been,
[00:03:38] Unknown:
10 months now since we started working on this. Well, it's actually a lot younger than I was led to believe just from reading through the documentation and a lot of the resources you have behind it, and how advanced it is as far as the capabilities. So I'm impressed at what you've been able to achieve in that short of a time.
[00:03:54] Unknown:
Yeah. Well, only recently it's become a lot better, because some people from the San Diego robotics club, and some people in Australia, are now working on it and have helped do sort of the heavy lifting of documenting and making it more robust.
[00:04:11] Unknown:
Yeah. Many hands make light work, as they say. So what's the story behind the name? What was the inspiration for choosing the name Donkey for this kind of a project?
[00:04:20] Unknown:
Yeah. It was just a, I don't know, minute decision, but I really wanted to make something useful, and figured it's sort of the early days of making something useful. So I was thinking about building these cars as sort of like domesticating animals, and I was looking up what some of the earliest domesticated animals were. I think the first one was an ox, but that really didn't fit the nature of the project, and one of the second ones was a donkey, and that seemed to fit because it was useful, you know, sometimes super stubborn, but also kid safe.
[00:05:00] Unknown:
Yeah, it's definitely a very evocative name, so it brings up a lot of imagery that goes along with it. And like you said, it's early days and the car doesn't always do what you want it to do, so it can be a little unpredictable, like a donkey. Yeah. It also sets a really sort of low threshold
[00:05:17] Unknown:
for people's expectations. So if it goes and can follow a line, they're impressed.
[00:05:22] Unknown:
Absolutely. So what was your reason for choosing Python as the language for implementing it? And if you were to start over today, do you think you'd make the same choice?
[00:05:30] Unknown:
Yeah. I love Python, and it just has all the libraries that seem to work, and it really helps that the academic community is using it extensively. So if you need to do something, there's likely a library written by someone smarter than you. So, yeah, I think it was a no-brainer.
[00:05:51] Unknown:
So what are some of the libraries that you leveraged to be able to build the control system and the neural networks, and what does the overall architecture of it look like?
[00:06:03] Unknown:
So the architecture uses a Tornado web server to communicate with the car. The Tornado web server runs on the Raspberry Pi, and then you use your phone or your computer to access the web page being served from the Raspberry Pi. Then there's some other glue code that takes the inputs that you give through Tornado and runs them through this thing we call our drive loop. At 30 frames a second, it's going through the same sequence: first reading the user input, second reading the camera, and then doing some logic to figure out whether or not you should run the autopilot. If you are running the autopilot, you're using the TensorFlow library to input a single image; it runs a convolutional neural network and outputs a steering and a throttle value, and those then go down to the steering and throttle actuators.
The actuator library is one written by Adafruit. And then we have a data store that is just a sort of extended dictionary that also records to the file system, and it just runs that drive loop over and over again. We made the neural network small enough that it can run at 30 frames a second, and it seems to work fine.
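To make the shape of that drive loop concrete, here is a minimal sketch in Python. It is not Donkey's actual code: the `camera`, `controller`, `autopilot`, and `actuator` objects (and their `read`/`predict`/`apply` methods) are hypothetical stand-ins for the pieces he describes, and the datastore is reduced to an in-memory list.

```python
import time

def drive_loop(camera, controller, autopilot, actuator, store, hz=30):
    """Run the fixed-rate loop described above: read the user's input,
    read the camera, decide whether the autopilot should drive, apply
    the resulting steering and throttle, and record everything."""
    period = 1.0 / hz
    while True:
        start = time.time()
        user = controller.read()   # e.g. {"steering": 0.1, "throttle": 0.3, "mode": "auto"}
        frame = camera.read()      # a single camera frame
        if user["mode"] == "auto":
            steering, throttle = autopilot.predict(frame)
        else:
            steering, throttle = user["steering"], user["throttle"]
        actuator.apply(steering, throttle)                  # drive the servo and motor
        store.append({"frame": frame,                       # the "extended dictionary"
                      "steering": steering,                 # datastore, reduced here to
                      "throttle": throttle})                # a plain list
        # sleep off whatever is left of the 1/30-second budget
        time.sleep(max(0.0, period - (time.time() - start)))
```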
[00:07:31] Unknown:
And you've got some reference hardware on the website that describes, you know, all the different pieces necessary to build the car that you've been building and testing with. And in that reference architecture, the only input is the camera. So I'm wondering if you built the library in a way that it's possible to add support for additional sensor types, such as proximity detectors or lidar, or if that would require some sort of rearchitecting?
[00:07:57] Unknown:
The current answer is yes. But about 3 months into this project, we realized that everyone was solving the same problems relating to hardware, and so we put in a good month's worth of weekends just documenting how to get these things assembled so that people didn't have to solve those same hardware problems. In the first version of the software, it was not possible to add any sensors or anything, because it was just hard coded for throttle, steering, and camera. But we did a refactor, and now everything is organized into parts, borrowing a lot of inspiration from the way ROS was designed, but this is sort of native Python.
You can write your own part that would read LIDAR, and there is actually a LIDAR part available, but it's not used in the default car template. People are also using wheel encoders, and instead of using the web interface part, they've hooked up a PlayStation joystick. So if you can just write Python code to access a different sensor, it's pretty easy to write a sort of wrapper part that will fit into the Donkey ecosystem.
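As a rough illustration of the "parts" pattern he describes, written as plain Python classes rather than Donkey's actual interface, a new sensor or behavior could be wrapped like this; all of the class and method names here are assumptions made for the sketch.

```python
import random

class FakeLidar:
    """A hypothetical sensor 'part': any plain Python object with a run()
    method. The readings are faked here so the sketch stays self-contained;
    a real part would query the device over USB or serial instead."""
    def run(self):
        return [random.uniform(0.2, 5.0) for _ in range(360)]  # metres per bearing

class StopIfTooClose:
    """A hypothetical behavior 'part' that consumes another part's output
    and overrides the throttle when an obstacle is near."""
    def run(self, distances, throttle):
        return 0.0 if min(distances) < 0.3 else throttle
```

A vehicle loop like the one sketched earlier would then call each part's run() once per frame, wiring one part's outputs into the next part's inputs, which is the ROS-like dataflow he mentions, implemented in native Python.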
[00:09:22] Unknown:
And so it sounds like one of the sort of evolutionary aspects of the project has been adding those additional sensor input capabilities. Are there any other evolutions of the project that came about just by meeting a particular need that wasn't considered at the beginning?
[00:09:37] Unknown:
Yeah. Well, we're kinda at a junction right now where we, you know, spent 6 months getting these end-to-end neural network autopilots working. Right now, most autopilots just take each frame in and then output steering and throttle, and they're really hard to debug, and sort of your only recourse for fixing the autopilot is to either get more training data or to tweak the model a little bit. I think the interesting things are gonna come when we're able to create specialty networks and leverage OpenCV to, you know, do some visual odometry stuff. So, yeah, I think the evolution right now is moving away from the end-to-end neural network, which is a great place to start, and toward a more robust architecture that will, you know, recover from going off the track or running into a wall, and also give some more awareness of what's coming up, where you are on the track, and whether there are other cars around.
[00:10:44] Unknown:
And so the way it's implemented right now, is there a particular hard coded set of features that the neural network is training itself against from the visual input that it's receiving? And is there a way to change the set of features that it's trying to detect as it's being trained?
[00:11:01] Unknown:
Yeah, there's not really. Some people have written some analysis visualizations to show what pixels the neural network is actually being triggered by, and it's not random, but there's just a ton of things that the neural network is looking at, and not all of them are useful. So I think we need to get more scientific about, you know, what we actually feed into the network. Should we do some filtering before we send in the image? Right now, we're literally just taking each frame, not changing it at all, and plugging it into the neural network, which runs 4 layers of convolutions and then a single dense layer and outputs a steering angle and a throttle value.
So it's just doing behavioral cloning. There's sort of no manual adjustment that you do.
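A rough approximation of that network in Keras might look like the following. The exact layer widths and strides are assumptions, not Donkey's published model, but the overall shape, four convolutions, one dense layer, and two regression outputs, matches what he describes.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_pilot(height=120, width=160):
    """Hypothetical sketch of the pilot network described: four convolution
    layers, a single dense layer, and two outputs (steering and throttle)."""
    img_in = layers.Input(shape=(height, width, 3))
    x = layers.Conv2D(24, 5, strides=2, activation="relu")(img_in)
    x = layers.Conv2D(32, 5, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = layers.Conv2D(64, 3, strides=1, activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    steering = layers.Dense(1, name="steering")(x)   # steering angle regression
    throttle = layers.Dense(1, name="throttle")(x)   # throttle regression
    model = Model(inputs=img_in, outputs=[steering, throttle])
    model.compile(optimizer="adam", loss="mse")
    return model
```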
[00:12:03] Unknown:
So right now, it's just a purely unsupervised system, where it just sees the visual input from running the test runs through the track and uses that purely to, you know, make its own inference as to what the desired outcome is. Is that correct?
[00:12:19] Unknown:
Yeah. Well, it's not programmed, but I think it is considered supervised, because you're giving it the throttle and the steering values that you want it to output by first driving it around and collecting, you know, 10 or 20,000 frame, steering angle, and throttle pairs. And then with that data, you train the autopilot.
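The supervised "behavioral cloning" step he describes then amounts to fitting that network on the logged pairs. This sketch reuses the hypothetical build_pilot() from above and substitutes random arrays for the ten to twenty thousand recorded frames.

```python
import numpy as np

# Stand-in arrays for the logged (frame, steering, throttle) records; real data
# would come from the datastore written while a human drives the car around.
frames = np.random.rand(1000, 120, 160, 3).astype("float32")
steering = np.random.uniform(-1.0, 1.0, size=(1000, 1)).astype("float32")
throttle = np.random.uniform(0.0, 1.0, size=(1000, 1)).astype("float32")

model = build_pilot()  # the hypothetical model sketched above
model.fit(frames,
          {"steering": steering, "throttle": throttle},
          batch_size=128, epochs=10, validation_split=0.1)
```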
[00:12:43] Unknown:
Okay. Yeah. So do you have any plans to incorporate the ability to do sort of negative reinforcement techniques for the network, so that if you, for instance, go off the track during your training run, you can then go back and label the input data that has the errors where you accidentally went outside the bounds of the track, so that it can use that as a means to determine when it has left the boundaries of where it should be and then be able to recover from that? Yeah. So right now we just delete that data so that it's not trained on it. I'm not entirely sure how we would do negative reinforcement on that, but
[00:13:16] Unknown:
one of the guys down in San Diego, Tawn Kramer, wrote a simulator in Unity for the car, and so you can run the Donkey code on your computer with the simulator running, very similar to the way the Udacity course does it, and test your autopilots there. And I think in a simulator, it would make sense to do some reinforcement learning where, you know, you reward it for staying on the race line that you want it to stay on, and then if it's going off the race line, you could penalize it or just take away the reward a little bit. We just got the simulator going maybe a week ago, and I think that's going to really improve our pilots, and we will be able to train some depth nets, just creating a network that will output the estimated depth of each pixel so that we can get some sense
[00:14:11] Unknown:
of objects and where they are. Yeah. So the way it's written right now, there's not really any way to incorporate object avoidance. It's purely just stay within these 2 lines and, you know, go around the track multiple times.
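For the simulator-based reinforcement learning he mentions, a reward could be shaped roughly like this. It is a hypothetical example, not anything shipped with Donkey or the simulator, and the cross-track error and speed signals are assumed to come from the simulator itself.

```python
def reward(cross_track_error, speed, max_offset=1.0):
    """Hypothetical shaping reward: full credit on the desired race line,
    decaying as the car drifts off it, scaled by speed so the agent is not
    rewarded for sitting still; leaving the line entirely is penalized."""
    if abs(cross_track_error) > max_offset:
        return -1.0
    return (1.0 - abs(cross_track_error) / max_offset) * speed
```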
[00:14:23] Unknown:
Yeah. It's just try to drive how you were driven before. Mhmm.
[00:14:28] Unknown:
And when I was looking at some of the videos on the website showing some of the test runs of the different self-driving vehicles on their tracks, one of the things that was interesting was seeing how the cars were interacting with each other when there were multiples on the track, and how they seemed to sort of confuse each other because it was unexpected input that they didn't quite know how to handle. Yeah. These races are really fun, and
[00:14:50] Unknown:
the group that Chris Anderson started is DIY Robocars, and there are a couple of groups across the country, in DC and Austin and San Diego. But in Oakland, we were just meeting every month in this rundown warehouse and racing. The normal schedule of these events is we'll get there in the morning, the track might need to be repainted because the warehouse is used for other things, and then we start collecting training data. And the events have become popular enough that there are maybe, I don't know, 10, probably 20 cars on the track at a time, so it's hard to get sort of pure training data. Some of the problems with that are that the training data you collect shows you slowing down when you're approaching a car, and so when you are racing and a car gets in front of you, the car will slow down.
Or if you train without seeing other cars, it has no idea what other cars are, and it will just run into them and won't know what's going on. Yeah. One of the videos that I thought was quite humorous was,
[00:16:00] Unknown:
the cars were going around the track, and then one of them happened to go outside the bounds of the track, but it was to the inside of a loop, so it was then bounded by a white line, and it just sat there staring at the white line, not quite knowing what to do, because it was trained not to go outside of the white lines, but it didn't know how to reenter the track to keep going. Yep. So aside from that, what have been some of the other most interesting or humorous successes and failures that you've come across while testing your cars or other people's cars with Donkey?
[00:16:29] Unknown:
We haven't really done object avoidance before, but this past weekend I went to the Denver Maker Faire and the SparkFun AVC, the Autonomous Vehicle Competition, race. I was there, and they had some big red barrels that you had to avoid, and the Donkey car was the smoothest car going through the barrels. We didn't get through the whole lap during race time, but most other cars' approach to getting around the barrels was either to run into one and then back up and try a different route, or to use LIDAR to try to find their way through. The Donkey just sort of kept its speed and could go around them, learning from the way that I had driven. But I think demonstrating some of these capabilities, like object avoidance, lane keeping, and passing, was recently shown to be possible at the San Diego Maker Faire, and I really want to try to get some more outputs from the image.
So right now we don't have any gauge of speed; we just have a throttle output, and so your speed will change depending on how full your battery is. So I want to get more of a feedback loop from the camera to actually estimate what the car is doing, using some visual odometry techniques to figure out where the car is going and has been, and maybe even create a map. That's the fun thing that I'm working on this year.
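One very crude way to get that kind of camera-based feedback, sketched here with OpenCV's dense optical flow rather than any Donkey code, is to average how far the scene appears to move between consecutive frames; treat it as an illustration of the idea, not a real visual odometry pipeline.

```python
import cv2
import numpy as np

def apparent_speed(prev_frame, frame):
    """Estimate how fast the scene is moving, in pixels per frame, by
    averaging the magnitude of dense optical flow between two BGR frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(np.mean(magnitude))
```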
[00:17:57] Unknown:
Yeah, and the mapping aspect is definitely interesting, because from what you were saying earlier, it sounds like each time you go to one of these events, you have to retrain the Donkey car on the track that it's going to be running on. So I'm wondering if there's any possibility of being able to save those different training runs and either recall them when you're reusing the same track, or be able to sort of create some hybridized model based on past experiences of different tracks, and then build some sort of evolutionary capability of the network based on past training sets.
[00:18:29] Unknown:
So we're just starting to do that. Actually, the next race in Oakland is this weekend, and I'm gonna stay home behind my desktop, because I'm just way more productive here as opposed to at the races, and I'm offering anyone at the race that I'll train models for them. So, hopefully, I'll be able to aggregate every donkey's data from the race and use it to train a better model as a whole, rather than each individual person having to collect enough data to make a robust model. It's unproven, so we've gotta see how it works. But I think if we do get more scientific about this, we'll learn whether it makes sense to borrow training data from a completely different course, whether the car actually still learns something about recognizing lines and things like that even though it's a completely different course.
Those are things we just need to test and try to quantify. But one of the fun things that I'm enjoying with this project is that the commercial self-driving projects have invested so much time and money that there are these papers coming out that you can read and then implement at a much smaller scale, and since the pictures we're collecting are only 120 by 160 pixels, you can actually train these driving models on your laptop. So it's a really fun way to experiment, and I think if we get the right people involved in this project, we'll be able to contribute back to the cutting edge of self-driving technology so that, you know, we can realize the self-driving future faster.
[00:20:16] Unknown:
Yeah. One of the things that I thought was interesting when I came across this project and considered inviting you on the show was, just like you said, that feedback cycle. Self-driving vehicles are so much in the research and testing phase, and there are so many different large companies and even smaller organizations who are trying to play a role in bringing that future forward. It's interesting to see that there's also a movement of hobbyists who are getting involved just to play around and see what kinds of things are possible on even more limited hardware, and to see how that might play back into the implementations that these companies are doing, by maybe pulling in efficiencies that are developed because of the more limited capabilities of the hardware and software that you're running on, and seeing how that can potentially improve the overall capabilities of the Googles and the Ubers and the other companies that are involved in trying to build these self-driving vehicles.
Yeah, I'm really enjoying the constraint of just having to run everything on the Pi. You know, you're just forced to
[00:21:20] Unknown:
sort of boil everything down to its essence and throw out the things that are unnecessary, which is why we haven't really added many sensors; it's just another thing that can break. And there's the fast iteration speed, and also the lack of regulation and sort of safety requirements
[00:21:38] Unknown:
just lets you experiment on whatever you want. It's really fun.
Yeah, it definitely is. It's always fun to see what sorts of innovation and ideas can come about because of those imposed constraints, because, you know, if you have all the money in the world, you're less inclined to try to find those efficiencies or cut corners to get to the end result; if you have all the resources, then you might as well use them. But being forced into those constraints causes you to use other ways of thinking that you might not otherwise, so there are definitely interesting feedback cycles between the two sort of scales of operation. And I'm curious, what are some of the challenges that you have come across in getting the full stack running on a Raspberry Pi? Particularly things like TensorFlow and Keras, and running Tornado as well. I imagine that it starts to consume a fair amount of resources to process the images live, and I'm curious what are some of the ways that you've been able to overcome the limitations of the platform that you're running on?
[00:22:39] Unknown:
Yeah. Well, I mean, the first obstacle you realize is just that installing things on the Raspberry Pi is pretty tricky. The Raspberry Pi Foundation has done a really awesome job of creating binaries for each release, and those are great, but if you want to install OpenCV, you know, it takes 4 hours to compile, and luckily someone has created binaries for TensorFlow on the Raspberry Pi, but that can easily take a day if you don't know what you're doing, which is why we made a disk image that you can copy so you don't have to do all that stuff. But that's sort of the one problem, getting all these libraries onto the Raspberry Pi.
But I think the second part is that there are just trade-offs. If you know that your model can only run at 7 frames a second, then you don't have any bandwidth to do any other computer vision stuff, and it also means you can't move as fast. So what we did was almost run a brute force search to find the right size model, and we went from having a model that was 24 megabytes and 20 million parameters to a model that is just under 2 megabytes and, like, 200,000 parameters. That can run at least at 30 frames a second, and we could probably even get it lower. Another team, the Carputer team, has done some tests on images that are only 32 by 32 pixels, and they seem to do almost as well as the larger images.
So, yeah, I think just trying to squeeze out the things that are unnecessary is how we're going to be able to build some robust and creative movements out of the Donkey. One inspiration is the fruit fly, just the thing that you see in a kitchen around your fruit: it has only about 200,000 neurons. They're not the same as the artificial neurons that we're using, but until we're able to get something as smart as a fruit fly out of a Raspberry Pi, I think we don't really understand what we're doing. That's a little motivation for just working with constrained hardware.
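A quick way to sanity-check whether a candidate network fits that frames-per-second budget, reusing the hypothetical build_pilot() sketch from earlier rather than any of Donkey's own tooling, is simply to count its parameters and time inference on a dummy frame:

```python
import time
import numpy as np

model = build_pilot()                        # the hypothetical 120x160 model sketched earlier
print("parameters:", model.count_params())   # a rough proxy for model size

frame = np.random.rand(1, 120, 160, 3).astype("float32")
start = time.time()
for _ in range(100):
    model.predict(frame)                     # one forward pass, like one step of the drive loop
per_frame = (time.time() - start) / 100
print(f"~{per_frame * 1000:.1f} ms per frame, ~{1 / per_frame:.0f} frames per second")
```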
[00:24:56] Unknown:
Yeah. Definitely. Are there any other embedded computing pieces of hardware that you've looked at, such as the BeagleBone
[00:25:03] Unknown:
or some of the other, you know, sort of single-board computers that you've considered trying to run this on? I haven't. We have talked about adding a Teensy just for sort of integrating some of the live sensors, but I really just work on the software and defer all the hardware decisions to the other cofounder of this project, Adam Conway. We just try to keep it as simple as possible. Some people have used Donkey on the TX2 or some of the other NVIDIA platforms, but since they're not as widely used as the Raspberry Pi, people haven't solved the software bugs, so it can take a long time to set them up. So, I mean, for the foreseeable future, we're just gonna stick with the Raspberry Pi.
[00:25:49] Unknown:
And what are some of the improvements or new features that you've been considering for the future of Donkey, both from the hardware and software perspectives?
[00:25:58] Unknown:
Yeah. Well, from the software perspective, some sort of visual odometry and mapping. Once you have a map, you can anticipate and go a lot faster, and I think that will help us win some more races. From the hardware perspective, I think maybe some wheel encoders. There's a guy who has designed a Raspberry Pi HAT that has a power regulator that lets you take power from your car battery rather than requiring a USB supply, and it also has an IMU and accelerometer, that type of stuff, so that we can integrate that to get more feedback into what the car is actually doing. Yeah. And
[00:26:50] Unknown:
So after having spent the past several months working on Donkey and improving the capabilities of autonomous driving, does it give you a new appreciation for some of the difficulties involved in trying to roll out an autonomous fleet for mass transit? And have you considered new ways that that might be implemented at a city scale to improve transportation capabilities?
[00:27:14] Unknown:
Yeah, I mean, it definitely gives you a huge appreciation for driving, like just driving a car. When you're driving, you're thinking, how am I doing this, and then how can I make the Raspberry Pi Donkey thing do this? So it makes driving more enjoyable. But from a city perspective, especially with all the money that Tesla and Google and everyone is throwing at this, I think we could have self-driving cars if we didn't have human drivers and we just designed the infrastructure around this new technology. And I think that's the opportunity that these municipalities have that have rights of way that are free of cars, like the subway rails. I don't think there's any investment in this area.
And so I think if we're able to demonstrate what this can look like, it would just excite people that they could, you know, have the efficiency and throughput of a train, but with the flexibility of a small bus, to stop at only the stops you need to go to and to deliver you the last mile. So my goal is unchanged. I appreciate the complexities of designing a system more now, but I think it's definitely possible.
[00:28:39] Unknown:
And one of the things that I was just thinking of as you were saying that is that one of the reasons a full-scale autonomous vehicle network would be safer and more reliable than human drivers is partly the ability for those vehicles to communicate with each other. So I'm curious if there's any room for potentially creating a vehicle mesh network of, you know, Donkey vehicles, to be able to do something like a relay race, where potentially one of the vehicles is running a particular section and then is able to relay some information to another vehicle about some of the potential hazards involved in the course.
[00:29:19] Unknown:
Yeah. One of the next steps I forgot to mention was getting these cars to platoon. So we're gonna put some fiducial tags, I'm not sure I'm saying that right, just like those QR-code-looking things that computer vision can use to tell their orientation and distance. With that, I think we'll be able to have the cars follow each other, and that would be the first step. But in terms of communicating things with each other, it's tough, because you can never rely on that, just like you can't rely on the driver in front of you to, like, have brake lights. It's tough to rely on, but it definitely could be an added level of security or something like that. But, yeah, the next step is figuring out how we get these cars to platoon and then turn off on their own.
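For the fiducial-tag platooning idea, something like OpenCV's ArUco markers could be detected roughly as follows. This is a sketch only: it needs the opencv-contrib build, the ArUco API differs between OpenCV versions, and the idea of deriving distance and steering cues from the tag is an assumption rather than anything Donkey ships.

```python
import cv2

# Assumes the opencv-contrib build; ArUco API details vary by OpenCV version.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def find_lead_car_tag(gray_frame):
    """Look for a fiducial (ArUco) tag on the back of the car ahead and return
    its corner coordinates. The tag's apparent size gives a rough distance cue
    and its horizontal offset a steering cue for a simple follower."""
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray_frame, dictionary)
    if ids is None:
        return None
    return corners[0]
```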
[00:30:10] Unknown:
So what are some of the areas that you are looking for the most help with on the project, and are hoping for people to get involved in?
[00:30:16] Unknown:
Yeah. Well, I mean, for anyone with a lot of experience with computer vision, there are tons of opportunities to automate the calibration of the wheels, or to make the car sort of make its own map. Also, from a client-side web perspective, we have a website that is essentially the remote control for the car, but it's totally hacked together and not very nice, so that would be an awesome improvement if someone with some design intuition could help us with that. From the architecture standpoint, if you have built sort of live real-time systems before and worked with ROS and appreciate what ROS has done, but also understand that ROS isn't as easy for everyone as it is for someone who's used it before,
it'd be a great opportunity to sort of build a better ROS. Yeah, there's tons. There are, like, big projects for architecture-related things, as well as small little pieces if you wanted just to design a single part that other people could use. We have a really active Slack channel, so if you have an idea, you can just throw it up there and people will give you their feedback.
[00:31:29] Unknown:
Are there any other topics that you think we should talk about before we start to close out the show, or any aspects of the project that I overlooked? No. I was pretty confident. Alright. Well, for anybody who wants to get in touch and contribute to the project or follow the things that you're up to, I'll have you add your preferred contact information to the show notes. Okay. And so with that, I'll move us to the picks. My pick this week is an Android application called Orgzly. In past episodes, I've mentioned that I've been using Org Mode a lot for keeping track of different notes and to-dos, and Orgzly has actually been really great for being able to synchronize those to my phone and to write down notes on my tablet and sync them to my computer. So for anybody who's interested in Org Mode but doesn't necessarily want to dive headfirst into Emacs, Orgzly is a good mobile aspect of it, and there are also implementations for Sublime, VS Code, and Vim, which I have linked in the show notes. So I definitely recommend taking a look at that. And so, Will, do you have any picks for us this week?
[00:32:26] Unknown:
I've really been enjoying listening to the book on tape Algorithms to Live By. It talks about the science and math that we've learned in computer science and applying it to your daily life, whether it's organizing or picking a mate. It's really fun and
[00:32:46] Unknown:
well written.
Yeah, I've definitely heard a lot of recommendations for that, so I'll have to put it in my reading list. I've heard nothing but good things, so it's good to hear it from someone else as well. I appreciate you taking the time to join me today and share the work that you've been doing with Donkey. It's definitely an interesting project, and one that I hope to find the time to get involved in and maybe hack on with my children. It's great that there's such an approachable platform for experimenting with something that's potentially so revolutionary. So I appreciate your time, and I hope you enjoy the rest of your day. Great. Yeah. Thanks a lot for doing this.
Introduction to Will Roscoe and Donkey
The Genesis of Donkey
Technical Details and Software Architecture
Sensor Integration and Evolution
Challenges and Innovations in Training
Future Directions and Collaborative Efforts
Reflections on Autonomous Driving and City Transit
Community Involvement and Contributions