Summary
Are you looking for a backend as a service offering where you have full control of your data? Look no further than Kinto! This week Alexis Metaireau and Mathieu Leplatre share the story of how Kinto was created, how it works under the covers, and some of the ways that it is being used at Mozilla and around the web.
Brief Introduction
- Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
- I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable.
- When you’re ready to launch your next project you’ll need somewhere to deploy it. Check out Linode at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for running your awesome app.
- You’ll want to make sure that your users don’t have to put up with bugs, so you should use Rollbar for tracking and aggregating your application errors to find and fix the bugs in your application before your users notice they exist. Use the link rollbar.com/podcastinit to get 90 days and 300,000 errors for free on their bootstrap plan.
- Visit our site to subscribe to our show, sign up for our newsletter, read the show notes, and get in touch.
- To help other people find the show you can leave a review on iTunes or Google Play Music, and tell your friends and co-workers.
- Join our community! Visit discourse.pythonpodcast.com for your opportunity to find out about upcoming guests, suggest questions, and propose show ideas.
- Your host as usual is Tobias Macey and today I’m interviewing Alexis Metaireau and Mathieu Leplatre about Kinto
Interview with Alexis and Mathieu
- Introductions
- How did you get introduced to Python?
- What is Kinto and how did it get started?
- What does the internal architecture of Kinto look like?
- Given that the primary data format being stored is JSON, why did you choose PostgreSQL as your storage backend instead of a NoSQL document database such as CouchDB?
- Synchronization of transactions from multiple users, including offline first support, is a difficult problem. How have you approached that in Kinto and what are some of the alternate solutions that were considered?
- Designing usable APIs is a complicated subject. What features did you prioritize while creating the interfaces to Kinto?
- What are some of the most innovative uses of Kinto that you have seen?
- What are some of the biggest challenges that you have faced while building Kinto?
- What do you have planned for the future of Kinto?
Keep In Touch
- Kinto
- Alexis
- Mathieu
Picks
- Tobias
- Alexis
- Mathieu
Links
- CouchDB
- OpenAPI
- WebCrypto
- Formbuilder
- Firebase
- Kinto Comparison Table
- Mozilla Persona
- Portier
The intro and outro music is from Requiem for a Fish by The Freak Fandango Orchestra / CC BY-SA
Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. I would like to thank everyone who has donated to the show. Your contributions help us make the show sustainable. When you're ready to launch your next project, you'll need somewhere to deploy it, so you should check out Linode at linode.com/podcastinit and get a $20 credit to try out their fast and reliable Linux virtual servers for running your next app. You'll want to make sure that your users don't have to put up with bugs, so you should use Rollbar for tracking and aggregating your application errors to find and fix the bugs in your application before your users notice they exist. Use the link rollbar.com/podcastinit to get 90 days and 300,000 errors tracked for free on their bootstrap plan. You can also visit our site at podcastinit.com to sign up for our newsletter, read the show notes, and get in touch. And to help other people find the show, you can leave a review on iTunes or Google Play Music and tell your friends and coworkers. You can also join the community at discourse.pythonpodcast.com to find out about upcoming guests, suggest questions, and propose show ideas. Your host as usual is Tobias Macey, and today I'm interviewing Alexis Metaireau and Mathieu Leplatre about Kinto. So, Alexis, could you start by introducing yourself? Yeah. Sure. So I'm
[00:01:15] Unknown:
I'm in Rennes. I'm a French guy, living in the northwest of France. I was a Python hacker for Mozilla for four and a half years, where we created Kinto, which we will talk about. I moved to a different position: I'm now starting a brewing project with some friends, and I'm volunteering on some open source projects. I believe that's it.
[00:01:41] Unknown:
And, Mathieu, how about you? So, yeah, I'm Matt. I mean, if Mathieu is too hard, I'm okay with Matt. So
[00:01:47] Unknown:
Okay.
[00:01:49] Unknown:
So hi, everyone. I'm also French, as you may have noticed already, but I live in Barcelona in Spain, and I'm working remotely for the cloud services of Mozilla.
[00:02:01] Unknown:
And how did you each get introduced to Python? Alexis, how about you go first? Yeah. Sure.
[00:02:07] Unknown:
So I was doing software development for a few years before. I was doing, like, PHP and stuff. I was an intern in a small company named Makina Corpus in the south of France. I started hacking on a carpooling application that was built on top of Django. So that's how I got started. That was around, I think, 2009. A few years after, the project that really got me started with Python was actually Pelican, the blog generation tool. I believe that's the story. And then I joined Mozilla and started hacking full time in Python.
[00:02:46] Unknown:
And, Matt, how did you get introduced to Python? Well, a similar story. It's mainly via Django as well, maybe around 2008. I was using Java first, like, in the early 2000s, and then lots of PHP. And when I switched to Django and Python, I was feeling very productive compared to the other programming languages. So I started to use Python in every new project I was doing. So let's say, thanks, Django, for that. That's how I came to it.
[00:03:22] Unknown:
And so today, we're talking about the Kinto project, which you've both contributed to. And I'm wondering if you can describe what Kinto is and how the project got started.
[00:03:32] Unknown:
So Kinto is a web service. Basically, it's a generic JSON document store where you can store data and share and synchronize it across several devices. So, basically, it's a way for application authors to store and synchronize data for their application, and let the users be in control of the location where they want to store their user data. And for the story behind it, it's quite a long story. I can let Alexis tell a part of it, and then I can complete if you want. Alexis? Yeah. Okay. So I think it all started, like, a few years
[00:04:25] Unknown:
ago. So I started working for Mozilla. I was involved in a bunch of other projects for nonprofit organizations in my spare time. A few friends asked me about one specific need that I didn't find any project to cover easily. They wanted to have something really simple. They wanted to have a map they could edit, and they could, like, store data in the map and filter it. Like, let's say I have different kinds of events in different parts of France; they wanted to be able to store all this data and find it back later. Some projects existed, like Google Forms and this kind of stuff, but they were not letting the users in control of their own data. So after some research, I started thinking about a generic tool for this. And I contacted Matt and a few of my ex-colleagues from the company where I was a trainee a few years earlier. And we started to discuss this using a pad online. The main idea that came out was the idea of a generic HTTP backend where you can actually send data and validate this data depending on a schema. So you define a schema first, and then you send data, and it validates against this schema. So we had this idea in mind. A few months after, I think we met at a local event, and we created the project, and we named it Daybed at this time. And it was something built on top of CouchDB and Elasticsearch.
We hacked on it during our spare time for a year or two. The project had many cool features, but it never had its momentum. It was doing too much, I think. So, yeah, that's, I think, basically the story of the first half, like what we did with Daybed. And then, a few years after, I was working on a team at Mozilla where we were interested in the concepts of that little generic back end. Mathieu joined us at this time, and we started doing it again in a better way, but maybe you can explain this part, Matt.
[00:06:40] Unknown:
Yeah. As you say, like, the original idea was explored six years ago or something like that. So I was working in that services company, and every new project started with something like, hey, we need an API to store this and to store that. And, you know, as good engineers, we knew it was relevant to reuse some code from one project to another. But I was thinking, like, what if we could reuse the whole HTTP service? So when Alexis talked to me about that idea of a reusable storage back end, I was like, oh, okay, that's an idea. I was already thinking about it. So we had to gather around that first thing. And then at Mozilla, Alexis was in the cloud services team, and, obviously, they also had many similarities between different projects. So slowly, the idea for a reusable back end came on the table, and I had the chance to join the team, so we started to work on a first project. It was the reading list project, where, basically, it was gonna be an alternative to Pocket.
Well, the project was not continued, but we kept the idea, and we rebooted the original Daybed project from scratch, changing its name to Kinto. And we changed the approach. Like, in Daybed, every time we had a new idea, we were implementing the new feature. But in Kinto, we were doing it for our employer, and we had to be wise and reasonable. So we were only waiting for a real world use case, like a real need, before implementing or writing features. That way we avoided the software becoming bloated, like it used to be with Daybed. So we rebooted the thing, and Kinto was born. That was maybe 2015, like, maybe May 2015, something like that. So a year and a half ago. I don't know if it was clear enough.
[00:08:50] Unknown:
No. That was good. So can you describe a bit about how Kinto is architected and some of the layers that it uses to allow for such a generic API back end?
[00:09:00] Unknown:
Yeah. Sure. So the idea behind Kinto is actually pretty simple. It's an HTTP API that you put on top of a database. So the idea is to have a generic HTTP API to store data. The way it's architected is kind of the same thing. So on the web part, we are using the Pyramid framework. We store the data in different backends. Like, by default, it's shipped with PostgreSQL, but you can actually plug other back ends if you want. And thanks to the way Pyramid is built, it was really simple for us to add ways to extend Kinto in several ways. So we have a bunch of plugins that allow you to add new features when you need them. Even in the core of Kinto, you actually have plugins, like core plugins for the history management, for instance. So that's about how Kinto is architected in general, so you get the full grasp of it. And then, in general, the idea in Kinto is to separate the back end from the front end. So we have an administration interface which is actually fully JavaScript. It's a web application. It relies on the HTTP API to interact with the Kinto database directly. So that's how we interface the application in general.
[00:10:24] Unknown:
And for being able to add new plugins, is it taking advantage of the zope.interface library, so that there's a specified set of entry points into the core of Kinto for somebody else to be able to extend it?
[00:10:39] Unknown:
Oh, it's not as complicated as that. We basically rely on the event subscriptions and request and response handlers that are provided by Pyramid. Of course, you can use, like, zope.interface if you want, but it's not a prerequisite. The plugins we have usually work using events. So when data is changed, for example, you can listen and receive an event, and react on that event to, I don't know, store something somewhere else, or handle some aspects of the request that you wanna handle. So that's how we did plugins to store attachments on the records, or plugins to track history of changes, or introducing digital signatures for guarantee of integrity and stuff like that.
[00:11:33] Unknown:
And given that the primary format of the data that's being stored is JSON, I'm curious why you ended up choosing PostgreSQL as the default database to ship with, instead of something like CouchDB, like you originally went with in the first version of the project.
[00:11:51] Unknown:
That's actually an interesting question. So I think Matt will be able to talk a bit more about why PostgreSQL is a good choice. But I think it's interesting to point out that we were actually using CouchDB at first, in the Daybed project. And our first idea was to add validation features directly to CouchDB, to enable our use cases. I remember we had a quick discussion, but that was a few years ago, so I don't remember exactly the specifics. But with one of the project authors of CouchDB, we found that the CouchDB architecture wasn't enabling us to tweak it in order to do validation using data that was stored directly in CouchDB, because we needed to actually push a schema first and then use this schema to validate the data that was sent to the server. So we actually tried this at first. But since Kinto is pluggable, it's completely possible to add a CouchDB backend now. So, you know, as usual, if anyone has a use case for it, don't hesitate to contribute. We actually have a contributor, Boris, who's working on a back end driver for MongoDB.
And we also have a readily available Redis implementation. So you can use different back ends, but by default we use PostgreSQL. I think Mathieu can explain that better, because he convinced us all.
[00:13:23] Unknown:
Oh, well, actually, I'm not responsible for every choice we made around that. You know, we have some constraints, like production constraints. And, for example, one of the reasons is that PostgreSQL is a lot easier to host, maintain, and deploy. So the sysadmins at Mozilla, for example, would expect us to provide very strong arguments before deploying exotic tech like, I don't know, RethinkDB or CouchDB. PostgreSQL is, like, a very good old piece of software, very robust, and the ecosystem is very rich. So it's a natural solution for storing data in a permanent and solid way.
And in terms of performance, the JSONB feature, which arrived around version 9.4, is very efficient, and it allows us to query any kind of data, whatever the schema. Like, we don't have to do preliminary indexing like we used to have to do in CouchDB. So, you know, it's, like, a very simple and convenient way to store data, because it's very well known, very common, and very performant. So that was a natural choice.
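The schemaless querying described here can be demonstrated without a PostgreSQL server by using SQLite's built-in JSON functions from the standard library; it's only an analogy, since in PostgreSQL you would declare the column as JSONB and query it with operators such as `data->>'author'`, optionally backed by a GIN index.

```python
# Analogy for JSONB querying, using SQLite's JSON functions (stdlib only).
# In PostgreSQL the equivalent would be:
#   SELECT id FROM records WHERE data->>'author' = 'remy';
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id TEXT PRIMARY KEY, data TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    [
        ("1", '{"title": "reading list", "author": "alexis"}'),
        ("2", '{"title": "translations", "author": "remy"}'),
    ],
)
# Filter on an arbitrary key without having declared any schema up front.
rows = conn.execute(
    "SELECT id FROM records WHERE json_extract(data, '$.author') = ?",
    ("remy",),
).fetchall()
print(rows)
```

The point being made in the interview is exactly this: the database can filter on any key inside the document, with no preliminary index or map/reduce view required.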
[00:14:49] Unknown:
Yeah. I can definitely attest to the fact that Postgres is a pretty solid piece of tech, having used it at most of my jobs. And, yeah, the fact that it does support JSON out of the box these days is pretty useful. And also the querying capabilities are pretty powerful, being able to index into specific key paths and then return
[00:15:10] Unknown:
nested data directly from the contents of the JSON, without necessarily having to retrieve the entire JSON document into memory in the application and then parse it there before shipping it back up to the front end? Yeah. Which is one of the limitations we have with the Redis back end, for instance. When we need to query some specific things, we sometimes need to unpack the data in memory before returning it. So yeah. Exactly.
[00:15:36] Unknown:
So given the fact that there is that variability in how the different data back ends can operate, is there also variability in the amount of features that can be supported when you're using those different back ends?
[00:15:48] Unknown:
No. So, basically, we have a subset, like, a number of features that a back end needs to implement by default, if that has not changed since I looked at it. But, I believe, you need to actually support a bunch of operations.
[00:16:08] Unknown:
Well, mainly transactions, actually. So, yeah, we have, like, a transaction per request. So during the request, a set of operations is performed on the storage back end. And if something wrong happens, then we want to be able to roll back anything that has happened since the beginning of the request. So, yeah, the back end has to have, like, kind of transactional support. And otherwise, the operations we have, you know, are very simple. It's, like, a CRUD, you know: create, update, delete, and then we fetch the list or individual records. So if the back end supports custom filtering, like any filter on any field, whatever the schema, then it's a lot more efficient, as you said, because you would let the back end do the filtering. But otherwise, which is the case of Redis, for example, we would fetch the whole list of the collection and then perform the filtering in memory. It's acceptable depending on the size of the datasets.
Usually, we deal with small collections of data, because we deal with user data. And, I don't know, maybe you are a very heavy user of, I don't know, bookmarks. But if you have, like, 10,000 bookmarks, it's still acceptable. I mean, it's not, like, 10 million.
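The fallback just described, for backends without native filtering, amounts to fetching the whole collection and filtering in application memory. A minimal sketch, with illustrative names rather than Kinto's internal API:

```python
# Sketch of the in-memory filtering fallback: when the backend (e.g. Redis)
# cannot filter natively, fetch everything and filter in Python.
# Fine for thousands of records, painful for millions.

def filter_records(records, **filters):
    """Apply equality filters client-side, field by field."""
    return [
        r for r in records
        if all(r.get(field) == value for field, value in filters.items())
    ]

bookmarks = [
    {"id": 1, "tag": "python", "url": "https://python.org"},
    {"id": 2, "tag": "rust", "url": "https://rust-lang.org"},
    {"id": 3, "tag": "python", "url": "https://pypi.org"},
]
print(filter_records(bookmarks, tag="python"))  # records 1 and 3
```

With a backend like PostgreSQL, the same filter is pushed down into the query, so only the matching rows ever cross the wire.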
[00:17:37] Unknown:
So that's just one point. I mean Yeah. It really depends on your needs.
[00:17:43] Unknown:
So one of the core tenets of the project is the fact that it supports synchronization between different clients. And given that synchronization is such a difficult problem in computer science, I'm wondering how you've approached that, and what are some of the alternate solutions that were considered and ultimately discarded before you settled on the approach that you're using now.
[00:18:04] Unknown:
I can try to answer that. So, yeah, it's a very hard topic indeed. I mean, it's an academic challenge, but we made several concessions, of course. And the concession we've made is that, first, we don't store the divergences, like the history of each record, like CouchDB would do. We only store the last version of a record, and we don't try to do magic. So, basically, the idea is there is a simple vector clock: something like, every time a write operation is done, a timestamp is incremented. So we guarantee that every write operation will bump the value of the timestamp. And it's a linear history line. It's not a tree. So every time a client comes and says, hey, my last version of the data is at that timestamp, the server is able to give every new record since that timestamp. And if there are some conflicts, then we don't try to solve them ourselves. We let the developer handle the three-way merge and things like that. And usually, the developers are in the best position to make decisions. We don't try to be smart. We just make sure it remains very simple for most cases, which are, like, either client wins, like, the data on the client is gonna override the data on the server, or server wins, which means the server is the source of truth, and we discard the client data. And for manual resolution of conflicts, we let the developer make decisions based on his domain-specific stuff. And when data is deleted, we store something that we call a tombstone.
It's basically an empty record that says that record was deleted at that timestamp. It's a way to track deleted things, so that the client can apply the deletions locally. So the idea is very simple, and we inherited the algorithm from the one Mozilla was using in Firefox Sync. So it's very simple and also, like, robust and tested.
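The scheme described above (a server-side counter bumped on every write, tombstones for deletions, clients polling for "everything since my last timestamp") can be sketched in a few lines of Python. This is a toy model of the idea, not Kinto's implementation, and the names are hypothetical.

```python
# Minimal sketch of timestamp-based sync with tombstones (illustrative only).

class Server:
    def __init__(self):
        self.clock = 0
        self.records = {}  # id -> latest version (tombstones have "deleted": True)

    def _tick(self):
        self.clock += 1  # every write bumps the timestamp: a linear history
        return self.clock

    def put(self, record_id, data):
        self.records[record_id] = {"id": record_id,
                                   "last_modified": self._tick(), **data}

    def delete(self, record_id):
        # Keep a tombstone so offline clients can apply the deletion later.
        self.records[record_id] = {"id": record_id,
                                   "last_modified": self._tick(), "deleted": True}

    def changes_since(self, timestamp):
        """Everything a client needs to catch up from `timestamp`."""
        return [r for r in self.records.values() if r["last_modified"] > timestamp]

server = Server()
server.put("a", {"title": "first"})
server.put("b", {"title": "second"})
checkpoint = server.clock          # a client fully synced at this point

server.put("a", {"title": "edited"})
server.delete("b")

for change in sorted(server.changes_since(checkpoint),
                     key=lambda r: r["last_modified"]):
    print(change)  # the edit of "a", then the tombstone for "b"
```

Note that only the last version of each record is kept, exactly as described: the divergence history is the client's problem, resolved by client-wins, server-wins, or a manual three-way merge.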
[00:20:26] Unknown:
And the tracking of time stamps can be complicated by the fact that Kinto has a strong focus on offline first support for clients. So the time stamp that gets sent back to Kinto, is that the time stamp of when the record was modified locally on the device or the time stamp that Kinto actually received it when the client comes back online if they were disconnected?
[00:20:48] Unknown:
When the client comes back online. Actually, the client does not assign any timestamps. So the timestamps are assigned only when there is a communication with the server. So the server is the source of truth for timestamps. That's basically also a very, very big simplification of the thing. There is, well, it's like a negotiation between the server and the client when it comes to merging the changes on fields. But when it comes to timestamps, the source of truth is the server, and the client does not generate new timestamps. So, yeah, that's the thing. So a big difference, for instance, with projects like CouchDB
[00:21:32] Unknown:
is the fact that they provide a really useful master-master replication system, and we don't try to do this. Like, we don't have the use case for it. So we are in a simple server and client relationship, and that simplifies a lot of things. And that's also why we don't need to store the divergence tree. Because if you are master-master, you're not forced to do it, but it simplifies a bit the way to handle the merge.
[00:21:59] Unknown:
So is the typical scale of use for Kinto largely for personal use, for people to have their own copy of data for a given application? Or is there also sort of an equal or greater number of people who are using this in a production context with multiple users? Because I can see how gradations along that scale would have different requirements in terms of how it implements a synchronization algorithm. Because if it's just one user who's using Kinto to store notes that they take across different devices, I can see how an occasional conflict of data synchronizing or clobbering something would have much lower impact than if somebody were using this as a component of a production system where they have multiple users using the system and possibly even sharing data, where having the data get overwritten or needing to implement a more complicated conflict resolution algorithm would be more necessary.
[00:22:57] Unknown:
I can try to answer that, but I am not sure I have a good answer. So I think that the way we do it with Kinto is really simplistic in many ways. Like, we have different use cases. At Mozilla, they are using Kinto to store, for instance, the certificate revocation lists. I'm not sure where they are with this project, but they are also using Kinto to store the web extension data, like Firefox plugins can store data using Kinto. I'm not sure if that's still the case, Matt. You can Oh, yeah. Yeah. It's still the case. Yeah. Okay. Yeah. And I think that, depending on your use case, actually, the only thing that differs is the way you deploy the Kinto architecture.
Because in terms of code, that's exactly the same thing for us. But if you have a large number of users and they are doing a lot of writes, for instance, the way to do it in order to scale is to shard the data. So you try to not have the users write to the same collections directly. And then in your deployment, you shard the database, and that solves the problem. And for small projects, you can have everything on one machine, and that works pretty well as well. So I don't know if you have anything else to tell about that. Well, yeah, I think,
[00:24:20] Unknown:
it's right to distinguish the use cases. So, I mean, a little side note on your question, because you were mainly mentioning many users trying to write on the same collection of data. So I'm just gonna complete that and say, yeah, we also have lots of use cases where Kinto is used as a way to deliver data to the clients. So Alexis was mentioning the blocklists. It's basically a list of SSL certificate revocations, and that's one or two administrators at Mozilla that edit that list via Kinto, and then it's delivered to hundreds of millions of clients.
And I think in production, we have, like, yeah, several millions of requests per hour, but they're all read requests. And that can also be a case for your use of Kinto, if you use Kinto as a way to deliver, I don't know, some settings, some translations, some, I don't know, game levels, or some content, any generic content for your applications. Like, you can use Kinto as a way to deliver read-only data as well. As for the concurrency and scaling of concurrent write operations, indeed, if lots of users are going to edit the same record, then it's gonna be very hard to reconcile. Because since the reconciliations happen on the client, when the client is gonna try to upload this new merged version, the server is gonna say, oh, no, but somebody else already uploaded something here, so you have to reconcile it again. So it can be, like, an endless loop. So, yeah, if that's the way to put it, if thousands of users are editing the same record, then Kinto is probably not the good solution. But in many use cases, that's not the case.
[00:26:30] Unknown:
And going back to the subject of offline first support, I'm wondering if you can talk a bit to why that was a design consideration and something that you want to have in the project and also why that's an important feature to have?
[00:26:45] Unknown:
I think that's just a matter of anticipation, maybe. So there are different reasons for that. One of them is that, at this time, a lot of people were talking about offline first, and I think it brought our attention to this. And then, it's true that a lot of people are actually using devices where they don't have a connection, or they have a limited connection. We thought it would be important for us to actually store the data locally. So it's possible to do it with Kinto, and we have, like, a JavaScript client that does it for you. But it's also true that a lot of use cases of Kinto are actually not using the local-first approach.
So, like, we have a bunch of demos which are actually just using Kinto as an HTTP API, and they are using it to store the data and to retrieve the data. They are not storing the content locally, but it's possible to do it if you want. So, you know, it's a matter of what your needs are. Yeah. We wanted Kinto to be a mobile app back end, and I think being able to use your application
[00:28:00] Unknown:
in the plane or in the train is a very cool feature. So we did our best to provide an abstraction on top of IndexedDB for the browser, so that, as an application developer, you just use that library. You store your data locally, and when the application comes back online, you just call a sync method, and the client is gonna handle the reconciliation, like the synchronization, for you. But as Alexis said, it's not a prerequisite. Like, you can also have applications that don't rely on a local copy of the data. So it's a cool property, I think, in those days.
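The offline-first pattern described here (write to a local store first, then call a sync method once online) can be sketched as follows. The names are hypothetical; the real client is kinto.js, a JavaScript library over IndexedDB, and a real sync would do conflict handling rather than the naive server-wins merge shown.

```python
# Toy sketch of the offline-first workflow (illustrative names only).

class OfflineStore:
    def __init__(self, server):
        self.server = server   # stands in for the remote Kinto HTTP API
        self.local = {}        # plays the role of IndexedDB on the device
        self.pending = []      # changes made while offline, queued for sync

    def create(self, record_id, data):
        """Writes always land locally first, even with no connectivity."""
        self.local[record_id] = data
        self.pending.append((record_id, data))

    def sync(self):
        """Push queued changes, then pull the server state (server wins)."""
        for record_id, data in self.pending:
            self.server[record_id] = data
        self.pending.clear()
        self.local.update(self.server)

remote = {}                    # pretend remote database
store = OfflineStore(remote)
store.create("note1", {"text": "written on the train"})
assert remote == {}            # nothing sent yet: we are offline
store.sync()                   # back online: one call reconciles both sides
print(remote)
```

The application code only ever talks to the local store; connectivity becomes a background concern handled by `sync()`.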
[00:28:47] Unknown:
Yeah. Being able to use an application even when you don't have an Internet connection, in the same manner as when you are able to interact in real time with the data source, is very useful. You know, for instance, one example that a lot of people will probably have experience with is being able to use their Gmail client while disconnected to draft an email and then schedule it for sending. And then when they do get connected back to the Internet, it will just go ahead and send transparently, and they don't have to worry about it. So experiences like that definitely increase the usability of a program, and as you said, as mobile becomes more ubiquitous, it becomes even more important. And on the subject of APIs, I can speak from experience that building usable APIs is complicated and has a lot of different considerations.
So I'm wondering what are the features that you prioritized while you were building out the interfaces to Kinto to try and make sure that they were sufficiently generic and reusable. And continuing on that a little bit, I know that making APIs discoverable is important as well. So I'm wondering what you did on that front. So,
[00:29:59] Unknown:
yes. The thing is, we wanted the API to be a storage API, so it became natural to provide, like, a REST API. Our main focus at first was to respect every aspect, or as many aspects as possible, of the HTTP specifications. So when it comes to status codes and headers, we make sure we don't reinvent anything. And it's more complicated than it looks. For example, the PATCH method has many variants. So you can have different ways to patch your data. There is, for example, the JSON Merge Patch RFC specification.
You have JSON Patch, which is another one. And so we always made sure that we respect the conventions out there. At some point, we were thinking of using the JSON API specification. Steve, one of the authors, is a colleague, so we had some discussions with him. And, actually, we quickly realized that the specification was addressing too much. There were too many concepts introduced, so we decided to keep it simple. It's a REST JSON API. We respect everything we know about status codes, headers, and mechanisms that are standard. And it's a hierarchy, a storage hierarchy. So you have, like, a bucket, in which you have collections and groups of users, and then you have collections of records. So everything is, like, a storage tree, and every resource endpoint is made the same way. So it's basically the same code running behind the HTTP endpoints.
So every endpoint behaves the same way as the others. So there is no surprise, I would say. It's all, like, consistent. And that makes it very usable, because once you know how one endpoint works, you probably already know how the others would work too. So that's one part of the answer. And I don't know, Alexis, if you want to talk about discoverability.
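The bucket/collection/record hierarchy described above maps onto uniform URL paths. This little helper just illustrates the shape of that storage tree; it is a sketch, and the real Kinto API also carries a version prefix (e.g. `/v1`) in front of these paths.

```python
# Sketch of Kinto's hierarchical resource paths: buckets contain
# collections (and groups), collections contain records.

def endpoint(bucket, collection=None, record=None):
    """Build the resource path for a node in the storage tree."""
    path = f"/buckets/{bucket}"
    if collection is not None:
        path += f"/collections/{collection}"
    if record is not None:
        path += f"/records/{record}"
    return path

print(endpoint("blog"))                    # → /buckets/blog
print(endpoint("blog", "articles"))        # → /buckets/blog/collections/articles
print(endpoint("blog", "articles", "42"))  # → /buckets/blog/collections/articles/records/42
```

Because every level is shaped the same way, the same CRUD semantics (GET a list, GET/PUT/PATCH/DELETE an item) apply at each node, which is the consistency being described.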
[00:32:28] Unknown:
Well, we don't have a lot of things made for discoverability. We know that there are a lot of different ways to do it, but we didn't get any use case for it. So it's basically the answer we have for a lot of different topics. It's like, okay, what problem does it solve exactly? Like, do you have this problem right now? If we don't have it, we don't try to solve it. Actually, that changed recently, like, very recently. The quality assurance team at Mozilla, they want to have, like, a consistent way to make sure an API behaves
[00:33:03] Unknown:
properly. So they are introducing Swagger, which is now called OpenAPI, on some services, just to make sure they can scan the endpoints, and they're thinking about ways to automatically detect anomalies in the APIs. We have an Outreachy program, which is basically an internship program, and we'll have some interns working on that precise topic: verification of the API.
[00:33:36] Unknown:
So what are some of the most innovative uses of Kinto that you guys have seen?
[00:33:40] Unknown:
We have Rémy, a guy working on Kinto as well, and he did a proof of concept where Firefox translations could be updated remotely and synced to the different Firefoxes very easily, without the need of riding the release trains, as we say. You could actually directly push an update with new translations to all the Firefoxes if you needed to. That was a demo, right? But it was pretty neat to see that it was possible to do this with a day of hacking. Another example: we have Matt, who is hacking around the Internet of Things, and he is currently using Kinto as an API to store the data related to these physical objects. So you've got a box named Nelson. I didn't know about that at the time. It's like a toy where you can specify a position,
and it moves a stick that's inside the thing to the right position. All the data was stored directly in Kinto. He was also hacking on a generic info box: something that will grab data like, what are the hours of your bus? Will you miss your train? What kind of weather do you have currently, etcetera. And he was storing the data in Kinto while he was working on this. Those are two demos that come to mind.
[00:35:32] Unknown:
Then we also have cool demos that usually have a good impact, like web maps, where you can add and share markers on a map in real time, or even end-to-end client-side encryption, where users can encrypt data locally and then sync it via Kinto. It's a very safe way to store data remotely, because anyone without the keys cannot read the data. That was cool because it was using WebCrypto, which is pretty innovative. I
[00:36:07] Unknown:
like it. Yeah, in terms of innovations, that's what I can think of. We have other projects which are using Kinto, like a project named the form builder. It's a way to create forms, send a URL to your users and ask them to enter information in the form, and then you have an administration interface. Exactly like what Google Forms does, but in a way that lets you own your data if you want. And we have a lot of other projects using Kinto.
[00:36:40] Unknown:
For somebody who's starting a new project, what are some of the competing projects that they might use in place of Kinto, and why would they choose Kinto over them? I know one example would be something like Firebase.
[00:36:52] Unknown:
There are a lot of different projects which kind of do the same things that we do, but in different ways. We actually have a comparison table with a bunch of projects in our documentation. But talking about Firebase, we do not try to solve exactly the same things. The biggest difference with Firebase is the fact that you can actually own your data: you can host it yourself, because you might not want to let your data be stored by a third party. That's one of the main differences, but it's a political thing, not a technical difference. In terms of features, I don't have the table in front of me, but I see Firebase as a way to synchronize data, in terms of real-time synchronization.
You push something and, bam, it's there, right? It's a way to synchronize directly with the different clients, whereas with Kinto, it's more like you want to store the data and get it back on your other device. So it's kind of different. A lot of people were asking us about the differences, so it's very interesting. Another difference with Firebase is that we do file storage: we have a way to store files via an extension, which they don't provide. One of the other big differences is: what if you need different authentication? Maybe you want to reuse the authentication mechanisms that you are currently using at your company. Currently, you cannot extend the software that's running on Firebase. And then you can go to another topic, which is CouchDB.
Because I think that Kinto and CouchDB are solving a lot of the same use cases, and CouchDB is really cool tech that you might want to use to solve the kind of things you would solve using Kinto. We have two main differences with CouchDB. The first one is that we provide what we call fine-grained permissions. Like Matt said, you can store your data in a bucket, which contains collections, which contain records.
At each level, you can decide what kind of permissions you want to have. For instance, you can say: all the data that's stored in this bucket, I want to share it with Tobias, but not this specific record, or not this specific collection; or you can do it the other way around. You can say: okay, I want to keep all the data in my bucket my own, and I don't want to share it unless I explicitly say so. This is not really easy to do with CouchDB, due to how it's architected. Another big difference with CouchDB is the fact that CouchDB uses MapReduce as a query mechanism.
It's not really easy to understand for newcomers. We try to avoid that, and we provide an easy-to-use query mechanism which just uses filters. It's really a matter of reading the documentation once, and then you're done. You put the name of the field and you say: get me all the records where this field equals this, and you have all the data back. Those are the main differences. The other one is the fact that we provide pluggable storage. So if you want to store your data in your preferred database, you might want to use Kinto instead of CouchDB, for instance.
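The filter-based querying described here boils down to simple equality matching on record fields, in contrast with writing a MapReduce view. A local sketch of the idea (this is an illustration, not Kinto's server-side code):

```python
def apply_filters(records, **filters):
    """Return the records whose fields match every given filter,
    mirroring querystrings like ?status=done on a records endpoint."""
    return [r for r in records
            if all(r.get(field) == value for field, value in filters.items())]

records = [
    {"id": "1", "status": "done"},
    {"id": "2", "status": "todo"},
    {"id": "3", "status": "done"},
]
print(apply_filters(records, status="done"))
# [{'id': '1', 'status': 'done'}, {'id': '3', 'status': 'done'}]
```

The appeal is that the query surface is just field names and values, so a newcomer can read the documentation once and start filtering.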
[00:40:57] Unknown:
Yeah, I can definitely see where that would be useful. For instance, if you have all of your Kinto data in either a table within a broader schema, or even in its own schema but in the same physical database, you could potentially join that information with other data from a separate application to integrate it at the data level. So what are some of the biggest challenges that the Kinto team has faced while building it?
[00:41:21] Unknown:
Well, as we said earlier, we made lots of effort to design the HTTP API so that it remains simple and standard. But since it's the main contract of the service, we have to guarantee a certain backward compatibility. So we designed many things to ease the transition of clients when we introduce a breaking change in the API: the clients would be aware of it via some specific headers and deprecation features. We still make a lot of effort to maintain backward compatibility, but a real challenge is to make sure the transition of clients is going to be smooth the day we introduce a breaking change, like a major version bump. That's one of the big challenges, and that's the technical one. But the main challenge currently, I would say, is that since the variety of use cases is almost infinite with Kinto, it was very hard for us to communicate about the project and its goals, especially since it consisted in narrowing down the list of possibilities. It was very frustrating to limit the possibilities just in order to communicate better. Because we love our project, we want to list everything it does, and that's actually counterproductive when it comes to communication and key selling points. So one of the biggest challenges is communicating about the project.
And, of course, we don't have all the graphical skills to build shiny front-end apps or an effective landing page for the project. We do our best, but some of our demos are very technical, so we wish someone would make them beautiful, I mean visually beautiful, because end users don't look at the code. And also, building a community is hard, especially when you're not good at communicating about the project. So, yeah, building
[00:43:39] Unknown:
like, having clear communication about the features is hard. And it's also not an end-user tool, right? Building a community around a tool that your users will use directly is different from building something for developers. What we're providing is a tool for developers, so building a community around that is not the same as what I knew before, for instance. So, yeah, it's a big difference.
[00:44:14] Unknown:
Yeah, all the hardest problems in computing are people. So, going back a little bit to the client applications, you mentioned that there's a JavaScript client library for Kinto. Is that something that you would pull in and embed into a larger application, so that you could integrate it with something like a React or Ember framework, for instance? Yeah, exactly. That's the way we do it. For example, most of our demos use
[00:44:45] Unknown:
React, but some people in the community use Ember. I'm not aware of any use case with Angular, but I know there are people interested. Yeah, actually, sorry.
[00:44:58] Unknown:
Yeah, there are people using Angular to store data. There's a guy I brew beer with, and he's doing an application to keep track of beer recipes. He stores everything in Kinto, and he is an Angular developer. So, yeah, we have a use for it.
[00:45:18] Unknown:
And what's planned for the future of Kinto? What are some of the features that people can look forward to as it continues building out?
[00:45:27] Unknown:
Okay. I think the biggest focus is polishing the product presentation. We started by changing the landing page a bit; Nico did that two weeks ago. We're working on the documentation as well. I think we have pretty good documentation, but it's complicated: you need to update it all the time to be sure that it's very easy for newcomers to understand, etcetera. So that's where we need to improve, to make sure the project is well introduced. That's one of the big areas. Other than that, we have a push notification system that relies on pusher.com, and we plan to use Web Push instead, which is a standard.
It's used in all the browsers now. We had an Outreachy intern working on this, so thank you, Deepsha, for the work you did on Web Push. That's one of the next features that will land. We also have the one-click installers. We have a bunch of them already, so it's possible, for instance, to deploy Kinto on Heroku very easily: you just need to click a button, and if you already have a Heroku account, it's done. And we're trying to get this on other cloud providers too. So that's what is planned. We also have a lot of ideas of things we want to do, but they're not planned yet.
[00:46:54] Unknown:
And since the stack is currently in production at Mozilla, we're going to make sure it's very stable. In the next few weeks or months, we're probably going to polish everything and find some bottlenecks, or better ways to implement things that are already there. We'll probably not provide many new features, but make sure the existing ones work very well and remain simple. That can imply some work on the client side as well: the clients should be very intuitive and behave consistently with the API. So apart from the push notification standardization, I would say there's no big thing coming. Probably some improvements around documentation and authentication.
Because with Pyramid, the framework, it's very easy to do pluggable authentication. But for the end user, it's very confusing, because then they ask: but how do you authenticate? And if we answer: you can plug in any authentication that you want, it's very confusing. So we'll probably work on some built-in authentication providers, but we don't want to tie the project to Google, Facebook, or whatever, even though that's what front-end developers usually need. They say: oh, my users want to authenticate with their Twitter account. But we don't want to bind Twitter into Kinto. So we have to work on a better authentication configuration
[00:48:33] Unknown:
story, so we'll probably improve that. Everything is available already, but we'll improve the experience. One thing that would have been really useful: you might have heard about Persona, the Persona authentication mechanism, which was a way to compete with authentication providers like Facebook, Google, Twitter, etcetera. Persona is now kind of dead, but we found out that some of the people behind the idea of Persona, like Dan, I think, are currently working on another project, which is named Portier. It's pretty cool, because the web really needs this thing: we don't want to have to go through Facebook or Twitter to be able to authenticate to a service. What it does is reuse your email address, because everyone has an email address. So, yeah, that might be a cool direction for the integration with Kinto. We have not decided if we want to do that exactly or not.
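Since Kinto builds on Pyramid, the pluggable authentication described here is typically wired up through configuration rather than code. A hedged sketch of what that might look like in a `kinto.ini` file; the `mycompany` policy name and its dotted path are hypothetical placeholders, not settings the project ships:

```ini
# kinto.ini (illustrative sketch, not a complete configuration)
[app:main]
# Enable several authentication policies side by side via pyramid_multiauth.
multiauth.policies = basicauth mycompany
# "mycompany" and its dotted path are hypothetical placeholders for a
# custom policy class you would write for your own identity provider.
multiauth.policy.mycompany.use = mycompany.auth.SSOAuthenticationPolicy
```

The design choice is that swapping identity providers becomes a deployment concern, which is exactly why the project can avoid binding itself to Google, Facebook, or Twitter.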
[00:49:39] Unknown:
And for somebody who's looking to contribute to the Kinto project, are there any areas in particular where you're looking for help, or any associated work that could be done, for instance, building out some demo applications that are a little more visually appealing, as a way to get people involved in Kinto?
[00:49:58] Unknown:
Oh, yeah. Basically, we welcome any help, and there are different areas. We have easy-pick labels on the GitHub repo, so if you just want to get into the code and do something small, to have a look, then come and see the easy-picks. Otherwise, on the front-end side, we welcome help to make the demos visually appealing, which is not one of our specialties. People can also help by building demos, because that would help communicate about the project: if there are some cool applications using Kinto, it's a good help, because it makes the project more valuable. We also have an administration UI, which is a React application, so if you're more into JavaScript and front end, we welcome any help on that too. And some people really like writing documentation.
We have very detailed documentation, but we probably lack some kinds of summaries or quick-start pages. So we would welcome any help on that, like having a ten-point quick-start page or something like that. It's really hard for us to do, because we have the whole context in mind, so it's very hard for us to have a fresh look at the project. We know how it works, so we need someone who is new to take a look and say: okay, here you give too much detail, I just want to get started, for example. There are other areas that would be interesting, like crypto, for instance.
[00:51:51] Unknown:
You can use Kinto to do end-to-end crypto with your data. You can store your data anywhere, on any Kinto instance, and you encrypt and decrypt it only on the client side. We have a proof of concept for this, but the ecosystem behind it is not strong enough yet. So that would be one area that would be very interesting to develop. If anyone has expertise on this matter, that would be a very cool addition, I
[00:52:25] Unknown:
think. So are there any other topics that we should cover before we close out the show? I think we're done. So for anybody who wants to follow you or keep in touch with either of you personally, or the project at large, what would be the best ways for them to do that?
[00:52:39] Unknown:
For the project, I think the best way to follow what we do is to watch the announcements we make on the project directly. We are developing Kinto on GitHub, so just following the project there is simple enough. Otherwise, we have a Kinto mailing list if you're interested in announcements: we announce all the new versions there and have discussions around Kinto there. It's a low-traffic list. I think that's the best way. And if you want to get in touch with us personally, for me, the best way is email.
[00:53:16] Unknown:
You can find that on notmyidea.org; that's my domain name, and my email is over there. And for me, either Twitter or email is perfect. Great. So I'll move us on into the picks. My pick today is just going to be a discussion topic that one of the listeners of the show introduced on the Discourse forum recently, about what I'm working on this week with Python. It's a way for all the listeners of the show to hop in and share: what are some of the projects that you're working on? What are you using Python for? It's a way to build up more communication, as well as expose people to various ways that Python can be used that may not fit within any individual person's typical wheelhouse. So if anybody's interested in hopping in and contributing some feedback on that topic, I think that would be great. I'm definitely interested to see some of the ways that people are using Python in their day to day. With that, I will pass it to you, Alexis. What do you have for us for picks?
[00:54:16] Unknown:
That's a good question. I think I will talk about music. I actually rediscovered an album very recently, a very well-known album by Miles Davis: Bitches Brew, if you know it. And it's very interesting to hear that this music is not aging. It sounds exactly like it was made yesterday. I don't know how old it is, but that's very interesting. So, Bitches Brew, I think.
[00:54:49] Unknown:
Great. And, Matt, what do you have for picks? Oh my god, I was digging around to find something interesting to share. I have some regrets, because a good idea is going to come five minutes after we close up the show. So, I'm sorry. I love you, everybody. That's
[00:55:08] Unknown:
So I really appreciate both of you taking some time out of your day to tell us all about Kinto. It's definitely an interesting project, and one that I've already got a few ideas for how to use. I'm definitely going to be keeping track of it as it goes into the future, and I will let you know if and when I build something with it. So thank you for your time, and I hope you enjoy the rest of your day. Thank you for the invitation as well. Thank you. Thank you, Tobias. Thank you, everyone.
Introduction and Sponsor Messages
Guest Introductions: Alexis Metaireau and Mathieu Leplatre
Getting Started with Python
Introduction to Kinto
Kinto's Architecture and Design
Synchronization Challenges and Solutions
Use Cases and Scalability
Offline First Approach
API Design and Discoverability
Innovative Uses of Kinto
Competing Projects and Differentiators
Challenges Faced by the Kinto Team
Client Applications and Integration
Future Plans for Kinto
Contributing to Kinto
Closing Remarks and Contact Information