Jonathan Keane, PhD

I am a leader who builds engineering and data science teams that develop technology to improve people's experiences and work. I lead teams to build the right things for the right problems, with a focus on accuracy, actionability, and scalability. I am passionate about open source software and about growing the teams that build it. My training spans software engineering, product-driven design, data science, predictive modeling, social science, and quantitative and statistical reasoning. I also have product management experience, both in dedicated product roles and as de facto product lead on engineering teams.

In my spare time I wander the world with a camera or two. I’m always looking for new experiences, cultures, and scenes to explore and learn from. I’m also an inveterate tinkerer and builder.


Some tools that I hope are interesting and helpful to others.


I’m a core maintainer of conbench. I’ve been with the project since its early days, have helped it grow from a single internal use case to many (including external, unrelated users), and have built a team of maintainers around it.

The conbench project is a language-independent continuous benchmarking framework. The conbench family consists of a Python backend and API for storing and serving benchmark results over time, plus benchmarking packages that run benchmarks in various languages (Python, R, JavaScript, C++, Go, Rust) and send the results to conbench for monitoring.
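The shape of that flow can be sketched in a few lines of Python: an adapter runs a benchmark, wraps the raw timings in a JSON payload, and POSTs it to the backend's API. The field names and endpoint below are illustrative assumptions, not the exact conbench schema.

```python
# Hedged sketch of a conbench-style adapter. The payload fields and the
# endpoint path are assumptions for illustration, not conbench's real schema.
import json
import time
import urllib.request


def build_result(name, times_s, tags):
    """Package raw benchmark timings as a result payload."""
    return {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tags": {"name": name, **tags},
        "stats": {
            "data": times_s,       # one wall-clock time per iteration
            "unit": "s",
            "iterations": len(times_s),
        },
    }


def post_result(result, base_url):
    """Send one result to a conbench-style backend (hypothetical endpoint)."""
    req = urllib.request.Request(
        f"{base_url}/api/benchmark-results/",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)


result = build_result("file-read", [0.101, 0.099, 0.104], {"dataset": "nyc-taxi"})
```

Because the payload is just JSON over HTTP, an adapter in any language can report results, which is what makes the framework language-independent.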


I’ve been a committer to Arrow since 2021.

Arrow is a software development platform for building high-performance applications that process and transport large data sets. It is designed both to improve the performance of analytical algorithms and to make moving data from one system (or programming language) to another more efficient. My contributions are mainly in the R and Python implementations (including some C++ binding work used by both), along with systems for macro- and micro-benchmarking the broader project and improvements to CI and the user experience.


I’m the creator and maintainer of dittodb.

dittodb is a package that makes testing against databases easy. When writing code that interacts with databases, testing has traditionally meant either recreating test databases in your CI environment or substituting SQLite for the database engine you run in production. Both have downsides: recreating database infrastructure is slow, error-prone, and hard to iterate with; SQLite works well right up until you use a feature it lacks (like a full outer join) or hit a quirk that differs from your production database. dittodb solves this by recording database interactions, saving them as mocks, and replaying them seamlessly during testing. If you can get a query response from your database, you can record it and reliably reproduce that response in tests.
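dittodb itself is an R package, but the record-then-replay idea is language-agnostic. Here is a minimal Python sketch of the concept, using sqlite3 as a stand-in "real" database; the `record`/`ReplayConnection` names are illustrative, not dittodb's API.

```python
# Illustrative sketch of dittodb's record/replay idea (dittodb is an R
# package; these names are hypothetical, not its real interface).
import sqlite3


def record(conn, queries):
    """Run each query against the real database and save its rows as a mock."""
    return {sql: conn.execute(sql).fetchall() for sql in queries}


class ReplayCursor:
    def __init__(self, rows):
        self.rows = rows

    def fetchall(self):
        return self.rows


class ReplayConnection:
    """A stand-in connection that serves recorded rows, no database needed."""

    def __init__(self, fixtures):
        self.fixtures = fixtures

    def execute(self, sql):
        return ReplayCursor(self.fixtures[sql])


# Record once against the real database...
real = sqlite3.connect(":memory:")
real.execute("CREATE TABLE users (id INTEGER, name TEXT)")
real.execute("INSERT INTO users VALUES (1, 'ada')")
fixtures = record(real, ["SELECT name FROM users"])

# ...then tests replay the saved response with no database running at all.
mock = ReplayConnection(fixtures)
rows = mock.execute("SELECT name FROM users").fetchall()
```

Because the replayed rows came from the production engine, the mocks preserve its exact behavior and quirks, which is precisely what a SQLite stand-in cannot guarantee.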