“Not just for notebooks: JupyterHub in 2025”
with Yuvi — 4:00 pm ET
JupyterHub: A multi-user server for Jupyter notebooks
This is how JupyterHub was described when it was announced in 2015, 10 years ago. The focus was on bringing Jupyter Notebooks to multiple users on shared infrastructure. The Jupyter Notebooks focus was so strong that Jupyter was even in the name of the project! Fast forward 10 years, & this is still the most common perception of JupyterHub.
However, this has not been true for a long time. Instead of setting up five different kinds of infrastructure based on which interface your users prefer for interactive computing (JupyterLab, RStudio, Linux desktop tools like QGIS or napari, Visual Studio Code, full ssh (!?), etc.), you can set up a single JupyterHub that supports all of them! Meet your users where they are, rather than forcing them to conform to a specific set of tools.
Come to this talk to:
1. See cool demos of various popular applications running on JupyterHub seamlessly
2. Understand the security model of JupyterHub & how it enables these cool demos
3. Learn how you can set up your own application to run in JupyterHub
4. Influence the future of how JupyterHub is marketed
Meet Yuvi

Yuvi has been a JupyterHub core team member for close to a decade, solving user problems with empathy by reducing accidental complexity. He is a co-founder of 2i2c.org, a non-profit serving users with interactive computing needs through open infrastructure. He is the co-creator of z2jh, kubespawner, TLJH, mybinder, jupyter-server-proxy, & many other JupyterHub projects. He spends most of his time working on emotional regulation, building local community, and riding his motorcycle. Ex-Wikimedia.
Recent Events
#38 Up-scale Python Functions for High-Performance Computing with executorlib
Jan Janssen — July 30, 2025
Recording Coming Soon!
With the rise of machine-learned interatomic potentials in the atomistic simulation community in Materials Science, simulation workflows shifted from directed acyclic graph (DAG)-based workflows built around a single simulation code to workflows that couple simulation codes at different length and time scales and run thousands of individual simulations. A number of simulation frameworks were developed to orchestrate these workflows, but as part of the Exascale Computing Project we realized they were limited in their flexibility and scalability.
So, we developed executorlib [1], based on the concurrent.futures Executor interface in the Python standard library, with the goal of distributing Python functions over hundreds of compute nodes. Internally, executorlib leverages the Simple Linux Utility for Resource Management (SLURM) and the Flux framework from Lawrence Livermore National Laboratory [2] to up-scale simulation workflows from a workstation or traditional high-performance computing (HPC) clusters to the latest generation of exascale machines. In contrast to previous solutions, it requires no daemon process or database; instead, it interfaces directly with the job manager to maximize computational efficiency. At the same time, it is designed with a focus on debugging capabilities to minimize the overhead of migrating workflows to exascale machines.
In this presentation I introduced executorlib, highlighting the lessons learned from developing the pyiron atomistic simulation suite, which led to executorlib as a minimalistic workflow manager. Finally, I highlighted the general applicability of executorlib for distributing Python functions from any scientific domain on HPC clusters of all sizes.
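The standard-library Executor interface that executorlib builds on can be sketched without executorlib itself. A minimal example, using `ThreadPoolExecutor` as a stand-in (the `energy` function is a made-up placeholder for an expensive simulation call): the idea is that a cluster-backed executor following the same interface could distribute the same `submit()` calls across compute nodes rather than local threads.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def energy(volume):
    # Placeholder for an expensive simulation call (illustrative assumption):
    # a simple quadratic with its minimum at volume = 10.
    return 0.5 * (volume - 10) ** 2

# Submit many independent Python function calls; each submit() returns a Future.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(energy, v): v for v in range(8, 13)}
    # Collect results as the individual calls finish, in completion order.
    results = {futures[f]: f.result() for f in as_completed(futures)}

print(min(results, key=results.get))  # volume with the lowest energy: 10
```

Because `Executor.submit()` and `Future.result()` form the whole contract, code written this way stays unchanged when the pool of local threads is swapped for a pool of HPC compute nodes.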
[1]: Janssen et al., JOSS, 10(108), 7782, (2025).
[2]: Dong H. Ahn et al., Fut. Gen. Comp. Sys., 110, (2020).
#37 Diving into Parquet Read Optimization
Gijs Burghoorn — June 25, 2025
Parquet is an important file format in the data science world. It provides many opportunities for query optimization and data pruning. Drawing on our experience optimizing the Polars Parquet reader, we walk through how Parquet stores data and how it is used, and rediscover many of the reader optimizations.
Watch on YouTube
#36 Developments in the scikit-learn Ecosystem: Going Beyond model.fit(X, y).predict(X)
Guillaume Lemaitre — April 30, 2025
Scikit-learn is one of the de facto libraries when it comes to predictive modeling with tabular data. For over a decade, it has provided traditional and reliable algorithms to address data science problems. While it excels at model fitting and prediction, these stages represent only a small portion of a data science project and are relatively well-defined. Many data scientists are familiar with the notion that 90% of their time is spent on preprocessing, while the modeling stage takes up only 10% of their efforts. Additionally, tracking and organizing experiments, as well as transitioning from experimentation to production, can be challenging.
This exchange aims to shed light on recent developments and efforts within the scikit-learn ecosystem. We will provide an overview of the following tools through a series of short notebook demos.
Watch on YouTube
About Us
At Don’t Use This Code, we want to create a unique opportunity to see Python succeed and thrive within the National Labs! We propose creating a new resource for scientists, researchers, and technical staff to support their use of Python and to build a strong, lasting community for Python users within the Department of Energy National Labs.
Disclaimer: The Python Exchange is an independent group of Python enthusiasts who wish to see the use of Python and open-source computing thrive within the National Lab system. This group is not sponsored by or affiliated with the Department of Energy.