# High Performance Python

Princeton mini-course, by Henry Schreiner, with Jim Pivarski.
## Installation

### Binder
During the minicourse, if you haven't prepared beforehand, please use this link to run online via Binder:
### Codespaces
GitHub provides 120 core-hours of Codespaces usage every month (60 real-time hours on the smallest, 2-core, machine). You can run this repository in a codespace. Note that you currently need to start `jupyter lab` manually from the VS Code terminal once the codespace is built (3-5 minutes after starting it for the first time).
### Local install

If you are reading this at least 10 minutes before the course starts, or you already have Anaconda or Miniconda installed, you will probably be best off installing Miniconda locally. This way you will keep your local edits and will have an environment to play with afterwards.
Get the repository:

```bash
git clone https://github.com/henryiii/python-performance-minicourse.git
cd python-performance-minicourse
```
Download and install Miniconda. On macOS with Homebrew, just run `brew install --cask miniconda`
(see my recommendations).
Run `conda env create` from this directory. This will create an environment named `performance-minicourse`. To use it:
```bash
conda activate performance-minicourse
./check.py  # Check to see if you've installed this correctly
jupyter lab
```
And, to deactivate the environment, run `conda deactivate` or restart your terminal.
If you want to add a package, modify `environment.yml`, then run `conda env update` from this directory.
## Lessons
- 00 Intro: The introduction
- 01 Fractal accelerate: A look at a fractal computation, and ways to accelerate it with NumPy changes, numexpr, and numba.
- 01b Fractal interactive: An interactive example using Numba.
- 02 Temperatures: A look at reading files and array manipulation in NumPy and Pandas.
- 03 MCMC: A Markov Chain Monte Carlo generator (and Metropolis generator) in Python and Numba, with a focus on profiling.
- 04 Runge-Kutta: Implementing a popular integration algorithm in NumPy and Numba.
- 05 Distributed: An exploration of ways to break up code (fractal) into chunks for multithreading, multiprocessing, and Dask distribution.
- 06 TensorFlow: A look at implementing a Negative Log Likelihood function (used for unbinned fitting) in NumPy and Google's TensorFlow.
- 07 Callables: A look at Scipy's LowLevelCallable, and how to implement one with Numba.
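To give a minimal taste of the NumPy-acceleration theme in lesson 01 (this is an illustrative sketch, not the course's actual notebook code), here is an escape-time fractal computed with vectorized array operations instead of per-pixel Python loops:

```python
import numpy as np


def mandelbrot_numpy(width=200, height=200, maxiter=50):
    """Escape-time Mandelbrot counts, computed with vectorized NumPy ops."""
    x = np.linspace(-2.0, 1.0, width)
    y = np.linspace(-1.5, 1.5, height)
    c = x[np.newaxis, :] + 1j * y[:, np.newaxis]  # grid of complex points
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=np.int64)
    for _ in range(maxiter):
        mask = np.abs(z) <= 2.0           # points that have not escaped yet
        z[mask] = z[mask] ** 2 + c[mask]  # iterate only the live points
        counts += mask                    # accumulate the escape-time counter
    return counts


if __name__ == "__main__":
    counts = mandelbrot_numpy()
    print(counts.shape, counts.max())
```

The notebooks go further, accelerating variations of this kind of loop with numexpr and Numba as well.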
Class participants: please complete the survey that will be posted.