My computer is crawling… did I fry it, or is it Jupyter?
- silviamazzoni
- Dec 4
- 7 min read
I asked ChatGPT.... I guess I did fry my computer!!!
As usual, my objective is to leave you with more questions than when you arrived, so take this content with a grain of salt: it was ALL GENERATED BY CHATGPT, WITH MY PROMPTING AND CONTRIBUTION, SO IT MAY BE WRONG!!!!
How Jupyter Notebooks Manage Memory
Short answer:
✔ Memory is managed on the server side, by the machine where the kernel is running.
❌ It is not tied to the client/browser.
Your browser is only a “window” into the session. It displays results but does not execute code and does not store your RAM state.
Detailed Breakdown
1. Two components of a Jupyter environment
A. Client (your browser)
Sends your code to the kernel.
Receives outputs (text, images, JSON, HTML, plots).
Stores only the displayed outputs (temporarily) in the browser page.
Does not allocate memory for variables, matrices, arrays, etc.
The client is just a UI. If you close the tab, the kernel continues running on the server until shut down.
B. Server (where the kernel runs)
This is the important part.
The “server” could be:
Your laptop (local Jupyter)
A remote VM
DesignSafe JupyterHub
An HPC compute node
The IPython kernel (or Julia, R, etc.) lives here. All memory (RAM) usage comes from this machine.
Variables, arrays, models, OpenSees objects, big NumPy arrays, plots → all stored in the server RAM.
So if you load a 10 GB dataset into a notebook running on a Stampede3 compute node, the memory usage is on that node.
Do keep in mind, though, that if you are serving the Jupyter notebook from your own desktop, you are both the server and the client!
Memory Flow (accurate model)
[Browser] <--> [Jupyter Server] <--> [Kernel Process (Python)]
(runs on the server machine)
Memory is consumed here → [Kernel Process].
2. How Outputs Affect Memory
In Notebook
Output images or large HTML objects are stored in the notebook document.
These take browser memory only when rendering.
These do not increase server RAM usage.
In Kernel
Creating a large figure (e.g., Matplotlib/Plotly) creates objects in RAM.
Clearing or deleting them only frees memory if Python’s garbage collector releases the objects.
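As a minimal sketch (assuming Matplotlib and NumPy are installed in the kernel environment), this is what it takes for a figure's memory to actually be released:
import gc
import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(np.random.rand(100_000))  # the figure and its data live in the kernel's (server) RAM
plt.close(fig)                    # drop Matplotlib's internal reference to the figure
del fig, ax                       # drop your own references
gc.collect()                      # memory is released only once no references remain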
3. In DesignSafe / JupyterHub the story is:
✔ Memory usage is on the allocated Jupyter server VM, not on your laptop.
✔ You typically get a container or VM with some RAM limit (4–16 GB depending on the environment).
✔ Closing the browser does not kill the kernel.
4. What Happens on HPC Nodes?
If you launch a notebook kernel on:
Compute Node (via batch job)
Kernel memory usage counts against your job’s RAM.
If you exceed memory → your HPC job gets killed (OOM).
Login Node (NOT for compute)
Notebooks are usually not allowed to run compute kernels on login nodes (DesignSafe restricts this). But if they did:
You could overload the login node → administrators kill the kernel.
5. How Jupyter manages memory
Jupyter itself does not manage memory. Memory is managed by:
✔ The IPython kernel (Python interpreter)
✔ The OS process limits (ulimit, cgroups, container limits)
✔ Python’s memory allocator & garbage collector
✔ Any external libraries (NumPy, Pandas, OpenSeesPy, etc.)
Jupyter is only the communication layer.
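A rough way to see this from inside a cell (a sketch assuming a Linux/macOS kernel with psutil installed) is to query the OS limits and the kernel process directly:
import os
import resource                     # Unix-only module for process limits
import psutil                       # assumed to be installed in the kernel environment

soft, hard = resource.getrlimit(resource.RLIMIT_AS)  # address-space limit (ulimit -v); container/cgroup caps are enforced separately
print("Address-space limit (soft, hard):", soft, hard)

rss = psutil.Process(os.getpid()).memory_info().rss  # resident memory of the kernel process
print(f"Kernel process memory: {rss / 1e9:.2f} GB")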
TL;DR
Memory usage in Jupyter is always on the server running the kernel — not on your browser. The browser doesn’t compute or store variables; it only displays output.
If you want, I can also explain:
✅ How to check memory usage in a Jupyter session
✅ How to kill runaway kernels
✅ How memory leaks happen in notebooks
✅ How DesignSafe specifically isolates memory between users
✅ How OpenSeesPy models store memory and when memory is freed
Just tell me which angle you want next.
Of course I couldn't turn down ChatGPT's offer for more useful content.....
---
How Memory Leaks Happen in Jupyter Notebooks
Memory leaks in notebooks are common — especially in long sessions, repeated cell runs, or complex libraries (NumPy, Pandas, OpenSeesPy, Plotly). The leaks come from both Python-level behavior and notebook workflow habits.
A. You re-run cells that recreate large objects
Example:
import numpy as np
big_array = np.random.rand(20000, 20000)
Every time the cell runs:
Python allocates a new 3.2 GB array
Old arrays may not be freed immediately
Memory usage increases until OOM
B. Objects remain referenced in the kernel namespace
Even if you do:
del big_array
Memory is NOT freed unless:
Every reference is gone
Python’s garbage collector runs
The underlying C-allocated buffers are freed
NumPy arrays especially hold large C-level buffers that persist longer than expected.
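A quick way to verify this (a sketch assuming psutil is available) is to watch the kernel's resident memory around the del:
import gc, os
import numpy as np
import psutil

def rss_gb():
    return psutil.Process(os.getpid()).memory_info().rss / 1e9

print(f"baseline:  {rss_gb():.2f} GB")
big_array = np.random.rand(20000, 20000)   # ~3.2 GB of float64
print(f"allocated: {rss_gb():.2f} GB")
del big_array                              # removes your reference...
gc.collect()                               # ...memory returns only if no other references exist
print(f"after del: {rss_gb():.2f} GB")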
C. Notebooks store output history
Jupyter stores:
Cell inputs and outputs in the In[] / Out[] caches
Hidden references in _, __, ___
Large outputs (e.g., giant DataFrames printed to screen) create hidden references that keep objects alive.
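One way to break those hidden references (a sketch using standard IPython magics; big_array is a placeholder name) is:
%xdel big_array   # delete the name everywhere IPython holds references, including Out[] and _, __, ___
%reset -f out     # or clear the entire output-history cache (force, no confirmation prompt)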
D. Libraries that allocate memory outside Python
Most common in:
OpenSeesPy (C++ domain + FE objects)
Matplotlib (figure objects)
PyVista / VTK
TensorFlow/PyTorch
NumPy MKL/BLAS buffers
These do NOT follow Python’s garbage collection rules.
Key solution
Restarting the kernel is the only 100% reliable way to clear memory.
How OpenSeesPy Stores Memory
(and how it gets freed)
OpenSeesPy is a CPython wrapper around the C++ OpenSees domain.
This has important implications:
A. Where memory lives
When you create a model:
from openseespy.opensees import *
model('Basic', '-ndm', 3, '-ndf', 6)
node(1, 0, 0, 0)
element('elasticBeamColumn', ...)
Memory is allocated in three places:
1. Python interpreter
Python objects, arguments, temporary lists
Usually small overhead
2. C++ OpenSees Domain (most memory!)
This includes:
Node objects
Element objects
Material models
Integrators
Solvers
Tangent matrices
State vectors (displacements, velocities, accelerations)
Internal arrays used during iterations
These are stored in native heap memory outside the Python GC.
3. BLAS / LAPACK / MKL allocations
solver memory, stiffness matrices, factorizations, thread-local buffers.
B. Why memory does NOT free automatically
OpenSeesPy creates a global singleton Domain object.
Even if you do:
wipe()
Two things persist until the kernel dies:
1. Python still holds references to wrapper objects
e.g., a “tag” stored in your namespace.
2. The underlying C++ objects may not be destructed
OpenSees’ internal cleanup is partial for some components.
Net result:
Memory footprint does not return to baseline until you restart the kernel.
C. How to fully free OpenSees memory
Most reliable method
✔ Restart the Jupyter kernel
Jupyter → Kernel → Restart Kernel
This kills the entire Python process → OS frees 100% of RAM.
Some partial mitigation
wipe()
Avoid re-running full model creation repeatedly
Re-use a model by resetting geometry or analysis instead of recreating it (see the sketch after this list)
Break your workflow across separate notebooks
Run large analyses in Tapis Apps or command line scripts, not inside Jupyter
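For the re-use point above, a minimal sketch (assuming the domain was already built in an earlier cell) clears only the analysis objects instead of wiping and rebuilding everything:
from openseespy.opensees import wipeAnalysis, setTime

wipeAnalysis()   # drops the integrator, solver, handlers, etc., but keeps nodes and elements
setTime(0.0)     # reset the domain pseudo-time before the next run
# ...then redefine constraints/numberer/system/integrator/analysis and call analyze() again...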
D. OpenSeesMP vs OpenSeesPy
OpenSeesMP:
Multiple MPI ranks
Memory allocated per-rank
Much larger footprint
Jupyter is NOT the right place to run it (use Tapis)
OpenSeesPy:
Still heavy for big models
Easier to leak because Python will retain wrappers
Memory-Safe Jupyter Workflow Template for OpenSeesPy
This workflow prevents memory buildup, runaway kernels, and Domain residue. It applies anywhere OpenSeesPy runs inside Jupyter.
A. Start each analysis with a fresh kernel (recommended for large models)
Step 1 — Restart kernel
Jupyter → Kernel → Restart Kernel
Step 2 — Import only what you need
Avoid wildcard imports — they keep too many references alive.
from openseespy.opensees import model, node, element, wipe
Step 3 — Build model exactly once
Do NOT place your model setup inside a cell that you frequently re-run.
✔ Good:
# Run once
model('Basic', '-ndm', 3, '-ndf', 6)
# define nodes & elements
❌ Bad:
# Re-running this cell wipes/recreates the domain repeatedly → memory leak
wipe()
model('Basic', ... )
Each wipe+rebuild consumes extra C++ heap that is not fully returned.
B. Keep analysis loops separate from model creation
Example pattern:
# Cell 1 — create model (run ONCE)
...
# Cell 2 — define analysis parameters (change safely)
...
# Cell 3 — run analysis loop
for step in range(nSteps):
    analyze(1)
    # process results incrementally or save to file
You can rerun Cell 3, but NOT Cell 1.
C. Clear large Python objects explicitly
For NumPy arrays, FE result matrices, plots, etc.:
import gc

del big_array
gc.collect()
For Matplotlib:
plt.close('all')
D. Save results to disk, not RAM
To avoid holding arrays in runtime memory:
✔ write to CSV
✔ write to JSON
✔ write to HDF5
✔ for time histories, append incrementally
E. When memory usage grows → RESTART KERNEL
This is normal and unavoidable for OpenSeesPy heavy models.
How to Detect Memory Leaks in OpenSeesPy
A. Use Python’s process memory monitor
Create this small helper function:
import psutil, os

def mem():
    p = psutil.Process(os.getpid())
    print(f"Memory: {p.memory_info().rss/1e9:.3f} GB")
Use it after each major block:
mem() # after domain creation
mem() # after element creation
mem() # after analysis
If memory keeps increasing each time you re-run cells, you have a leak.
B. Track OpenSeesPy residue after wipe()
Run:
wipe()
mem()
If the memory does NOT drop back to baseline, then:
the Domain was not fully deleted
Python still has references
the C++ heap retained solver buffers
This is normal. Restart kernel to fix.
C. Check for hidden notebook references
Run:
%who_ls
or:
%whos
Look for:
leftover arrays
plot objects
large lists or dicts
OpenSeesPy wrapper instances
If they remain, they keep C++ memory alive indirectly.
D. Check Matplotlib plot objects
Plot objects leak memory unless you close them:
plt.close('all')
Alternatively, use:
fig, ax = plt.subplots()
plt.close(fig)
E. Detect leaks via repeated analyses
If memory increases every time you run analyze loops:
your model is being recreated repeatedly
element objects accumulate
solver objects accumulate
internal state vectors accumulate
This confirms a C++ level leak or misuse pattern.
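A rough check (a sketch that assumes a model and analysis are already defined, and reuses the mem() helper from A above):
from openseespy.opensees import analyze

for trial in range(3):
    analyze(100)   # run 100 more steps of the already-defined analysis
    mem()          # if this climbs on every pass, objects are accumulating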
Best Practices for Running Large FE Models in Notebooks
A. When NOT to use Jupyter
Avoid notebooks for:
OpenSeesMP
OpenSeesSP
200k DOF models
Long-running analyses (>5 minutes)
Parametric sweeps
Monte Carlo loops
Hazard/risk workflows with repetitions
Any job requiring HPC cores or large RAM
Use:
Tapis Apps
Batch scripts on Stampede3
OpenSeesMP via SLURM
B. When notebooks ARE suitable
Debugging small models
Teaching & demonstrations
Preprocessing
Simple OpenSeesPy prototypes
Post-processing results
Visualizing outputs
C. Recommended notebook structure for large-ish models
Use this 3-notebook workflow:
Notebook 1: Preprocessing
build geometry
generate nodes and elements
write input files to disk
DO NOT run analysis here
Notebook 2: Execution (optional)
for small models, OK to run locally
for anything significant → submit via Tapis
store time histories & results in disk files
Notebook 3: Post-processing
read results from files
plot
animate
compute derived quantities
This prevents large C++ memory from sitting inside the notebook kernel.
D. Reduce memory load inside notebooks
Avoid storing all results in Python lists:
u_hist.append(nodeDisp(10,1))
Instead, write incrementally:
with open("disp.csv","a") as f: f.write(f"{step},{nodeDisp(10,1)}\n")
Break models into subcomponents and run separately.
Use HDF5 or NPZ instead of storing arrays live in memory.
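For the NPZ option, a minimal sketch (file and variable names are placeholders):
import numpy as np

u_hist = [0.0, 0.1, 0.3]                                    # placeholder displacement history
np.savez_compressed("results.npz", disp=np.asarray(u_hist))
del u_hist                                                  # the data now lives on disk, not in kernel RAM
# later, e.g. in a post-processing notebook:
# disp = np.load("results.npz")["disp"]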
E. Always restart kernel after these events
rebuilding a domain
creating more than 50k elements
running large time histories
running nonlinear dynamic analysis
running multiple analyses in the same notebook
plotting more than ~20 figures
loading multiple large files
F. Summary Table
| Task | Recommended? | Why |
| --- | --- | --- |
| Small static analyses | ✔ Yes | Minimal memory footprint |
| Small dynamic analyses | ✔ Yes | Fine for teaching/debug |
| Mid-size models (<50k DOF) | ✔ With caution | Restart kernel often |
| Large models (>200k DOF) | ❌ No | C++ domain too heavy |
| OpenSeesMP in notebook | ❌ Strong no | Multiple MPI ranks → huge memory |
| Parametric sweeps | ❌ No | Memory ballooning |
| Hazard analysis loops | ❌ No | Storage + memory issues |
| Post-processing | ✔ Ideal | Very memory-light |
