
Commit 3dc8bba (parent 64696dc)

Add JOSS paper draft

File tree: 7 files changed (+361 −0 lines)

.github/workflows/draft-pdf.yml

Lines changed: 30 additions & 0 deletions
New file (30 additions):

name: Draft PDF
on:
  push:
    paths:
      - joss/**
      - .github/workflows/draft-pdf.yml
permissions:
  contents: write
jobs:
  paper:
    runs-on: ubuntu-latest
    name: Paper Draft
    steps:
      - name: Checkout
        uses: actions/checkout@v5
      - name: Build draft PDF
        uses: openjournals/openjournals-draft-action@v1.0
        with:
          journal: joss
          paper-path: joss/paper.md
      - name: Upload
        uses: actions/upload-artifact@v4
        with:
          name: paper
          path: joss/paper.pdf
      - name: Commit PDF to repository
        uses: EndBug/add-and-commit@v9
        with:
          message: '(auto) Paper PDF Draft'
          add: 'joss/paper.pdf'

.gitignore

Lines changed: 3 additions & 0 deletions
@@ -23,6 +23,9 @@ docs/site/
 # environment.
 Manifest.toml

+# Local notes
+**/temp
+
 # Local metadata/configuration files
 **/.DS_Store
 .dist/

joss/assets/A.png (14.5 KB, binary file added)

joss/assets/A_min.png (14.4 KB, binary file added)

joss/assets/A_rec.png (14.5 KB, binary file added)

joss/paper.bib

Lines changed: 159 additions & 0 deletions
New file (159 additions):

@misc{BvMM+19,
  author = {Balog, Matej and van Merriënboer, Bart and Moitra, Subhodeep and Li, Yujia and Tarlow, Daniel},
  title = {{Fast Training of Sparse Graph Neural Networks on Dense Hardware}},
  year = {2019},
  eprint = {1906.11786},
  archivePrefix = {arXiv},
  primaryClass = {stat.ML},
  url = {https://doi.org/10.48550/arXiv.1906.11786}
}

@inproceedings{CM69,
  author = {Cuthill, E. and McKee, J.},
  booktitle = {Proceedings of the 24th National Conference of the ACM},
  title = {{Reducing the bandwidth of sparse symmetric matrices}},
  year = {1969},
  pages = {157--72},
  publisher = {Brandon Systems Press},
  doi = {10.1145/800195.805928}
}

@article{CS05,
  author = {Caprara, Alberto and Salazar-González, Juan-José},
  journal = {INFORMS Journal on Computing},
  number = {3},
  title = {{Laying Out Sparse Graphs with Provably Minimum Bandwidth}},
  volume = {17},
  year = {2005},
  pages = {356--73},
  doi = {10.1287/ijoc.1040.0083}
}

@article{DM99,
  author = {Del Corso, G. M. and Manzini, G.},
  journal = {Computing},
  title = {{Finding Exact Solutions to the Bandwidth Minimization Problem}},
  volume = {62},
  year = {1999},
  pages = {189--203},
  doi = {10.1007/s006070050002}
}

@phdthesis{Geo71,
  author = {George, J. Alan},
  school = {Department of Computer Science, Stanford University},
  title = {{Computer Implementation of the Finite Element Method}},
  year = {1971},
  url = {https://apps.dtic.mil/sti/tr/pdf/AD0726171.pdf}
}

@article{GPS76,
  author = {Gibbs, Norman E. and Poole, Jr., William G. and Stockmeyer, Paul K.},
  journal = {SIAM Journal on Numerical Analysis},
  number = {2},
  title = {{An Algorithm for Reducing the Bandwidth and Profile of a Sparse Matrix}},
  volume = {13},
  year = {1976},
  pages = {236--50},
  doi = {10.1137/0713023}
}

@article{GS84,
  author = {Gurari, Eitan M. and Sudborough, Ivan Hal},
  journal = {Journal of Algorithms},
  number = {4},
  title = {{Improved dynamic programming algorithms for bandwidth minimization and the MinCut Linear Arrangement problem}},
  volume = {5},
  year = {1984},
  pages = {531--46},
  doi = {10.1016/0196-6774(84)90006-3}
}

@article{JMP25,
  author = {Johnston, Nathaniel and Moein, Shirin and Plosker, Sarah},
  journal = {Linear Algebra and its Applications},
  title = {{The factor width rank of a matrix}},
  volume = {716},
  year = {2025},
  pages = {32--59},
  doi = {10.1016/j.laa.2025.03.016}
}

@article{JP25,
  author = {Johnston, Nathaniel and Plosker, Sarah},
  journal = {Linear Algebra and its Applications},
  title = {{Laplacian \{−1,0,1\}- and \{−1,1\}-diagonalizable graphs}},
  volume = {704},
  year = {2025},
  pages = {309--39},
  doi = {10.1016/j.laa.2024.10.016}
}

@misc{Krys20,
  author = {Krysl, Petr},
  howpublished = {GitHub},
  title = {{SymRCM: Reverse Cuthill-McKee node-renumbering algorithm for sparse matrices}},
  year = {2020},
  url = {https://github.com/PetrKryslUCSD/SymRCM.jl}
}

@misc{LLS+01,
  author = {Lumsdaine, Andrew and Lee, Lie-Quan and Siek, Jeremy G. and Gregor, Doug and McGrath, Kevin D.},
  howpublished = {Boost v1.37.0 documentation},
  title = {{example/cuthill_mckee_ordering.cpp}},
  year = {2001},
  url = {https://www.boost.org/doc/libs/1_37_0/libs/graph/example/cuthill_mckee_ordering.cpp},
  note = {Accessed: 2025-06-10}
}

@misc{MAT25,
  author = {{MATLAB Developers}},
  howpublished = {MATLAB R2025b documentation},
  title = {{symrcm - Sparse reverse Cuthill-McKee ordering - MATLAB}},
  year = {2025},
  url = {https://www.mathworks.com/help/matlab/ref/symrcm.html},
  note = {Accessed: 2025-09-19}
}

@article{Maf14,
  author = {Mafteiu-Scai, Liviu Octavian},
  journal = {Annals of West University of Timisoara - Mathematics and Computer Science},
  number = {2},
  title = {{The Bandwidths of a Matrix. A Survey of Algorithms}},
  volume = {52},
  year = {2014},
  pages = {183--223},
  doi = {10.2478/awutm-2014-0019}
}

@misc{Net25,
  author = {{NetworkX Developers}},
  howpublished = {NetworkX v3.5 documentation},
  title = {{Source code for networkx.utils.rcm}},
  year = {2025},
  url = {https://networkx.org/documentation/stable/_modules/networkx/utils/rcm.html},
  note = {Accessed: 2025-06-11}
}

@article{Sax80,
  author = {Saxe, James B.},
  journal = {SIAM Journal on Algebraic and Discrete Methods},
  number = {4},
  title = {{Dynamic-Programming Algorithms for Recognizing Small-Bandwidth Graphs in Polynomial Time}},
  volume = {1},
  year = {1980},
  pages = {363--69},
  doi = {10.1137/0601042}
}

@misc{VJP25,
  author = {Varona, Luis M. B. and Johnston, Nathaniel and Plosker, Sarah},
  howpublished = {GitHub},
  title = {{SDiagonalizability: A dynamic algorithm to minimize or recognize the \emph{S}-bandwidth of an undirected graph}},
  year = {2025},
  url = {https://github.com/GraphQuantum/SDiagonalizability.jl}
}

joss/paper.md

Lines changed: 169 additions & 0 deletions
New file (169 additions):

---
title: "MatrixBandwidth.jl: Fast algorithms for matrix bandwidth minimization and recognition"
tags:
  - matrix bandwidth
  - sparse matrices
  - optimization
  - scientific computing
  - Julia
authors:
  - name: Luis M. B. Varona
    orcid: 0009-0003-7784-5415
    affiliation: "1,2,3"
affiliations:
  - name: Department of Politics and International Relations, Mount Allison University
    index: 1
  - name: Department of Mathematics and Computer Science, Mount Allison University
    index: 2
  - name: Department of Economics, Mount Allison University
    index: 3
date: 25 September 2025
bibliography: paper.bib
---
# Summary

The *bandwidth* of an $n \times n$ matrix $A$ is the minimum non-negative integer $k \in \{0, 1, \ldots, n - 1\}$ such that $A_{i,j} = 0$ whenever $\lvert i - j \rvert > k$. Reordering the rows and columns of a matrix to reduce its bandwidth has many practical applications in engineering and scientific computing: it can improve performance when solving linear systems, approximating partial differential equations, optimizing circuit layout, and more [@Maf14]. There are two variants of this problem: *minimization*, which involves finding a permutation matrix $P$ such that the bandwidth of $PAP^\mathsf{T}$ is minimized, and *recognition*, which entails determining whether there exists a permutation matrix $P$ such that the bandwidth of $PAP^\mathsf{T}$ is less than or equal to some fixed non-negative integer (an optimal permutation that fully minimizes the bandwidth of $A$ is not required). Accordingly, [MatrixBandwidth.jl](https://github.com/Luis-Varona/MatrixBandwidth.jl) offers fast algorithms for matrix bandwidth minimization and recognition, written in Julia.
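The definition above translates directly into code. As an illustration (a minimal sketch for exposition, not MatrixBandwidth.jl's own API; the helper name `matrix_bandwidth` is hypothetical), the bandwidth of a matrix can be computed as follows:

```julia
# Minimal sketch (not the MatrixBandwidth.jl API): compute the bandwidth of
# a square matrix straight from the definition, i.e., the largest |i - j|
# over all nonzero entries A[i, j].
function matrix_bandwidth(A::AbstractMatrix{<:Number})
    n, m = size(A)
    n == m || throw(ArgumentError("matrix must be square"))
    k = 0
    for j in 1:n, i in 1:n
        if !iszero(A[i, j])
            k = max(k, abs(i - j))
        end
    end
    return k
end

matrix_bandwidth([1 0 0; 1 1 0; 0 1 1])  # 1 (bidiagonal pattern)
matrix_bandwidth([1 0 1; 0 1 0; 0 0 1])  # 2 (A[1, 3] is nonzero)
```

Minimization then amounts to searching over permutation matrices $P$ for one minimizing this quantity on $PAP^\mathsf{T}$; recognition only asks whether some $P$ pushes it below a given threshold.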
## Example

Consider the following $60 \times 60$ sparse matrix with initial bandwidth $51$:

\begin{figure}[H]
\centering
\includegraphics[height=2in]{assets/A.png}
\caption{Original $60 \times 60$ matrix with bandwidth $51$}
\label{fig:A}
\end{figure}

MatrixBandwidth.jl can both recognize whether the minimum bandwidth of $A$ is less than or equal to some fixed integer (\autoref{fig:A_rec}) and actually minimize the bandwidth of $A$ (\autoref{fig:A_min}):

\begin{figure}[H]
\begin{minipage}[b]{.475\textwidth}
\centering
\includegraphics[height=1.5in]{assets/A_rec.png}
\caption{The matrix with bandwidth recognized as $\le 6$ via the Del Corso--Manzini algorithm}
\label{fig:A_rec}
\end{minipage}\hfill
\begin{minipage}[b]{.475\textwidth}
\centering
\includegraphics[height=1.5in]{assets/A_min.png}
\caption{The matrix with bandwidth minimized to $5$ via the Gibbs--Poole--Stockmeyer algorithm}
\label{fig:A_min}
\end{minipage}
\end{figure}

(Note that since Gibbs–Poole–Stockmeyer is a heuristic algorithm, $5$ may not be the *true* minimum bandwidth of $A$, but it is likely close.)
## Algorithms

As of version 0.2.1, the following matrix bandwidth reduction algorithms are available:

- Minimization
  - Exact
    - Caprara–Salazar-González [@CS05]
    - Del Corso–Manzini [@DM99]
    - Del Corso–Manzini with perimeter search [@DM99]
    - Saxe–Gurari–Sudborough [@Sax80; @GS84]
    - Brute-force search
  - Heuristic
    - Gibbs–Poole–Stockmeyer [@GPS76]
    - Cuthill–McKee [@CM69]
    - Reverse Cuthill–McKee [@CM69; @Geo71]
- Recognition
  - Caprara–Salazar-González [@CS05]
  - Del Corso–Manzini [@DM99]
  - Del Corso–Manzini with perimeter search [@DM99]
  - Saxe–Gurari–Sudborough [@Sax80; @GS84]
  - Brute-force search

Recognition algorithms determine whether any row-and-column permutation of a matrix induces bandwidth less than or equal to some fixed integer. Exact minimization algorithms always guarantee optimal orderings that minimize bandwidth, while heuristic minimization algorithms produce near-optimal solutions more quickly. Metaheuristic minimization algorithms employ iterative search frameworks to find better solutions than heuristic methods (albeit more slowly); none are implemented yet, but several (e.g., simulated annealing) are currently under development.
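To give a flavor of the heuristic family, here is a simplified sketch of the classic reverse Cuthill–McKee ordering for a symmetric Boolean adjacency pattern (an expository rendition under the usual BFS formulation, not MatrixBandwidth.jl's implementation):

```julia
# Simplified sketch of reverse Cuthill-McKee (RCM) for a symmetric 0/1
# adjacency pattern; not MatrixBandwidth.jl's implementation.
function reverse_cuthill_mckee(A::AbstractMatrix{Bool})
    n = size(A, 1)
    deg = vec(sum(A; dims=2))  # vertex degrees
    order = Int[]
    visited = falses(n)
    while length(order) < n
        # Start each connected component from an unvisited vertex of
        # minimum degree (a common starting-vertex heuristic).
        start = argmin(i -> visited[i] ? typemax(Int) : deg[i], 1:n)
        visited[start] = true
        queue = [start]
        while !isempty(queue)  # breadth-first search
            v = popfirst!(queue)
            push!(order, v)
            nbrs = [u for u in 1:n if A[v, u] && !visited[u]]
            sort!(nbrs; by=u -> deg[u])  # lower-degree neighbors first
            for u in nbrs
                visited[u] = true
                push!(queue, u)
            end
        end
    end
    return reverse!(order)  # reversal gives the RCM ordering
end
```

Permuting rows and columns by the returned ordering `p` (i.e., taking `A[p, p]`) typically shrinks the bandwidth considerably; on a path graph with scrambled labels, for instance, it recovers the optimal bandwidth of $1$.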
# Statement of need

Many matrix bandwidth reduction algorithms exist in the literature, but implementations in the open-source ecosystem are scarce, and those that do exist primarily cover older, less efficient algorithms. The [Boost](https://www.boost.org/) libraries in C++ [@LLS+01], the [NetworkX](https://networkx.org/) library in Python [@Net25], and the MATLAB standard library [@MAT25] all only implement the aforementioned reverse Cuthill–McKee algorithm from 1971. In Julia, the only other relevant package identified by the author is [SymRCM.jl](https://github.com/PetrKryslUCSD/SymRCM.jl) [@Krys20], which also only implements reverse Cuthill–McKee.

Furthermore, not enough attention is given to recognition algorithms or exact minimization algorithms. Although more performant modern alternatives are often neglected, reverse Cuthill–McKee is at least a widely implemented method of approximating a minimum-bandwidth ordering (as noted above). However, no such functionality for recognition or exact minimization is widely available, requiring researchers with such needs to re-implement these algorithms themselves from scratch.

These two gaps in the ecosystem not only make it difficult for researchers to benchmark and compare newly proposed algorithms but also preclude the application of the most performant modern algorithms in real-life industry settings. MatrixBandwidth.jl aims to bridge these gaps by presenting a unified interface for matrix bandwidth reduction algorithms in Julia.
# Research applications

The author either has used or is using MatrixBandwidth.jl to do the following:

- Develop a new polynomial-time algorithm for "bandwidth $\le k$" recognition efficient for both small and large $k$, and benchmark it against other approaches [@Sax80; @GS84]
- Speed up $k$-coherence checks of quantum states in many cases by confirming that the density matrix's minimum bandwidth is greater than $k$ [@JMP25]
- Compute the spectral graph property of "$S$-bandwidth" [@JP25] via the [SDiagonalizability.jl](https://github.com/GraphQuantum/SDiagonalizability.jl) package [@VJP25], which depends critically on MatrixBandwidth.jl for bandwidth recognition
- Investigate the precise performance benefits of reducing the propagation graph's bandwidth when training a recurrent neural network, building on @BvMM+19

The first three use cases rely on the recognition and exact minimization functionality unique to MatrixBandwidth.jl (indeed, they largely motivated the package's development). The last (ongoing) research project *could* be facilitated by SymRCM.jl instead, but the author intends to use the more performant metaheuristic minimization algorithms currently under development when producing the final computational results, as well as to use recognition algorithms to minimize bandwidth to various target levels when quantifying performance improvements.
# Limitations

Currently, MatrixBandwidth.jl's core functions generically accept any input of the type `AbstractMatrix{<:Number}`, not behaving any differently when given sparsely stored matrices (e.g., from the [SparseArrays.jl](https://github.com/JuliaSparse/SparseArrays.jl) standard library package). Capabilities for directly handling graph inputs (aiming to reduce the bandwidth of a graph's adjacency matrix) are also not available. Given that bandwidth reduction is often applied to sparse matrices and graphs, this will be addressed in future releases.

Moreover, many of the algorithms only apply to structurally symmetric matrices (i.e., those whose nonzero pattern is symmetric). However, this is a limitation of the algorithms themselves, not the package's implementation. Future releases with metaheuristic algorithms will include more methods that accept structurally asymmetric inputs.
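For readers checking whether their input qualifies, structural symmetry is cheap to test. The following is a generic sketch (the helper name `is_structurally_symmetric` is hypothetical, not a function exported by the package):

```julia
# Sketch (hypothetical helper, not part of MatrixBandwidth.jl's API): a
# matrix is structurally symmetric when its nonzero *pattern* is symmetric,
# even if the values themselves are not.
function is_structurally_symmetric(A::AbstractMatrix{<:Number})
    size(A, 1) == size(A, 2) || return false
    return all(iszero(A[i, j]) == iszero(A[j, i])
               for i in axes(A, 1), j in axes(A, 2))
end

is_structurally_symmetric([1 2; 3 4])   # true (fully dense pattern)
is_structurally_symmetric([0 5; -1 0])  # true (pattern symmetric, values not)
is_structurally_symmetric([1 2; 0 4])   # false (A[1, 2] ≠ 0 but A[2, 1] = 0)
```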
# Conflict of interest

The author declares no conflict of interest.

# Acknowledgements

I owe much to my research supervisors&mdash;Nathaniel Johnston, Sarah Plosker, and Craig Brett&mdash;for supporting and guiding me in my work. I would also like to thank Liam Keliher, Peter Leli&egrave;vre, and Marco Cognetta for useful discussions. Finally, credit for MatrixBandwidth.jl's telepathic-cat-and-turtle logo goes to Rebekka Jonasson.

# References
