---
title: "MatrixBandwidth.jl: Fast algorithms for matrix bandwidth minimization and recognition"
tags:
  - matrix bandwidth
  - sparse matrices
  - optimization
  - scientific computing
  - Julia
authors:
  - name: Luis M. B. Varona
    orcid: 0009-0003-7784-5415
    affiliation: "1,2,3"
affiliations:
  - name: Department of Politics and International Relations, Mount Allison University
    index: 1
  - name: Department of Mathematics and Computer Science, Mount Allison University
    index: 2
  - name: Department of Economics, Mount Allison University
    index: 3
date: 25 September 2025
bibliography: paper.bib
---

# Summary

The *bandwidth* of an $n \times n$ matrix $A$ is the minimum non-negative integer $k \in
\{0, 1, \ldots, n - 1\}$ such that $A_{i,j} = 0$ whenever $\lvert i - j \rvert > k$. Reordering the
rows and columns of a matrix to reduce its bandwidth has many practical applications in engineering
and scientific computing: it can improve performance when solving linear systems, approximating
partial differential equations, optimizing circuit layouts, and more [@Maf14]. There are two variants
of this problem: *minimization*, which involves finding a permutation matrix $P$ such that the
bandwidth of $PAP^\mathsf{T}$ is minimized, and *recognition*, which entails determining whether
there exists a permutation matrix $P$ such that the bandwidth of $PAP^\mathsf{T}$ is less than or
equal to some fixed non-negative integer (an optimal permutation that fully minimizes the bandwidth
of $A$ is not required). Accordingly,
[MatrixBandwidth.jl](https://github.com/Luis-Varona/MatrixBandwidth.jl) offers fast algorithms for
matrix bandwidth minimization and recognition, written in Julia.
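
The bandwidth of a given matrix can be computed directly from this definition. The following Julia sketch (an illustrative helper written for this paper, not part of the package's exported API) simply scans for the farthest nonzero entry from the main diagonal:

```julia
# Bandwidth from the definition: the largest |i - j| over all nonzero entries
# (0 for a diagonal or all-zero matrix).
function matrix_bandwidth(A::AbstractMatrix{<:Number})
    n, m = size(A)
    n == m || throw(ArgumentError("matrix must be square"))
    k = 0
    for j in 1:n, i in 1:n
        if !iszero(A[i, j])
            k = max(k, abs(i - j))
        end
    end
    return k
end

A = [1 0 0 1;
     0 1 0 0;
     0 0 1 0;
     0 0 0 1]
matrix_bandwidth(A)  # the nonzero (1, 4) entry forces bandwidth 3
```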

## Example

Consider the following $60 \times 60$ sparse matrix with initial bandwidth $51$:

\begin{figure}[H]
  \centering
  \includegraphics[height=2in]{assets/A.png}
  \caption{Original $60 \times 60$ matrix with bandwidth $51$}
  \label{fig:A}
\end{figure}

MatrixBandwidth.jl can both recognize whether the minimum bandwidth of $A$ is less than or equal to
some fixed integer (\autoref{fig:A_rec}) and actually minimize the bandwidth of $A$
(\autoref{fig:A_min}):

\begin{figure}[H]
  \begin{minipage}[b]{.475\textwidth}
    \centering
    \includegraphics[height=1.5in]{assets/A_rec.png}
    \caption{The matrix with bandwidth recognized as $\le 6$ via the Del Corso--Manzini algorithm}
    \label{fig:A_rec}
  \end{minipage}\hfill
  \begin{minipage}[b]{.475\textwidth}
    \centering
    \includegraphics[height=1.5in]{assets/A_min.png}
    \caption{The matrix with bandwidth minimized to $5$ via the Gibbs--Poole--Stockmeyer algorithm}
    \label{fig:A_min}
  \end{minipage}
\end{figure}

(Note that since Gibbs–Poole–Stockmeyer is a heuristic algorithm, $5$ may not be the *true*
minimum bandwidth of $A$, but it is likely close.)
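
The effect of a symmetric reordering can be seen even on a tiny hypothetical example (unrelated to the $60 \times 60$ matrix above): permuting rows and columns together, i.e., forming $PAP^\mathsf{T}$, can shrink the bandwidth without changing the nonzero pattern up to relabeling.

```julia
# A 3×3 matrix with nonzeros at (1, 3) and (3, 1), so its bandwidth is 2.
A = [1 0 1;
     0 1 0;
     1 0 1]

# Reordering rows and columns as (1, 3, 2) is equivalent to P*A*P' for the
# corresponding permutation matrix P.
perm = [1, 3, 2]
B = A[perm, perm]
# B = [1 1 0; 1 1 0; 0 0 1]: every nonzero now lies within one diagonal of
# the main diagonal, so the bandwidth has dropped from 2 to 1.
```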

## Algorithms

As of version 0.2.1, the following matrix bandwidth reduction algorithms are available:

- Minimization
  - Exact
    - Caprara–Salazar-González [@CS05]
    - Del Corso–Manzini [@DM99]
    - Del Corso–Manzini with perimeter search [@DM99]
    - Saxe–Gurari–Sudborough [@Sax80; @GS84]
    - Brute-force search
  - Heuristic
    - Gibbs–Poole–Stockmeyer [@GPS76]
    - Cuthill–McKee [@CM69]
    - Reverse Cuthill–McKee [@CM69; @Geo71]
- Recognition
  - Caprara–Salazar-González [@CS05]
  - Del Corso–Manzini [@DM99]
  - Del Corso–Manzini with perimeter search [@DM99]
  - Saxe–Gurari–Sudborough [@Sax80; @GS84]
  - Brute-force search

Recognition algorithms determine whether any row-and-column permutation of a matrix induces a
bandwidth less than or equal to some fixed integer. Exact minimization algorithms guarantee
optimal orderings that minimize bandwidth, while heuristic minimization algorithms produce
near-optimal solutions more quickly. Metaheuristic minimization algorithms employ iterative search
frameworks to find better solutions than heuristic methods (albeit more slowly); none are
implemented yet, but several (e.g., simulated annealing) are currently under development.
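
To make the recognition problem concrete, the following toy depth-first search (a hypothetical sketch written for this paper, far simpler and slower than the package's actual implementations) decides whether some symmetric ordering achieves bandwidth at most $k$:

```julia
# Decide whether the rows and columns of A can be symmetrically reordered so
# that every nonzero lies within k diagonals of the main diagonal. Indices are
# placed one position at a time; a partial ordering is abandoned as soon as a
# newly placed index sits more than k positions away from an earlier index it
# shares a nonzero with. Exponential time in the worst case: small n only.
function has_bandwidth_at_most(A::AbstractMatrix{<:Number}, k::Integer)
    n = size(A, 1)
    perm = Int[]
    used = falses(n)

    function extend!()
        length(perm) == n && return true
        pos = length(perm) + 1
        for v in 1:n
            used[v] && continue
            feasible = all(
                (iszero(A[perm[p], v]) && iszero(A[v, perm[p]])) ||
                    pos - p <= k for p in 1:(pos - 1)
            )
            feasible || continue
            push!(perm, v)
            used[v] = true
            extend!() && return true
            pop!(perm)
            used[v] = false
        end
        return false
    end

    return extend!()
end
```

For instance, the $3 \times 3$ matrix with off-diagonal nonzeros only at $(1, 3)$ and $(3, 1)$ has bandwidth $2$ as given, yet the ordering $(1, 3, 2)$ certifies that its minimum bandwidth is at most $1$.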

# Statement of need

Many matrix bandwidth reduction algorithms exist in the literature, but implementations in the
open-source ecosystem are scarce, and those that do exist primarily cover older, less efficient
algorithms. The [Boost](https://www.boost.org/) libraries in C++ [@LLS+01], the
[NetworkX](https://networkx.org/) library in Python [@Net25], and the MATLAB standard library
[@MAT25] all implement only the aforementioned reverse Cuthill–McKee algorithm from 1971.
In Julia, the only other relevant package identified by the author is
[SymRCM.jl](https://github.com/PetrKryslUCSD/SymRCM.jl) [@Krys20], which likewise implements only
reverse Cuthill–McKee.

Furthermore, not enough attention is given to recognition algorithms or exact minimization
algorithms. Although more performant modern alternatives are often neglected, reverse
Cuthill–McKee is at least a widely implemented method of approximating a minimum-bandwidth
ordering (as noted above). No comparable functionality for recognition or exact minimization is
widely available, requiring researchers with such needs to re-implement these algorithms
themselves.

These two gaps in the ecosystem not only make it difficult for researchers to benchmark and compare
newly proposed algorithms but also preclude the application of the most performant modern
algorithms in real-world industry settings. MatrixBandwidth.jl aims to bridge these gaps by
presenting a unified interface for matrix bandwidth reduction algorithms in Julia.

# Research applications

The author either has used or is using MatrixBandwidth.jl to do the following:

- Develop a new polynomial-time algorithm for "bandwidth $\le k$" recognition that is efficient for
  both small and large $k$, and benchmark it against other approaches [@Sax80; @GS84]
- Speed up $k$-coherence checks of quantum states in many cases by confirming that the density
  matrix's minimum bandwidth is greater than $k$ [@JMP25]
- Compute the spectral graph property of "$S$-bandwidth" [@JP25] via the
  [SDiagonalizability.jl](https://github.com/GraphQuantum/SDiagonalizability.jl) package [@VJP25],
  which depends critically on MatrixBandwidth.jl for bandwidth recognition
- Investigate the precise performance benefits of reducing the propagation graph's bandwidth when
  training a recurrent neural network, building on @BvMM+19

The first three use cases rely on the recognition and exact minimization functionality unique to
MatrixBandwidth.jl (indeed, they largely motivated the package's development). The last (ongoing)
research project *could* be facilitated by SymRCM.jl instead, but the author intends to use the more
performant metaheuristic minimization algorithms currently under development when producing the
final computational results, as well as recognition algorithms to minimize bandwidth to various
target levels when quantifying performance improvements.

# Limitations

Currently, MatrixBandwidth.jl's core functions generically accept any input of type
`AbstractMatrix{<:Number}`, without behaving any differently when given sparsely stored matrices
(e.g., from the [SparseArrays.jl](https://github.com/JuliaSparse/SparseArrays.jl) standard
library package). Capabilities for directly handling graph inputs (aiming to reduce the matrix
bandwidth of a graph's adjacency matrix) are also not available. Given that bandwidth reduction is
often applied to sparse matrices and graphs, this will be addressed in future releases.
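
For instance, a sparse matrix from SparseArrays is accepted today, but only through the generic `AbstractMatrix` interface; this hypothetical snippet illustrates the point:

```julia
using SparseArrays

# A sparse 5×5 matrix with ones on the main diagonal and on the third
# superdiagonal; its bandwidth is 3.
A = spdiagm(0 => ones(5), 3 => ones(2))

# Both of these are AbstractMatrix{<:Number} values, so both are accepted
# inputs, but the sparse storage of A currently yields no special treatment.
B = Matrix(A)
```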

Moreover, many of the algorithms only apply to structurally symmetric matrices (i.e., those whose
nonzero pattern is symmetric). However, this is a limitation of the algorithms themselves, not of
the package's implementation. Future releases with metaheuristic algorithms will include more
methods that accept structurally asymmetric inputs.
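
Structural symmetry is easy to check on the user side; a one-line helper (illustrative only, not part of the package API) suffices:

```julia
# A matrix is structurally symmetric when A[i, j] and A[j, i] are either both
# zero or both nonzero for all i and j; numeric symmetry is not required.
is_structurally_symmetric(A::AbstractMatrix{<:Number}) =
    (A .!= 0) == transpose(A .!= 0)

is_structurally_symmetric([0 2; 3 0])  # true: mirrored nonzero pattern
is_structurally_symmetric([0 1; 0 0])  # false: (1, 2) nonzero, (2, 1) zero
```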

# Conflict of interest

The author declares no conflict of interest.

# Acknowledgements

I owe much to my research supervisors—Nathaniel Johnston, Sarah Plosker, and Craig
Brett—for supporting and guiding me in my work. I would also like to thank Liam Keliher,
Peter Lelièvre, and Marco Cognetta for useful discussions. Finally, credit for
MatrixBandwidth.jl's telepathic-cat-and-turtle logo goes to Rebekka Jonasson.

# References