An empirical study of gradient inversion attacks and privacy defence mechanisms on a Federated Learning-based Intrusion Detection System, built using the Flower framework and UNSW-NB15 dataset.
This project extends the base FL-IDS implementation with a full privacy vulnerability analysis — demonstrating that gradient updates leaked during federated training can be exploited to reconstruct private network traffic data, and evaluating three defence strategies against this threat.
| Defence | Parameter | Reconstruction MSE | Privacy Level |
|---|---|---|---|
| No Defence (Baseline) | — | 0.000056 | ❌ Critical Risk |
| DP Noise | σ = 0.1 | 0.670612 | ✅ Strong |
| DP Noise | σ = 2.0 | 2.930902 | ✅✅ Strongest |
| Gradient Clipping | Norm = 0.5 | 1.219867 | ✅✅ Strong |
| Sparse Gradients | Top 25% | 1.004138 | ✅✅ Strong |
Higher MSE means the attacker's reconstruction is further from the true data, i.e. stronger privacy protection. DP Noise at σ = 2.0 raises reconstruction MSE by a factor of roughly 52,000 over the undefended baseline.
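The improvement factor follows directly from the table values:

```python
# Reconstruction MSE values from the results table above.
baseline_mse = 0.000056   # no defence
dp_strong_mse = 2.930902  # DP noise, sigma = 2.0

improvement = dp_strong_mse / baseline_mse
print(f"~{improvement:,.0f}x harder to reconstruct")  # roughly 52,000x
```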
```
KC-TRAP/
├── data/                   # Place UNSW-NB15 CSV files here
├── utils/                  # Data loading and plot utilities
├── client.py               # FL client (Flower)
├── server.py               # FL server with FedAvg aggregation
├── simulation.py           # Single-command FL simulation
├── gradient_attack.py      # Gradient inversion attack
├── defence_dp.py           # Differential Privacy noise defence
├── defence_clipping.py     # Gradient clipping defence
├── defence_sparse.py       # Sparse gradient communication defence
├── docker-compose.yaml     # Docker simulation setup
├── Dockerfile.client
├── Dockerfile.server
├── requirements.txt
└── README.md
```
1. Clone the repository

```bash
git clone https://github.com/KushanavoRakshit/KC-TRAP.git
cd KC-TRAP
```

2. Create a virtual environment

```bash
python -m venv venv
venv\Scripts\activate     # Windows
source venv/bin/activate  # Mac/Linux
```

3. Install dependencies

```bash
pip install -r requirements.txt
pip install -U "flwr[simulation]"
```

4. Download the dataset

Download `UNSW_NB15_training-set.csv` and `UNSW_NB15_testing-set.csv` from the UNSW-NB15 dataset page and place both files in the `data/` folder.
There are 3 options:
Option 1 — Single command (recommended)
```bash
python simulation.py
```

Option 2 — Manual (multiple terminals)
```bash
# Terminal 1
python server.py

# Terminals 2, 3, and 4 (run each separately)
python client.py
```

Note: At least 3 clients are needed to satisfy `min_fit_clients`, `min_evaluate_clients`, and `min_available_clients`. The default server port is 8080.
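For intuition, the server's FedAvg aggregation can be sketched in plain NumPy: average each layer's weights across clients, weighted by how many training examples each client holds. This is a minimal illustration, not the Flower implementation; the function name and shapes here are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg).

    client_weights: one list of layer arrays per client
    client_sizes:   number of training examples per client
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Three clients, one layer each, equal data shares
clients = [[np.array([0.0, 0.0])],
           [np.array([3.0, 3.0])],
           [np.array([6.0, 6.0])]]
sizes = [1, 1, 1]
print(fedavg(clients, sizes)[0])  # [3. 3.]
```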
Option 3 — Docker

```bash
docker-compose up --build
```

Visualize model architecture
```bash
python utils/plot.py
```

Gradient Inversion Attack
```bash
python gradient_attack.py > attack_output.txt 2>&1
```

Reconstructs private training data from intercepted gradients using the DLG (Deep Leakage from Gradients) method.
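To see why gradients leak inputs at all, consider a single linear layer with a bias under squared loss: the weight gradient is the residual times the input, and the bias gradient is the residual itself, so dividing one by the other recovers the input exactly. DLG generalizes this to full networks by iteratively optimizing a dummy input until its gradients match the leaked ones. A toy sketch of the single-layer case (all values hypothetical, not taken from the repo):

```python
import numpy as np

# Victim: one linear layer with bias, squared loss L = 0.5 * (w @ x + b - y)^2
w = np.array([0.5, -1.0, 2.0, 0.3])
b = 0.0
x_true = np.array([1.0, 2.0, -0.5, 0.7])  # private input to reconstruct
y = 1.0

residual = w @ x_true + b - y  # dL/d(output)
g_w = residual * x_true        # leaked weight gradient
g_b = residual                 # leaked bias gradient

# Inversion: the weight gradient divided by the bias gradient is the input.
x_rec = g_w / g_b
print(np.mean((x_rec - x_true) ** 2))  # reconstruction MSE is ~0
```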
Differential Privacy Defence
```bash
python defence_dp.py > dp_output.txt 2>&1
```

Tests Gaussian noise injection at σ ∈ {0.0, 0.1, 0.5, 1.0, 2.0}.
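The core of this defence is the noise-injection step: perturb every gradient entry with Gaussian noise of standard deviation σ before the update leaves the client, so the leaked gradient no longer pins down the private input. A minimal NumPy sketch (the function name is hypothetical; a full DP-SGD pipeline would also clip before adding noise):

```python
import numpy as np

def add_dp_noise(grads, sigma, rng):
    """Perturb each gradient tensor with Gaussian noise N(0, sigma^2)."""
    return [g + rng.normal(0.0, sigma, size=g.shape) for g in grads]

rng = np.random.default_rng(0)
grads = [np.ones((3, 2)), np.full(4, 0.5)]

noisy = add_dp_noise(grads, sigma=0.1, rng=rng)
# Shapes are unchanged; only the values are perturbed.
print([g.shape for g in noisy])  # [(3, 2), (4,)]
```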
Gradient Clipping Defence
```bash
python defence_clipping.py > clipping_output.txt 2>&1
```

Tests gradient norm clipping at values ∈ {10.0, 1.0, 0.5, 0.1}.
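Clipping bounds how much any single update can reveal by rescaling the whole gradient whenever its global L2 norm exceeds a threshold. A minimal sketch with a hypothetical function name (TensorFlow's `tf.clip_by_global_norm` performs the same operation):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Scale all gradient tensors so their joint L2 norm is <= max_norm."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm <= max_norm:
        return [g.copy() for g in grads]
    scale = max_norm / global_norm
    return [g * scale for g in grads]

grads = [np.array([3.0, 4.0])]           # global norm 5.0
clipped = clip_by_global_norm(grads, max_norm=0.5)
print(np.linalg.norm(clipped[0]))        # 0.5 (up to float rounding)
```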
Sparse Gradient Defence
```bash
python defence_sparse.py > sparse_output.txt 2>&1
```

Tests top-k% gradient sparsification at k ∈ {50%, 25%, 10%, 1%}.
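Sparsification transmits only the largest-magnitude k% of gradient entries and zeroes the rest, which both compresses communication and starves the attacker of most of the signal. A minimal sketch (hypothetical function name):

```python
import numpy as np

def sparsify_top_k(grad, k_percent):
    """Keep only the largest-magnitude k% of entries; zero the rest."""
    flat = grad.ravel()
    k = max(1, int(len(flat) * k_percent / 100))
    # Indices of the k largest-magnitude entries
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

g = np.array([0.1, -2.0, 0.05, 3.0, -0.2, 0.5, 1.0, -0.01])
print(sparsify_top_k(g, 25))  # only -2.0 and 3.0 survive
```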
- Python 3.10
- TensorFlow / Keras — Neural network training
- Flower (flwr) — Federated learning framework
- scikit-learn — Preprocessing
- pandas / numpy — Data handling
- Federated Learning (FedAvg aggregation)
- Gradient Inversion Attacks (Adversarial ML)
- Differential Privacy (Gaussian noise injection)
- Gradient Clipping
- Sparse Gradient Communication
- Privacy-Utility Tradeoff Analysis
- Network Intrusion Detection (UNSW-NB15)
- Personalize datasets for each client instead of using a common sampled dataset
- Evaluate attacks on larger batch sizes to reflect realistic FL deployments
- Combine multiple defences (DP noise + clipping) for stronger guarantees
- Extend to multi-class attack classification
The FL-IDS base code is adapted from oqadiSAK/fl-ids. The gradient inversion attack and all defence scripts are original contributions.
MIT