
Quick Start

bwintermann edited this page Mar 4, 2026 · 9 revisions

Quick Start Guide

This guide will get you up and running with FINN+ in under 20 minutes. We'll walk through installation and building your first FPGA accelerator from a simple quantized neural network.

Prerequisites

  • Python 3.10 with pip or Poetry
  • Git for repository management
  • Make and CMake for building components
  • AMD Vitis & Vivado (required only for hardware generation)
  • Basic understanding of neural networks and ONNX format

Step 1: Installation πŸ”§

Caution

We strongly recommend installing FINN+ in a virtual environment, as it pulls many dependencies at specific versions.

Option A: Using pip (Recommended for users)

# Create a virtual environment
python -m venv finn-env
source finn-env/bin/activate

# Install FINN+
pip install finn-plus

Option B: Using Poetry for development

# Clone the repository
git clone https://github.com/eki-project/finn-plus.git
cd finn-plus

# Install with Poetry
poetry install

# Run FINN+ commands inside the Poetry environment
poetry run finn --help

Tip

If you prefer not to prefix every command with poetry run, you can find your Poetry environment's location with poetry env info and source its activate script in your shell configuration for convenience.

Option C: Building from source

# Clone the repository
git clone https://github.com/eki-project/finn-plus.git
cd finn-plus

# Build a package to install elsewhere
poetry build

# Install the wheel file
pip install dist/finn_plus-*.whl

This method is useful when you want to install a modified version of FINN+ or distribute it to other systems.

Step 2: Configure FINN+ βš™οΈ

By default, FINN+ checks whether you already have a settings file. If none is found, the settings wizard starts automatically and guides you through the setup process.

If you ever want to start the settings wizard again, simply run

finn wizard settings
# or
finn settings create

This creates a settings.yaml file where you can set:

  • finn_deps: Path for dependencies (default: ~/.finn/finn_deps)
  • finn_build_dir: Path for temporary build files
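A minimal settings.yaml might look like the sketch below. The paths are illustrative placeholders; see the Configuration and Settings page for the full set of supported keys.

```yaml
# settings.yaml (illustrative sketch; adjust paths to your system)
finn_deps: ~/.finn/finn_deps      # where FINN+ stores its dependencies
finn_build_dir: /tmp/finn_build   # scratch space for temporary build files
```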

For more configuration options, see the Configuration and Settings page.

Note

If you are sure that you want to use the default settings, simply pass --accept-defaults. This will skip the wizard and start FINN+.

To check that everything worked, you can run

finn check

This will load FINN+'s environment and stop before running any flow. If the command exits normally, you have successfully installed FINN+.

Step 3: Prepare Your Model 🧠

For this tutorial, we'll assume you have a Brevitas-trained model. If you don't have one, you can:

  1. Train with Brevitas: Follow the Brevitas documentation
  2. Example Jupyter Notebooks: Explore Jupyter Notebooks
  3. Use example models: Check out the included example models

(Figure: the FINN stack)

Export your model to ONNX format:

import torch
from brevitas.export import export_qonnx

# Assuming you have a trained Brevitas model
model = YourBrevitasModel()
model.eval()  # switch to inference mode before exporting
dummy_input = torch.randn(1, 3, 224, 224)  # adjust to your model's input shape

# Export to QONNX format
export_qonnx(model, dummy_input, "my_model.onnx")

Step 4: Create Build Configuration πŸ“‹

Create a YAML configuration file for your build:

# config.yaml
board: U55C                    # Target FPGA board
shell_flow_type: vitis_alveo   # Build flow type
synth_clk_period_ns: 10.0      # 100 MHz clock
target_fps: 1000               # Target throughput

# What to generate
generate_outputs:
  - estimate_reports           # Resource utilization estimates
  - stitched_ip                # IP core for integration
  - bitfile                    # Complete FPGA bitstream
  - pynq_driver                # Python driver for deployment

# Build directory
output_dir: ./build_output

To get an overview of all available configuration options, check out the DataflowBuildConfig Documentation, which is generated automatically from the source code.
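As a rough sanity check, you can relate synth_clk_period_ns and target_fps to a per-frame cycle budget: at 10.0 ns (100 MHz) and 1000 FPS, the accelerator must finish each frame within 100,000 clock cycles. The sketch below is plain Python for illustration only, not part of the FINN+ API:

```python
# Back-of-the-envelope check: cycle budget implied by clock period and
# throughput target. The numbers mirror the example config above; they
# are not read from FINN+ itself.

def cycle_budget(synth_clk_period_ns: float, target_fps: float) -> float:
    """Clock cycles available per frame at the given clock and FPS target."""
    clock_hz = 1e9 / synth_clk_period_ns  # 10 ns period -> 100 MHz
    return clock_hz / target_fps          # cycles per frame

budget = cycle_budget(synth_clk_period_ns=10.0, target_fps=1000)
print(f"{budget:.0f} cycles per frame")   # 100000 cycles per frame
```

If the estimated cycles per frame in your build reports exceed this budget, either lower target_fps or relax the clock period.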

Step 5: Build Your Accelerator πŸš€

Now run the FINN+ build process:

# Build the accelerator
finn build config.yaml my_model.onnx

This will:

  1. Parse your ONNX model
  2. Apply optimizations to the graph
  3. Generate hardware components
  4. Estimate resource usage
  5. Create deployment artifacts (Stitched IP, complete accelerator, driver configuration, etc.)

Tip

To avoid passing the model every time you execute FINN+, or to simply lock a model to a configuration, you can provide model: my_model.onnx in your config.yaml. Now, simply start FINN+ using finn build config.yaml.

Expected Output

During the build, you'll see progress through the steps configured in your build configuration. Keep an eye on the output: it may contain hints or warnings that help you improve and troubleshoot your flow.

Step 6: Review Results πŸ“Š

After a successful build, check your output directory:

build_output/
├── estimate_reports/        # Resource utilization reports
├── stitched_ip/             # Generated IP core
├── bitfile/                 # FPGA bitstream (if generated)
├── driver/                  # Python deployment code
└── intermediate_models/     # Intermediate ONNX files

Key Files to Review

  • estimate_reports/estimate_network_performance.json - Performance estimates
  • estimate_reports/estimate_layer_resources.json - Per-layer resource usage
  • driver/driver.py - Python interface for your accelerator
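The JSON reports are easy to inspect programmatically. The snippet below is a sketch: the sample contents and field names (estimated_throughput_fps, estimated_latency_ns) are assumptions for illustration, so check the keys in the JSON your build actually produced.

```python
import json
import tempfile
from pathlib import Path

# Illustrative report contents -- a real build writes this file itself.
# The field names here are assumptions; inspect your actual JSON for the keys.
sample = {
    "estimated_throughput_fps": 1042.5,
    "estimated_latency_ns": 959000.0,
}

with tempfile.TemporaryDirectory() as tmp:
    report_path = Path(tmp) / "estimate_network_performance.json"
    report_path.write_text(json.dumps(sample))

    # Load and summarize the report
    report = json.loads(report_path.read_text())
    print(f"Estimated throughput: {report['estimated_throughput_fps']:.1f} FPS")
    print(f"Estimated latency:    {report['estimated_latency_ns'] / 1e6:.2f} ms")
```

In a real build, point report_path at estimate_reports/estimate_network_performance.json inside your output_dir.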

Troubleshooting πŸ”§

Common Issues

❌ "Model not supported"

  • Ensure your model uses only FINN-supported operations
  • Check that quantization is properly applied

❌ "Resource constraints exceeded"

  • Reduce target_fps in your configuration
  • Choose a larger FPGA board
  • Enable more aggressive optimizations

❌ "Build fails during synthesis"

  • Check synth_clk_period_ns isn't too aggressive
  • Review intermediate models in debug output

Getting Help

Next Steps πŸŽ‰

Congratulations! You've built your first FINN+ accelerator. Here's what to explore next:

πŸ”§ Building an Accelerator - Learn to tune your accelerator and build process

🏭 Settings - Configuring FINN+ itself for usage and/or development

πŸ› οΈ Development Setup - Setting FINN+ up for development

Example Projects πŸ’‘

  • Image Classifier: ResNet on edge devices
  • Object Detection: YOLO for real-time processing
  • Audio Processing: Keyword spotting accelerator
  • Time Series: Predictive maintenance models

Ready to dive deeper? Check out our complete documentation or jump to advanced configuration!
