# Quick Start
This guide will get you up and running with FINN+ in under 20 minutes. We'll walk through installation and building your first FPGA accelerator from a simple quantized neural network.
## Prerequisites

- Python 3.10 with pip or Poetry
- Git for repository management
- Make and CMake for building components
- Vitis & Vivado (optional; required only for hardware generation)
- A basic understanding of neural networks and the ONNX format
> [!CAUTION]
> We strongly recommend installing FINN+ in a virtual environment, as it pulls in many dependencies at specific versions.
## Installation

### Install from PyPI

```shell
# Create a virtual environment
python -m venv finn-env
source finn-env/bin/activate

# Install FINN+
pip install finn-plus
```

### Install from source with Poetry

```shell
# Clone the repository
git clone https://github.com/eki-project/finn-plus.git
cd finn-plus

# Install with Poetry
poetry install

# Activate the environment
finn --help  # Run your finn commands
```

> [!TIP]
> If you're using the direct run method, you can find your Poetry environment location with `poetry env info` and source it in your shell configuration for convenience.

### Build and install a wheel

```shell
# Clone the repository
git clone https://github.com/eki-project/finn-plus.git
cd finn-plus

# Build a package to install elsewhere
poetry build

# Install the wheel file
pip install dist/finn_plus-*.whl
```

This method is useful when you want to install a modified version of FINN+ or distribute it to other systems.
## Settings

By default, FINN+ checks whether you already have a settings file. If no such file is found, the settings wizard starts automatically and guides you through the setup process.

If you ever want to start the settings wizard again, simply run:

```shell
finn wizard settings
# or
finn settings create
```

This creates a `settings.yaml` file where you can set:

- `finn_deps`: path for dependencies (default: `~/.finn/finn_deps`)
- `finn_build_dir`: path for temporary build files
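For reference, a minimal `settings.yaml` covering just these two options might look like the following sketch. The `finn_build_dir` value is illustrative, not a documented default:

```yaml
# settings.yaml (illustrative values)
finn_deps: ~/.finn/finn_deps     # where FINN+ places its dependencies (default)
finn_build_dir: /tmp/finn_build  # scratch space for temporary build files (example path)
```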
For more configuration options, see the Configuration and Settings page.
> [!NOTE]
> If you are sure that you want to use the default settings, simply pass `--accept-defaults`. This will skip the wizard and start FINN+.
## Verify the Installation

To check that everything worked, you can run:

```shell
finn check
```

This will load FINN+'s environment and stop before running any flow. If the command exits normally, you have successfully installed FINN+.
## Prepare Your Model

For this tutorial, we'll assume you have a Brevitas-trained model. If you don't have one, you can:
- Train with Brevitas: Follow the Brevitas documentation
- Example Jupyter Notebooks: Explore Jupyter Notebooks
- Use example models: Check out the included example models
Export your model to ONNX format:

```python
import torch
from brevitas.export import export_qonnx

# Assuming you have a trained Brevitas model
model = YourBrevitasModel()
dummy_input = torch.randn(1, 3, 224, 224)  # Adjust dimensions to your model

# Export to ONNX
export_qonnx(model, dummy_input, "my_model.onnx")
```

## Create a Build Configuration

Create a YAML configuration file for your build:
```yaml
# config.yaml
board: U55C                   # Target FPGA board
shell_flow_type: vitis_alveo  # Build flow type
synth_clk_period_ns: 10.0     # 100 MHz clock
target_fps: 1000              # Target throughput

# What to generate
generate_outputs:
  - estimate_reports  # Resource utilization estimates
  - stitched_ip       # IP core for integration
  - bitfile           # Complete FPGA bitstream
  - pynq_driver       # Python driver for deployment

# Build directory
output_dir: ./build_output
```

To get an overview of all available configuration options, check out the DataflowBuildConfig Documentation, which is generated automatically from the source code.
## Build the Accelerator

Now run the FINN+ build process:

```shell
# Build the accelerator
finn build config.yaml my_model.onnx
```

This will:
- Parse your ONNX model
- Apply optimizations to the graph
- Generate hardware components
- Estimate resource usage
- Create deployment artifacts (Stitched IP, complete accelerator, driver configuration, etc.)
> [!TIP]
> To avoid passing the model every time you execute FINN+, or simply to lock a model to a configuration, you can set `model: my_model.onnx` in your `config.yaml`. Then start FINN+ with just `finn build config.yaml`.
During the build, you'll see progress through the steps configured in your build configuration. Keep an eye on the output: it may contain hints or warnings that help you improve and troubleshoot your flow!
## Inspect the Results

After a successful build, check your output directory:

```
build_output/
├── estimate_reports/     # Resource utilization reports
├── stitched_ip/          # Generated IP core
├── bitfile/              # FPGA bitstream (if generated)
├── driver/               # Python deployment code
└── intermediate_models/  # Intermediate ONNX files
```

Key files to look at:

- `estimate_reports/estimate_network_performance.json`: performance estimates
- `estimate_reports/estimate_layer_resources.json`: per-layer resource usage
- `driver/driver.py`: Python interface for your accelerator
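The reports are plain JSON, so you can inspect them programmatically. A minimal sketch, assuming the report path from the build above and that the key name (`estimated_throughput_fps`) matches your FINN+ version:

```python
import json
from pathlib import Path

def load_report(path):
    """Load a FINN+ JSON report and return it as a dict."""
    return json.loads(Path(path).read_text())

# Example usage (path and key names depend on your build/version):
# report = load_report("build_output/estimate_reports/estimate_network_performance.json")
# print(report.get("estimated_throughput_fps"))
```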
## Troubleshooting

**"Model not supported"**

- Ensure your model uses only FINN-supported operations
- Check that quantization is properly applied

**"Resource constraints exceeded"**

- Reduce `target_fps` in your configuration
- Choose a larger FPGA board
- Enable more aggressive optimizations

**"Build fails during synthesis"**

- Check that `synth_clk_period_ns` isn't too aggressive
- Review intermediate models in debug output
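For the resource and timing issues above, relaxing the throughput and clock targets in your `config.yaml` is often the quickest way to get a first passing build. The values below are illustrative, not recommendations:

```yaml
# config.yaml — relaxed targets for a first passing build (illustrative values)
synth_clk_period_ns: 15.0  # a slower clock (~66 MHz) is easier to meet in timing
target_fps: 500            # a lower throughput target generally needs fewer resources
```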
Need more help?

- Check the detailed Build Configuration Documentation
- Search existing issues
- Ask questions in discussions
## Next Steps

Congratulations! You've built your first FINN+ accelerator. Here's what to explore next:

- Building an Accelerator: learn to tune your accelerator and build process
- Build Flow Configuration Options: an overview of all available build configuration options
- Settings: configuring FINN+ itself for usage and/or development
- Development Setup: setting FINN+ up for development
- Custom Transformations: extending FINN+ with custom transformations and steps
Example applications to explore:

- Image Classifier: ResNet on edge devices
- Object Detection: YOLO for real-time processing
- Audio Processing: Keyword spotting accelerator
- Time Series: Predictive maintenance models
Ready to dive deeper? Check out our complete documentation or jump to advanced configuration!