I'm having an issue where brulee models are not training on the GPU, despite the environment seemingly being set up correctly. The model training silently falls back to the CPU, while pure torch models do run successfully on the GPU.
Environment Details:
- brulee version: 0.6.0
- torch version: 0.16.0
- tidymodels version: 1.4.1
- GPU: NVIDIA GeForce RTX 4070
- OS: Windows
Troubleshooting Steps:
- When fitting a brulee model, nvidia-smi shows 0% GPU utilization and the R process does not appear in its process list.
- Running torch::cuda_is_available() in the R console returns TRUE.
- Manually creating a tensor and moving it with x$cuda() succeeds.
- A pure torch training loop (with no brulee or tidymodels code) runs successfully on the GPU, with high utilization shown in nvidia-smi.
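The first three checks can be reproduced with a short snippet (a sketch using functions from the torch R package; the printed device index assumes a single GPU):

```r
library(torch)

# 1. Confirm torch can see the CUDA backend
cuda_is_available()   # returns TRUE in my session

# 2. Confirm a tensor can actually be moved to the GPU
x <- torch_randn(3, 3)
x_gpu <- x$cuda()
x_gpu$device          # reports a cuda device, e.g. cuda:0
```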
This brulee code fails to use the GPU:
library(tidymodels)
library(brulee)
# Fails to run on GPU
mlp_spec <- mlp(hidden_units = 500, epochs = 20000) %>%
set_engine("brulee") %>%
set_mode("classification")
mlp_wflow <- workflow() %>%
add_recipe(recipe(Species ~ ., data = iris)) %>%
add_model(mlp_spec)
fit(mlp_wflow, data = iris)
This code uses the GPU:
library(torch)
model <- nn_sequential(
nn_linear(4, 500), nn_relu(), nn_linear(500, 3)
)$to(device = "cuda")
# torch for R creates tensors with torch_tensor(); there is no as_torch_tensor()
x_train <- torch_tensor(as.matrix(iris[, 1:4]), device = "cuda")
y_train <- torch_tensor(as.integer(iris$Species), device = "cuda")
optimizer <- optim_adam(model$parameters, lr = 0.01)
loss_fn <- nn_cross_entropy_loss()
# Standard training loop; the model and both tensors already live on the GPU
for (i in 1:20000) {
  optimizer$zero_grad()
  output <- model(x_train)
  loss <- loss_fn(output, y_train)
  loss$backward()
  optimizer$step()
}