I'm having trouble with training an mlp() specification with the brulee engine. I know that brulee uses torch, and I've checked that my torch/GPU setup works, but in the example below training runs on the CPU.
suppressPackageStartupMessages({
library(tidymodels)
library(torch)
})
torch::cuda_is_available()
#> TRUE
torch::cuda_device_count()
#> [1] 1
set.seed(1)
modspec <- mlp(hidden_units = tune(),
penalty = tune(),
epochs = tune(),
activation = tune(),
learn_rate = tune()) %>%
set_mode('classification') %>%
set_engine('brulee')
fk_param <- modspec %>%
extract_parameter_set_dials() %>%
grid_max_entropy(size = 50)
spl_obj <- initial_split(iris, prop = 0.7)
cv_obj <- vfold_cv(training(spl_obj), v = 5)
rcp <- recipe(formula = Species ~
Sepal.Width +
Sepal.Length +
Petal.Width +
Petal.Length,
data = training(spl_obj)) %>%
step_normalize(all_numeric_predictors())
wf <- workflow() %>%
add_model(modspec) %>%
add_recipe(rcp)
cv_fit <- wf %>%
tune_grid(resamples = cv_obj, grid = fk_param)
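For reference, here is a small diagnostic sketch (separate from the reprex above) that I would expect to confirm torch itself can place tensors on the GPU, independently of anything brulee does:

```r
# Diagnostic sketch: verify that torch (the R package) can allocate a
# tensor directly on the CUDA device. If this works but brulee training
# still runs on the CPU, the issue is in how brulee selects its device,
# not in the torch/GPU installation.
library(torch)

if (cuda_is_available()) {
  x <- torch_randn(3, 3, device = "cuda")
  print(x$device)   # expected to report the cuda device
} else {
  message("CUDA not available to torch")
}
```

This only checks tensor placement; it does not change which device brulee uses during tune_grid().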