50 changes: 49 additions & 1 deletion README.md
@@ -12,9 +12,57 @@ This library stands out in terms of execution speed and memory allocation. It is

Some typical applications of OpenNN are business intelligence (customer segmentation, churn prevention...), health care (early diagnosis, microarray analysis...) and engineering (performance optimization, predictive maintenance...).

## Key Features
OpenNN is a high-performance, open-source library for deep learning in C++. Some of its key features include:

- **Neural Networks**: Supports a wide range of neural network architectures, including multilayer perceptrons, deep networks, and recurrent neural networks.
- **Training Strategies**: Includes advanced training strategies like gradient descent, conjugate gradient, and Levenberg-Marquardt.
- **Data Handling**: Efficient data handling with built-in functions for normalization, scaling, and splitting datasets.
- **Model Selection**: Helps optimize network architectures with built-in methods for model selection and hyperparameter tuning.
- **Testing & Validation**: Offers tools for evaluating the performance of neural networks using a separate testing dataset.
- **Visualization**: Provides tools for visualizing network architecture, training progress, and performance metrics.


## Performance Tips & Best Practices
To maximize the performance of your neural networks using OpenNN, consider these best practices:

- **Data Preprocessing**: Normalize and scale your data to ensure faster training and better convergence. OpenNN includes built-in functions to help with this.
- **Batch Training**: Use batch training with large datasets to reduce memory usage and improve training times.
- **Regularization**: Apply techniques like L2 regularization or dropout to prevent overfitting, especially when working with complex models.
- **Learning Rate**: Start with a lower learning rate and adjust as needed. A dynamic learning rate can help achieve faster convergence.
- **Parallel Computing**: OpenNN supports parallel execution for faster training on systems with multiple cores.

## Choosing the Right Neural Network Model
Depending on the nature of your project, different neural network architectures and training strategies may be more appropriate:

- For Classification Tasks (e.g., image recognition, text classification):
- Use a multilayer perceptron (MLP) with a softmax output layer for multi-class classification tasks.

- For Time Series Forecasting (e.g., stock price prediction, weather forecasting):
- Consider using a recurrent neural network (RNN) or long short-term memory (LSTM) for sequential data and forecasting tasks.

- For Regression Problems (e.g., predicting house prices, continuous outputs):
- Use a fully connected network with a linear output activation, which maps inputs to continuous outputs.

- Model Selection Tip: Try multiple architectures and compare performance using OpenNN’s built-in model selection tools.

## Troubleshooting
Here are some common issues you may encounter when using OpenNN and how to resolve them:

- Issue: Compilation errors due to missing dependencies.
- Solution: Ensure that you have installed all required dependencies listed on the OpenNN website. For example, ensure you have a compatible C++ compiler and the necessary libraries.

- Issue: Slow training times with large datasets.
- Solution: Try batch training, or use the parallelization feature to speed up the process by utilizing multiple cores.

The documentation comprises tutorials and examples that offer a complete overview of the library.

The documentation can be found at the official <a href="http://opennn.net" target="_blank">OpenNN site</a>.
### Note:
This README provides complementary content to help developers get started quickly and get the best performance out of OpenNN. It supplements, rather than duplicates, the material on the official [OpenNN website](https://opennn.net); refer there for detailed installation instructions and tutorials.

The CMakeLists.txt files are build scripts for CMake; they are also used by the CLion IDE.

226 changes: 226 additions & 0 deletions Release/opennn.log

Large diffs are not rendered by default.

2 changes: 2 additions & 0 deletions Release/opennn.tlog/opennn.lastbuildstate
@@ -0,0 +1,2 @@
PlatformToolSet=v142:VCToolArchitecture=Native32Bit:VCToolsVersion=14.29.30133:TargetPlatformVersion=10.0.17763.0:
Release|Win32|C:\Users\Usuario\Documents\opennn\|
Empty file.
33 changes: 33 additions & 0 deletions blank/.qmake.stash
@@ -0,0 +1,33 @@
QMAKE_CXX.QT_COMPILER_STDCXX = 201402L
QMAKE_CXX.QMAKE_CLANG_MAJOR_VERSION = 14
QMAKE_CXX.QMAKE_CLANG_MINOR_VERSION = 0
QMAKE_CXX.QMAKE_CLANG_PATCH_VERSION = 0
QMAKE_CXX.QMAKE_GCC_MAJOR_VERSION = 4
QMAKE_CXX.QMAKE_GCC_MINOR_VERSION = 2
QMAKE_CXX.QMAKE_GCC_PATCH_VERSION = 1
QMAKE_CXX.COMPILER_MACROS = \
QT_COMPILER_STDCXX \
QMAKE_CLANG_MAJOR_VERSION \
QMAKE_CLANG_MINOR_VERSION \
QMAKE_CLANG_PATCH_VERSION \
QMAKE_GCC_MAJOR_VERSION \
QMAKE_GCC_MINOR_VERSION \
QMAKE_GCC_PATCH_VERSION
QMAKE_CXX.INCDIRS = \
/usr/include/c++/12 \
/usr/include/x86_64-linux-gnu/c++/12 \
/usr/include/c++/12/backward \
/usr/lib/llvm-14/lib/clang/14.0.0/include \
/usr/local/include \
/usr/include/x86_64-linux-gnu \
/usr/include
QMAKE_CXX.LIBDIRS = \
/usr/lib/llvm-14/lib/clang/14.0.0 \
/usr/lib/gcc/x86_64-linux-gnu/12 \
/usr/lib64 \
/lib/x86_64-linux-gnu \
/lib64 \
/usr/lib/x86_64-linux-gnu \
/usr/lib/llvm-14/lib \
/lib \
/usr/lib
66 changes: 40 additions & 26 deletions blank/main.cpp
@@ -23,42 +23,56 @@
#include <iostream>

using namespace std;
using namespace OpenNN;
using namespace opennn;


//Tensor<type, 2> box_plots_to_tensor(const Tensor<BoxPlot, 1>& box_plots)
//{
// const Index columns_number = box_plots.dimension(0);
int main()
{
try
{
cout << "OpenNN. Simple Function Regression Example." << endl;

// Tensor<type, 2> summary(5, columns_number);
srand(static_cast<unsigned>(time(nullptr)));

// for(Index i = 0; i < columns_number; i++)
// {
// const BoxPlot& box_plot = box_plots(i);
// summary(0, i) = box_plot.minimum;
// summary(1, i) = box_plot.first_quartile;
// summary(2, i) = box_plot.median;
// summary(3, i) = box_plot.third_quartile;
// summary(4, i) = box_plot.maximum;
// }
// Data Set

// //todo
// Eigen::array<Index, 2> new_shape = {1, 5 * columns_number};
// Tensor<type, 2> reshaped_summary = summary.reshape(new_shape);
DataSet data_set("/home/artelnics/Escritorio/gyd_copia.csv", ',', true);

// return reshaped_summary;
//}
const Index input_variables_number = data_set.get_input_variables_number();
const Index target_variables_number = data_set.get_target_variables_number();
const Index hidden_neurons_number = 3;

Tensor<Correlation, 2> result = data_set.calculate_input_target_columns_correlations();

int main()
{
try
{
cout << "Blank\n";
// calculate_input_columns_correlations(const bool& calculate_pearson_correlations, const bool& calculate_spearman_correlations)

srand(static_cast<unsigned>(time(nullptr)));
cout << "==============================" << endl;
cout << "result.size() :: " << result.size() << endl;
cout << "==============================" << endl;
result(0).print();
cout << "==============================" << endl;

// Neural Network

NeuralNetwork neural_network(NeuralNetwork::ProjectType::Approximation,
{input_variables_number, hidden_neurons_number, target_variables_number});


// Training Strategy

//TrainingStrategy training_strategy(&neural_network, &data_set);
//training_strategy.set_optimization_method(TrainingStrategy::OptimizationMethod::QUASI_NEWTON_METHOD);
//training_strategy.set_display_period(1000);
//training_strategy.perform_training();

//Model Selection
//GrowingNeurons gn(&training_strategy);
//gn.perform_neurons_selection();

// Save results
//neural_network.save_expression_python("simple_function_regresion.py");

cout << "Bye!" << endl;
cout << "Bye Simple Function Regression" << endl;

return 0;
}
33 changes: 33 additions & 0 deletions examples/.qmake.stash
@@ -0,0 +1,33 @@
QMAKE_CXX.QT_COMPILER_STDCXX = 201402L
QMAKE_CXX.QMAKE_CLANG_MAJOR_VERSION = 14
QMAKE_CXX.QMAKE_CLANG_MINOR_VERSION = 0
QMAKE_CXX.QMAKE_CLANG_PATCH_VERSION = 0
QMAKE_CXX.QMAKE_GCC_MAJOR_VERSION = 4
QMAKE_CXX.QMAKE_GCC_MINOR_VERSION = 2
QMAKE_CXX.QMAKE_GCC_PATCH_VERSION = 1
QMAKE_CXX.COMPILER_MACROS = \
QT_COMPILER_STDCXX \
QMAKE_CLANG_MAJOR_VERSION \
QMAKE_CLANG_MINOR_VERSION \
QMAKE_CLANG_PATCH_VERSION \
QMAKE_GCC_MAJOR_VERSION \
QMAKE_GCC_MINOR_VERSION \
QMAKE_GCC_PATCH_VERSION
QMAKE_CXX.INCDIRS = \
/usr/include/c++/12 \
/usr/include/x86_64-linux-gnu/c++/12 \
/usr/include/c++/12/backward \
/usr/lib/llvm-14/lib/clang/14.0.0/include \
/usr/local/include \
/usr/include/x86_64-linux-gnu \
/usr/include
QMAKE_CXX.LIBDIRS = \
/usr/lib/llvm-14/lib/clang/14.0.0 \
/usr/lib/gcc/x86_64-linux-gnu/12 \
/usr/lib64 \
/lib/x86_64-linux-gnu \
/lib64 \
/usr/lib/x86_64-linux-gnu \
/usr/lib/llvm-14/lib \
/lib \
/usr/lib