An end-to-end deep learning solution for automated kidney disease classification from CT scan images
Features • Demo • Installation • Usage • Architecture • Deployment
- Introduction
- Use Cases
- Features
- Demo
- Tech Stack
- Project Architecture
- Dataset
- Installation & Setup
- Usage
- ML Pipeline Stages
- Model Training
- Model Evaluation
- API Documentation
- Docker Deployment
- CI/CD Pipeline
- AWS Deployment
- Project Structure
- Configuration
- Contributing
- License
- Contact
The Kidney Disease Classification project is a production-ready, end-to-end deep learning application that automatically classifies kidney CT scan images into two classes: Normal and Tumor. It leverages transfer learning with the VGG16 architecture, implements MLOps best practices using DVC and MLflow, and provides a user-friendly web interface for real-time predictions.
This system aims to assist medical professionals in early detection and diagnosis of kidney diseases by providing accurate, fast, and automated analysis of CT scan images.
- Transfer Learning: Utilizes pre-trained VGG16 model fine-tuned on kidney CT scans
- MLOps Integration: Complete experiment tracking with MLflow and version control with DVC
- Production Ready: Dockerized application with CI/CD pipeline using GitHub Actions
- Cloud Deployment: Automated deployment on AWS EC2 with ECR for container registry
- Modular Design: Clean, maintainable code following software engineering best practices
- REST API: Flask-based API for easy integration with other systems
- ✅ Transfer Learning with VGG16: Pre-trained ImageNet weights fine-tuned for kidney disease classification
- ✅ Data Augmentation: Robust training with image augmentation techniques
- ✅ Automated Training Pipeline: End-to-end automated ML pipeline with DVC
- ✅ Experiment Tracking: Complete experiment tracking and model versioning with MLflow
- ✅ Model Evaluation: Comprehensive evaluation metrics and performance monitoring
- ✅ Modular Architecture: Clean separation of concerns with components, entities, and pipelines
- ✅ Configuration Management: YAML-based configuration for easy parameter tuning
- ✅ Logging System: Comprehensive logging for debugging and monitoring
- ✅ Error Handling: Robust error handling and validation
- ✅ Type Hints: Full type annotations for better code quality
- ✅ REST API: Flask-based RESTful API for predictions
- ✅ Web Interface: User-friendly HTML interface for image upload and prediction
- ✅ CORS Support: Cross-origin resource sharing enabled
- ✅ Real-time Predictions: Instant classification results
- ✅ Base64 Image Support: Direct image upload via API
- ✅ Dockerization: Complete Docker containerization for consistent deployments
- ✅ CI/CD Pipeline: Automated testing, building, and deployment with GitHub Actions
- ✅ AWS Integration: Deployment on AWS EC2 with ECR container registry
- ✅ Version Control: Git-based version control with DVC for data and models
- ✅ Environment Management: Secure environment variable management
The application provides an intuitive web interface where users can:
- Upload kidney CT scan images
- Get instant classification results
- View confidence scores
- Access training functionality
# Health check
curl http://localhost:8080/
# Trigger training
curl -X POST http://localhost:8080/train
# Make prediction
curl -X POST http://localhost:8080/predict \
-H "Content-Type: application/json" \
-d '{"image": "base64_encoded_image_string"}'

| Technology | Version | Purpose |
|---|---|---|
| Python | 3.8 | Core programming language |
| TensorFlow | 2.12.0 | Deep learning framework |
| Flask | Latest | Web framework for API |
| DVC | Latest | Data version control |
| MLflow | 2.2.2 | Experiment tracking & model registry |
- TensorFlow/Keras: Model building and training
- NumPy: Numerical computations
- Pandas: Data manipulation
- Matplotlib/Seaborn: Visualization
- SciPy: Scientific computing
- Docker: Containerization
- GitHub Actions: CI/CD automation
- AWS EC2: Cloud compute
- AWS ECR: Container registry
- AWS CLI: AWS management
- python-box: Configuration management
- PyYAML: YAML parsing
- python-dotenv: Environment management
- tqdm: Progress bars
- gdown: Google Drive downloads
- Flask-CORS: Cross-origin support
┌─────────────────────────────────────────────────────────────────┐
│                         User Interface                          │
│                    (Web App / API Client)                       │
└───────────────────────────────┬─────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                         Flask REST API                          │
│                      (app.py - Port 8080)                       │
└───────────────────────────────┬─────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                      Prediction Pipeline                        │
│                  (Real-time Inference Engine)                   │
└───────────────────────────────┬─────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                       Trained ML Model                          │
│                   (VGG16 Transfer Learning)                     │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│                       Training Pipeline                         │
├─────────────────────────────────────────────────────────────────┤
│  Stage 1: Data Ingestion    →  Download & Extract Dataset       │
│  Stage 2: Base Model Prep   →  Load & Configure VGG16           │
│  Stage 3: Model Training    →  Fine-tune on Kidney Data         │
│  Stage 4: Model Evaluation  →  Validate & Log to MLflow         │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│                      MLOps Infrastructure                       │
├─────────────────────────────────────────────────────────────────┤
│  DVC:            Data & Model Versioning                        │
│  MLflow:         Experiment Tracking & Model Registry           │
│  GitHub Actions: CI/CD Automation                               │
│  Docker:         Containerization                               │
│  AWS:            Cloud Deployment (EC2 + ECR)                   │
└─────────────────────────────────────────────────────────────────┘
src/cnnClassifier/
├── components/          # Core ML components
│   ├── data_ingestion.py
│   ├── prepare_base_model.py
│   ├── model_training.py
│   └── model_evaluation_mlflow.py
├── config/              # Configuration management
├── entity/              # Data classes and entities
├── pipeline/            # Training and prediction pipelines
├── utils/               # Utility functions
└── constants/           # Constants and paths
The project uses kidney CT scan images categorized into different classes. The dataset is automatically downloaded from Google Drive during the data ingestion stage.
kidney-ct-scan-image/
├── Normal/
│   ├── image1.jpg
│   ├── image2.jpg
│   └── ...
└── Tumor/
    ├── image1.jpg
    ├── image2.jpg
    └── ...
- Image Size: 224x224x3 (RGB)
- Classes: 2 (Normal, Tumor)
- Format: JPEG images
- Source: Medical CT scans
Before you begin, ensure you have the following installed:
- Python 3.8 or higher
- Git for version control
- pip package manager
- virtualenv or conda for environment management
- Docker (optional, for containerized deployment)
- AWS CLI (optional, for cloud deployment)
# Clone the repository
git clone https://github.com/Adiaparmar/Kidney-Disease-Classification.git
# Navigate to project directory
cd Kidney-Disease-Classification

Using virtualenv:
# Create virtual environment
python -m venv venv
# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

Using conda:
# Create conda environment
conda create -n kidney-classifier python=3.8 -y
# Activate environment
conda activate kidney-classifier

# Upgrade pip
python -m pip install --upgrade pip
# Install required packages
pip install -r requirements.txt

Create a .env file in the project root:
# .env file
MLFLOW_TRACKING_URI=https://dagshub.com/Adiaparmar/Kidney-Disease-Classification.mlflow
MLFLOW_TRACKING_USERNAME=your_username
MLFLOW_TRACKING_PASSWORD=your_password
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_REGION=us-east-1

Note: Never commit the .env file to version control. It's already included in .gitignore.
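At runtime, variables like these are typically loaded with python-dotenv (listed under utilities). The behavior can be sketched in plain Python — note this is a simplified illustration, not the library's exact semantics (real .env files also support quoting and interpolation):

```python
import os

def load_dotenv_minimal(text: str) -> dict:
    """Parse KEY=VALUE lines (skipping blanks and # comments) into os.environ.

    Simplified sketch of what python-dotenv does.
    """
    loaded = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        loaded[key.strip()] = value.strip()
        # Do not clobber variables already set in the real environment
        os.environ.setdefault(key.strip(), value.strip())
    return loaded

env = load_dotenv_minimal(
    "# .env file\nMLFLOW_TRACKING_USERNAME=your_username\nAWS_REGION=us-east-1\n"
)
print(env["AWS_REGION"])  # us-east-1
```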
DVC (Data Version Control) is used for managing datasets and model versions.
# DVC is already initialized in this project
# To verify DVC installation
dvc version

If you want to use remote storage for DVC:
# Add remote storage (e.g., AWS S3)
dvc remote add -d myremote s3://your-bucket-name/path
# Configure AWS credentials
dvc remote modify myremote access_key_id 'your-access-key'
dvc remote modify myremote secret_access_key 'your-secret-key'

# Pull data and models from remote storage
dvc pull

# Run the entire ML pipeline
dvc repro
# Run specific stage
dvc repro -s data_ingestion
dvc repro -s prepare_base_model
dvc repro -s training
dvc repro -s evaluation

# Check pipeline status
dvc status
# Show pipeline DAG
dvc dag
# Track new data
dvc add data/new_dataset
# Push changes to remote
dvc push
# View metrics
dvc metrics show
# Compare experiments
dvc metrics diff

MLflow is used for experiment tracking, model versioning, and model registry.
# Start MLflow UI locally
mlflow ui
# Access at http://localhost:5000

This project uses DagsHub for remote MLflow tracking:
1. Create DagsHub Account
   - Go to dagshub.com
   - Sign up for a free account

2. Create Repository
   - Create a new repository or connect existing GitHub repo
   - Enable MLflow tracking

3. Configure Credentials
   - Get your tracking URI from DagsHub
   - Add credentials to .env file:

   MLFLOW_TRACKING_URI=https://dagshub.com/username/repo.mlflow
   MLFLOW_TRACKING_USERNAME=your_username
   MLFLOW_TRACKING_PASSWORD=your_token

4. Verify Connection

   # Run evaluation to test MLflow logging
   python src/cnnClassifier/pipeline/stage_04_model_evaluation_mlflow.py
- Experiment Tracking: Log parameters, metrics, and artifacts
- Model Registry: Version and manage trained models
- Artifact Storage: Store model files and plots
- Metric Visualization: Compare experiments and visualize metrics
# View experiments
mlflow experiments list
# Search runs
mlflow runs list --experiment-id 0
# Serve model
mlflow models serve -m "models:/kidney-classifier/Production" -p 5001
# Compare runs
mlflow ui --backend-store-uri ./mlruns

Windows:
# Download and install from AWS website
# Or use chocolatey
choco install awscli

macOS:
brew install awscli

Linux:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Configure AWS CLI
aws configure
# Enter your credentials when prompted:
# AWS Access Key ID: your_access_key
# AWS Secret Access Key: your_secret_key
# Default region name: us-east-1
# Default output format: json

# Test AWS connection
aws sts get-caller-identity
# List S3 buckets (if you have any)
aws s3 ls

Create ECR Repository:
# Create ECR repository for Docker images
aws ecr create-repository --repository-name kidney-classifier --region us-east-1

Create EC2 Instance:

- Go to AWS Console → EC2
- Launch Instance
- Choose Ubuntu Server 22.04 LTS
- Instance type: t2.medium or higher
- Configure security group:
- Allow SSH (port 22)
- Allow HTTP (port 80)
- Allow Custom TCP (port 8080)
- Create or select key pair
- Launch instance
Setup EC2 Instance:
# SSH into EC2 instance
ssh -i your-key.pem ubuntu@your-ec2-public-ip
# Update system
sudo apt-get update
sudo apt-get upgrade -y
# Install Docker
sudo apt-get install docker.io -y
sudo usermod -aG docker ubuntu
newgrp docker
# Install AWS CLI
sudo apt-get install awscli -y
# Configure as self-hosted runner (see CI/CD section)

Add the following secrets to your GitHub repository:

- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- AWS_REGION
- ECR_REPOSITORY_NAME
- AWS_ECR_LOGIN_URI

Go to: Repository → Settings → Secrets and variables → Actions → New repository secret
Option A: Using main.py (All stages)
# Run complete training pipeline
python main.py

Option B: Using DVC
# Run pipeline with DVC
dvc repro

Option C: Individual stages
# Stage 1: Data Ingestion
python src/cnnClassifier/pipeline/stage_01_data_ingestion.py
# Stage 2: Prepare Base Model
python src/cnnClassifier/pipeline/stage_02_prepare_base_model.py
# Stage 3: Model Training
python src/cnnClassifier/pipeline/stage_03_model_training.py
# Stage 4: Model Evaluation
python src/cnnClassifier/pipeline/stage_04_model_evaluation_mlflow.py

# Run Flask application
python app.py
# Application will start at http://localhost:8080

Using Web Interface:

- Open a browser and go to http://localhost:8080
- Upload a kidney CT scan image
- Click "Predict"
- View classification results
Using API:
import requests
import base64
# Read and encode image
with open("test_image.jpg", "rb") as f:
image_data = base64.b64encode(f.read()).decode()
# Make prediction request
response = requests.post(
"http://localhost:8080/predict",
json={"image": image_data}
)
print(response.json())

Using cURL:
# Encode image to base64
base64 test_image.jpg > encoded_image.txt
# Make prediction
curl -X POST http://localhost:8080/predict \
-H "Content-Type: application/json" \
-d "{\"image\": \"$(cat encoded_image.txt)\"}"

Purpose: Download and prepare the dataset
Process:
- Downloads dataset from Google Drive
- Extracts ZIP file
- Organizes data into training structure
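The ingestion stage boils down to a download plus an unzip; the extraction half can be sketched with the standard library. In the real component gdown handles the Google Drive download — here a tiny stand-in archive is built in memory just to keep the sketch self-contained:

```python
import io
import tempfile
import zipfile
from pathlib import Path
from typing import List

def extract_dataset(zip_bytes: bytes, unzip_dir: Path) -> List[str]:
    """Extract a downloaded dataset archive into unzip_dir
    (mirrors the unzip step of the data ingestion component)."""
    unzip_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        zf.extractall(unzip_dir)
        return zf.namelist()

# Stand-in for the data.zip that gdown would download
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("kidney-ct-scan-image/Normal/image1.jpg", b"fake")
    zf.writestr("kidney-ct-scan-image/Tumor/image1.jpg", b"fake")

with tempfile.TemporaryDirectory() as tmp:
    names = extract_dataset(buf.getvalue(), Path(tmp))
    extracted = sorted(p.name for p in Path(tmp).rglob("*.jpg"))

print(extracted)  # ['image1.jpg', 'image1.jpg']
```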
Configuration (config/config.yaml):
data_ingestion:
root_dir: artifacts/data_ingestion
source_url: https://drive.google.com/file/d/...
local_data_file: artifacts/data_ingestion/data.zip
unzip_dir: artifacts/data_ingestion

Run:
python src/cnnClassifier/pipeline/stage_01_data_ingestion.py

Purpose: Load and configure VGG16 base model
Process:
- Loads pre-trained VGG16 with ImageNet weights
- Removes top layers
- Adds custom classification layers
- Freezes base model layers
- Compiles model with optimizer
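These steps can be sketched with the standard Keras API (this is not the project's exact prepare_base_model.py; `weights=None` is used here only to skip the large ImageNet download — the real pipeline passes `WEIGHTS: imagenet` from params.yaml):

```python
import tensorflow as tf

IMAGE_SIZE = (224, 224, 3)
CLASSES = 2
LEARNING_RATE = 0.02

# Load VGG16 without its ImageNet classification head
base = tf.keras.applications.VGG16(
    include_top=False,   # INCLUDE_TOP: False in params.yaml
    input_shape=IMAGE_SIZE,
    weights=None,        # real pipeline: weights="imagenet"
)
base.trainable = False   # freeze the convolutional layers

# Add a custom classification head for the two kidney classes
x = tf.keras.layers.Flatten()(base.output)
out = tf.keras.layers.Dense(CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs=base.input, outputs=out)

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=LEARNING_RATE),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
print(model.output_shape)  # (None, 2)
```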
Configuration (params.yaml):
IMAGE_SIZE: [224, 224, 3]
INCLUDE_TOP: False
CLASSES: 2
WEIGHTS: imagenet
LEARNING_RATE: 0.02

Run:
python src/cnnClassifier/pipeline/stage_02_prepare_base_model.py

Purpose: Train the model on kidney CT scan data
Process:
- Loads prepared base model
- Sets up data generators with augmentation
- Trains model with specified epochs
- Saves trained model
Configuration (params.yaml):
AUGMENTATION: True
EPOCHS: 2
BATCH_SIZE: 16

Features:
- Data augmentation (rotation, flip, zoom)
- Batch processing
- Progress tracking
- Model checkpointing
Run:
python src/cnnClassifier/pipeline/stage_03_model_training.py

Purpose: Evaluate model and log metrics to MLflow
Process:
- Loads trained model
- Evaluates on validation set
- Calculates metrics (loss, accuracy)
- Logs to MLflow
- Saves metrics to JSON
Metrics Tracked:
- Loss
- Accuracy
- Model parameters
- Training configuration
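The save-to-JSON step can be sketched with the standard library (the numbers below are illustrative, not real results; in the real component the same values are also sent to the tracking server via mlflow.log_metrics):

```python
import json
import tempfile
from pathlib import Path

def save_scores(loss: float, accuracy: float, path: Path) -> None:
    """Persist evaluation metrics as scores.json."""
    path.write_text(json.dumps({"loss": loss, "accuracy": accuracy}, indent=4))

with tempfile.TemporaryDirectory() as tmp:
    scores_path = Path(tmp) / "scores.json"
    save_scores(loss=0.25, accuracy=0.93, path=scores_path)  # illustrative values
    scores = json.loads(scores_path.read_text())

print(scores)  # {'loss': 0.25, 'accuracy': 0.93}
```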
Run:
python src/cnnClassifier/pipeline/stage_04_model_evaluation_mlflow.py

View Results:
# View metrics file
cat scores.json
# View in MLflow UI
mlflow ui

Edit params.yaml to customize training:
# Image preprocessing
IMAGE_SIZE: [224, 224, 3] # Input image dimensions
# Data augmentation
AUGMENTATION: True # Enable/disable augmentation
# Training parameters
BATCH_SIZE: 16 # Batch size for training
EPOCHS: 2 # Number of training epochs
LEARNING_RATE: 0.02 # Learning rate for optimizer
# Model architecture
INCLUDE_TOP: False # Use VGG16 without top layers
CLASSES: 2 # Number of output classes
WEIGHTS: imagenet # Pre-trained weights# Full training pipeline
python main.py
# Monitor training progress
# Check logs/running_logs.log for detailed logs

- Increase Epochs: For better accuracy, increase epochs to 20-50
- Adjust Batch Size: Reduce if running out of memory
- Learning Rate: Tune for optimal convergence
- Data Augmentation: Enable for better generalization
The model is evaluated using:
- Loss: Categorical cross-entropy loss
- Accuracy: Classification accuracy on validation set
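For reference, categorical cross-entropy with one-hot labels reduces to the negative log-probability the model assigns to the true class; a tiny pure-Python check:

```python
import math

def categorical_crossentropy(y_true, y_pred):
    """Cross-entropy for one sample: -sum(t * log(p)) over the classes."""
    eps = 1e-12  # guard against log(0)
    return -sum(t * math.log(max(p, eps)) for t, p in zip(y_true, y_pred))

# One-hot label "Tumor" vs. a prediction of 90% Tumor confidence
loss = categorical_crossentropy([0.0, 1.0], [0.1, 0.9])
print(round(loss, 4))  # 0.1054, i.e. -ln(0.9)
```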
1. Scores JSON:
cat scores.json

2. MLflow UI:
mlflow ui
# Open http://localhost:5000

3. DagsHub (if configured): Visit your DagsHub repository to view experiments
Typical performance metrics:
- Training Accuracy: ~95%+
- Validation Accuracy: ~90%+
- Loss: <0.3
GET /
Description: Renders the web interface
Response: HTML page
POST /train
GET /train
Description: Triggers the complete training pipeline
Response:

"Training done successfully"

Example:
curl -X POST http://localhost:8080/train

POST /predict
Description: Classifies uploaded kidney CT scan image
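The endpoint's request/response contract can be sketched in plain Python without the web framework (the classification itself is stubbed here — this is not the project's actual app.py):

```python
import base64
import json

def handle_predict(request_body: str) -> str:
    """Sketch of the /predict contract: decode the base64 image from the
    JSON body, run the classifier, return a JSON prediction."""
    payload = json.loads(request_body)
    image_bytes = base64.b64decode(payload["image"])
    # The real app writes image_bytes to disk and invokes the prediction
    # pipeline; this label is a stand-in for model inference.
    label = "Normal" if image_bytes else "Tumor"
    return json.dumps({"prediction": label})

body = json.dumps({"image": base64.b64encode(b"fake-scan-bytes").decode()})
print(handle_predict(body))  # {"prediction": "Normal"}
```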
Request Body:
{
"image": "base64_encoded_image_string"
}

Response:
{
"prediction": "Normal" // or "Tumor"
}

Example:
import requests
import base64
# Encode image
with open("scan.jpg", "rb") as f:
img_base64 = base64.b64encode(f.read()).decode()
# Make request
response = requests.post(
"http://localhost:8080/predict",
json={"image": img_base64}
)
print(response.json())

# Build image
docker build -t kidney-classifier:latest .
# Verify image
docker images | grep kidney-classifier

# Run container
docker run -d -p 8080:8080 \
-e AWS_ACCESS_KEY_ID=your_key \
-e AWS_SECRET_ACCESS_KEY=your_secret \
-e AWS_REGION=us-east-1 \
--name kidney-app \
kidney-classifier:latest
# Check container status
docker ps
# View logs
docker logs kidney-app
# Stop container
docker stop kidney-app
# Remove container
docker rm kidney-app

Create docker-compose.yml:
version: '3.8'
services:
app:
build: .
ports:
- "8080:8080"
environment:
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- AWS_REGION=${AWS_REGION}
volumes:
- ./model:/app/model
- ./artifacts:/app/artifacts

Run with:
docker-compose up -d

The project uses GitHub Actions for automated CI/CD with three main jobs:
Continuous Integration:
- Checkout code
- Run linting
- Execute unit tests

Build & Push:
- Build Docker image
- Tag image
- Push to AWS ECR

Continuous Deployment:
- Pull latest image from ECR
- Deploy to EC2 instance
- Run container
- Clean up old images
Add these secrets in GitHub repository settings:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_REGION
ECR_REPOSITORY_NAME
AWS_ECR_LOGIN_URI
On your EC2 instance:
# Navigate to repository settings → Actions → Runners → New self-hosted runner
# Download runner
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.311.0.tar.gz -L \
https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-linux-x64-2.311.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.311.0.tar.gz
# Configure runner
./config.sh --url https://github.com/Adiaparmar/Kidney-Disease-Classification \
--token YOUR_TOKEN
# Install and start service
sudo ./svc.sh install
sudo ./svc.sh start

# Push to main branch
git add .
git commit -m "Update application"
git push origin main
# Workflow will automatically trigger

Located at .github/workflows/main.yaml
Key features:
- Triggers on push to main branch
- Ignores README.md changes
- Uses AWS credentials from secrets
- Builds and pushes to ECR
- Deploys to self-hosted EC2 runner
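Sketched as a workflow file, the pipeline above typically looks something like this (job names and secret wiring are assumptions; the authoritative version lives in .github/workflows/main.yaml):

```yaml
name: CI/CD

on:
  push:
    branches: [main]
    paths-ignore: ["README.md"]

jobs:
  integration:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Lint and test
        run: echo "run linters and unit tests here"

  build-and-push:
    needs: integration
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Build, tag, and push image to ECR
        run: |
          docker build -t ${{ secrets.ECR_REPOSITORY_NAME }} .
          # tag and push to ${{ secrets.AWS_ECR_LOGIN_URI }} ...

  deploy:
    needs: build-and-push
    runs-on: self-hosted   # the EC2 runner configured below
    steps:
      - name: Pull latest image and restart container
        run: echo "docker pull / docker run against the new image"
```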
GitHub → GitHub Actions → AWS ECR → AWS EC2
# Create repository
aws ecr create-repository \
--repository-name kidney-classifier \
--region us-east-1
# Note the repository URI

Instance Specifications:
- AMI: Ubuntu Server 22.04 LTS
- Instance Type: t2.medium (minimum)
- Storage: 20 GB
- Security Group: Allow ports 22, 80, 8080
User Data Script (optional):
#!/bin/bash
apt-get update
apt-get install -y docker.io awscli
usermod -aG docker ubuntu
systemctl enable docker
systemctl start docker

# SSH into instance
ssh -i your-key.pem ubuntu@ec2-public-ip
# Install Docker
sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker ubuntu
newgrp docker
# Install AWS CLI
sudo apt-get install -y awscli
# Configure AWS
aws configure

Follow the self-hosted runner setup instructions from the GitHub Actions section.
Manual Deployment:
# Login to ECR
aws ecr get-login-password --region us-east-1 | \
docker login --username AWS --password-stdin your-ecr-uri
# Pull image
docker pull your-ecr-uri/kidney-classifier:latest
# Run container
docker run -d -p 8080:8080 \
-e AWS_ACCESS_KEY_ID=your_key \
-e AWS_SECRET_ACCESS_KEY=your_secret \
-e AWS_REGION=us-east-1 \
--name kidney-app \
your-ecr-uri/kidney-classifier:latest

Automated Deployment: Push to the main branch and GitHub Actions will handle deployment.
http://your-ec2-public-ip:8080
# Check container status
docker ps
# View logs
docker logs kidney-app
# Restart container
docker restart kidney-app
# Update application
docker pull your-ecr-uri/kidney-classifier:latest
docker stop kidney-app
docker rm kidney-app
docker run -d -p 8080:8080 --name kidney-app your-ecr-uri/kidney-classifier:latest
# Clean up
docker system prune -f

Kidney-Disease-Classification/
│
├── .github/
│   └── workflows/
│       └── main.yaml                  # CI/CD pipeline configuration
│
├── artifacts/                         # Generated artifacts (gitignored)
│   ├── data_ingestion/                # Downloaded and extracted data
│   ├── prepare_base_model/            # Base model files
│   └── training/                      # Trained model files
│
├── config/
│   └── config.yaml                    # Main configuration file
│
├── logs/
│   └── running_logs.log               # Application logs
│
├── mlruns/                            # MLflow experiment tracking data
│
├── model/                             # Final production model
│
├── research/                          # Jupyter notebooks for experimentation
│   ├── 01_data_ingestion.ipynb
│   ├── 02_prepare_base_model.ipynb
│   ├── 03_model_training.ipynb
│   └── 04_model_evaluation.ipynb
│
├── src/
│   └── cnnClassifier/
│       ├── __init__.py
│       ├── components/                # Core ML components
│       │   ├── data_ingestion.py
│       │   ├── prepare_base_model.py
│       │   ├── model_training.py
│       │   └── model_evaluation_mlflow.py
│       ├── config/                    # Configuration management
│       │   └── configuration.py
│       ├── constants/                 # Constants and paths
│       │   └── __init__.py
│       ├── entity/                    # Data classes
│       │   └── config_entity.py
│       ├── pipeline/                  # Training and prediction pipelines
│       │   ├── stage_01_data_ingestion.py
│       │   ├── stage_02_prepare_base_model.py
│       │   ├── stage_03_model_training.py
│       │   ├── stage_04_model_evaluation_mlflow.py
│       │   └── prediction.py
│       └── utils/                     # Utility functions
│           └── common.py
│
├── templates/
│   └── index.html                     # Web interface
│
├── .dvcignore                         # DVC ignore file
├── .env                               # Environment variables (gitignored)
├── .gitignore                         # Git ignore file
├── app.py                             # Flask application
├── Dockerfile                         # Docker configuration
├── dvc.lock                           # DVC pipeline lock file
├── dvc.yaml                           # DVC pipeline definition
├── main.py                            # Main training script
├── params.yaml                        # Model parameters
├── requirements.txt                   # Python dependencies
├── scores.json                        # Model evaluation scores
├── setup.py                           # Package setup
└── README.md                          # This file
Main configuration file for pipeline stages:
artifacts_root: artifacts
data_ingestion:
root_dir: artifacts/data_ingestion
source_url: https://drive.google.com/file/d/...
local_data_file: artifacts/data_ingestion/data.zip
unzip_dir: artifacts/data_ingestion
prepare_base_model:
root_dir: artifacts/prepare_base_model
base_model_path: artifacts/prepare_base_model/base_model.h5
updated_base_model_path: artifacts/prepare_base_model/base_model_updated.h5
training:
root_dir: artifacts/training
trained_model_path: artifacts/training/model.h5

Contributions are welcome! Please follow these guidelines:
1. Fork the Repository

   # Click 'Fork' button on GitHub

2. Clone Your Fork

   git clone https://github.com/your-username/Kidney-Disease-Classification.git
   cd Kidney-Disease-Classification

3. Create a Branch

   git checkout -b feature/your-feature-name

4. Make Changes
   - Write clean, documented code
   - Follow existing code style
   - Add tests if applicable

5. Commit Changes

   git add .
   git commit -m "Add: description of your changes"

6. Push to GitHub

   git push origin feature/your-feature-name

7. Create Pull Request
   - Go to GitHub and create a pull request
   - Describe your changes clearly
   - Reference any related issues
- Follow PEP 8 style guide for Python code
- Add docstrings to all functions and classes
- Update documentation for new features
- Ensure all tests pass before submitting PR
- Keep commits atomic and well-described
- 🐛 Bug fixes
- ✨ New features
- 📝 Documentation improvements
- 🧪 Additional tests
- 🎨 UI/UX enhancements
- ⚡ Performance optimizations
This project is licensed under the MIT License - see below for details:
MIT License
Copyright (c) 2024 Adiaparmar
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
Adiaparmar
- Email: adiaparmar@gmail.com
- GitHub: @Adiaparmar
- Repository: Kidney-Disease-Classification
For questions, issues, or suggestions:
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: adiaparmar@gmail.com
- TensorFlow Documentation
- MLflow Documentation
- DVC Documentation
- Flask Documentation
- AWS Documentation
- VGG16 Paper