```bash
# 1. Generate self-signed TLS certificate
mkdir -p certs
openssl req -x509 -newkey rsa:4096 -keyout certs/server.key -out certs/server.crt \
  -days 365 -nodes -subj "/CN=localhost"

# 2. Initialize database (first time only)
ROODB_ROOT_PASSWORD=changeme ./roodb_init --data-dir ./data

# 3. Start server
./roodb

# 4. Connect via MySQL client
mysql -h 127.0.0.1 -P 3307 -u root -p --ssl-mode=REQUIRED
```

```
./roodb --port PORT --data-dir PATH --cert-path PATH --key-path PATH --raft-ca-cert-path PATH
```
| Argument | Default | Description |
|---|---|---|
| port | 3307 | Client connection port |
| data_dir | ./data | Data directory path |
| cert_path | ./certs/server.crt | TLS certificate |
| key_path | ./certs/server.key | TLS private key |
| ca_cert_path | ./certs/ca.crt | CA certificate for Raft mTLS |
RooDB requires TLS for all connections. There is no plaintext mode. The server performs a STARTTLS handshake: initial connection is plaintext only long enough to negotiate TLS upgrade.
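From the client's perspective the upgrade hinges on one bit: the server's initial handshake packet advertises the CLIENT_SSL capability, and the client must request TLS before sending credentials. A minimal sketch of checking that bit, assuming the standard MySQL HandshakeV10 packet layout (nothing RooDB-specific):

```python
import struct

CLIENT_SSL = 0x0800  # capability bit advertised in the initial handshake

def server_supports_tls(payload: bytes) -> bool:
    """Check the CLIENT_SSL bit in a MySQL HandshakeV10 payload."""
    # Layout: protocol version (1 byte), NUL-terminated server version,
    # thread id (4), auth-plugin-data part 1 (8), filler (1),
    # then the lower two bytes of the capability flags.
    version_end = payload.index(b"\x00", 1)
    offset = version_end + 1 + 4 + 8 + 1
    (caps_lower,) = struct.unpack_from("<H", payload, offset)
    return bool(caps_lower & CLIENT_SSL)
```

A client that sees this bit set sends a short SSL-request packet and then performs a normal TLS handshake on the same socket; since RooDB requires TLS, clients must connect with SSL enabled.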
Self-signed certificate generation:
```bash
openssl req -x509 -newkey rsa:4096 \
  -keyout server.key -out server.crt \
  -days 365 -nodes \
  -subj "/CN=roodb.local"
```

For production, use certificates from a trusted CA or your organization's PKI.
RooDB enforces mutual TLS (mTLS) for Raft inter-node communication. Each cluster node must present a certificate signed by a trusted CA. This prevents unauthorized nodes from joining the cluster.
Certificate model:
- Single CA signs all node certificates
- Each node trusts the CA and verifies peer certificates
- Client port (3307): standard TLS (no client cert required)
- Raft port (4307): mTLS (client cert required, verified against CA)
Generate CA and node certificates for production:
```bash
# 1. Generate CA key and certificate
openssl genrsa -out ca.key 4096
openssl req -new -x509 -key ca.key -out ca.crt -days 365 -subj "/CN=RooDB CA"

# 2. Generate node key and CSR
openssl genrsa -out node1.key 4096
openssl req -new -key node1.key -out node1.csr -subj "/CN=roodb-node1"

# 3. Sign node certificate with CA
openssl x509 -req -in node1.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out node1.crt -days 365 -extfile <(printf "subjectAltName=DNS:localhost,IP:127.0.0.1")

# Repeat steps 2-3 for node2, node3, etc.
```

Starting nodes with mTLS:

```bash
# Each node uses its own cert/key, same CA
./roodb --port 3307 --data-dir ./data1 --cert-path ./node1.crt --key-path ./node1.key --raft-ca-cert-path ./ca.crt
./roodb --port 3308 --data-dir ./data2 --cert-path ./node2.crt --key-path ./node2.key --raft-ca-cert-path ./ca.crt
./roodb --port 3309 --data-dir ./data3 --cert-path ./node3.crt --key-path ./node3.key --raft-ca-cert-path ./ca.crt
```

Testing mTLS rejection: a node presenting a certificate signed by a different CA will be rejected by the cluster. This protects against rogue nodes attempting to join.
| Port | Purpose |
|---|---|
| 3307 (default) | Client connections (MySQL protocol over TLS) |
| 4307 (port + 1000) | Raft RPC (inter-node replication) |
| Variable | Description |
|---|---|
| `ROODB_ROOT_PASSWORD` | Set root password on first boot |
| `ROODB_ROOT_PASSWORD_FILE` | Read password from file (Docker secrets) |

Priority: `ROODB_ROOT_PASSWORD` takes precedence over `ROODB_ROOT_PASSWORD_FILE`.
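The precedence rule can be expressed as a small resolver. This is an illustrative helper, not RooDB's actual implementation; stripping the trailing newline from secret files is an assumption about file handling:

```python
def resolve_root_password(env: dict):
    """Resolve the root password with the documented precedence:
    ROODB_ROOT_PASSWORD wins; ROODB_ROOT_PASSWORD_FILE is the fallback."""
    direct = env.get("ROODB_ROOT_PASSWORD")
    if direct:
        return direct
    path = env.get("ROODB_ROOT_PASSWORD_FILE")
    if path:
        with open(path) as f:
            return f.read().strip()  # assumed: trailing newline ignored
    return None
```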
```
data/
├── sstables/        # LSM SSTable files
│   └── *.sst        # Individual SSTable files
├── manifest.json    # SSTable metadata (critical for recovery)
└── [raft state]     # Raft log entries and vote state (LSM-backed)
```
Important: The manifest.json file tracks all SSTables. Losing it while keeping SSTables results in data loss. Always backup the entire data directory as a unit.
Single-node deployment is production-ready and fully supported via CLI:
```bash
./roodb --port 3307 --data-dir ./data --cert-path ./certs/server.crt --key-path ./certs/server.key
```

The server automatically bootstraps as a single-node Raft cluster, electing itself as leader. All reads and writes are processed locally.
Use single-node when:
- Development and testing
- Small deployments where HA is not required
- Simplicity is preferred over fault tolerance
Multi-node clusters provide high availability through Raft consensus. Currently, cluster deployment requires programmatic setup (CLI support planned).
Cluster requirements:
- 3 nodes minimum (tolerates 1 failure)
- 5 nodes for higher availability (tolerates 2 failures)
- All nodes need TLS certificates signed by the same CA (mTLS)
- Shared CA certificate on all nodes for peer verification
- Network connectivity between all nodes on Raft RPC port
Bootstrap procedure (programmatic):
```rust
// 1. Create RaftNode for each member
let mut node1 = RaftNode::new(1, addr1, tls_config.clone(), storage1, catalog1).await?;
let mut node2 = RaftNode::new(2, addr2, tls_config.clone(), storage2, catalog2).await?;
let mut node3 = RaftNode::new(3, addr3, tls_config.clone(), storage3, catalog3).await?;

// 2. Register all peers on all nodes
node1.add_peer(2, addr2);
node1.add_peer(3, addr3);
node2.add_peer(1, addr1);
node2.add_peer(3, addr3);
node3.add_peer(1, addr1);
node3.add_peer(2, addr2);

// 3. Start RPC servers on all nodes
node1.start_rpc_server().await?;
node2.start_rpc_server().await?;
node3.start_rpc_server().await?;

// 4. Bootstrap from ONE node with full membership
let members = vec![(1, addr1), (2, addr2), (3, addr3)];
node1.bootstrap_cluster(members).await?;

// 5. Wait for leader election (~500-1000ms)
tokio::time::sleep(Duration::from_millis(1000)).await;
```

See tests/raft_cluster.rs for a complete working example.
Cluster behavior:
- Writes must go to leader (followers return "not leader" error)
- Reads served from any node (local LSM)
- Leader election: 150-300ms timeout, 50ms heartbeat
- Automatic failover when leader fails (if quorum remains)
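A client can exploit the first behavior for leader discovery: try nodes in turn and treat a "not leader" rejection as a cue to move on. A hedged sketch, where the `try_write` callback and its `(ok, err)` return shape are hypothetical, not part of RooDB's API:

```python
def route_write(nodes, try_write):
    """Rotate through nodes until one accepts the write (i.e., is leader).

    try_write(node) -> (ok, err) is a caller-supplied callback that
    issues the write and reports the failure reason; a placeholder here.
    """
    for node in nodes:
        ok, err = try_write(node)
        if ok:
            return node
        if err != "not leader":
            raise RuntimeError(f"{node}: {err}")
    raise RuntimeError("no node accepted the write; quorum may be lost")
```

In practice, clients would cache the discovered leader and fall back to rotation only after a failed write.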
Database initialization is handled by the separate roodb_init binary. The server (roodb) requires an initialized database and will exit with an error if the database has not been initialized.
```
roodb_init --data-dir PATH
```
| Argument | Default | Description |
|---|---|---|
| data_dir | ./data | Data directory path |
Exit codes:
- `0` - Success (initialized or already initialized; idempotent)
- `2` - No password configured
- `3` - Storage error
Set the root password via environment variable:
| Variable | Description |
|---|---|
| `ROODB_ROOT_PASSWORD` | Set root password directly |
| `ROODB_ROOT_PASSWORD_FILE` | Read password from file (Docker secrets) |

Priority: `ROODB_ROOT_PASSWORD` takes precedence over `ROODB_ROOT_PASSWORD_FILE`.
Bare metal:
```bash
ROODB_ROOT_PASSWORD=mysecretpassword ./roodb_init --data-dir ./data
./roodb --port 3307 --data-dir ./data --cert-path ./certs/server.crt --key-path ./certs/server.key
```

Container deployment:

```bash
# Initialize (run once or as idempotent entrypoint)
docker run -e ROODB_ROOT_PASSWORD=mysecretpassword -v ./data:/data roodb roodb_init --data-dir /data

# Start server
docker run -v ./data:/data roodb roodb --port 3307 --data-dir /data --cert-path /certs/server.crt --key-path /certs/server.key
```

Docker secrets:

```bash
docker run -e ROODB_ROOT_PASSWORD_FILE=/run/secrets/db_password \
  -v /path/to/secrets:/run/secrets:ro \
  -v ./data:/data roodb roodb_init --data-dir /data
```

When roodb_init runs on an empty data directory:
- Creates root user (`root@%`) with the configured password
- Grants `ALL PRIVILEGES ON *.*` with `GRANT OPTION` to root
- Writes schema version marker
System tables (system.tables, system.columns, etc.) are created by the catalog at server startup, not by initialization.
```sql
CREATE USER 'username'@'host' IDENTIFIED BY 'password';
```

Host patterns:

- `%` - matches any host
- `localhost` - local connections only
- `192.168.%` - IP wildcard matching
- `192.168.1.100` - specific IP
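These patterns behave like SQL `LIKE` restricted to `%`: a percent sign matches any run of characters and everything else is literal. A small matcher illustrating the semantics (illustrative only; the real matching happens server-side):

```python
import re

def host_matches(pattern: str, host: str) -> bool:
    """MySQL-style host matching: '%' matches any run of characters."""
    # Escape the pattern, then turn each (possibly escaped) '%' into '.*'
    regex = re.escape(pattern).replace(re.escape("%"), ".*")
    return re.fullmatch(regex, host) is not None
```

For example, `192.168.%` admits `192.168.1.100` but rejects `10.0.0.1`.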
Examples:
```sql
-- User accessible from any host
CREATE USER 'app_user'@'%' IDENTIFIED BY 'app_password';

-- Local connections only
CREATE USER 'local_admin'@'localhost' IDENTIFIED BY 'admin_pass';

-- Specific subnet
CREATE USER 'internal'@'192.168.%' IDENTIFIED BY 'internal_pass';
```

Change a password:

```sql
ALTER USER 'username'@'host' IDENTIFIED BY 'new_password';

-- Or use SET PASSWORD
SET PASSWORD FOR 'username'@'host' = 'new_password';
```

Drop a user:

```sql
DROP USER 'username'@'host';

-- Use IF EXISTS to avoid error when user doesn't exist
DROP USER IF EXISTS 'username'@'host';
```

RooDB implements MySQL-compatible privileges at multiple levels.
| Level | Syntax | Description |
|---|---|---|
| Global | `*.*` | All databases, all tables |
| Database | `db_name.*` | All tables in database |
| Table | `db_name.table_name` | Specific table |
| Privilege | Description |
|---|---|
| `ALL PRIVILEGES` | All available privileges |
| `SELECT` | Read data from tables |
| `INSERT` | Insert new rows |
| `UPDATE` | Modify existing rows |
| `DELETE` | Remove rows |
| `CREATE` | Create tables and indexes |
| `DROP` | Drop tables and indexes |
| `ALTER` | Alter table structure |
| `INDEX` | Create and drop indexes |
| `GRANT OPTION` | Grant privileges to others |
```sql
-- Full admin access
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' WITH GRANT OPTION;

-- Read-only access to database
GRANT SELECT ON mydb.* TO 'reader'@'%';

-- Application access to specific table
GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.users TO 'app'@'%';

-- DDL permissions for developers
GRANT CREATE, DROP, ALTER, INDEX ON mydb.* TO 'developer'@'%';
```

Revoke privileges:

```sql
REVOKE DELETE ON mydb.* FROM 'app'@'%';
REVOKE ALL PRIVILEGES ON *.* FROM 'user'@'%';
```

View grants:

```sql
SHOW GRANTS;                          -- Current user
SHOW GRANTS FOR 'username'@'host';    -- Specific user
```

- Use CA-signed certificates in production
- Rotate certificates before expiration
- Store private keys with restricted permissions (600)
- For clusters, each node must use a certificate signed by the shared CA
- Protect the CA private key carefully; it can authorize new cluster members
- Firewall rules: Restrict client port (3307) to application servers
- Raft port: Restrict RPC port (4307) to cluster members only
- No plaintext: RooDB enforces TLS; there is no way to disable it
- Create separate users for each application
- Grant only necessary privileges
- Use database-level grants instead of global (`*.*`) when possible
- Avoid granting `GRANT OPTION` unless necessary
- Use strong passwords (12+ characters, mixed case, numbers, symbols)
- Use `ROODB_ROOT_PASSWORD_FILE` with Docker secrets for container deployments
- Rotate passwords periodically via `ALTER USER`
MySQL clients can use COM_PING for health checks. The server responds with OK if operational.
```bash
mysqladmin -h 127.0.0.1 -P 3307 -u root -p ping
```

Query system variables for configuration and version info:

```sql
SELECT @@version;                -- 8.0.0-RooDB
SELECT @@max_allowed_packet;     -- 16777216
SELECT @@transaction_isolation;  -- REPEATABLE-READ
SELECT @@autocommit;             -- 1
```

Query system tables for metadata (read-only, managed by SQL commands):
| Table | Contents |
|---|---|
| `system.tables` | Table definitions |
| `system.columns` | Column definitions |
| `system.indexes` | Index definitions |
| `system.constraints` | Table constraints |
| `system.users` | User accounts |
| `system.grants` | Privileges |
| `system.roles` | Role definitions |
| `system.role_grants` | Role-to-user assignments |
Note: SHOW STATUS and SHOW PROCESSLIST are not yet implemented.
RooDB uses filesystem-level backups. The data directory contains all state:
```bash
# Stop server for consistent backup
pkill roodb

# Copy entire data directory
cp -r ./data ./backup/data-$(date +%Y%m%d)

# Restart server
./roodb
```

For minimal downtime, you can snapshot while running (point-in-time consistency via LSM design), but stopping ensures full consistency.
Critical files:
- `manifest.json` - SSTable registry; must match SSTable files
- `sstables/*.sst` - Actual data
- Raft state - Cluster membership and log position
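A backup-verification script can cross-check the first two: every SSTable the manifest references should exist on disk. The sketch below assumes the manifest stores filenames under an `sstables` key; the real schema may differ, so treat this as a template:

```python
import json
from pathlib import Path

def missing_sstables(data_dir: str) -> list:
    """Return SSTable files referenced by manifest.json but absent on disk."""
    data = Path(data_dir)
    manifest = json.loads((data / "manifest.json").read_text())
    on_disk = {p.name for p in (data / "sstables").glob("*.sst")}
    # "sstables" is an assumed manifest field, not a confirmed schema
    return sorted(set(manifest["sstables"]) - on_disk)
```

Run it against a restored backup before starting the server; a non-empty result means the backup is incomplete.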
- Stop server (if running)
- Remove or rename corrupted data directory
- Restore data directory from backup
- Start server
```bash
pkill roodb
mv ./data ./data.corrupted
cp -r ./backup/data-20240115 ./data
./roodb
```

Raft handles node failures automatically:
Leader failure:
- Remaining nodes detect missing heartbeats (50ms interval)
- Election triggered after timeout (150-300ms)
- New leader elected if quorum exists
- Clients reconnect and retry to new leader
Follower failure:
- Cluster continues operating normally
- When follower rejoins, it catches up via Raft log replay
- No manual intervention required
Quorum requirements:
| Cluster Size | Quorum | Failures Tolerated |
|---|---|---|
| 3 nodes | 2 | 1 |
| 5 nodes | 3 | 2 |
| 7 nodes | 4 | 3 |
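The table follows directly from majority-quorum arithmetic:

```python
def quorum(n: int) -> int:
    """Majority quorum for an n-node cluster: floor(n/2) + 1."""
    return n // 2 + 1

def failures_tolerated(n: int) -> int:
    """Nodes that can fail while a majority survives."""
    return n - quorum(n)
```

Note that even cluster sizes buy no extra fault tolerance (4 nodes tolerate 1 failure, same as 3), which is why odd sizes are recommended.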
Total cluster loss:
- Restore single node from most recent backup
- Start as single-node cluster
- Verify data integrity
- (Optional) Rebuild multi-node cluster
```bash
# Restore from backup
cp -r ./backup/data-20240115 ./data

# Start single node
./roodb

# Verify data
mysql -h 127.0.0.1 -P 3307 -u root -p -e "SELECT COUNT(*) FROM mydb.mytable"
```

| Error | Cause | Solution |
|---|---|---|
| Connection refused | Server not running or wrong port | Check server process, verify port |
| SSL connection error | TLS required but client not using SSL | Add --ssl-mode=REQUIRED to mysql client |
| Access denied | Wrong username/password/host | Verify credentials and host pattern |
| Error | Cause | Solution |
|---|---|---|
| "ROODB_ROOT_PASSWORD required" | First boot without password env var | Set ROODB_ROOT_PASSWORD environment variable |
| "Failed to load certificate" | Missing or invalid TLS certificate | Generate certificate or check path |
| "Address already in use" | Port conflict | Check for other processes on port 3307 |
| Error | Cause | Solution |
|---|---|---|
| "Not leader" | Write sent to follower | Redirect write to leader node |
| Election not completing | Network partition or fewer than quorum nodes | Check network connectivity, verify quorum |
| Node won't rejoin | Stale state | Clear data directory, rejoin as new node |
| Symptom | Cause | Solution |
|---|---|---|
| Missing tables after restart | Corrupted manifest.json | Restore from backup |
| Slow queries | Pending compaction | Wait for background compaction |
MySQL CLI:

```bash
mysql -h 127.0.0.1 -P 3307 -u root -p --ssl-mode=REQUIRED
```

Rust:

```rust
let opts: Opts = OptsBuilder::default()
    .ip_or_hostname("127.0.0.1")
    .tcp_port(3307)
    .user(Some("root"))
    .pass(Some("password"))
    .ssl_opts(SslOpts::default())
    .into();
```

JDBC:

```
jdbc:mysql://127.0.0.1:3307/mydb?user=root&password=secret&useSSL=true
```
Python:

```python
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1",
    port=3307,
    user="root",
    password="secret",
    ssl_disabled=False
)
```