Compare commits

...

4 Commits

25 changed files with 2120 additions and 81 deletions
+16
@@ -1,5 +1,21 @@
# Changelog
## 2026-05-02 - 2.9.0 - feat(server)
add tenant management, health checks, and database export/import APIs
- adds TypeScript and Rust management commands for creating, listing, deleting, and rotating isolated database tenants
- introduces health reporting with storage, auth, database, collection, and uptime information
- supports exporting and importing single-database snapshots and increases IPC payload size for larger transfers
- adds integration coverage for tenant isolation, password rotation, persistence across restart, and database restore flows
## 2026-04-29 - 2.8.0 - feat(transactions)
add single-node transaction support with session-aware reads, commits, aborts, and transaction metrics
- Buffer insert, update, delete, find, count, distinct, and findAndModify operations inside driver sessions and apply them on commit with write-conflict checks
- Return MongoDB-compatible NoSuchTransaction and WriteConflict errors for transaction lifecycle failures
- Expose authenticated users in connectionStatus and add session, transaction, auth, and oplog data to serverStatus and management metrics
- Document transaction support and extend bridge metrics typings and integration tests accordingly
## 2026-04-29 - 2.7.1 - fix(repo)
no changes to commit
+1 -1
@@ -1,6 +1,6 @@
{
  "name": "@push.rocks/smartdb",
-  "version": "2.7.1",
+  "version": "2.9.0",
  "private": false,
  "description": "A MongoDB-compatible embedded database server with wire protocol support, backed by a high-performance Rust engine.",
  "exports": {
+8 -4
@@ -290,10 +290,12 @@ await client.connect();
TLS is available for TCP listeners. `getConnectionUri()` includes `?tls=true` when TLS is enabled; pass the trusted CA to the MongoDB driver with `tlsCAFile`, `ca`, or `secureContext`.
-Authentication verifies SCRAM credentials, denies unauthenticated commands, and enforces command-level built-in roles for supported operations.
+Authentication verifies SCRAM credentials, denies unauthenticated commands, and enforces command-level built-in roles for supported operations. `connectionStatus` reports the authenticated users and roles for the current socket.
Supported built-in role names are `root`, `read`, `readWrite`, `dbAdmin`, `userAdmin`, `clusterMonitor`, plus `readAnyDatabase`, `readWriteAnyDatabase`, `dbAdminAnyDatabase`, and `userAdminAnyDatabase`. When `usersPath` is set, SmartDB persists SCRAM credential material atomically and does not store plaintext passwords.
Single-node transactions are supported through official MongoDB driver sessions. Writes with `startTransaction` and `autocommit: false` are buffered per logical session, reads inside the transaction see the buffered overlay, `commitTransaction` applies the write set with conflict checks, and `abortTransaction` discards it.
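The buffered-overlay semantics described above can be sketched in plain TypeScript. This is a minimal, hypothetical model for illustration only: the real engine keys buffers by logical session id and transaction number, and the `TxnOverlay` class and its method names are invented here.

```typescript
type Doc = { _id: string; [key: string]: unknown };

// Hypothetical in-memory model of a per-session transaction buffer.
class TxnOverlay {
  private buffer = new Map<string, Doc | null>(); // _id -> doc; null marks a buffered delete
  private snapshot = new Map<string, Doc>();      // documents observed during the transaction

  constructor(private base: Map<string, Doc>) {}

  // Reads see committed data with the buffered write set overlaid on top.
  find(id: string): Doc | undefined {
    if (this.buffer.has(id)) return this.buffer.get(id) ?? undefined;
    const doc = this.base.get(id);
    if (doc !== undefined) this.snapshot.set(id, doc);
    return doc;
  }

  insert(doc: Doc): void { this.buffer.set(doc._id, doc); }
  remove(id: string): void { this.find(id); this.buffer.set(id, null); }

  // commitTransaction: apply the write set, failing if any observed doc changed underneath.
  commit(): void {
    for (const [id, seen] of this.snapshot) {
      if (this.base.get(id) !== seen) throw new Error(`WriteConflict on ${id}`);
    }
    for (const [id, doc] of this.buffer) {
      if (doc === null) this.base.delete(id);
      else this.base.set(id, doc);
    }
    this.abort(); // clear buffers after apply
  }

  // abortTransaction: discard the write set without touching committed data.
  abort(): void { this.buffer.clear(); this.snapshot.clear(); }
}
```

The key property the sketch captures is that nothing reaches the base store until commit, so abort is free and a concurrent change to any document the transaction observed surfaces as a write conflict.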
Basic user management commands are available for authenticated users with `root` or `userAdmin` privileges:
```typescript
@@ -317,7 +319,7 @@ await client.db('admin').command({ usersInfo: 'reader' });
| `port` | `number` | Configured port (TCP mode) |
| `host` | `string` | Configured host (TCP mode) |
| `socketPath` | `string \| undefined` | Socket path (socket mode) |
-| `getMetrics()` | `Promise<ISmartDbMetrics>` | Server metrics (db/collection counts, uptime) |
+| `getMetrics()` | `Promise<ISmartDbMetrics>` | Server metrics (db/collection counts, sessions, transactions, auth, uptime) |
| `getOpLog(params?)` | `Promise<IOpLogResult>` | Query oplog entries with optional filters |
| `getOpLogStats()` | `Promise<IOpLogStats>` | Aggregate oplog statistics |
| `revertToSeq(seq, dryRun?)` | `Promise<IRevertResult>` | Revert to a specific oplog sequence |
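As a rough mental model of `revertToSeq`, point-in-time replay rebuilds collection state from the oplog up to a target sequence number. The entry shape and the `replayTo` helper below are invented for illustration; the real oplog entries carry more fields.

```typescript
// Assumed entry shape for illustration only.
type OpLogEntry =
  | { seq: number; op: "insert"; id: string; doc: Record<string, unknown> }
  | { seq: number; op: "update"; id: string; doc: Record<string, unknown> }
  | { seq: number; op: "delete"; id: string };

// Rebuild state by replaying oplog entries whose seq is <= the target.
function replayTo(
  oplog: OpLogEntry[],
  seq: number,
): Map<string, Record<string, unknown>> {
  const state = new Map<string, Record<string, unknown>>();
  for (const entry of oplog) {
    if (entry.seq > seq) break; // entries are ordered by sequence number
    if (entry.op === "delete") state.delete(entry.id);
    else state.set(entry.id, entry.doc); // insert and update both set the doc
  }
  return state;
}
```

A `dryRun` revert would amount to computing this replayed state and reporting the differences without writing them back.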
@@ -531,7 +533,7 @@ const names = await collection.distinct('name');
| **Aggregation** | `aggregate`, `count`, `distinct` |
| **Indexes** | `createIndexes`, `dropIndexes`, `listIndexes` |
| **Sessions** | `startSession`, `endSessions` |
-| **Transactions** | `commitTransaction`, `abortTransaction` |
+| **Transactions** | `startTransaction`, `commitTransaction`, `abortTransaction` through driver sessions |
| **Admin** | `ping`, `listDatabases`, `listCollections`, `drop`, `dropDatabase`, `create`, `serverStatus`, `buildInfo`, `dbStats`, `collStats`, `connectionStatus`, `currentOp`, `renameCollection` |
Compatible with wire protocol versions 0–21 (driver versions 3.6 through 7.0).
@@ -540,7 +542,7 @@ Compatible with wire protocol versions 021 (driver versions 3.6 through 7.0).
## Rust Crate Architecture 🦀
-The Rust engine is organized as a Cargo workspace with 8 focused crates:
+The Rust engine is organized as a Cargo workspace with 9 focused crates:
| Crate | Purpose |
|---|---|
@@ -551,6 +553,7 @@ The Rust engine is organized as a Cargo workspace with 8 focused crates:
| `rustdb-storage` | Storage backends (memory, file), OpLog with point-in-time replay |
| `rustdb-index` | B-tree/hash indexes, query planner (IXSCAN/COLLSCAN) |
| `rustdb-txn` | Transaction + session management with snapshot isolation |
| `rustdb-auth` | SCRAM-SHA-256 credential handling, user metadata persistence, RBAC checks |
| `rustdb-commands` | 40+ command handlers wiring everything together |
Cross-compiled for `linux_amd64` and `linux_arm64` via [@git.zone/tsrust](https://www.npmjs.com/package/@git.zone/tsrust).
@@ -563,6 +566,7 @@ The Bitcask-style file storage engine includes several reliability features:
- **CRC32 checksums** — every record is integrity-checked on read
- **Automatic compaction** — dead records are reclaimed when they exceed 50% of file size, runs on startup and after every write
- **Hint file staleness detection** — the hint file records the data file size at write time; if data.rdb changed since (e.g. crash after a delete), the engine falls back to a full scan to ensure tombstones are not lost
- **Torn-tail repair** — startup scans `data.rdb` to the last valid record, truncates invalid trailing bytes, and preserves all verified records after interrupted writes
- **Stale socket cleanup** — orphaned `/tmp/smartdb-*.sock` files from crashed instances are automatically cleaned up on startup
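The CRC and torn-tail features above can be sketched together: when every record carries a checksum, startup can walk the file and truncate at the first record that fails to frame or verify. The record layout (4-byte little-endian length, 4-byte CRC32, payload) and function names below are invented for illustration; the real `data.rdb` format differs.

```typescript
// Standard CRC-32 (IEEE), bitwise implementation.
function crc32(buf: Uint8Array): number {
  let crc = 0xffffffff;
  for (const byte of buf) {
    crc ^= byte;
    for (let i = 0; i < 8; i++) {
      crc = (crc >>> 1) ^ (0xedb88320 & -(crc & 1));
    }
  }
  return (crc ^ 0xffffffff) >>> 0;
}

// Assumed record layout: [len: u32 LE][crc: u32 LE][payload: len bytes].
// Returns the byte offset of the last fully verified record; a startup
// scan would truncate the file to this length to repair a torn tail.
function scanValidPrefix(file: Uint8Array): number {
  const view = new DataView(file.buffer, file.byteOffset, file.byteLength);
  let offset = 0;
  while (offset + 8 <= file.length) {
    const len = view.getUint32(offset, true);
    const crc = view.getUint32(offset + 4, true);
    const end = offset + 8 + len;
    if (end > file.length) break;        // payload truncated mid-write
    const payload = file.subarray(offset + 8, end);
    if (crc32(payload) !== crc) break;   // torn or corrupt record
    offset = end;                        // record verified, advance
  }
  return offset;
}
```

Because verification stops at the first bad frame, everything written before the interruption survives and the invalid tail is discarded, which is exactly the guarantee the bullet list describes.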
### Data Integrity CLI 🔍
+28
@@ -150,6 +150,13 @@ impl AuthEngine {
self.enabled
}
pub fn user_count(&self) -> usize {
self.users
.read()
.unwrap_or_else(|poisoned| poisoned.into_inner())
.len()
}
pub fn supported_mechanisms(&self, namespace_user: &str) -> Vec<String> {
let Some((database, username)) = namespace_user.split_once('.') else {
return Vec::new();
@@ -275,6 +282,27 @@ impl AuthEngine {
.collect()
}
pub fn list_users(&self) -> Vec<AuthenticatedUser> {
let users = self.users.read().unwrap_or_else(|poisoned| poisoned.into_inner());
let mut result: Vec<AuthenticatedUser> = users
.values()
.map(AuthUser::to_authenticated_user)
.collect();
result.sort_by(|a, b| a.database.cmp(&b.database).then(a.username.cmp(&b.username)));
result
}
pub fn drop_users_for_database(&self, database: &str) -> Result<usize, AuthError> {
let mut users = self.users.write().unwrap_or_else(|poisoned| poisoned.into_inner());
let before = users.len();
users.retain(|_, user| user.database != database);
let dropped = before.saturating_sub(users.len());
if dropped > 0 {
self.persist_locked(&users)?;
}
Ok(dropped)
}
pub fn start_scram_sha256(
&self,
database: &str,
+17 -1
@@ -18,6 +18,12 @@ pub enum CommandError {
#[error("transaction error: {0}")]
TransactionError(String),
#[error("no such transaction: {0}")]
NoSuchTransaction(String),
#[error("write conflict: {0}")]
WriteConflict(String),
#[error("namespace not found: {0}")]
NamespaceNotFound(String),
@@ -52,6 +58,8 @@ impl CommandError {
CommandError::StorageError(_) => (1, "InternalError"),
CommandError::IndexError(_) => (27, "IndexNotFound"),
CommandError::TransactionError(_) => (112, "WriteConflict"),
CommandError::NoSuchTransaction(_) => (251, "NoSuchTransaction"),
CommandError::WriteConflict(_) => (112, "WriteConflict"),
CommandError::NamespaceNotFound(_) => (26, "NamespaceNotFound"),
CommandError::NamespaceExists(_) => (48, "NamespaceExists"),
CommandError::DuplicateKey(_) => (11000, "DuplicateKey"),
@@ -79,7 +87,15 @@ impl From<rustdb_storage::StorageError> for CommandError {
impl From<rustdb_txn::TransactionError> for CommandError {
fn from(e: rustdb_txn::TransactionError) -> Self {
-CommandError::TransactionError(e.to_string())
+match e {
rustdb_txn::TransactionError::NotFound(message) => {
CommandError::NoSuchTransaction(message)
}
rustdb_txn::TransactionError::WriteConflict(message) => {
CommandError::WriteConflict(message)
}
other => CommandError::TransactionError(other.to_string()),
}
}
}
@@ -2,8 +2,9 @@ use bson::{doc, Bson, Document};
use rustdb_index::IndexEngine;
use tracing::debug;
-use crate::context::{CommandContext, CursorState};
+use crate::context::{CommandContext, ConnectionState, CursorState};
use crate::error::{CommandError, CommandResult};
use crate::transactions;
/// Handle various admin / diagnostic / session / auth commands.
pub async fn handle(
@@ -11,6 +12,7 @@ pub async fn handle(
db: &str,
ctx: &CommandContext,
command_name: &str,
connection: &ConnectionState,
) -> CommandResult<Document> {
match command_name {
"ping" => Ok(doc! { "ok": 1.0 }),
@@ -24,13 +26,7 @@ pub async fn handle(
"ok": 1.0,
}),
-"serverStatus" => Ok(doc! {
-"host": "localhost",
-"version": "7.0.0",
-"process": "rustdb",
-"uptime": ctx.start_time.elapsed().as_secs() as i64,
-"ok": 1.0,
-}),
+"serverStatus" => handle_server_status(ctx),
"hostInfo" => Ok(doc! {
"system": {
@@ -90,13 +86,7 @@ pub async fn handle(
"codeName": "CommandNotFound",
}),
-"connectionStatus" => Ok(doc! {
-"authInfo": {
-"authenticatedUsers": [],
-"authenticatedUserRoles": [],
-},
-"ok": 1.0,
-}),
+"connectionStatus" => Ok(handle_connection_status(connection)),
"createUser" => handle_create_user(cmd, db, ctx).await,
@@ -156,9 +146,9 @@ pub async fn handle(
Ok(doc! { "ok": 1.0 })
}
-"commitTransaction" | "abortTransaction" => Err(CommandError::IllegalOperation(
-"Transaction numbers are only allowed on a replica set member or mongos".into(),
-)),
+"commitTransaction" => transactions::commit_transaction_command(cmd, ctx).await,
+"abortTransaction" => transactions::abort_transaction_command(cmd, ctx),
// Auth stubs - accept silently.
"saslStart" => Ok(doc! {
@@ -195,6 +185,72 @@ pub async fn handle(
}
}
fn handle_server_status(ctx: &CommandContext) -> CommandResult<Document> {
let oplog_stats = ctx.oplog.stats();
Ok(doc! {
"host": "localhost",
"version": "7.0.0",
"process": "rustdb",
"uptime": ctx.start_time.elapsed().as_secs() as i64,
"connections": {
"current": 0_i32,
"available": i32::MAX,
},
"logicalSessionRecordCache": {
"activeSessionsCount": ctx.sessions.len() as i64,
},
"transactions": {
"currentActive": ctx.transactions.len() as i64,
},
"oplog": {
"currentSeq": oplog_stats.current_seq as i64,
"totalEntries": oplog_stats.total_entries as i64,
"oldestSeq": oplog_stats.oldest_seq as i64,
"entriesByOp": {
"insert": oplog_stats.inserts as i64,
"update": oplog_stats.updates as i64,
"delete": oplog_stats.deletes as i64,
},
},
"security": {
"authentication": ctx.auth.enabled(),
"users": ctx.auth.user_count() as i64,
},
"ok": 1.0,
})
}
fn handle_connection_status(connection: &ConnectionState) -> Document {
let authenticated_users: Vec<Bson> = connection
.authenticated_users
.iter()
.map(|user| {
Bson::Document(doc! {
"user": user.username.clone(),
"db": user.database.clone(),
})
})
.collect();
let authenticated_roles: Vec<Bson> = connection
.authenticated_users
.iter()
.flat_map(|user| {
user.roles
.iter()
.map(|role| Bson::Document(role_to_document(&user.database, role)))
})
.collect();
doc! {
"authInfo": {
"authenticatedUsers": authenticated_users,
"authenticatedUserRoles": authenticated_roles,
},
"ok": 1.0,
}
}
async fn handle_create_user(
cmd: &Document,
db: &str,
@@ -7,6 +7,7 @@ use tracing::debug;
use crate::context::CommandContext;
use crate::error::{CommandError, CommandResult};
use crate::transactions;
/// Handle the `delete` command.
pub async fn handle(
@@ -36,6 +37,7 @@ pub async fn handle(
);
let ns_key = format!("{}.{}", db, coll);
let txn_id = transactions::active_transaction_id(ctx, cmd);
let mut total_deleted: i32 = 0;
let mut write_errors: Vec<Document> = Vec::new();
@@ -69,7 +71,7 @@ pub async fn handle(
_ => 0, // default: delete all matches
};
-match delete_matching(db, coll, &ns_key, &filter, limit, ctx).await {
+match delete_matching(db, coll, &ns_key, &filter, limit, ctx, txn_id.as_deref()).await {
Ok(count) => {
total_deleted += count;
}
@@ -114,7 +116,24 @@ async fn delete_matching(
filter: &Document,
limit: i32,
ctx: &CommandContext,
txn_id: Option<&str>,
) -> Result<i32, CommandError> {
if let Some(txn_id) = txn_id {
let docs = transactions::load_transaction_docs(ctx, txn_id, db, coll).await?;
let matched = QueryMatcher::filter(&docs, filter);
let to_delete: &[Document] = if limit == 1 && !matched.is_empty() {
&matched[..1]
} else {
&matched
};
for doc in to_delete {
transactions::record_delete(ctx, txn_id, db, coll, doc.clone()).await?;
}
return Ok(to_delete.len() as i32);
}
// Check if the collection exists; if not, nothing to delete.
match ctx.storage.collection_exists(db, coll).await {
Ok(false) => return Ok(0),
@@ -7,6 +7,7 @@ use rustdb_query::{QueryMatcher, sort_documents, apply_projection, distinct_valu
use crate::context::{CommandContext, CursorState};
use crate::error::{CommandError, CommandResult};
use crate::transactions;
/// Atomic counter for generating unique cursor IDs.
static CURSOR_ID_COUNTER: AtomicI64 = AtomicI64::new(1);
@@ -80,9 +81,14 @@ pub async fn handle(
let limit = get_i64(cmd, "limit").unwrap_or(0).max(0) as usize;
let batch_size = get_i32(cmd, "batchSize").unwrap_or(101).max(0) as usize;
let single_batch = get_bool(cmd, "singleBatch").unwrap_or(false);
let txn_id = transactions::active_transaction_id(ctx, cmd);
// If the collection does not exist, return an empty cursor.
-let exists = ctx.storage.collection_exists(db, coll).await?;
+let exists = if txn_id.is_some() {
true
} else {
ctx.storage.collection_exists(db, coll).await?
};
if !exists {
return Ok(doc! {
"cursor": {
@@ -96,7 +102,9 @@ pub async fn handle(
// Try index-accelerated lookup.
let index_key = format!("{}.{}", db, coll);
-let docs = if let Some(idx_ref) = ctx.indexes.get(&index_key) {
+let docs = if let Some(ref txn_id) = txn_id {
transactions::load_transaction_docs(ctx, txn_id, db, coll).await?
} else if let Some(idx_ref) = ctx.indexes.get(&index_key) {
if let Some(candidate_ids) = idx_ref.find_candidate_ids(&filter) {
debug!(
ns = %ns,
@@ -298,9 +306,14 @@ pub async fn handle_count(
ctx: &CommandContext,
) -> CommandResult<Document> {
let coll = get_str(cmd, "count").unwrap_or("unknown");
let txn_id = transactions::active_transaction_id(ctx, cmd);
// Check collection existence.
-let exists = ctx.storage.collection_exists(db, coll).await?;
+let exists = if txn_id.is_some() {
true
} else {
ctx.storage.collection_exists(db, coll).await?
};
if !exists {
return Ok(doc! { "n": 0_i64, "ok": 1.0 });
}
@@ -309,6 +322,23 @@ pub async fn handle_count(
let skip = get_i64(cmd, "skip").unwrap_or(0).max(0) as usize;
let limit = get_i64(cmd, "limit").unwrap_or(0).max(0) as usize;
if let Some(ref txn_id) = txn_id {
let docs = transactions::load_transaction_docs(ctx, txn_id, db, coll).await?;
let filtered = if query.is_empty() {
docs
} else {
QueryMatcher::filter(&docs, &query)
};
let mut n = filtered.len().saturating_sub(skip);
if limit > 0 {
n = n.min(limit);
}
return Ok(doc! {
"n": n as i64,
"ok": 1.0,
});
}
let count: u64 = if query.is_empty() && skip == 0 && limit == 0 {
// Fast path: use storage-level count.
ctx.storage.count(db, coll).await?
@@ -352,15 +382,24 @@ pub async fn handle_distinct(
let key = get_str(cmd, "key").ok_or_else(|| {
CommandError::InvalidArgument("distinct requires a 'key' field".into())
})?;
let txn_id = transactions::active_transaction_id(ctx, cmd);
// Check collection existence.
-let exists = ctx.storage.collection_exists(db, coll).await?;
+let exists = if txn_id.is_some() {
true
} else {
ctx.storage.collection_exists(db, coll).await?
};
if !exists {
return Ok(doc! { "values": [], "ok": 1.0 });
}
let query = get_document(cmd, "query").cloned();
-let docs = ctx.storage.find_all(db, coll).await?;
+let docs = if let Some(txn_id) = txn_id {
transactions::load_transaction_docs(ctx, &txn_id, db, coll).await?
} else {
ctx.storage.find_all(db, coll).await?
};
let values = distinct_values(&docs, key, query.as_ref());
Ok(doc! {
@@ -6,6 +6,7 @@ use tracing::debug;
use crate::context::CommandContext;
use crate::error::{CommandError, CommandResult};
use crate::transactions;
/// Handle the `insert` command.
pub async fn handle(
@@ -48,8 +49,13 @@ pub async fn handle(
"insert command"
);
-// Auto-create database and collection if they don't exist.
-ensure_collection_exists(db, coll, ctx).await?;
+let txn_id = transactions::active_transaction_id(ctx, cmd);
+// Auto-create database and collection if they don't exist. Transactional
+// writes defer collection creation until commit so abort remains clean.
+if txn_id.is_none() {
+ensure_collection_exists(db, coll, ctx).await?;
+}
let ns_key = format!("{}.{}", db, coll);
let mut inserted_count: i32 = 0;
@@ -84,6 +90,24 @@ pub async fn handle(
}
}
if let Some(ref txn_id) = txn_id {
match transactions::record_insert(ctx, txn_id, db, coll, doc.clone()).await {
Ok(_) => inserted_count += 1,
Err(e) => {
write_errors.push(doc! {
"index": idx as i32,
"code": 11000_i32,
"codeName": "DuplicateKey",
"errmsg": e.to_string(),
});
if ordered {
break;
}
}
}
continue;
}
// Attempt storage insert.
match ctx.storage.insert_one(db, coll, doc.clone()).await {
Ok(id_str) => {
@@ -7,6 +7,7 @@ use tracing::debug;
use crate::context::CommandContext;
use crate::error::{CommandError, CommandResult};
use crate::transactions;
/// Handle `update` and `findAndModify` commands.
pub async fn handle(
@@ -47,8 +48,12 @@ async fn handle_update(
debug!(db = db, collection = coll, count = updates.len(), "update command");
-// Auto-create database and collection if needed.
-ensure_collection_exists(db, coll, ctx).await?;
+let txn_id = transactions::active_transaction_id(ctx, cmd);
+// Transactional writes defer namespace creation until commit.
+if txn_id.is_none() {
+ensure_collection_exists(db, coll, ctx).await?;
+}
let ns_key = format!("{}.{}", db, coll);
@@ -136,7 +141,7 @@ async fn handle_update(
});
// Load all documents and filter.
-let all_docs = load_filtered_docs(db, coll, &filter, &ns_key, ctx).await?;
+let all_docs = load_filtered_docs(db, coll, &filter, &ns_key, ctx, txn_id.as_deref()).await?;
if all_docs.is_empty() && upsert {
// Upsert: create a new document.
@@ -166,6 +171,30 @@ async fn handle_update(
}
}
if let Some(ref txn_id) = txn_id {
match transactions::record_insert(ctx, txn_id, db, coll, updated.clone()).await {
Ok(_) => {
total_n += 1;
upserted_list.push(doc! {
"index": idx as i32,
"_id": new_id,
});
}
Err(e) => {
write_errors.push(doc! {
"index": idx as i32,
"code": 1_i32,
"codeName": "InternalError",
"errmsg": e.to_string(),
});
if ordered {
break;
}
}
}
continue;
}
// Insert the new document.
match ctx.storage.insert_one(db, coll, updated.clone()).await {
Ok(id_str) => {
@@ -258,6 +287,38 @@ async fn handle_update(
}
let id_str = extract_id_string(matched_doc);
if let Some(ref txn_id) = txn_id {
match transactions::record_update(
ctx,
txn_id,
db,
coll,
matched_doc.clone(),
updated_doc.clone(),
)
.await
{
Ok(_) => {
total_n += 1;
if matched_doc != &updated_doc {
total_n_modified += 1;
}
}
Err(e) => {
write_errors.push(doc! {
"index": idx as i32,
"code": 1_i32,
"codeName": "InternalError",
"errmsg": e.to_string(),
});
if ordered {
break;
}
}
}
continue;
}
match ctx
.storage
.update_by_id(db, coll, &id_str, updated_doc.clone())
@@ -407,8 +468,12 @@ async fn handle_find_and_modify(
.collect()
});
-// Auto-create database and collection.
-ensure_collection_exists(db, coll, ctx).await?;
+let txn_id = transactions::active_transaction_id(ctx, cmd);
+// Transactional writes defer namespace creation until commit.
+if txn_id.is_none() {
+ensure_collection_exists(db, coll, ctx).await?;
+}
let ns_key = format!("{}.{}", db, coll);
@@ -416,7 +481,7 @@ async fn handle_find_and_modify(
drop(ctx.get_or_init_index_engine(db, coll).await);
// Load and filter documents.
-let mut matched = load_filtered_docs(db, coll, &query, &ns_key, ctx).await?;
+let mut matched = load_filtered_docs(db, coll, &query, &ns_key, ctx, txn_id.as_deref()).await?;
// Sort if specified.
if let Some(ref sort_spec) = sort {
@@ -430,6 +495,21 @@ async fn handle_find_and_modify(
// Remove operation.
if let Some(ref doc) = target {
let id_str = extract_id_string(doc);
if let Some(ref txn_id) = txn_id {
transactions::record_delete(ctx, txn_id, db, coll, doc.clone()).await?;
let value = apply_fields_projection(doc, &fields);
return Ok(doc! {
"value": value,
"lastErrorObject": {
"n": 1_i32,
"updatedExisting": false,
},
"ok": 1.0,
});
}
ctx.storage.delete_by_id(db, coll, &id_str).await?;
// Record in oplog.
@@ -503,6 +583,35 @@ async fn handle_find_and_modify(
}
let id_str = extract_id_string(&original_doc);
if let Some(ref txn_id) = txn_id {
transactions::record_update(
ctx,
txn_id,
db,
coll,
original_doc.clone(),
updated_doc.clone(),
)
.await?;
let return_doc = if return_new {
&updated_doc
} else {
&original_doc
};
let value = apply_fields_projection(return_doc, &fields);
return Ok(doc! {
"value": value,
"lastErrorObject": {
"n": 1_i32,
"updatedExisting": true,
},
"ok": 1.0,
});
}
ctx.storage
.update_by_id(db, coll, &id_str, updated_doc.clone())
.await?;
@@ -563,6 +672,26 @@ async fn handle_find_and_modify(
}
}
if let Some(ref txn_id) = txn_id {
transactions::record_insert(ctx, txn_id, db, coll, updated_doc.clone()).await?;
let value = if return_new {
apply_fields_projection(&updated_doc, &fields)
} else {
Bson::Null
};
return Ok(doc! {
"value": value,
"lastErrorObject": {
"n": 1_i32,
"updatedExisting": false,
"upserted": upserted_id,
},
"ok": 1.0,
});
}
let inserted_id_str = ctx.storage
.insert_one(db, coll, updated_doc.clone())
.await?;
@@ -622,7 +751,17 @@ async fn load_filtered_docs(
filter: &Document,
ns_key: &str,
ctx: &CommandContext,
txn_id: Option<&str>,
) -> CommandResult<Vec<Document>> {
if let Some(txn_id) = txn_id {
let docs = transactions::load_transaction_docs(ctx, txn_id, db, coll).await?;
return if filter.is_empty() {
Ok(docs)
} else {
Ok(QueryMatcher::filter(&docs, filter))
};
}
// Try to use index to narrow candidates.
let candidate_ids: Option<HashSet<String>> = ctx
.indexes
+1
@@ -1,6 +1,7 @@
mod context; mod context;
pub mod error; pub mod error;
pub mod handlers; pub mod handlers;
pub mod transactions;
mod router; mod router;
pub use context::{CommandContext, ConnectionState, CursorState}; pub use context::{CommandContext, ConnectionState, CursorState};
+8 -13
@@ -8,7 +8,7 @@ use rustdb_auth::AuthAction;
use crate::context::{CommandContext, ConnectionState};
use crate::error::CommandError;
-use crate::handlers;
+use crate::{handlers, transactions};
/// Routes parsed wire protocol commands to the appropriate handler.
pub struct CommandRouter {
@@ -55,11 +55,12 @@ impl CommandRouter {
}
}
-if transaction_command_unsupported(command_name, &cmd.command) {
-return CommandError::IllegalOperation(
-"Transaction numbers are only allowed on a replica set member or mongos".into(),
-)
-.to_error_doc();
+if let Err(e) = transactions::prepare_transaction_for_command(
+&self.ctx,
+&cmd.command,
+command_name,
+) {
+return e.to_error_doc();
}
// Extract session id if present, and touch the session.
@@ -136,7 +137,7 @@ impl CommandRouter {
| "grantRolesToUser" | "revokeRolesFromUser"
| "currentOp" | "killOp" | "top" | "profile"
| "compact" | "reIndex" | "fsync" | "connPoolSync" => {
-handlers::admin_handler::handle(&cmd.command, db, &self.ctx, command_name).await
+handlers::admin_handler::handle(&cmd.command, db, &self.ctx, command_name, connection).await
}
// -- unknown command --
@@ -207,9 +208,3 @@ fn aggregate_writes(command: &Document) -> bool {
_ => None, _ => None,
}).unwrap_or(false) }).unwrap_or(false)
} }
-fn transaction_command_unsupported(command_name: &str, command: &Document) -> bool {
-matches!(command_name, "commitTransaction" | "abortTransaction")
-|| matches!(command.get("startTransaction"), Some(Bson::Boolean(true)))
-|| matches!(command.get("autocommit"), Some(Bson::Boolean(false)))
-}
@@ -0,0 +1,367 @@
use bson::{doc, Bson, Document};
use rustdb_storage::OpType;
use rustdb_txn::{TransactionState, WriteEntry, WriteOp};
use crate::context::CommandContext;
use crate::error::{CommandError, CommandResult};
pub fn command_starts_transaction(cmd: &Document) -> bool {
matches!(cmd.get("startTransaction"), Some(Bson::Boolean(true)))
}
pub fn command_uses_transaction(cmd: &Document) -> bool {
command_starts_transaction(cmd) || matches!(cmd.get("autocommit"), Some(Bson::Boolean(false)))
}
pub fn active_transaction_id(ctx: &CommandContext, cmd: &Document) -> Option<String> {
if !command_uses_transaction(cmd) {
return None;
}
let session_id = cmd
.get("lsid")
.and_then(rustdb_txn::SessionEngine::extract_session_id)?;
ctx.sessions.get_transaction_id(&session_id)
}
pub fn prepare_transaction_for_command(
ctx: &CommandContext,
cmd: &Document,
command_name: &str,
) -> CommandResult<()> {
if matches!(command_name, "commitTransaction" | "abortTransaction") {
return Ok(());
}
let starts_transaction = command_starts_transaction(cmd);
let uses_transaction = command_uses_transaction(cmd);
if !uses_transaction {
return Ok(());
}
let session_id = session_id_from_command(cmd)?;
require_txn_number(cmd)?;
ctx.sessions.get_or_create_session(&session_id);
if starts_transaction {
let txn_id = ctx.transactions.start_transaction(&session_id)?;
ctx.sessions.start_transaction(&session_id, &txn_id)?;
return Ok(());
}
if ctx.sessions.get_transaction_id(&session_id).is_none() {
return Err(CommandError::NoSuchTransaction(format!(
"session {session_id} has no active transaction"
)));
}
Ok(())
}
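The gating above has three outcomes per command. A minimal sketch of that classification, with the bson `Document` lookups reduced to plain `Option<bool>` arguments (an assumption for brevity; the real code also validates `lsid` and `txnNumber`):

```rust
// Model of the transaction gating in prepare_transaction_for_command.
#[derive(Debug, PartialEq)]
enum TxnAction {
    None,     // ordinary autocommitted command
    Start,    // startTransaction: true opens a new transaction
    Continue, // autocommit: false joins the session's active transaction
}

fn classify(start_transaction: Option<bool>, autocommit: Option<bool>) -> TxnAction {
    if start_transaction == Some(true) {
        TxnAction::Start
    } else if autocommit == Some(false) {
        TxnAction::Continue
    } else {
        TxnAction::None
    }
}

fn main() {
    assert_eq!(classify(Some(true), Some(false)), TxnAction::Start);
    assert_eq!(classify(None, Some(false)), TxnAction::Continue);
    assert_eq!(classify(None, None), TxnAction::None);
    println!("ok");
}
```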
pub async fn load_transaction_docs(
ctx: &CommandContext,
txn_id: &str,
db: &str,
coll: &str,
) -> CommandResult<Vec<Document>> {
let ns = namespace(db, coll);
if !ctx.transactions.has_snapshot(txn_id, &ns) {
let docs = match ctx.storage.collection_exists(db, coll).await {
Ok(true) => ctx.storage.find_all(db, coll).await?,
Ok(false) => Vec::new(),
Err(_) => Vec::new(),
};
ctx.transactions.set_snapshot(txn_id, &ns, docs);
}
ctx.transactions
.get_snapshot(txn_id, &ns)
.ok_or_else(|| CommandError::NoSuchTransaction(txn_id.to_string()))
}
pub async fn record_insert(
ctx: &CommandContext,
txn_id: &str,
db: &str,
coll: &str,
doc: Document,
) -> CommandResult<String> {
let id = document_id_string(&doc)?;
let docs = load_transaction_docs(ctx, txn_id, db, coll).await?;
if docs.iter().any(|existing| document_id_string(existing).ok().as_deref() == Some(id.as_str())) {
return Err(CommandError::DuplicateKey(format!(
"duplicate _id '{}' in transaction",
id
)));
}
ctx.transactions.record_write(
txn_id,
&namespace(db, coll),
&id,
WriteOp::Insert,
Some(doc),
None,
);
Ok(id)
}
pub async fn record_update(
ctx: &CommandContext,
txn_id: &str,
db: &str,
coll: &str,
original: Document,
updated: Document,
) -> CommandResult<String> {
let id = document_id_string(&original)?;
ctx.transactions.record_write(
txn_id,
&namespace(db, coll),
&id,
WriteOp::Update,
Some(updated),
Some(original),
);
Ok(id)
}
pub async fn record_delete(
ctx: &CommandContext,
txn_id: &str,
db: &str,
coll: &str,
original: Document,
) -> CommandResult<String> {
let id = document_id_string(&original)?;
ctx.transactions.record_write(
txn_id,
&namespace(db, coll),
&id,
WriteOp::Delete,
None,
Some(original),
);
Ok(id)
}
pub async fn commit_transaction_command(
cmd: &Document,
ctx: &CommandContext,
) -> CommandResult<Document> {
let session_id = session_id_from_command(cmd)?;
let txn_id = ctx
.sessions
.get_transaction_id(&session_id)
.ok_or_else(|| CommandError::NoSuchTransaction(format!(
"session {session_id} has no active transaction"
)))?;
let state = ctx.transactions.take_transaction(&txn_id)?;
preflight_transaction(&state, ctx).await?;
apply_transaction(state, ctx).await?;
ctx.sessions.end_transaction(&session_id);
Ok(doc! { "ok": 1.0 })
}
pub fn abort_transaction_command(cmd: &Document, ctx: &CommandContext) -> CommandResult<Document> {
let session_id = session_id_from_command(cmd)?;
let txn_id = ctx
.sessions
.get_transaction_id(&session_id)
.ok_or_else(|| CommandError::NoSuchTransaction(format!(
"session {session_id} has no active transaction"
)))?;
ctx.transactions.abort_transaction(&txn_id)?;
ctx.sessions.end_transaction(&session_id);
Ok(doc! { "ok": 1.0 })
}
pub fn document_id_string(doc: &Document) -> CommandResult<String> {
match doc.get("_id") {
Some(Bson::ObjectId(oid)) => Ok(oid.to_hex()),
Some(Bson::String(s)) => Ok(s.clone()),
Some(other) => Ok(format!("{}", other)),
None => Err(CommandError::InvalidArgument("document missing _id field".into())),
}
}
fn session_id_from_command(cmd: &Document) -> CommandResult<String> {
cmd.get("lsid")
.and_then(rustdb_txn::SessionEngine::extract_session_id)
.ok_or_else(|| CommandError::InvalidArgument("transaction command requires lsid".into()))
}
fn require_txn_number(cmd: &Document) -> CommandResult<()> {
match cmd.get("txnNumber") {
Some(Bson::Int64(_)) | Some(Bson::Int32(_)) => Ok(()),
_ => Err(CommandError::InvalidArgument(
"transaction command requires txnNumber".into(),
)),
}
}
fn namespace(db: &str, coll: &str) -> String {
format!("{db}.{coll}")
}
async fn preflight_transaction(state: &TransactionState, ctx: &CommandContext) -> CommandResult<()> {
for (ns, writes) in &state.write_set {
let (db, coll) = split_namespace(ns)?;
drop(ctx.get_or_init_index_engine(db, coll).await);
for (doc_id, entry) in writes {
let current = current_doc(ctx, db, coll, doc_id).await?;
match entry.op {
WriteOp::Insert => {
if current.is_some() {
return Err(CommandError::DuplicateKey(format!(
"duplicate _id '{}' on transaction commit",
doc_id
)));
}
if let Some(ref doc) = entry.doc {
if let Some(engine) = ctx.indexes.get(ns) {
engine.check_unique_constraints(doc)?;
}
}
}
WriteOp::Update => {
assert_unchanged(doc_id, current.as_ref(), entry.original_doc.as_ref())?;
if let (Some(current_doc), Some(updated_doc)) = (current.as_ref(), entry.doc.as_ref()) {
if let Some(engine) = ctx.indexes.get(ns) {
engine.check_unique_constraints_for_update(current_doc, updated_doc)?;
}
}
}
WriteOp::Delete => {
assert_unchanged(doc_id, current.as_ref(), entry.original_doc.as_ref())?;
}
}
}
}
Ok(())
}
async fn apply_transaction(state: TransactionState, ctx: &CommandContext) -> CommandResult<()> {
let mut namespaces: Vec<_> = state.write_set.into_iter().collect();
namespaces.sort_by(|a, b| a.0.cmp(&b.0));
for (ns, writes) in namespaces {
let (db, coll) = split_namespace(&ns)?;
ensure_collection_exists(db, coll, ctx).await?;
drop(ctx.get_or_init_index_engine(db, coll).await);
let mut writes: Vec<(String, WriteEntry)> = writes.into_iter().collect();
writes.sort_by(|a, b| a.0.cmp(&b.0));
for (doc_id, entry) in writes {
match entry.op {
WriteOp::Insert => {
let Some(doc) = entry.doc else { continue; };
let inserted_id = ctx.storage.insert_one(db, coll, doc.clone()).await?;
ctx.oplog.append(OpType::Insert, db, coll, &inserted_id, Some(doc.clone()), None);
if let Some(mut engine) = ctx.indexes.get_mut(&ns) {
engine.on_insert(&doc)?;
}
}
WriteOp::Update => {
let Some(doc) = entry.doc else { continue; };
ctx.storage.update_by_id(db, coll, &doc_id, doc.clone()).await?;
ctx.oplog.append(
OpType::Update,
db,
coll,
&doc_id,
Some(doc.clone()),
entry.original_doc.clone(),
);
if let (Some(mut engine), Some(ref original)) =
(ctx.indexes.get_mut(&ns), entry.original_doc.as_ref())
{
engine.on_update(original, &doc)?;
}
}
WriteOp::Delete => {
ctx.storage.delete_by_id(db, coll, &doc_id).await?;
ctx.oplog.append(
OpType::Delete,
db,
coll,
&doc_id,
None,
entry.original_doc.clone(),
);
if let (Some(mut engine), Some(ref original)) =
(ctx.indexes.get_mut(&ns), entry.original_doc.as_ref())
{
engine.on_delete(original);
}
}
}
}
}
Ok(())
}
async fn current_doc(
ctx: &CommandContext,
db: &str,
coll: &str,
doc_id: &str,
) -> CommandResult<Option<Document>> {
match ctx.storage.collection_exists(db, coll).await {
Ok(true) => Ok(ctx.storage.find_by_id(db, coll, doc_id).await?),
Ok(false) => Ok(None),
Err(_) => Ok(None),
}
}
fn assert_unchanged(
doc_id: &str,
current: Option<&Document>,
original: Option<&Document>,
) -> CommandResult<()> {
if current == original {
return Ok(());
}
Err(CommandError::WriteConflict(format!(
"document '{}' changed during transaction",
doc_id
)))
}
async fn ensure_collection_exists(
db: &str,
coll: &str,
ctx: &CommandContext,
) -> CommandResult<()> {
if let Err(e) = ctx.storage.create_database(db).await {
let msg = e.to_string();
if !msg.contains("AlreadyExists") && !msg.contains("already exists") {
return Err(CommandError::StorageError(msg));
}
}
match ctx.storage.collection_exists(db, coll).await {
Ok(true) => Ok(()),
Ok(false) | Err(_) => {
if let Err(e) = ctx.storage.create_collection(db, coll).await {
let msg = e.to_string();
if !msg.contains("AlreadyExists") && !msg.contains("already exists") {
return Err(CommandError::StorageError(msg));
}
}
Ok(())
}
}
}
fn split_namespace(ns: &str) -> CommandResult<(&str, &str)> {
ns.split_once('.')
.ok_or_else(|| CommandError::InvalidArgument(format!("invalid namespace '{ns}'")))
}
+10
@@ -170,6 +170,16 @@ impl SessionEngine {
}
count
}
/// Number of currently tracked logical sessions.
pub fn len(&self) -> usize {
self.sessions.len()
}
/// Whether there are no tracked logical sessions.
pub fn is_empty(&self) -> bool {
self.sessions.is_empty()
}
}
impl Default for SessionEngine {
+104 -11
@@ -18,7 +18,7 @@ pub enum TransactionStatus {
}
/// Describes a write operation within a transaction.
-#[derive(Debug, Clone, PartialEq, Eq)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum WriteOp {
Insert,
Update,
@@ -137,6 +137,25 @@ impl TransactionEngine {
Ok(())
}
/// Remove an active transaction and return its buffered state for an
/// external committer that needs to update secondary indexes and oplogs.
pub fn take_transaction(&self, txn_id: &str) -> TransactionResult<TransactionState> {
let state = self
.transactions
.remove(txn_id)
.map(|(_, s)| s)
.ok_or_else(|| TransactionError::NotFound(txn_id.to_string()))?;
if state.status != TransactionStatus::Active {
return Err(TransactionError::InvalidState(format!(
"transaction {} is {:?}, cannot commit",
txn_id, state.status
)));
}
Ok(state)
}
/// Abort a transaction, discarding all buffered writes.
pub fn abort_transaction(&self, txn_id: &str) -> TransactionResult<()> {
let mut state = self
@@ -191,19 +210,32 @@ impl TransactionEngine {
original: Option<Document>,
) {
if let Some(mut state) = self.transactions.get_mut(txn_id) {
-let entry = WriteEntry {
-op,
-doc,
-original_doc: original,
-};
-state
-.write_set
-.entry(ns.to_string())
-.or_default()
-.insert(doc_id.to_string(), entry);
+let writes = state.write_set.entry(ns.to_string()).or_default();
+if let Some(existing) = writes.remove(doc_id) {
+if let Some(merged) = merge_write_entry(existing, op, doc, original) {
+writes.insert(doc_id.to_string(), merged);
+}
+} else {
+writes.insert(
+doc_id.to_string(),
+WriteEntry {
+op,
+doc,
+original_doc: original,
+},
+);
+}
}
}
/// Return true if the transaction already has a base snapshot for a namespace.
pub fn has_snapshot(&self, txn_id: &str, ns: &str) -> bool {
self.transactions
.get(txn_id)
.map(|state| state.snapshots.contains_key(ns))
.unwrap_or(false)
}
/// Get a snapshot of documents for a namespace within a transaction,
/// applying the write overlay (inserts, updates, deletes) on top.
pub fn get_snapshot(&self, txn_id: &str, ns: &str) -> Option<Vec<Document>> {
@@ -270,6 +302,67 @@ impl TransactionEngine {
state.snapshots.insert(ns.to_string(), docs);
}
}
/// Number of currently active transactions.
pub fn len(&self) -> usize {
self.transactions.len()
}
/// Whether there are no active transactions.
pub fn is_empty(&self) -> bool {
self.transactions.is_empty()
}
}
fn merge_write_entry(
existing: WriteEntry,
next_op: WriteOp,
next_doc: Option<Document>,
next_original: Option<Document>,
) -> Option<WriteEntry> {
match (existing.op, next_op) {
(WriteOp::Insert, WriteOp::Update) => Some(WriteEntry {
op: WriteOp::Insert,
doc: next_doc,
original_doc: None,
}),
(WriteOp::Insert, WriteOp::Delete) => None,
(WriteOp::Insert, WriteOp::Insert) => Some(WriteEntry {
op: WriteOp::Insert,
doc: next_doc,
original_doc: None,
}),
(WriteOp::Update, WriteOp::Update) => Some(WriteEntry {
op: WriteOp::Update,
doc: next_doc,
original_doc: existing.original_doc,
}),
(WriteOp::Update, WriteOp::Delete) => Some(WriteEntry {
op: WriteOp::Delete,
doc: None,
original_doc: existing.original_doc,
}),
(WriteOp::Update, WriteOp::Insert) => Some(WriteEntry {
op: WriteOp::Update,
doc: next_doc,
original_doc: existing.original_doc,
}),
(WriteOp::Delete, WriteOp::Insert) => Some(WriteEntry {
op: if existing.original_doc.is_some() {
WriteOp::Update
} else {
WriteOp::Insert
},
doc: next_doc,
original_doc: existing.original_doc,
}),
(WriteOp::Delete, WriteOp::Update) => Some(WriteEntry {
op: WriteOp::Update,
doc: next_doc,
original_doc: existing.original_doc.or(next_original),
}),
(WriteOp::Delete, WriteOp::Delete) => Some(existing),
}
}
}
impl Default for TransactionEngine {
+5
@@ -299,6 +299,11 @@ impl RustDb {
pub fn ctx(&self) -> &Arc<CommandContext> {
&self.ctx
}
/// Get the server options used for this instance.
pub fn options(&self) -> &RustDbOptions {
&self.options
}
}
fn build_tls_acceptor(options: &TlsOptions) -> Result<TlsAcceptor> {
+675 -2
@@ -1,10 +1,11 @@
use anyhow::Result;
use bson::{Bson, Document};
use serde::{Deserialize, Serialize};
use tokio::io::{AsyncBufReadExt, BufReader};
use tracing::{info, error};
use crate::RustDb;
-use rustdb_config::RustDbOptions;
+use rustdb_config::{RustDbOptions, StorageType};
/// A management request from the TypeScript wrapper.
#[derive(Debug, Deserialize)]
@@ -139,7 +140,19 @@ async fn handle_request(
"start" => handle_start(&id, &request.params, db).await,
"stop" => handle_stop(&id, db).await,
"getStatus" => handle_get_status(&id, db),
"getHealth" => handle_get_health(&id, db).await,
"getMetrics" => handle_get_metrics(&id, db).await, "getMetrics" => handle_get_metrics(&id, db).await,
"createDatabaseTenant" => handle_create_database_tenant(&id, &request.params, db).await,
"deleteDatabaseTenant" => handle_delete_database_tenant(&id, &request.params, db).await,
"rotateDatabaseTenantPassword" => {
handle_rotate_database_tenant_password(&id, &request.params, db).await
}
"listDatabaseTenants" => handle_list_database_tenants(&id, db),
"getDatabaseTenantDescriptor" => {
handle_get_database_tenant_descriptor(&id, &request.params, db)
}
"exportDatabase" => handle_export_database(&id, &request.params, db).await,
"importDatabase" => handle_import_database(&id, &request.params, db).await,
"getOpLog" => handle_get_oplog(&id, &request.params, db),
"getOpLogStats" => handle_get_oplog_stats(&id, db),
"revertToSeq" => handle_revert_to_seq(&id, &request.params, db).await,
@@ -231,6 +244,42 @@ fn handle_get_status(
}
}
async fn handle_get_health(id: &str, db: &Option<RustDb>) -> ManagementResponse {
match db.as_ref() {
Some(d) => {
let ctx = d.ctx();
let (database_count, collection_count) = database_and_collection_counts(ctx).await;
let options = d.options();
let storage = match &options.storage {
StorageType::Memory => "memory",
StorageType::File => "file",
};
ManagementResponse::ok(
id.to_string(),
serde_json::json!({
"running": true,
"storage": storage,
"storagePath": options.storage_path.clone().or_else(|| options.persist_path.clone()),
"authEnabled": ctx.auth.enabled(),
"authUsers": ctx.auth.user_count(),
"usersPathConfigured": options.auth.users_path.is_some(),
"databaseCount": database_count,
"collectionCount": collection_count,
"uptimeSeconds": ctx.start_time.elapsed().as_secs(),
}),
)
}
None => ManagementResponse::ok(
id.to_string(),
serde_json::json!({
"running": false,
"databaseCount": 0,
"collectionCount": 0,
}),
),
}
}
async fn handle_get_metrics(
id: &str,
db: &Option<RustDb>,
@@ -255,6 +304,10 @@ async fn handle_get_metrics(
"collections": total_collections,
"oplogEntries": oplog_stats.total_entries,
"oplogCurrentSeq": oplog_stats.current_seq,
"sessions": ctx.sessions.len(),
"activeTransactions": ctx.transactions.len(),
"authEnabled": ctx.auth.enabled(),
"authUsers": ctx.auth.user_count(),
"uptimeSeconds": uptime_secs,
}),
)
@@ -263,6 +316,501 @@ async fn handle_get_metrics(
}
}
async fn handle_create_database_tenant(
id: &str,
params: &serde_json::Value,
db: &Option<RustDb>,
) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
None => return ManagementResponse::err(id.to_string(), "Server is not running".to_string()),
};
let ctx = d.ctx();
if !ctx.auth.enabled() {
return ManagementResponse::err(
id.to_string(),
"Authentication must be enabled to create database tenants".to_string(),
);
}
let database_name = match string_param(params, "databaseName") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if let Err(message) = validate_database_name(database_name) {
return ManagementResponse::err(id.to_string(), message);
}
let username = match string_param(params, "username") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if let Err(message) = validate_username(username) {
return ManagementResponse::err(id.to_string(), message);
}
let password = match string_param(params, "password") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if password.is_empty() {
return ManagementResponse::err(id.to_string(), "password must not be empty".to_string());
}
let roles = match roles_param(params) {
Ok(roles) => roles,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if let Err(e) = ctx.storage.create_database(database_name).await {
if !is_already_exists(&e.to_string()) {
return ManagementResponse::err(
id.to_string(),
format!("Failed to create database: {e}"),
);
}
}
match ctx
.auth
.create_user(database_name, username, password, roles)
{
Ok(()) => {
let users = ctx.auth.users_info(database_name, Some(username));
match users.first() {
Some(user) => ManagementResponse::ok(id.to_string(), tenant_descriptor_json(user)),
None => ManagementResponse::err(
id.to_string(),
"Tenant user was created but could not be read back".to_string(),
),
}
}
Err(e) => {
ManagementResponse::err(id.to_string(), format!("Failed to create tenant user: {e}"))
}
}
}
async fn handle_delete_database_tenant(
id: &str,
params: &serde_json::Value,
db: &Option<RustDb>,
) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
None => {
return ManagementResponse::err(id.to_string(), "Server is not running".to_string())
}
};
let ctx = d.ctx();
let database_name = match string_param(params, "databaseName") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if let Err(message) = validate_database_name(database_name) {
return ManagementResponse::err(id.to_string(), message);
}
let username = params.get("username").and_then(|v| v.as_str());
if let Some(username) = username {
if let Err(message) = validate_username(username) {
return ManagementResponse::err(id.to_string(), message);
}
}
if let Err(e) = ctx.storage.drop_database(database_name).await {
return ManagementResponse::err(id.to_string(), format!("Failed to drop database: {e}"));
}
remove_database_indexes(ctx, database_name);
let mut deleted_users = 0usize;
if ctx.auth.enabled() {
if let Some(username) = username {
match ctx.auth.drop_user(database_name, username) {
Ok(()) => deleted_users = 1,
Err(rustdb_auth::AuthError::UserNotFound(_)) => deleted_users = 0,
Err(e) => {
return ManagementResponse::err(
id.to_string(),
format!("Failed to drop tenant user: {e}"),
)
}
}
} else {
match ctx.auth.drop_users_for_database(database_name) {
Ok(count) => deleted_users = count,
Err(e) => {
return ManagementResponse::err(
id.to_string(),
format!("Failed to drop tenant users: {e}"),
)
}
}
}
}
ManagementResponse::ok(
id.to_string(),
serde_json::json!({
"databaseName": database_name,
"deletedUsers": deleted_users,
"databaseDropped": true,
}),
)
}
async fn handle_rotate_database_tenant_password(
id: &str,
params: &serde_json::Value,
db: &Option<RustDb>,
) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
None => {
return ManagementResponse::err(id.to_string(), "Server is not running".to_string())
}
};
let ctx = d.ctx();
if !ctx.auth.enabled() {
return ManagementResponse::err(
id.to_string(),
"Authentication must be enabled to rotate database tenant passwords".to_string(),
);
}
let username = match string_param(params, "username") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if let Err(message) = validate_username(username) {
return ManagementResponse::err(id.to_string(), message);
}
let password = match string_param(params, "password") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if password.is_empty() {
return ManagementResponse::err(id.to_string(), "password must not be empty".to_string());
}
let matches: Vec<_> = ctx
.auth
.list_users()
.into_iter()
.filter(|user| user.username == username)
.collect();
if matches.is_empty() {
return ManagementResponse::err(
id.to_string(),
format!("tenant user not found: {username}"),
);
}
if matches.len() > 1 {
return ManagementResponse::err(
id.to_string(),
format!("tenant username is ambiguous across databases: {username}"),
);
}
let user = &matches[0];
match ctx
.auth
.update_user(&user.database, username, Some(password), None)
{
Ok(()) => {
let users = ctx.auth.users_info(&user.database, Some(username));
match users.first() {
Some(user) => ManagementResponse::ok(id.to_string(), tenant_descriptor_json(user)),
None => ManagementResponse::err(
id.to_string(),
"Tenant user was updated but could not be read back".to_string(),
),
}
}
Err(e) => ManagementResponse::err(
id.to_string(),
format!("Failed to rotate tenant password: {e}"),
),
}
}
fn handle_list_database_tenants(id: &str, db: &Option<RustDb>) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
None => {
return ManagementResponse::err(id.to_string(), "Server is not running".to_string())
}
};
let tenants: Vec<serde_json::Value> = d
.ctx()
.auth
.list_users()
.into_iter()
.filter(|user| user.database != "admin")
.map(|user| tenant_descriptor_json(&user))
.collect();
ManagementResponse::ok(id.to_string(), serde_json::json!({ "tenants": tenants }))
}
fn handle_get_database_tenant_descriptor(
id: &str,
params: &serde_json::Value,
db: &Option<RustDb>,
) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
None => {
return ManagementResponse::err(id.to_string(), "Server is not running".to_string())
}
};
let database_name = match string_param(params, "databaseName") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
let username = match string_param(params, "username") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
let users = d.ctx().auth.users_info(database_name, Some(username));
match users.first() {
Some(user) => ManagementResponse::ok(id.to_string(), tenant_descriptor_json(user)),
None => ManagementResponse::err(
id.to_string(),
format!("tenant user not found: {database_name}.{username}"),
),
}
}
async fn handle_export_database(
id: &str,
params: &serde_json::Value,
db: &Option<RustDb>,
) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
None => {
return ManagementResponse::err(id.to_string(), "Server is not running".to_string())
}
};
let ctx = d.ctx();
let database_name = match string_param(params, "databaseName") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if let Err(message) = validate_database_name(database_name) {
return ManagementResponse::err(id.to_string(), message);
}
match ctx.storage.database_exists(database_name).await {
Ok(true) => {}
Ok(false) => {
return ManagementResponse::err(
id.to_string(),
format!("database not found: {database_name}"),
)
}
Err(e) => {
return ManagementResponse::err(
id.to_string(),
format!("Failed to check database: {e}"),
)
}
}
let collection_names = match ctx.storage.list_collections(database_name).await {
Ok(collections) => collections,
Err(e) => {
return ManagementResponse::err(
id.to_string(),
format!("Failed to list collections: {e}"),
)
}
};
let mut collections = Vec::with_capacity(collection_names.len());
for collection_name in collection_names {
let documents = match ctx.storage.find_all(database_name, &collection_name).await {
Ok(docs) => docs
.into_iter()
.map(|doc| bson_doc_to_json(&doc))
.collect::<Vec<_>>(),
Err(e) => {
return ManagementResponse::err(
id.to_string(),
format!("Failed to export collection '{collection_name}': {e}"),
)
}
};
let indexes = match ctx
.storage
.get_indexes(database_name, &collection_name)
.await
{
Ok(specs) => specs
.into_iter()
.map(|doc| bson_doc_to_json(&doc))
.collect::<Vec<_>>(),
Err(_) => Vec::new(),
};
collections.push(serde_json::json!({
"name": collection_name,
"documents": documents,
"indexes": indexes,
}));
}
ManagementResponse::ok(
id.to_string(),
serde_json::json!({
"format": "smartdb.database.export.v1",
"databaseName": database_name,
"exportedAtMs": now_ms(),
"collections": collections,
}),
)
}
async fn handle_import_database(
id: &str,
params: &serde_json::Value,
db: &Option<RustDb>,
) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
None => {
return ManagementResponse::err(id.to_string(), "Server is not running".to_string())
}
};
let ctx = d.ctx();
let database_name = match string_param(params, "databaseName") {
Ok(value) => value,
Err(message) => return ManagementResponse::err(id.to_string(), message),
};
if let Err(message) = validate_database_name(database_name) {
return ManagementResponse::err(id.to_string(), message);
}
let source = match params.get("source") {
Some(value) => value,
None => {
return ManagementResponse::err(
id.to_string(),
"Missing 'source' parameter".to_string(),
)
}
};
let source_collections = match source.get("collections").and_then(|value| value.as_array()) {
Some(collections) => collections,
None => {
return ManagementResponse::err(
id.to_string(),
"source.collections must be an array".to_string(),
)
}
};
if let Err(e) = ctx.storage.drop_database(database_name).await {
return ManagementResponse::err(
id.to_string(),
format!("Failed to clear database before import: {e}"),
);
}
remove_database_indexes(ctx, database_name);
if let Err(e) = ctx.storage.create_database(database_name).await {
if !is_already_exists(&e.to_string()) {
return ManagementResponse::err(
id.to_string(),
format!("Failed to create database: {e}"),
);
}
}
let mut imported_collections = 0usize;
let mut imported_documents = 0usize;
for collection in source_collections {
let collection_name = match collection.get("name").and_then(|value| value.as_str()) {
Some(value) => value,
None => {
return ManagementResponse::err(
id.to_string(),
"source collection is missing a string 'name'".to_string(),
)
}
};
if let Err(message) = validate_collection_name(collection_name) {
return ManagementResponse::err(id.to_string(), message);
}
if let Err(e) = ctx
.storage
.create_collection(database_name, collection_name)
.await
{
if !is_already_exists(&e.to_string()) {
return ManagementResponse::err(
id.to_string(),
format!("Failed to create collection '{collection_name}': {e}"),
);
}
}
if let Some(documents) = collection
.get("documents")
.and_then(|value| value.as_array())
{
for document_value in documents {
let document = match json_to_bson_doc(document_value) {
Ok(document) => document,
Err(message) => {
return ManagementResponse::err(
id.to_string(),
format!("Invalid document in '{collection_name}': {message}"),
)
}
};
if let Err(e) = ctx
.storage
.insert_one(database_name, collection_name, document)
.await
{
return ManagementResponse::err(
id.to_string(),
format!("Failed to import document into '{collection_name}': {e}"),
);
}
imported_documents += 1;
}
}
if let Some(indexes) = collection.get("indexes").and_then(|value| value.as_array()) {
for index_value in indexes {
let index_doc = match json_to_bson_doc(index_value) {
Ok(document) => document,
Err(message) => {
return ManagementResponse::err(
id.to_string(),
format!("Invalid index in '{collection_name}': {message}"),
)
}
};
let name = index_doc.get_str("name").unwrap_or("_id_").to_string();
if let Err(e) = ctx
.storage
.save_index(database_name, collection_name, &name, index_doc)
.await
{
return ManagementResponse::err(
id.to_string(),
format!("Failed to import index '{name}' into '{collection_name}': {e}"),
);
}
}
}
imported_collections += 1;
}
ManagementResponse::ok(
id.to_string(),
serde_json::json!({
"databaseName": database_name,
"collections": imported_collections,
"documents": imported_documents,
}),
)
}
fn handle_get_oplog(
id: &str,
params: &serde_json::Value,
@@ -270,7 +818,9 @@ fn handle_get_oplog(
) -> ManagementResponse {
let d = match db.as_ref() {
Some(d) => d,
-None => return ManagementResponse::err(id.to_string(), "Server is not running".to_string()),
+None => {
+return ManagementResponse::err(id.to_string(), "Server is not running".to_string())
+}
};
let ctx = d.ctx();
@@ -555,6 +1105,129 @@ async fn handle_get_documents(
)
}
async fn database_and_collection_counts(ctx: &rustdb_commands::CommandContext) -> (usize, u64) {
let databases = ctx.storage.list_databases().await.unwrap_or_default();
let mut collections = 0u64;
for database in &databases {
if let Ok(database_collections) = ctx.storage.list_collections(database).await {
collections += database_collections.len() as u64;
}
}
(databases.len(), collections)
}
fn remove_database_indexes(ctx: &rustdb_commands::CommandContext, database_name: &str) {
let prefix = format!("{}.", database_name);
let keys_to_remove: Vec<String> = ctx
.indexes
.iter()
.filter(|entry| entry.key().starts_with(&prefix))
.map(|entry| entry.key().clone())
.collect();
for key in keys_to_remove {
ctx.indexes.remove(&key);
}
}
fn tenant_descriptor_json(user: &rustdb_auth::AuthenticatedUser) -> serde_json::Value {
serde_json::json!({
"databaseName": user.database.clone(),
"username": user.username.clone(),
"roles": user.roles.clone(),
"authSource": user.database.clone(),
})
}
fn string_param<'a>(params: &'a serde_json::Value, key: &str) -> Result<&'a str, String> {
params
.get(key)
.and_then(|value| value.as_str())
.ok_or_else(|| format!("Missing '{key}' parameter"))
}
fn roles_param(params: &serde_json::Value) -> Result<Vec<String>, String> {
let Some(value) = params.get("roles") else {
return Ok(vec!["readWrite".to_string(), "dbAdmin".to_string()]);
};
let roles = value
.as_array()
.ok_or_else(|| "roles must be an array of strings".to_string())?;
let mut result = Vec::with_capacity(roles.len());
for role in roles {
let Some(role_name) = role.as_str() else {
return Err("roles must be an array of strings".to_string());
};
if role_name.is_empty() {
return Err("roles must not contain empty role names".to_string());
}
result.push(role_name.to_string());
}
Ok(result)
}
fn validate_database_name(name: &str) -> Result<(), String> {
if name.is_empty() {
return Err("databaseName must not be empty".to_string());
}
if name == "."
|| name == ".."
|| name.contains('/')
|| name.contains('\\')
|| name.contains('\0')
{
return Err(format!(
"databaseName contains invalid path characters: {name}"
));
}
Ok(())
}
fn validate_collection_name(name: &str) -> Result<(), String> {
if name.is_empty() {
return Err("collection name must not be empty".to_string());
}
if name == "."
|| name == ".."
|| name.contains('/')
|| name.contains('\\')
|| name.contains('\0')
{
return Err(format!(
"collection name contains invalid path characters: {name}"
));
}
Ok(())
}
fn validate_username(username: &str) -> Result<(), String> {
if username.is_empty() {
return Err("username must not be empty".to_string());
}
if username.contains('\0') {
return Err("username must not contain NUL bytes".to_string());
}
Ok(())
}
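The name and username checks above guard the tenant management commands against path traversal into the file-backed storage directory. A client-side mirror can fail fast before an IPC round trip; a minimal TypeScript sketch (hypothetical helper, not part of the published API — the Rust-side validation remains authoritative):

```typescript
// Hypothetical client-side mirror of the Rust validate_* helpers above.
// Returns an error message, or null when the name is acceptable.
function validateName(kind: string, name: string): string | null {
  if (name.length === 0) {
    return `${kind} must not be empty`;
  }
  if (
    name === '.' ||
    name === '..' ||
    name.includes('/') ||
    name.includes('\\') ||
    name.includes('\0')
  ) {
    return `${kind} contains invalid path characters: ${name}`;
  }
  return null;
}
```

The same predicate covers database names, collection names, and (minus the path checks) usernames, which is why the Rust side keeps three thin wrappers with distinct error messages.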
fn is_already_exists(message: &str) -> bool {
message.contains("AlreadyExists") || message.contains("already exists")
}
fn json_to_bson_doc(value: &serde_json::Value) -> Result<Document, String> {
let bson_value: Bson = serde_json::from_value(value.clone()).map_err(|e| e.to_string())?;
match bson_value {
Bson::Document(document) => Ok(document),
_ => Err("expected BSON document".to_string()),
}
}
fn now_ms() -> u64 {
std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap_or_default()
.as_millis() as u64
}
/// Convert a BSON Document to a serde_json::Value.
fn bson_doc_to_json(doc: &bson::Document) -> serde_json::Value {
    // Use bson's built-in relaxed extended JSON serialization.
+5
View File
@@ -88,6 +88,11 @@ tap.test('auth: should authenticate valid credentials', async () => {
  await authedClient.connect();
  const result = await authedClient.db('admin').command({ ping: 1 });
  expect(result.ok).toEqual(1);
const status = await authedClient.db('admin').command({ connectionStatus: 1 });
expect(status.ok).toEqual(1);
expect(status.authInfo.authenticatedUsers[0]).toEqual({ user: 'root', db: 'admin' });
expect(status.authInfo.authenticatedUserRoles[0]).toEqual({ role: 'root', db: 'admin' });
});

tap.test('auth: should allow CRUD after authentication', async () => {
+232
View File
@@ -0,0 +1,232 @@
import { expect, tap } from '@git.zone/tstest/tapbundle';
import * as smartdb from '../ts/index.js';
import { MongoClient } from 'mongodb';
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
let server: smartdb.SmartdbServer;
let tmpDir: string;
let storagePath: string;
let usersPath: string;
const port = 27129;
const openedClients: MongoClient[] = [];
let tenantA: smartdb.ISmartDbDatabaseTenantDescriptor;
let tenantB: smartdb.ISmartDbDatabaseTenantDescriptor;
let exportedTenantA: smartdb.ISmartDbDatabaseExport;
function makeTmpDir(): string {
return fs.mkdtempSync(path.join(os.tmpdir(), 'smartdb-tenants-test-'));
}
function cleanTmpDir(dir: string): void {
if (fs.existsSync(dir)) {
fs.rmSync(dir, { recursive: true, force: true });
}
}
async function connect(uri: string): Promise<MongoClient> {
const client = new MongoClient(uri, {
directConnection: true,
serverSelectionTimeoutMS: 5000,
});
await client.connect();
openedClients.push(client);
return client;
}
async function expectConnectionToFail(uri: string): Promise<void> {
const client = new MongoClient(uri, {
directConnection: true,
serverSelectionTimeoutMS: 5000,
});
let threw = false;
try {
await client.connect();
await client.db('tenant_a').command({ ping: 1 });
} catch {
threw = true;
} finally {
await client.close().catch(() => undefined);
}
expect(threw).toBeTrue();
}
async function closeOpenedClients(): Promise<void> {
while (openedClients.length > 0) {
const client = openedClients.pop();
await client?.close().catch(() => undefined);
}
}
function createServer(): smartdb.SmartdbServer {
return new smartdb.SmartdbServer({
port,
storage: 'file',
storagePath,
auth: {
enabled: true,
usersPath,
scramIterations: 4096,
users: [
{
username: 'root',
password: 'secret',
database: 'admin',
roles: ['root'],
},
],
},
});
}
tap.test('tenants: should start durable authenticated service', async () => {
tmpDir = makeTmpDir();
storagePath = path.join(tmpDir, 'data');
usersPath = path.join(tmpDir, 'users.json');
server = createServer();
await server.start();
expect(server.running).toBeTrue();
});
tap.test('tenants: should create isolated database tenants', async () => {
tenantA = await server.createDatabaseTenant({
databaseName: 'tenant_a',
username: 'tenant_a_user',
password: 'tenant-a-pass-1',
});
tenantB = await server.createDatabaseTenant({
databaseName: 'tenant_b',
username: 'tenant_b_user',
password: 'tenant-b-pass-1',
});
expect(tenantA.databaseName).toEqual('tenant_a');
expect(tenantA.authSource).toEqual('tenant_a');
expect(tenantA.roles.includes('readWrite')).toBeTrue();
expect(tenantA.roles.includes('dbAdmin')).toBeTrue();
expect(typeof tenantA.mongodbUri).toEqual('string');
const tenants = await server.listDatabaseTenants();
expect(tenants.some((tenant) => tenant.databaseName === 'tenant_a')).toBeTrue();
expect(tenants.some((tenant) => tenant.databaseName === 'tenant_b')).toBeTrue();
const descriptor = await server.getDatabaseTenantDescriptor({
databaseName: 'tenant_a',
username: 'tenant_a_user',
});
expect(descriptor.username).toEqual('tenant_a_user');
});
tap.test('tenants: should work with official MongoDB driver and enforce auth isolation', async () => {
const clientA = await connect(tenantA.mongodbUri!);
const clientB = await connect(tenantB.mongodbUri!);
const ping = await clientA.db('tenant_a').command({ ping: 1 });
expect(ping.ok).toEqual(1);
await clientA.db('tenant_a').collection('notes').insertOne({ title: 'tenant a note' });
await clientA.db('tenant_a').collection('notes').createIndex({ title: 1 });
await clientB.db('tenant_b').collection('notes').insertOne({ title: 'tenant b note' });
let threw = false;
try {
await clientA.db('tenant_b').collection('notes').findOne({ title: 'tenant b note' });
} catch (err: any) {
threw = true;
expect(err.code).toEqual(13);
}
expect(threw).toBeTrue();
});
tap.test('tenants: should expose health and metrics for readiness checks', async () => {
const health = await server.getHealth();
expect(health.running).toBeTrue();
expect(health.storagePath).toEqual(storagePath);
expect(health.authEnabled).toBeTrue();
expect(health.databaseCount >= 2).toBeTrue();
expect(health.collectionCount >= 2).toBeTrue();
const metrics = await server.getMetrics();
expect(metrics.authEnabled).toBeTrue();
expect(metrics.databases >= 2).toBeTrue();
expect(metrics.collections >= 2).toBeTrue();
});
tap.test('tenants: should rotate password without restart', async () => {
const oldUri = tenantA.mongodbUri!;
await closeOpenedClients();
tenantA = await server.rotateDatabaseTenantPassword({
username: 'tenant_a_user',
password: 'tenant-a-pass-2',
});
expect(typeof tenantA.mongodbUri).toEqual('string');
await expectConnectionToFail(oldUri);
const rotatedClient = await connect(tenantA.mongodbUri!);
const doc = await rotatedClient.db('tenant_a').collection('notes').findOne({ title: 'tenant a note' });
expect(doc).toBeTruthy();
});
tap.test('tenants: should persist runtime users and file-backed data across restart', async () => {
await closeOpenedClients();
await server.stop();
server = createServer();
await server.start();
const clientA = await connect(tenantA.mongodbUri!);
const clientB = await connect(tenantB.mongodbUri!);
const docA = await clientA.db('tenant_a').collection('notes').findOne({ title: 'tenant a note' });
const docB = await clientB.db('tenant_b').collection('notes').findOne({ title: 'tenant b note' });
expect(docA).toBeTruthy();
expect(docB).toBeTruthy();
});
tap.test('tenants: should export and restore one database without unrelated tenants', async () => {
exportedTenantA = await server.exportDatabase({ databaseName: 'tenant_a' });
expect(exportedTenantA.databaseName).toEqual('tenant_a');
expect(exportedTenantA.collections.length).toEqual(1);
expect(JSON.stringify(exportedTenantA).includes('tenant b note')).toBeFalse();
await closeOpenedClients();
const deleteResult = await server.deleteDatabaseTenant({
databaseName: 'tenant_a',
username: 'tenant_a_user',
});
expect(deleteResult.databaseDropped).toBeTrue();
expect(deleteResult.deletedUsers).toEqual(1);
await expectConnectionToFail(tenantA.mongodbUri!);
const importResult = await server.importDatabase({
databaseName: 'tenant_a',
source: exportedTenantA,
});
expect(importResult.databaseName).toEqual('tenant_a');
expect(importResult.documents).toEqual(1);
tenantA = await server.createDatabaseTenant({
databaseName: 'tenant_a',
username: 'tenant_a_user',
password: 'tenant-a-pass-3',
});
const restoredClient = await connect(tenantA.mongodbUri!);
const restoredDoc = await restoredClient.db('tenant_a').collection('notes').findOne({ title: 'tenant a note' });
expect(restoredDoc).toBeTruthy();
const clientB = await connect(tenantB.mongodbUri!);
const unrelatedDoc = await clientB.db('tenant_b').collection('notes').findOne({ title: 'tenant b note' });
expect(unrelatedDoc).toBeTruthy();
});
tap.test('tenants: cleanup', async () => {
await closeOpenedClients();
await server.stop();
expect(server.running).toBeFalse();
cleanTmpDir(tmpDir);
});
export default tap.start();
+60 -15
View File
@@ -44,7 +44,7 @@ tap.test('transactions: should still support explicit sessions', async () => {
  expect(end.ok).toEqual(1);
});

-tap.test('transactions: should reject raw transaction-scoped writes before mutation', async () => {
+tap.test('transactions: should reject transaction-scoped writes without txnNumber before mutation', async () => {
  const db = client.db('txntest');
  const coll = db.collection('docs');
  await coll.insertOne({ key: 'outside', value: 1 });
@@ -59,8 +59,8 @@ tap.test('transactions: should reject raw transaction-scoped writes before mutat
    });
  } catch (err: any) {
    threw = true;
-    expect(err.code).toEqual(20);
-    expect(err.codeName).toEqual('IllegalOperation');
+    expect(err.code).toEqual(14);
+    expect(err.codeName).toEqual('TypeMismatch');
  }
  expect(threw).toBeTrue();
@@ -68,44 +68,89 @@ tap.test('transactions: should reject raw transaction-scoped writes before mutat
  expect(await coll.countDocuments({ key: 'outside' })).toEqual(1);
});

-tap.test('transactions: official driver transaction should fail without committing writes', async () => {
+tap.test('transactions: official driver transaction should commit buffered writes', async () => {
  const coll = client.db('txntest').collection('driverdocs');
  await coll.insertOne({ key: 'outside-driver', value: 0 });

  const session = client.startSession();
-  let threw = false;
  try {
    session.startTransaction();
    await coll.insertOne({ key: 'inside-driver', value: 1 }, { session });
    const inTxn = await coll.findOne({ key: 'inside-driver' }, { session });
    expect(inTxn).toBeTruthy();
    expect(await coll.countDocuments({ key: 'inside-driver' })).toEqual(0);
    await session.commitTransaction();
-  } catch (err: any) {
-    threw = true;
-    expect(err.code).toEqual(20);
-    expect(err.codeName).toEqual('IllegalOperation');
-    await session.abortTransaction().catch(() => undefined);
  } finally {
    await session.endSession();
  }
-  expect(threw).toBeTrue();
-  expect(await coll.countDocuments({ key: 'inside-driver' })).toEqual(0);
+  expect(await coll.countDocuments({ key: 'inside-driver' })).toEqual(1);
  expect(await coll.countDocuments({ key: 'outside-driver' })).toEqual(1);
});

-tap.test('transactions: commit and abort commands should be explicit unsupported errors', async () => {
+tap.test('transactions: abort should discard buffered writes', async () => {
const coll = client.db('txntest').collection('abortdocs');
const session = client.startSession();
try {
session.startTransaction();
await coll.insertOne({ key: 'abort-me', value: 1 }, { session });
expect(await coll.findOne({ key: 'abort-me' }, { session })).toBeTruthy();
await session.abortTransaction();
} finally {
await session.endSession();
}
expect(await coll.findOne({ key: 'abort-me' })).toBeNull();
});
tap.test('transactions: update and delete should commit atomically', async () => {
const coll = client.db('txntest').collection('mutations');
await coll.insertMany([
{ key: 'update-me', value: 1 },
{ key: 'delete-me', value: 2 },
]);
const session = client.startSession();
try {
session.startTransaction();
await coll.updateOne({ key: 'update-me' }, { $set: { value: 10 } }, { session });
await coll.deleteOne({ key: 'delete-me' }, { session });
expect((await coll.findOne({ key: 'update-me' }, { session }))!.value).toEqual(10);
expect(await coll.findOne({ key: 'delete-me' }, { session })).toBeNull();
expect((await coll.findOne({ key: 'update-me' }))!.value).toEqual(1);
expect(await coll.findOne({ key: 'delete-me' })).toBeTruthy();
await session.commitTransaction();
} finally {
await session.endSession();
}
expect((await coll.findOne({ key: 'update-me' }))!.value).toEqual(10);
expect(await coll.findOne({ key: 'delete-me' })).toBeNull();
});
tap.test('transactions: commit and abort without active transaction should be explicit errors', async () => {
  for (const command of [{ commitTransaction: 1 }, { abortTransaction: 1 }]) {
    let threw = false;
    try {
      await client.db('admin').command(command);
    } catch (err: any) {
      threw = true;
-      expect(err.code).toEqual(20);
-      expect(err.codeName).toEqual('IllegalOperation');
+      expect(err.code).toEqual(251);
+      expect(err.codeName).toEqual('NoSuchTransaction');
    }
    expect(threw).toBeTrue();
  }
});
tap.test('transactions: serverStatus should expose transaction and oplog metrics', async () => {
const status = await client.db('admin').command({ serverStatus: 1 });
expect(status.ok).toEqual(1);
expect(status.transactions.currentActive).toEqual(0);
expect(status.logicalSessionRecordCache.activeSessionsCount).toBeGreaterThanOrEqual(0);
expect(status.oplog.totalEntries).toBeGreaterThan(0);
});
tap.test('transactions: cleanup', async () => {
  await client.close();
  await server.stop();
+1 -1
View File
@@ -3,6 +3,6 @@
 */
export const commitinfo = {
  name: '@push.rocks/smartdb',
-  version: '2.7.1',
+  version: '2.9.0',
  description: 'A MongoDB-compatible embedded database server with wire protocol support, backed by a high-performance Rust engine.'
}
+10
View File
@@ -22,4 +22,14 @@ export type {
  ICollectionInfo,
  IDocumentsResult,
  ISmartDbMetrics,
ISmartDbHealth,
ISmartDbDatabaseTenantInput,
ISmartDbDeleteDatabaseTenantInput,
ISmartDbRotateDatabaseTenantPasswordInput,
ISmartDbDatabaseTenantDescriptor,
ISmartDbDeleteDatabaseTenantResult,
ISmartDbDatabaseExportCollection,
ISmartDbDatabaseExport,
ISmartDbImportDatabaseInput,
ISmartDbImportDatabaseResult,
} from './ts_smartdb/index.js';
+10
View File
@@ -21,4 +21,14 @@ export type {
  ICollectionInfo,
  IDocumentsResult,
  ISmartDbMetrics,
ISmartDbHealth,
ISmartDbDatabaseTenantInput,
ISmartDbDeleteDatabaseTenantInput,
ISmartDbRotateDatabaseTenantPasswordInput,
ISmartDbDatabaseTenantDescriptor,
ISmartDbDeleteDatabaseTenantResult,
ISmartDbDatabaseExportCollection,
ISmartDbDatabaseExport,
ISmartDbImportDatabaseInput,
ISmartDbImportDatabaseResult,
} from './rust-db-bridge.js';
+143 -1
View File
@@ -76,9 +76,80 @@ export interface ISmartDbMetrics {
  collections: number;
  oplogEntries: number;
  oplogCurrentSeq: number;
sessions: number;
activeTransactions: number;
authEnabled: boolean;
authUsers: number;
  uptimeSeconds: number;
}
export interface ISmartDbHealth {
running: boolean;
storage?: 'memory' | 'file';
storagePath?: string;
authEnabled?: boolean;
authUsers?: number;
usersPathConfigured?: boolean;
databaseCount: number;
collectionCount: number;
uptimeSeconds?: number;
}
export interface ISmartDbDatabaseTenantInput {
databaseName: string;
username: string;
password: string;
roles?: string[];
}
export interface ISmartDbDeleteDatabaseTenantInput {
databaseName: string;
username?: string;
}
export interface ISmartDbRotateDatabaseTenantPasswordInput {
username: string;
password: string;
}
export interface ISmartDbDatabaseTenantDescriptor {
databaseName: string;
username: string;
roles: string[];
authSource: string;
mongodbUri?: string;
}
export interface ISmartDbDeleteDatabaseTenantResult {
databaseName: string;
deletedUsers: number;
databaseDropped: boolean;
}
export interface ISmartDbDatabaseExportCollection {
name: string;
documents: Record<string, any>[];
indexes: Record<string, any>[];
}
export interface ISmartDbDatabaseExport {
format: 'smartdb.database.export.v1';
databaseName: string;
exportedAtMs: number;
collections: ISmartDbDatabaseExportCollection[];
}
export interface ISmartDbImportDatabaseInput {
databaseName: string;
source: ISmartDbDatabaseExport;
}
export interface ISmartDbImportDatabaseResult {
databaseName: string;
collections: number;
documents: number;
}
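The export format above is a single relaxed Extended JSON document. An illustrative snapshot (all values hypothetical) shows the shape that `importDatabase` expects as its `source`:

```typescript
// Illustrative ISmartDbDatabaseExport snapshot; the ObjectId hex string and
// collection contents are made up for the example.
const snapshot = {
  format: 'smartdb.database.export.v1' as const,
  databaseName: 'tenant_a',
  exportedAtMs: Date.now(),
  collections: [
    {
      name: 'notes',
      documents: [{ _id: { $oid: '65f0c0ffee65f0c0ffee65f0' }, title: 'tenant a note' }],
      indexes: [{ name: 'title_1', key: { title: 1 } }],
    },
  ],
};

// Total document count across all collections, mirroring what
// ISmartDbImportDatabaseResult.documents would report after a restore.
const totalDocuments = snapshot.collections.reduce((sum, c) => sum + c.documents.length, 0);
```

A restore would then pass the snapshot through unchanged, e.g. `server.importDatabase({ databaseName: 'tenant_a', source: snapshot })`.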
/**
 * Type-safe command definitions for the RustDb IPC protocol.
 */
@@ -86,7 +157,36 @@ type TSmartDbCommands = {
  start: { params: { config: ISmartDbRustConfig }; result: { connectionUri: string } };
  stop: { params: Record<string, never>; result: void };
  getStatus: { params: Record<string, never>; result: { running: boolean } };
getHealth: { params: Record<string, never>; result: ISmartDbHealth };
  getMetrics: { params: Record<string, never>; result: ISmartDbMetrics };
createDatabaseTenant: {
params: ISmartDbDatabaseTenantInput;
result: ISmartDbDatabaseTenantDescriptor;
};
deleteDatabaseTenant: {
params: ISmartDbDeleteDatabaseTenantInput;
result: ISmartDbDeleteDatabaseTenantResult;
};
rotateDatabaseTenantPassword: {
params: ISmartDbRotateDatabaseTenantPasswordInput;
result: ISmartDbDatabaseTenantDescriptor;
};
listDatabaseTenants: {
params: Record<string, never>;
result: { tenants: ISmartDbDatabaseTenantDescriptor[] };
};
getDatabaseTenantDescriptor: {
params: { databaseName: string; username: string };
result: ISmartDbDatabaseTenantDescriptor;
};
exportDatabase: {
params: { databaseName: string };
result: ISmartDbDatabaseExport;
};
importDatabase: {
params: ISmartDbImportDatabaseInput;
result: ISmartDbImportDatabaseResult;
};
  getOpLog: {
    params: { sinceSeq?: number; limit?: number; db?: string; collection?: string };
    result: IOpLogResult;
@@ -198,7 +298,7 @@ export class RustDbBridge extends EventEmitter {
      envVarName: 'SMARTDB_RUST_BINARY',
      platformPackagePrefix: '@push.rocks/smartdb',
      localPaths: buildLocalPaths(),
-      maxPayloadSize: 10 * 1024 * 1024, // 10 MB
+      maxPayloadSize: 100 * 1024 * 1024, // database exports/imports can be larger than command replies
    });

    // Forward events from the inner bridge
@@ -247,6 +347,48 @@ export class RustDbBridge extends EventEmitter {
    return this.bridge.sendCommand('getMetrics', {} as Record<string, never>) as Promise<ISmartDbMetrics>;
  }
public async getHealth(): Promise<ISmartDbHealth> {
return this.bridge.sendCommand('getHealth', {} as Record<string, never>) as Promise<ISmartDbHealth>;
}
public async createDatabaseTenant(
params: ISmartDbDatabaseTenantInput,
): Promise<ISmartDbDatabaseTenantDescriptor> {
return this.bridge.sendCommand('createDatabaseTenant', params) as Promise<ISmartDbDatabaseTenantDescriptor>;
}
public async deleteDatabaseTenant(
params: ISmartDbDeleteDatabaseTenantInput,
): Promise<ISmartDbDeleteDatabaseTenantResult> {
return this.bridge.sendCommand('deleteDatabaseTenant', params) as Promise<ISmartDbDeleteDatabaseTenantResult>;
}
public async rotateDatabaseTenantPassword(
params: ISmartDbRotateDatabaseTenantPasswordInput,
): Promise<ISmartDbDatabaseTenantDescriptor> {
return this.bridge.sendCommand('rotateDatabaseTenantPassword', params) as Promise<ISmartDbDatabaseTenantDescriptor>;
}
public async listDatabaseTenants(): Promise<ISmartDbDatabaseTenantDescriptor[]> {
const result = await this.bridge.sendCommand('listDatabaseTenants', {} as Record<string, never>) as { tenants: ISmartDbDatabaseTenantDescriptor[] };
return result.tenants;
}
public async getDatabaseTenantDescriptor(params: {
databaseName: string;
username: string;
}): Promise<ISmartDbDatabaseTenantDescriptor> {
return this.bridge.sendCommand('getDatabaseTenantDescriptor', params) as Promise<ISmartDbDatabaseTenantDescriptor>;
}
public async exportDatabase(params: { databaseName: string }): Promise<ISmartDbDatabaseExport> {
return this.bridge.sendCommand('exportDatabase', params) as Promise<ISmartDbDatabaseExport>;
}
public async importDatabase(params: ISmartDbImportDatabaseInput): Promise<ISmartDbImportDatabaseResult> {
return this.bridge.sendCommand('importDatabase', params) as Promise<ISmartDbImportDatabaseResult>;
}
  public async getOpLog(params: {
    sinceSeq?: number;
    limit?: number;
+110
View File
@@ -8,6 +8,15 @@ import type {
  ICollectionInfo,
  IDocumentsResult,
  ISmartDbMetrics,
ISmartDbHealth,
ISmartDbDatabaseTenantInput,
ISmartDbDeleteDatabaseTenantInput,
ISmartDbRotateDatabaseTenantPasswordInput,
ISmartDbDatabaseTenantDescriptor,
ISmartDbDeleteDatabaseTenantResult,
ISmartDbDatabaseExport,
ISmartDbImportDatabaseInput,
ISmartDbImportDatabaseResult,
} from '../rust-db-bridge.js';

/**
@@ -204,6 +213,85 @@ export class SmartdbServer {
    return this.options.host ?? '127.0.0.1';
  }
/**
* Create an isolated database/user pair for an application tenant.
*/
async createDatabaseTenant(
params: ISmartDbDatabaseTenantInput,
): Promise<ISmartDbDatabaseTenantDescriptor> {
const descriptor = await this.bridge.createDatabaseTenant(params);
return this.withTenantMongoUri(descriptor, params.password);
}
/**
* Delete a tenant database and its tenant user(s).
*/
async deleteDatabaseTenant(
params: ISmartDbDeleteDatabaseTenantInput,
): Promise<ISmartDbDeleteDatabaseTenantResult> {
return this.bridge.deleteDatabaseTenant(params);
}
/**
* Rotate a tenant user's password without restarting the server.
*/
async rotateDatabaseTenantPassword(
params: ISmartDbRotateDatabaseTenantPasswordInput,
): Promise<ISmartDbDatabaseTenantDescriptor> {
const descriptor = await this.bridge.rotateDatabaseTenantPassword(params);
return this.withTenantMongoUri(descriptor, params.password);
}
/**
* List known database tenants.
*/
async listDatabaseTenants(): Promise<ISmartDbDatabaseTenantDescriptor[]> {
return this.bridge.listDatabaseTenants();
}
/**
* Get a tenant descriptor without exposing a password.
*/
async getDatabaseTenantDescriptor(params: {
databaseName: string;
username: string;
}): Promise<ISmartDbDatabaseTenantDescriptor> {
return this.bridge.getDatabaseTenantDescriptor(params);
}
/**
* Export one database as an Extended JSON snapshot.
*/
async exportDatabase(params: { databaseName: string }): Promise<ISmartDbDatabaseExport> {
return this.bridge.exportDatabase(params);
}
/**
* Replace one database with a previously exported snapshot.
*/
async importDatabase(params: ISmartDbImportDatabaseInput): Promise<ISmartDbImportDatabaseResult> {
return this.bridge.importDatabase(params);
}
/**
* Get readiness/health details for long-running service use.
*/
async getHealth(): Promise<ISmartDbHealth> {
if (!this.isRunning) {
return {
running: false,
storage: this.options.storage,
storagePath: this.options.storage === 'file' ? this.options.storagePath : this.options.persistPath,
authEnabled: Boolean(this.options.auth?.enabled),
authUsers: this.options.auth?.users?.length ?? 0,
usersPathConfigured: Boolean(this.options.auth?.usersPath),
databaseCount: 0,
collectionCount: 0,
};
}
return this.bridge.getHealth();
}
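`getHealth()` returns a degraded payload (with `running: false` and zeroed counts) when the server is stopped, so a readiness probe can inspect fields instead of catching errors. A minimal sketch of such a probe — the `isReady` policy here is illustrative, not part of the API:

```typescript
// Structural subset of ISmartDbHealth needed for a readiness decision.
interface HealthLike {
  running: boolean;
  authEnabled?: boolean;
  databaseCount: number;
  collectionCount: number;
}

// Hypothetical readiness policy: up, and authenticated when auth is required.
function isReady(health: HealthLike, requireAuth: boolean): boolean {
  if (!health.running) return false;
  if (requireAuth && !health.authEnabled) return false;
  return true;
}
```

A Kubernetes-style probe handler could call `server.getHealth()` and return 200/503 based on `isReady`.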
  // --- OpLog / Debug API ---

  /**
@@ -258,4 +346,26 @@ export class SmartdbServer {
  async getMetrics(): Promise<ISmartDbMetrics> {
    return this.bridge.getMetrics();
  }
private withTenantMongoUri(
descriptor: ISmartDbDatabaseTenantDescriptor,
password: string,
): ISmartDbDatabaseTenantDescriptor {
return {
...descriptor,
mongodbUri: this.buildTenantMongoUri(descriptor.databaseName, descriptor.username, password),
};
}
private buildTenantMongoUri(databaseName: string, username: string, password: string): string {
const host = this.options.socketPath
? encodeURIComponent(this.options.socketPath)
: `${this.options.host ?? '127.0.0.1'}:${this.options.port ?? 27017}`;
const auth = `${encodeURIComponent(username)}:${encodeURIComponent(password)}@`;
const query = new URLSearchParams({ authSource: databaseName });
if (this.options.tls?.enabled) {
query.set('tls', 'true');
}
return `mongodb://${auth}${host}/${encodeURIComponent(databaseName)}?${query.toString()}`;
}
}
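`buildTenantMongoUri` percent-encodes credentials and pins `authSource` to the tenant database, which is what makes the per-tenant auth isolation in the tests work. A standalone sketch of the TCP branch (the `socketPath` and TLS branches are omitted here):

```typescript
// Standalone sketch of the tenant URI construction above (TCP host form only).
function buildTenantMongoUri(
  host: string,
  port: number,
  databaseName: string,
  username: string,
  password: string,
): string {
  // Credentials may contain URI-reserved characters; percent-encode them.
  const auth = `${encodeURIComponent(username)}:${encodeURIComponent(password)}@`;
  // authSource must be the tenant database, since the user lives there.
  const query = new URLSearchParams({ authSource: databaseName });
  return `mongodb://${auth}${host}:${port}/${encodeURIComponent(databaseName)}?${query.toString()}`;
}

const uri = buildTenantMongoUri('127.0.0.1', 27017, 'tenant_a', 'tenant_a_user', 'p@ss/1');
// → mongodb://tenant_a_user:p%40ss%2F1@127.0.0.1:27017/tenant_a?authSource=tenant_a
```

Encoding matters for the rotation flow: a rotated password with `@` or `/` in it would otherwise corrupt the returned `mongodbUri`.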