feat(transactions): add single-node transaction support with session-aware reads, commits, aborts, and transaction metrics
@@ -1,5 +1,13 @@
 # Changelog
 
+## 2026-04-29 - 2.8.0 - feat(transactions)
+add single-node transaction support with session-aware reads, commits, aborts, and transaction metrics
+
+- Buffer insert, update, delete, find, count, distinct, and findAndModify operations inside driver sessions and apply them on commit with write-conflict checks
+- Return MongoDB-compatible NoSuchTransaction and WriteConflict errors for transaction lifecycle failures
+- Expose authenticated users in connectionStatus and add session, transaction, auth, and oplog data to serverStatus and management metrics
+- Document transaction support and extend bridge metrics typings and integration tests accordingly
+
 ## 2026-04-29 - 2.7.1 - fix(repo)
 no changes to commit
 
@@ -290,10 +290,12 @@ await client.connect();
 
 TLS is available for TCP listeners. `getConnectionUri()` includes `?tls=true` when TLS is enabled; pass the trusted CA to the MongoDB driver with `tlsCAFile`, `ca`, or `secureContext`.
 
-Authentication verifies SCRAM credentials, denies unauthenticated commands, and enforces command-level built-in roles for supported operations.
+Authentication verifies SCRAM credentials, denies unauthenticated commands, and enforces command-level built-in roles for supported operations. `connectionStatus` reports the authenticated users and roles for the current socket.
 
 Supported built-in role names are `root`, `read`, `readWrite`, `dbAdmin`, `userAdmin`, `clusterMonitor`, plus `readAnyDatabase`, `readWriteAnyDatabase`, `dbAdminAnyDatabase`, and `userAdminAnyDatabase`. When `usersPath` is set, SmartDB persists SCRAM credential material atomically and does not store plaintext passwords.
 
+Single-node transactions are supported through official MongoDB driver sessions. Writes with `startTransaction` and `autocommit: false` are buffered per logical session, reads inside the transaction see the buffered overlay, `commitTransaction` applies the write set with conflict checks, and `abortTransaction` discards it.
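The buffered-overlay semantics described in that paragraph can be modeled in a short TypeScript sketch (illustrative only, not the engine's actual code; `TxnBuffer` and all field names are invented for this example):

```typescript
// Simplified model of per-session transaction buffering with a commit-time
// write-conflict check. The "store" stands in for committed storage; each
// committed document carries a version counter.
type Doc = { _id: string; [key: string]: unknown };

class TxnBuffer {
  // _id -> new document, or null for a buffered delete
  private writes = new Map<string, Doc | null>();
  // _id -> committed version observed the first time the txn touched the doc
  private readVersions = new Map<string, number>();

  constructor(private store: Map<string, { doc: Doc; version: number }>) {}

  // Reads see the buffered overlay on top of committed state.
  read(id: string): Doc | null {
    if (this.writes.has(id)) return this.writes.get(id) ?? null;
    const entry = this.store.get(id);
    if (entry && !this.readVersions.has(id)) {
      this.readVersions.set(id, entry.version);
    }
    return entry ? entry.doc : null;
  }

  write(doc: Doc): void {
    this.read(doc._id); // remember the version this write was based on
    this.writes.set(doc._id, doc);
  }

  delete(id: string): void {
    this.read(id);
    this.writes.set(id, null);
  }

  // Commit applies the write set only if no touched document changed underneath.
  commit(): void {
    for (const [id, seen] of this.readVersions) {
      const current = this.store.get(id)?.version ?? 0;
      if (current !== seen) throw new Error(`WriteConflict on ${id}`);
    }
    for (const [id, doc] of this.writes) {
      if (doc === null) this.store.delete(id);
      else {
        const version = (this.store.get(id)?.version ?? 0) + 1;
        this.store.set(id, { doc, version });
      }
    }
    this.writes.clear();
  }

  // Abort simply discards the buffered write set.
  abort(): void {
    this.writes.clear();
    this.readVersions.clear();
  }
}
```

On commit, any document whose committed version moved past the version first observed inside the transaction triggers the write-conflict error, which is the point at which a client would normally retry the whole transaction.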
+
 Basic user management commands are available for authenticated users with `root` or `userAdmin` privileges:
 
 ```typescript
@@ -317,7 +319,7 @@ await client.db('admin').command({ usersInfo: 'reader' });
 | `port` | `number` | Configured port (TCP mode) |
 | `host` | `string` | Configured host (TCP mode) |
 | `socketPath` | `string \| undefined` | Socket path (socket mode) |
-| `getMetrics()` | `Promise<ISmartDbMetrics>` | Server metrics (db/collection counts, uptime) |
+| `getMetrics()` | `Promise<ISmartDbMetrics>` | Server metrics (db/collection counts, sessions, transactions, auth, uptime) |
 | `getOpLog(params?)` | `Promise<IOpLogResult>` | Query oplog entries with optional filters |
 | `getOpLogStats()` | `Promise<IOpLogStats>` | Aggregate oplog statistics |
 | `revertToSeq(seq, dryRun?)` | `Promise<IRevertResult>` | Revert to a specific oplog sequence |
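Based on the table above and the commit description, the extended metrics object might be shaped roughly like this (a hypothetical sketch; the real `ISmartDbMetrics` field names may differ, so consult the actual typings):

```typescript
// Hypothetical shape of the extended metrics surface. Field names here are
// illustrative, inferred from "db/collection counts, sessions, transactions,
// auth, uptime" — not the actual ISmartDbMetrics interface.
interface MetricsSketch {
  databases: number;
  collections: number;
  uptimeSeconds: number;
  activeSessions: number;      // new in 2.8.0: session data
  activeTransactions: number;  // new in 2.8.0: transaction data
  auth: { enabled: boolean; users: number }; // new in 2.8.0: auth data
}

// Small consumer showing how monitoring code might summarize the object.
function describeMetrics(m: MetricsSketch): string {
  return `${m.databases} dbs, ${m.collections} collections, ` +
    `${m.activeTransactions} active txns, auth=${m.auth.enabled}`;
}
```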
@@ -531,7 +533,7 @@ const names = await collection.distinct('name');
 | **Aggregation** | `aggregate`, `count`, `distinct` |
 | **Indexes** | `createIndexes`, `dropIndexes`, `listIndexes` |
 | **Sessions** | `startSession`, `endSessions` |
-| **Transactions** | `commitTransaction`, `abortTransaction` |
+| **Transactions** | `startTransaction`, `commitTransaction`, `abortTransaction` through driver sessions |
 | **Admin** | `ping`, `listDatabases`, `listCollections`, `drop`, `dropDatabase`, `create`, `serverStatus`, `buildInfo`, `dbStats`, `collStats`, `connectionStatus`, `currentOp`, `renameCollection` |
 
 Compatible with wire protocol versions 0–21 (driver versions 3.6 through 7.0).
@@ -540,7 +542,7 @@ Compatible with wire protocol versions 0–21 (driver versions 3.6 through 7.0).
 
 ## Rust Crate Architecture 🦀
 
-The Rust engine is organized as a Cargo workspace with 8 focused crates:
+The Rust engine is organized as a Cargo workspace with 9 focused crates:
 
 | Crate | Purpose |
 |---|---|
@@ -551,6 +553,7 @@ The Rust engine is organized as a Cargo workspace with 8 focused crates:
 | `rustdb-storage` | Storage backends (memory, file), OpLog with point-in-time replay |
 | `rustdb-index` | B-tree/hash indexes, query planner (IXSCAN/COLLSCAN) |
 | `rustdb-txn` | Transaction + session management with snapshot isolation |
+| `rustdb-auth` | SCRAM-SHA-256 credential handling, user metadata persistence, RBAC checks |
 | `rustdb-commands` | 40+ command handlers wiring everything together |
 
 Cross-compiled for `linux_amd64` and `linux_arm64` via [@git.zone/tsrust](https://www.npmjs.com/package/@git.zone/tsrust).
@@ -563,6 +566,7 @@ The Bitcask-style file storage engine includes several reliability features:
 - **CRC32 checksums** — every record is integrity-checked on read
 - **Automatic compaction** — dead records are reclaimed when they exceed 50% of file size, runs on startup and after every write
 - **Hint file staleness detection** — the hint file records the data file size at write time; if data.rdb changed since (e.g. crash after a delete), the engine falls back to a full scan to ensure tombstones are not lost
+- **Torn-tail repair** — startup scans `data.rdb` to the last valid record, truncates invalid trailing bytes, and preserves all verified records after interrupted writes
 - **Stale socket cleanup** — orphaned `/tmp/smartdb-*.sock` files from crashed instances are automatically cleaned up on startup
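The torn-tail scan added above can be sketched in TypeScript (an illustrative model with an assumed length-prefixed record layout and a stand-in checksum, not the engine's Rust implementation):

```typescript
// Illustrative torn-tail repair: scan length-prefixed, checksummed records
// and return the byte offset of the end of the last fully valid record.
// Assumed layout for this sketch: [len u32 LE][checksum u32 LE][payload].

function checksum(payload: Buffer): number {
  // Stand-in for CRC32: a simple additive checksum, good enough for the demo.
  let sum = 0;
  for (let i = 0; i < payload.length; i++) sum = (sum + payload[i]) >>> 0;
  return sum;
}

function encodeRecord(payload: Buffer): Buffer {
  const header = Buffer.alloc(8);
  header.writeUInt32LE(payload.length, 0);
  header.writeUInt32LE(checksum(payload), 4);
  return Buffer.concat([header, payload]);
}

// Everything past the returned offset is a torn tail: a record whose header
// or payload was cut short by an interrupted write, or whose checksum fails.
// On startup the engine would truncate the file to this offset.
function validPrefixLength(data: Buffer): number {
  let offset = 0;
  while (offset + 8 <= data.length) {
    const len = data.readUInt32LE(offset);
    const stored = data.readUInt32LE(offset + 4);
    const end = offset + 8 + len;
    if (end > data.length) break; // record truncated mid-write
    const payload = data.subarray(offset + 8, end);
    if (checksum(payload) !== stored) break; // corrupted record
    offset = end;
  }
  return offset;
}
```

Truncating to the last valid offset keeps every verified record while dropping only the bytes an interrupted write left behind.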
 
 ### Data Integrity CLI 🔍
 
@@ -150,6 +150,13 @@ impl AuthEngine {
         self.enabled
     }
 
+    pub fn user_count(&self) -> usize {
+        self.users
+            .read()
+            .unwrap_or_else(|poisoned| poisoned.into_inner())
+            .len()
+    }
+
     pub fn supported_mechanisms(&self, namespace_user: &str) -> Vec<String> {
         let Some((database, username)) = namespace_user.split_once('.') else {
             return Vec::new();
@@ -18,6 +18,12 @@ pub enum CommandError {
     #[error("transaction error: {0}")]
     TransactionError(String),
 
+    #[error("no such transaction: {0}")]
+    NoSuchTransaction(String),
+
+    #[error("write conflict: {0}")]
+    WriteConflict(String),
+
     #[error("namespace not found: {0}")]
     NamespaceNotFound(String),
 
@@ -52,6 +58,8 @@ impl CommandError {
             CommandError::StorageError(_) => (1, "InternalError"),
             CommandError::IndexError(_) => (27, "IndexNotFound"),
             CommandError::TransactionError(_) => (112, "WriteConflict"),
+            CommandError::NoSuchTransaction(_) => (251, "NoSuchTransaction"),
+            CommandError::WriteConflict(_) => (112, "WriteConflict"),
             CommandError::NamespaceNotFound(_) => (26, "NamespaceNotFound"),
             CommandError::NamespaceExists(_) => (48, "NamespaceExists"),
             CommandError::DuplicateKey(_) => (11000, "DuplicateKey"),
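The numeric codes in this mapping are MongoDB's standard server error codes, so drivers surface them unchanged. A client can branch on them numerically; a hedged TypeScript sketch (`errorCodes` and `isWriteConflict` are illustrative helpers written for this example, not part of SmartDB's API):

```typescript
// Error-code table as mapped by the engine above (MongoDB's standard codes).
const errorCodes: Record<string, number> = {
  InternalError: 1,
  IndexNotFound: 27,
  NamespaceNotFound: 26,
  NamespaceExists: 48,
  WriteConflict: 112,
  NoSuchTransaction: 251,
  DuplicateKey: 11000,
};

// A commit that fails with WriteConflict (112) means the transaction lost a
// race and the usual client strategy is to retry the whole transaction;
// NoSuchTransaction (251) means the session has no active transaction at all.
function isWriteConflict(code: number): boolean {
  return code === errorCodes.WriteConflict;
}
```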
@@ -79,7 +87,15 @@ impl From<rustdb_storage::StorageError> for CommandError {
 
 impl From<rustdb_txn::TransactionError> for CommandError {
     fn from(e: rustdb_txn::TransactionError) -> Self {
-        CommandError::TransactionError(e.to_string())
+        match e {
+            rustdb_txn::TransactionError::NotFound(message) => {
+                CommandError::NoSuchTransaction(message)
+            }
+            rustdb_txn::TransactionError::WriteConflict(message) => {
+                CommandError::WriteConflict(message)
+            }
+            other => CommandError::TransactionError(other.to_string()),
+        }
     }
 }
 
@@ -2,8 +2,9 @@ use bson::{doc, Bson, Document};
 use rustdb_index::IndexEngine;
 use tracing::debug;
 
-use crate::context::{CommandContext, CursorState};
+use crate::context::{CommandContext, ConnectionState, CursorState};
 use crate::error::{CommandError, CommandResult};
+use crate::transactions;
 
 /// Handle various admin / diagnostic / session / auth commands.
 pub async fn handle(
@@ -11,6 +12,7 @@ pub async fn handle(
     db: &str,
     ctx: &CommandContext,
     command_name: &str,
+    connection: &ConnectionState,
 ) -> CommandResult<Document> {
     match command_name {
         "ping" => Ok(doc! { "ok": 1.0 }),
@@ -24,13 +26,7 @@ pub async fn handle(
             "ok": 1.0,
         }),
 
-        "serverStatus" => Ok(doc! {
-            "host": "localhost",
-            "version": "7.0.0",
-            "process": "rustdb",
-            "uptime": ctx.start_time.elapsed().as_secs() as i64,
-            "ok": 1.0,
-        }),
+        "serverStatus" => handle_server_status(ctx),
 
         "hostInfo" => Ok(doc! {
             "system": {
@@ -90,13 +86,7 @@ pub async fn handle(
             "codeName": "CommandNotFound",
         }),
 
-        "connectionStatus" => Ok(doc! {
-            "authInfo": {
-                "authenticatedUsers": [],
-                "authenticatedUserRoles": [],
-            },
-            "ok": 1.0,
-        }),
+        "connectionStatus" => Ok(handle_connection_status(connection)),
 
         "createUser" => handle_create_user(cmd, db, ctx).await,
 
@@ -156,9 +146,9 @@ pub async fn handle(
             Ok(doc! { "ok": 1.0 })
         }
 
-        "commitTransaction" | "abortTransaction" => Err(CommandError::IllegalOperation(
-            "Transaction numbers are only allowed on a replica set member or mongos".into(),
-        )),
+        "commitTransaction" => transactions::commit_transaction_command(cmd, ctx).await,
+        "abortTransaction" => transactions::abort_transaction_command(cmd, ctx),
 
         // Auth stubs - accept silently.
         "saslStart" => Ok(doc! {
@@ -195,6 +185,72 @@ pub async fn handle(
     }
 }
 
+fn handle_server_status(ctx: &CommandContext) -> CommandResult<Document> {
+    let oplog_stats = ctx.oplog.stats();
+    Ok(doc! {
+        "host": "localhost",
+        "version": "7.0.0",
+        "process": "rustdb",
+        "uptime": ctx.start_time.elapsed().as_secs() as i64,
+        "connections": {
+            "current": 0_i32,
+            "available": i32::MAX,
+        },
+        "logicalSessionRecordCache": {
+            "activeSessionsCount": ctx.sessions.len() as i64,
+        },
+        "transactions": {
+            "currentActive": ctx.transactions.len() as i64,
+        },
+        "oplog": {
+            "currentSeq": oplog_stats.current_seq as i64,
+            "totalEntries": oplog_stats.total_entries as i64,
+            "oldestSeq": oplog_stats.oldest_seq as i64,
+            "entriesByOp": {
+                "insert": oplog_stats.inserts as i64,
+                "update": oplog_stats.updates as i64,
+                "delete": oplog_stats.deletes as i64,
+            },
+        },
+        "security": {
+            "authentication": ctx.auth.enabled(),
+            "users": ctx.auth.user_count() as i64,
+        },
+        "ok": 1.0,
+    })
+}
+
+fn handle_connection_status(connection: &ConnectionState) -> Document {
+    let authenticated_users: Vec<Bson> = connection
+        .authenticated_users
+        .iter()
+        .map(|user| {
+            Bson::Document(doc! {
+                "user": user.username.clone(),
+                "db": user.database.clone(),
+            })
+        })
+        .collect();
+
+    let authenticated_roles: Vec<Bson> = connection
+        .authenticated_users
+        .iter()
+        .flat_map(|user| {
+            user.roles
+                .iter()
+                .map(|role| Bson::Document(role_to_document(&user.database, role)))
+        })
+        .collect();
+
+    doc! {
+        "authInfo": {
+            "authenticatedUsers": authenticated_users,
+            "authenticatedUserRoles": authenticated_roles,
+        },
+        "ok": 1.0,
+    }
+}
 
 async fn handle_create_user(
     cmd: &Document,
     db: &str,
@@ -7,6 +7,7 @@ use tracing::debug;
 
 use crate::context::CommandContext;
 use crate::error::{CommandError, CommandResult};
+use crate::transactions;
 
 /// Handle the `delete` command.
 pub async fn handle(
@@ -36,6 +37,7 @@ pub async fn handle(
     );
 
     let ns_key = format!("{}.{}", db, coll);
+    let txn_id = transactions::active_transaction_id(ctx, cmd);
     let mut total_deleted: i32 = 0;
     let mut write_errors: Vec<Document> = Vec::new();
 
@@ -69,7 +71,7 @@ pub async fn handle(
         _ => 0, // default: delete all matches
     };
 
-    match delete_matching(db, coll, &ns_key, &filter, limit, ctx).await {
+    match delete_matching(db, coll, &ns_key, &filter, limit, ctx, txn_id.as_deref()).await {
         Ok(count) => {
             total_deleted += count;
         }
@@ -114,7 +116,24 @@ async fn delete_matching(
     filter: &Document,
     limit: i32,
     ctx: &CommandContext,
+    txn_id: Option<&str>,
 ) -> Result<i32, CommandError> {
+    if let Some(txn_id) = txn_id {
+        let docs = transactions::load_transaction_docs(ctx, txn_id, db, coll).await?;
+        let matched = QueryMatcher::filter(&docs, filter);
+        let to_delete: &[Document] = if limit == 1 && !matched.is_empty() {
+            &matched[..1]
+        } else {
+            &matched
+        };
+
+        for doc in to_delete {
+            transactions::record_delete(ctx, txn_id, db, coll, doc.clone()).await?;
+        }
+
+        return Ok(to_delete.len() as i32);
+    }
+
     // Check if the collection exists; if not, nothing to delete.
     match ctx.storage.collection_exists(db, coll).await {
         Ok(false) => return Ok(0),
@@ -7,6 +7,7 @@ use rustdb_query::{QueryMatcher, sort_documents, apply_projection, distinct_valu
 
 use crate::context::{CommandContext, CursorState};
 use crate::error::{CommandError, CommandResult};
+use crate::transactions;
 
 /// Atomic counter for generating unique cursor IDs.
 static CURSOR_ID_COUNTER: AtomicI64 = AtomicI64::new(1);
@@ -80,9 +81,14 @@ pub async fn handle(
     let limit = get_i64(cmd, "limit").unwrap_or(0).max(0) as usize;
     let batch_size = get_i32(cmd, "batchSize").unwrap_or(101).max(0) as usize;
     let single_batch = get_bool(cmd, "singleBatch").unwrap_or(false);
+    let txn_id = transactions::active_transaction_id(ctx, cmd);
 
     // If the collection does not exist, return an empty cursor.
-    let exists = ctx.storage.collection_exists(db, coll).await?;
+    let exists = if txn_id.is_some() {
+        true
+    } else {
+        ctx.storage.collection_exists(db, coll).await?
+    };
     if !exists {
         return Ok(doc! {
             "cursor": {
@@ -96,7 +102,9 @@ pub async fn handle(
 
     // Try index-accelerated lookup.
     let index_key = format!("{}.{}", db, coll);
-    let docs = if let Some(idx_ref) = ctx.indexes.get(&index_key) {
+    let docs = if let Some(ref txn_id) = txn_id {
+        transactions::load_transaction_docs(ctx, txn_id, db, coll).await?
+    } else if let Some(idx_ref) = ctx.indexes.get(&index_key) {
         if let Some(candidate_ids) = idx_ref.find_candidate_ids(&filter) {
             debug!(
                 ns = %ns,
@@ -298,9 +306,14 @@ pub async fn handle_count(
     ctx: &CommandContext,
 ) -> CommandResult<Document> {
     let coll = get_str(cmd, "count").unwrap_or("unknown");
+    let txn_id = transactions::active_transaction_id(ctx, cmd);
 
     // Check collection existence.
-    let exists = ctx.storage.collection_exists(db, coll).await?;
+    let exists = if txn_id.is_some() {
+        true
+    } else {
+        ctx.storage.collection_exists(db, coll).await?
+    };
     if !exists {
         return Ok(doc! { "n": 0_i64, "ok": 1.0 });
     }
@@ -309,6 +322,23 @@ pub async fn handle_count(
     let skip = get_i64(cmd, "skip").unwrap_or(0).max(0) as usize;
     let limit = get_i64(cmd, "limit").unwrap_or(0).max(0) as usize;
 
+    if let Some(ref txn_id) = txn_id {
+        let docs = transactions::load_transaction_docs(ctx, txn_id, db, coll).await?;
+        let filtered = if query.is_empty() {
+            docs
+        } else {
+            QueryMatcher::filter(&docs, &query)
+        };
+        let mut n = filtered.len().saturating_sub(skip);
+        if limit > 0 {
+            n = n.min(limit);
+        }
+        return Ok(doc! {
+            "n": n as i64,
+            "ok": 1.0,
+        });
+    }
+
     let count: u64 = if query.is_empty() && skip == 0 && limit == 0 {
         // Fast path: use storage-level count.
         ctx.storage.count(db, coll).await?
@@ -352,15 +382,24 @@ pub async fn handle_distinct(
     let key = get_str(cmd, "key").ok_or_else(|| {
         CommandError::InvalidArgument("distinct requires a 'key' field".into())
     })?;
+    let txn_id = transactions::active_transaction_id(ctx, cmd);
 
     // Check collection existence.
-    let exists = ctx.storage.collection_exists(db, coll).await?;
+    let exists = if txn_id.is_some() {
+        true
+    } else {
+        ctx.storage.collection_exists(db, coll).await?
+    };
     if !exists {
         return Ok(doc! { "values": [], "ok": 1.0 });
     }
 
     let query = get_document(cmd, "query").cloned();
-    let docs = ctx.storage.find_all(db, coll).await?;
+    let docs = if let Some(txn_id) = txn_id {
+        transactions::load_transaction_docs(ctx, &txn_id, db, coll).await?
+    } else {
+        ctx.storage.find_all(db, coll).await?
+    };
     let values = distinct_values(&docs, key, query.as_ref());
 
     Ok(doc! {
@@ -6,6 +6,7 @@ use tracing::debug;
 
 use crate::context::CommandContext;
 use crate::error::{CommandError, CommandResult};
+use crate::transactions;
 
 /// Handle the `insert` command.
 pub async fn handle(
@@ -48,8 +49,13 @@ pub async fn handle(
         "insert command"
     );
 
-    // Auto-create database and collection if they don't exist.
-    ensure_collection_exists(db, coll, ctx).await?;
+    let txn_id = transactions::active_transaction_id(ctx, cmd);
+
+    // Auto-create database and collection if they don't exist. Transactional
+    // writes defer collection creation until commit so abort remains clean.
+    if txn_id.is_none() {
+        ensure_collection_exists(db, coll, ctx).await?;
+    }
 
     let ns_key = format!("{}.{}", db, coll);
     let mut inserted_count: i32 = 0;
@@ -84,6 +90,24 @@ pub async fn handle(
             }
         }
 
+        if let Some(ref txn_id) = txn_id {
+            match transactions::record_insert(ctx, txn_id, db, coll, doc.clone()).await {
+                Ok(_) => inserted_count += 1,
+                Err(e) => {
+                    write_errors.push(doc! {
+                        "index": idx as i32,
+                        "code": 11000_i32,
+                        "codeName": "DuplicateKey",
+                        "errmsg": e.to_string(),
+                    });
+                    if ordered {
+                        break;
+                    }
+                }
+            }
+            continue;
+        }
+
         // Attempt storage insert.
         match ctx.storage.insert_one(db, coll, doc.clone()).await {
             Ok(id_str) => {
@@ -7,6 +7,7 @@ use tracing::debug;
 
 use crate::context::CommandContext;
 use crate::error::{CommandError, CommandResult};
+use crate::transactions;
 
 /// Handle `update` and `findAndModify` commands.
 pub async fn handle(
@@ -47,8 +48,12 @@ async fn handle_update(
 
     debug!(db = db, collection = coll, count = updates.len(), "update command");
 
-    // Auto-create database and collection if needed.
-    ensure_collection_exists(db, coll, ctx).await?;
+    let txn_id = transactions::active_transaction_id(ctx, cmd);
+
+    // Transactional writes defer namespace creation until commit.
+    if txn_id.is_none() {
+        ensure_collection_exists(db, coll, ctx).await?;
+    }
 
     let ns_key = format!("{}.{}", db, coll);
 
@@ -136,7 +141,7 @@ async fn handle_update(
     });
 
     // Load all documents and filter.
-    let all_docs = load_filtered_docs(db, coll, &filter, &ns_key, ctx).await?;
+    let all_docs = load_filtered_docs(db, coll, &filter, &ns_key, ctx, txn_id.as_deref()).await?;
 
     if all_docs.is_empty() && upsert {
         // Upsert: create a new document.
@@ -166,6 +171,30 @@ async fn handle_update(
             }
         }
 
+        if let Some(ref txn_id) = txn_id {
+            match transactions::record_insert(ctx, txn_id, db, coll, updated.clone()).await {
+                Ok(_) => {
+                    total_n += 1;
+                    upserted_list.push(doc! {
+                        "index": idx as i32,
+                        "_id": new_id,
+                    });
+                }
+                Err(e) => {
+                    write_errors.push(doc! {
+                        "index": idx as i32,
+                        "code": 1_i32,
+                        "codeName": "InternalError",
+                        "errmsg": e.to_string(),
+                    });
+                    if ordered {
+                        break;
+                    }
+                }
+            }
+            continue;
+        }
+
         // Insert the new document.
         match ctx.storage.insert_one(db, coll, updated.clone()).await {
             Ok(id_str) => {
@@ -258,6 +287,38 @@ async fn handle_update(
         }
 
         let id_str = extract_id_string(matched_doc);
+        if let Some(ref txn_id) = txn_id {
+            match transactions::record_update(
+                ctx,
+                txn_id,
+                db,
+                coll,
+                matched_doc.clone(),
+                updated_doc.clone(),
+            )
+            .await
+            {
+                Ok(_) => {
+                    total_n += 1;
+                    if matched_doc != &updated_doc {
+                        total_n_modified += 1;
+                    }
+                }
+                Err(e) => {
+                    write_errors.push(doc! {
+                        "index": idx as i32,
+                        "code": 1_i32,
+                        "codeName": "InternalError",
+                        "errmsg": e.to_string(),
+                    });
+                    if ordered {
+                        break;
+                    }
+                }
+            }
+            continue;
+        }
+
         match ctx
             .storage
             .update_by_id(db, coll, &id_str, updated_doc.clone())
@@ -407,8 +468,12 @@ async fn handle_find_and_modify(
         .collect()
     });
 
-    // Auto-create database and collection.
-    ensure_collection_exists(db, coll, ctx).await?;
+    let txn_id = transactions::active_transaction_id(ctx, cmd);
+
+    // Transactional writes defer namespace creation until commit.
+    if txn_id.is_none() {
+        ensure_collection_exists(db, coll, ctx).await?;
+    }
 
     let ns_key = format!("{}.{}", db, coll);
 
@@ -416,7 +481,7 @@ async fn handle_find_and_modify(
     drop(ctx.get_or_init_index_engine(db, coll).await);
 
     // Load and filter documents.
-    let mut matched = load_filtered_docs(db, coll, &query, &ns_key, ctx).await?;
+    let mut matched = load_filtered_docs(db, coll, &query, &ns_key, ctx, txn_id.as_deref()).await?;
 
     // Sort if specified.
     if let Some(ref sort_spec) = sort {
@@ -430,6 +495,21 @@ async fn handle_find_and_modify(
|
|||||||
// Remove operation.
|
// Remove operation.
|
||||||
if let Some(ref doc) = target {
|
if let Some(ref doc) = target {
|
||||||
let id_str = extract_id_string(doc);
|
let id_str = extract_id_string(doc);
|
||||||
|
if let Some(ref txn_id) = txn_id {
|
||||||
|
transactions::record_delete(ctx, txn_id, db, coll, doc.clone()).await?;
|
||||||
|
|
||||||
|
let value = apply_fields_projection(doc, &fields);
|
||||||
|
|
||||||
|
return Ok(doc! {
|
||||||
|
"value": value,
|
||||||
|
"lastErrorObject": {
|
||||||
|
"n": 1_i32,
|
||||||
|
"updatedExisting": false,
|
||||||
|
},
|
||||||
|
"ok": 1.0,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
ctx.storage.delete_by_id(db, coll, &id_str).await?;
|
ctx.storage.delete_by_id(db, coll, &id_str).await?;
|
||||||
|
|
||||||
// Record in oplog.
|
// Record in oplog.
|
||||||
@@ -503,6 +583,35 @@ async fn handle_find_and_modify(
|
|||||||
}
|
}
|
||||||
|
|
||||||
let id_str = extract_id_string(&original_doc);
|
let id_str = extract_id_string(&original_doc);
|
||||||
|
if let Some(ref txn_id) = txn_id {
|
||||||
|
transactions::record_update(
|
||||||
|
ctx,
|
||||||
|
txn_id,
|
||||||
|
db,
|
||||||
|
coll,
|
||||||
|
original_doc.clone(),
|
||||||
|
updated_doc.clone(),
|
||||||
|
)
|
||||||
|
.await?;
|
||||||
|
|
||||||
|
let return_doc = if return_new {
|
||||||
|
&updated_doc
|
||||||
|
} else {
|
||||||
|
&original_doc
|
||||||
|
};
|
||||||
|
|
||||||
|
let value = apply_fields_projection(return_doc, &fields);
|
||||||
|
|
||||||
|
return Ok(doc! {
|
||||||
|
"value": value,
|
||||||
|
"lastErrorObject": {
|
||||||
|
"n": 1_i32,
|
||||||
|
"updatedExisting": true,
|
||||||
|
},
|
||||||
|
"ok": 1.0,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
ctx.storage
|
ctx.storage
|
||||||
.update_by_id(db, coll, &id_str, updated_doc.clone())
|
.update_by_id(db, coll, &id_str, updated_doc.clone())
|
||||||
.await?;
|
.await?;
|
||||||
@@ -563,6 +672,26 @@ async fn handle_find_and_modify(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
if let Some(ref txn_id) = txn_id {
|
||||||
|
transactions::record_insert(ctx, txn_id, db, coll, updated_doc.clone()).await?;
|
||||||
|
|
||||||
|
let value = if return_new {
|
||||||
|
apply_fields_projection(&updated_doc, &fields)
|
||||||
|
} else {
|
||||||
|
Bson::Null
|
||||||
|
};
|
||||||
|
|
||||||
|
return Ok(doc! {
|
||||||
|
"value": value,
|
||||||
|
"lastErrorObject": {
|
||||||
|
"n": 1_i32,
|
||||||
|
"updatedExisting": false,
|
||||||
|
"upserted": upserted_id,
|
||||||
|
},
|
||||||
|
"ok": 1.0,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
let inserted_id_str = ctx.storage
|
let inserted_id_str = ctx.storage
|
||||||
.insert_one(db, coll, updated_doc.clone())
|
.insert_one(db, coll, updated_doc.clone())
|
||||||
.await?;
|
.await?;
|
||||||
@@ -622,7 +751,17 @@ async fn load_filtered_docs(
|
|||||||
filter: &Document,
|
filter: &Document,
|
||||||
ns_key: &str,
|
ns_key: &str,
|
||||||
ctx: &CommandContext,
|
ctx: &CommandContext,
|
||||||
|
txn_id: Option<&str>,
|
||||||
) -> CommandResult<Vec<Document>> {
|
) -> CommandResult<Vec<Document>> {
|
||||||
|
if let Some(txn_id) = txn_id {
|
||||||
|
let docs = transactions::load_transaction_docs(ctx, txn_id, db, coll).await?;
|
||||||
|
return if filter.is_empty() {
|
||||||
|
Ok(docs)
|
||||||
|
} else {
|
||||||
|
Ok(QueryMatcher::filter(&docs, filter))
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
// Try to use index to narrow candidates.
|
// Try to use index to narrow candidates.
|
||||||
let candidate_ids: Option<HashSet<String>> = ctx
|
let candidate_ids: Option<HashSet<String>> = ctx
|
||||||
.indexes
|
.indexes
|
||||||
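The transactional branch of `load_filtered_docs` serves reads from a per-transaction snapshot with buffered writes overlaid, instead of reading storage directly. A minimal standalone sketch of that overlay idea, using illustrative types rather than the crate's real snapshot API:

```rust
use std::collections::HashMap;

// Hypothetical overlay op: a buffered write is either an upsert of a new
// value or a delete of the document (keys stand in for _id strings).
#[derive(Debug, Clone, PartialEq)]
enum Overlay {
    Upsert(i64),
    Delete,
}

// Apply the transaction's write overlay on top of the base snapshot, so
// reads inside the transaction see its own uncommitted writes.
fn overlay_snapshot(
    base: &HashMap<u32, i64>,
    writes: &HashMap<u32, Overlay>,
) -> HashMap<u32, i64> {
    let mut view = base.clone();
    for (id, w) in writes {
        match w {
            Overlay::Upsert(v) => {
                view.insert(*id, *v);
            }
            Overlay::Delete => {
                view.remove(id);
            }
        }
    }
    view
}

fn main() {
    let base = HashMap::from([(1, 10), (2, 20)]);
    let writes = HashMap::from([(2, Overlay::Delete), (3, Overlay::Upsert(30))]);
    let view = overlay_snapshot(&base, &writes);
    assert_eq!(view.get(&1), Some(&10)); // untouched doc still visible
    assert_eq!(view.get(&2), None);      // deleted in txn: hidden from txn reads
    assert_eq!(view.get(&3), Some(&30)); // inserted in txn: visible only in txn
    println!("ok");
}
```

This is why a `find` with a session sees the transaction's inserts while a sessionless `countDocuments` still sees the pre-transaction state, as the integration tests below exercise.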
@@ -1,6 +1,7 @@
 mod context;
 pub mod error;
 pub mod handlers;
+pub mod transactions;
 mod router;

 pub use context::{CommandContext, ConnectionState, CursorState};
@@ -8,7 +8,7 @@ use rustdb_auth::AuthAction

 use crate::context::{CommandContext, ConnectionState};
 use crate::error::CommandError;
-use crate::handlers;
+use crate::{handlers, transactions};

 /// Routes parsed wire protocol commands to the appropriate handler.
 pub struct CommandRouter {
@@ -55,11 +55,12 @@ impl CommandRouter {
            }
        }

-        if transaction_command_unsupported(command_name, &cmd.command) {
-            return CommandError::IllegalOperation(
-                "Transaction numbers are only allowed on a replica set member or mongos".into(),
-            )
-            .to_error_doc();
+        if let Err(e) = transactions::prepare_transaction_for_command(
+            &self.ctx,
+            &cmd.command,
+            command_name,
+        ) {
+            return e.to_error_doc();
        }

        // Extract session id if present, and touch the session.
@@ -136,7 +137,7 @@ impl CommandRouter {
            | "grantRolesToUser" | "revokeRolesFromUser"
            | "currentOp" | "killOp" | "top" | "profile"
            | "compact" | "reIndex" | "fsync" | "connPoolSync" => {
-                handlers::admin_handler::handle(&cmd.command, db, &self.ctx, command_name).await
+                handlers::admin_handler::handle(&cmd.command, db, &self.ctx, command_name, connection).await
            }

            // -- unknown command --
@@ -207,9 +208,3 @@ fn aggregate_writes(command: &Document) -> bool {
        _ => None,
    }).unwrap_or(false)
 }
-
-fn transaction_command_unsupported(command_name: &str, command: &Document) -> bool {
-    matches!(command_name, "commitTransaction" | "abortTransaction")
-        || matches!(command.get("startTransaction"), Some(Bson::Boolean(true)))
-        || matches!(command.get("autocommit"), Some(Bson::Boolean(false)))
-}
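The router now routes transactional commands instead of rejecting them. Drivers mark a statement as transactional with `startTransaction: true` on the first statement and `autocommit: false` on continuations; a sketch of that marker check with plain `Option<bool>` values standing in for the BSON field lookups (illustrative only):

```rust
// First statement of a transaction carries startTransaction: true.
fn starts_transaction(start_transaction: Option<bool>) -> bool {
    start_transaction == Some(true)
}

// Any transactional statement carries either the start marker or
// autocommit: false on continuation statements.
fn uses_transaction(start_transaction: Option<bool>, autocommit: Option<bool>) -> bool {
    starts_transaction(start_transaction) || autocommit == Some(false)
}

fn main() {
    assert!(uses_transaction(Some(true), None));  // first statement in a txn
    assert!(uses_transaction(None, Some(false))); // continuation statement
    assert!(!uses_transaction(None, None));       // ordinary non-transactional command
    println!("ok");
}
```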
@@ -0,0 +1,367 @@
+use bson::{doc, Bson, Document};
+use rustdb_storage::OpType;
+use rustdb_txn::{TransactionState, WriteEntry, WriteOp};
+
+use crate::context::CommandContext;
+use crate::error::{CommandError, CommandResult};
+
+pub fn command_starts_transaction(cmd: &Document) -> bool {
+    matches!(cmd.get("startTransaction"), Some(Bson::Boolean(true)))
+}
+
+pub fn command_uses_transaction(cmd: &Document) -> bool {
+    command_starts_transaction(cmd) || matches!(cmd.get("autocommit"), Some(Bson::Boolean(false)))
+}
+
+pub fn active_transaction_id(ctx: &CommandContext, cmd: &Document) -> Option<String> {
+    if !command_uses_transaction(cmd) {
+        return None;
+    }
+
+    let session_id = cmd
+        .get("lsid")
+        .and_then(rustdb_txn::SessionEngine::extract_session_id)?;
+    ctx.sessions.get_transaction_id(&session_id)
+}
+
+pub fn prepare_transaction_for_command(
+    ctx: &CommandContext,
+    cmd: &Document,
+    command_name: &str,
+) -> CommandResult<()> {
+    if matches!(command_name, "commitTransaction" | "abortTransaction") {
+        return Ok(());
+    }
+
+    let starts_transaction = command_starts_transaction(cmd);
+    let uses_transaction = command_uses_transaction(cmd);
+    if !uses_transaction {
+        return Ok(());
+    }
+
+    let session_id = session_id_from_command(cmd)?;
+    require_txn_number(cmd)?;
+    ctx.sessions.get_or_create_session(&session_id);
+
+    if starts_transaction {
+        let txn_id = ctx.transactions.start_transaction(&session_id)?;
+        ctx.sessions.start_transaction(&session_id, &txn_id)?;
+        return Ok(());
+    }
+
+    if ctx.sessions.get_transaction_id(&session_id).is_none() {
+        return Err(CommandError::NoSuchTransaction(format!(
+            "session {session_id} has no active transaction"
+        )));
+    }
+
+    Ok(())
+}
+
+pub async fn load_transaction_docs(
+    ctx: &CommandContext,
+    txn_id: &str,
+    db: &str,
+    coll: &str,
+) -> CommandResult<Vec<Document>> {
+    let ns = namespace(db, coll);
+    if !ctx.transactions.has_snapshot(txn_id, &ns) {
+        let docs = match ctx.storage.collection_exists(db, coll).await {
+            Ok(true) => ctx.storage.find_all(db, coll).await?,
+            Ok(false) => Vec::new(),
+            Err(_) => Vec::new(),
+        };
+        ctx.transactions.set_snapshot(txn_id, &ns, docs);
+    }
+
+    ctx.transactions
+        .get_snapshot(txn_id, &ns)
+        .ok_or_else(|| CommandError::NoSuchTransaction(txn_id.to_string()))
+}
+
+pub async fn record_insert(
+    ctx: &CommandContext,
+    txn_id: &str,
+    db: &str,
+    coll: &str,
+    doc: Document,
+) -> CommandResult<String> {
+    let id = document_id_string(&doc)?;
+    let docs = load_transaction_docs(ctx, txn_id, db, coll).await?;
+    if docs.iter().any(|existing| document_id_string(existing).ok().as_deref() == Some(id.as_str())) {
+        return Err(CommandError::DuplicateKey(format!(
+            "duplicate _id '{}' in transaction",
+            id
+        )));
+    }
+
+    ctx.transactions.record_write(
+        txn_id,
+        &namespace(db, coll),
+        &id,
+        WriteOp::Insert,
+        Some(doc),
+        None,
+    );
+    Ok(id)
+}
+
+pub async fn record_update(
+    ctx: &CommandContext,
+    txn_id: &str,
+    db: &str,
+    coll: &str,
+    original: Document,
+    updated: Document,
+) -> CommandResult<String> {
+    let id = document_id_string(&original)?;
+    ctx.transactions.record_write(
+        txn_id,
+        &namespace(db, coll),
+        &id,
+        WriteOp::Update,
+        Some(updated),
+        Some(original),
+    );
+    Ok(id)
+}
+
+pub async fn record_delete(
+    ctx: &CommandContext,
+    txn_id: &str,
+    db: &str,
+    coll: &str,
+    original: Document,
+) -> CommandResult<String> {
+    let id = document_id_string(&original)?;
+    ctx.transactions.record_write(
+        txn_id,
+        &namespace(db, coll),
+        &id,
+        WriteOp::Delete,
+        None,
+        Some(original),
+    );
+    Ok(id)
+}
+
+pub async fn commit_transaction_command(
+    cmd: &Document,
+    ctx: &CommandContext,
+) -> CommandResult<Document> {
+    let session_id = session_id_from_command(cmd)?;
+    let txn_id = ctx
+        .sessions
+        .get_transaction_id(&session_id)
+        .ok_or_else(|| CommandError::NoSuchTransaction(format!(
+            "session {session_id} has no active transaction"
+        )))?;
+    let state = ctx.transactions.take_transaction(&txn_id)?;
+
+    preflight_transaction(&state, ctx).await?;
+    apply_transaction(state, ctx).await?;
+    ctx.sessions.end_transaction(&session_id);
+
+    Ok(doc! { "ok": 1.0 })
+}
+
+pub fn abort_transaction_command(cmd: &Document, ctx: &CommandContext) -> CommandResult<Document> {
+    let session_id = session_id_from_command(cmd)?;
+    let txn_id = ctx
+        .sessions
+        .get_transaction_id(&session_id)
+        .ok_or_else(|| CommandError::NoSuchTransaction(format!(
+            "session {session_id} has no active transaction"
+        )))?;
+    ctx.transactions.abort_transaction(&txn_id)?;
+    ctx.sessions.end_transaction(&session_id);
+    Ok(doc! { "ok": 1.0 })
+}
+
+pub fn document_id_string(doc: &Document) -> CommandResult<String> {
+    match doc.get("_id") {
+        Some(Bson::ObjectId(oid)) => Ok(oid.to_hex()),
+        Some(Bson::String(s)) => Ok(s.clone()),
+        Some(other) => Ok(format!("{}", other)),
+        None => Err(CommandError::InvalidArgument("document missing _id field".into())),
+    }
+}
+
+fn session_id_from_command(cmd: &Document) -> CommandResult<String> {
+    cmd.get("lsid")
+        .and_then(rustdb_txn::SessionEngine::extract_session_id)
+        .ok_or_else(|| CommandError::InvalidArgument("transaction command requires lsid".into()))
+}
+
+fn require_txn_number(cmd: &Document) -> CommandResult<()> {
+    match cmd.get("txnNumber") {
+        Some(Bson::Int64(_)) | Some(Bson::Int32(_)) => Ok(()),
+        _ => Err(CommandError::InvalidArgument(
+            "transaction command requires txnNumber".into(),
+        )),
+    }
+}
+
+fn namespace(db: &str, coll: &str) -> String {
+    format!("{db}.{coll}")
+}
+
+async fn preflight_transaction(state: &TransactionState, ctx: &CommandContext) -> CommandResult<()> {
+    for (ns, writes) in &state.write_set {
+        let (db, coll) = split_namespace(ns)?;
+        drop(ctx.get_or_init_index_engine(db, coll).await);
+
+        for (doc_id, entry) in writes {
+            let current = current_doc(ctx, db, coll, doc_id).await?;
+            match entry.op {
+                WriteOp::Insert => {
+                    if current.is_some() {
+                        return Err(CommandError::DuplicateKey(format!(
+                            "duplicate _id '{}' on transaction commit",
+                            doc_id
+                        )));
+                    }
+                    if let Some(ref doc) = entry.doc {
+                        if let Some(engine) = ctx.indexes.get(ns) {
+                            engine.check_unique_constraints(doc)?;
+                        }
+                    }
+                }
+                WriteOp::Update => {
+                    assert_unchanged(doc_id, current.as_ref(), entry.original_doc.as_ref())?;
+                    if let (Some(current_doc), Some(updated_doc)) = (current.as_ref(), entry.doc.as_ref()) {
+                        if let Some(engine) = ctx.indexes.get(ns) {
+                            engine.check_unique_constraints_for_update(current_doc, updated_doc)?;
+                        }
+                    }
+                }
+                WriteOp::Delete => {
+                    assert_unchanged(doc_id, current.as_ref(), entry.original_doc.as_ref())?;
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+async fn apply_transaction(state: TransactionState, ctx: &CommandContext) -> CommandResult<()> {
+    let mut namespaces: Vec<_> = state.write_set.into_iter().collect();
+    namespaces.sort_by(|a, b| a.0.cmp(&b.0));
+
+    for (ns, writes) in namespaces {
+        let (db, coll) = split_namespace(&ns)?;
+        ensure_collection_exists(db, coll, ctx).await?;
+        drop(ctx.get_or_init_index_engine(db, coll).await);
+
+        let mut writes: Vec<(String, WriteEntry)> = writes.into_iter().collect();
+        writes.sort_by(|a, b| a.0.cmp(&b.0));
+
+        for (doc_id, entry) in writes {
+            match entry.op {
+                WriteOp::Insert => {
+                    let Some(doc) = entry.doc else { continue; };
+                    let inserted_id = ctx.storage.insert_one(db, coll, doc.clone()).await?;
+                    ctx.oplog.append(OpType::Insert, db, coll, &inserted_id, Some(doc.clone()), None);
+                    if let Some(mut engine) = ctx.indexes.get_mut(&ns) {
+                        engine.on_insert(&doc)?;
+                    }
+                }
+                WriteOp::Update => {
+                    let Some(doc) = entry.doc else { continue; };
+                    ctx.storage.update_by_id(db, coll, &doc_id, doc.clone()).await?;
+                    ctx.oplog.append(
+                        OpType::Update,
+                        db,
+                        coll,
+                        &doc_id,
+                        Some(doc.clone()),
+                        entry.original_doc.clone(),
+                    );
+                    if let (Some(mut engine), Some(ref original)) =
+                        (ctx.indexes.get_mut(&ns), entry.original_doc.as_ref())
+                    {
+                        engine.on_update(original, &doc)?;
+                    }
+                }
+                WriteOp::Delete => {
+                    ctx.storage.delete_by_id(db, coll, &doc_id).await?;
+                    ctx.oplog.append(
+                        OpType::Delete,
+                        db,
+                        coll,
+                        &doc_id,
+                        None,
+                        entry.original_doc.clone(),
+                    );
+                    if let (Some(mut engine), Some(ref original)) =
+                        (ctx.indexes.get_mut(&ns), entry.original_doc.as_ref())
+                    {
+                        engine.on_delete(original);
+                    }
+                }
+            }
+        }
+    }
+
+    Ok(())
+}
+
+async fn current_doc(
+    ctx: &CommandContext,
+    db: &str,
+    coll: &str,
+    doc_id: &str,
+) -> CommandResult<Option<Document>> {
+    match ctx.storage.collection_exists(db, coll).await {
+        Ok(true) => Ok(ctx.storage.find_by_id(db, coll, doc_id).await?),
+        Ok(false) => Ok(None),
+        Err(_) => Ok(None),
+    }
+}
+
+fn assert_unchanged(
+    doc_id: &str,
+    current: Option<&Document>,
+    original: Option<&Document>,
+) -> CommandResult<()> {
+    if current == original {
+        return Ok(());
+    }
+
+    Err(CommandError::WriteConflict(format!(
+        "document '{}' changed during transaction",
+        doc_id
+    )))
+}
+
+async fn ensure_collection_exists(
+    db: &str,
+    coll: &str,
+    ctx: &CommandContext,
+) -> CommandResult<()> {
+    if let Err(e) = ctx.storage.create_database(db).await {
+        let msg = e.to_string();
+        if !msg.contains("AlreadyExists") && !msg.contains("already exists") {
+            return Err(CommandError::StorageError(msg));
+        }
+    }
+
+    match ctx.storage.collection_exists(db, coll).await {
+        Ok(true) => Ok(()),
+        Ok(false) | Err(_) => {
+            if let Err(e) = ctx.storage.create_collection(db, coll).await {
+                let msg = e.to_string();
+                if !msg.contains("AlreadyExists") && !msg.contains("already exists") {
+                    return Err(CommandError::StorageError(msg));
+                }
+            }
+            Ok(())
+        }
+    }
+}
+
+fn split_namespace(ns: &str) -> CommandResult<(&str, &str)> {
+    ns.split_once('.')
+        .ok_or_else(|| CommandError::InvalidArgument(format!("invalid namespace '{ns}'")))
+}
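The commit's conflict model is optimistic: each buffered write captures the document as it looked when the transaction read it, and `preflight_transaction` refuses to commit if storage has since diverged from that captured original. A standalone sketch of the comparison (illustrative `Doc` type, not the crate's real `Document`):

```rust
// Hypothetical stand-in for a stored document.
#[derive(Debug, Clone, PartialEq)]
struct Doc {
    id: u32,
    value: i64,
}

// A buffered write commits only if the document it captured as `original`
// is still exactly what storage holds at commit time; None means "absent".
fn assert_unchanged(current: Option<&Doc>, original: Option<&Doc>) -> Result<(), String> {
    if current == original {
        Ok(())
    } else {
        Err("write conflict: document changed during transaction".to_string())
    }
}

fn main() {
    let original = Doc { id: 1, value: 1 };
    let unchanged = original.clone();
    let changed = Doc { id: 1, value: 99 };

    // Unchanged since the snapshot: the buffered write may be applied.
    assert!(assert_unchanged(Some(&unchanged), Some(&original)).is_ok());
    // Concurrently modified: commit must fail with a WriteConflict-style error.
    assert!(assert_unchanged(Some(&changed), Some(&original)).is_err());
    // Deleted out from under the transaction: also a conflict.
    assert!(assert_unchanged(None, Some(&original)).is_err());
    println!("ok");
}
```

In the real handler this error surfaces to drivers as the MongoDB-compatible WriteConflict code.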
@@ -170,6 +170,16 @@ impl SessionEngine {
        }
        count
    }
+
+    /// Number of currently tracked logical sessions.
+    pub fn len(&self) -> usize {
+        self.sessions.len()
+    }
+
+    /// Whether there are no tracked logical sessions.
+    pub fn is_empty(&self) -> bool {
+        self.sessions.is_empty()
+    }
 }

 impl Default for SessionEngine {
@@ -18,7 +18,7 @@ pub enum TransactionStatus {
 }

 /// Describes a write operation within a transaction.
-#[derive(Debug, Clone, PartialEq, Eq)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
 pub enum WriteOp {
     Insert,
     Update,
@@ -137,6 +137,25 @@ impl TransactionEngine {
        Ok(())
    }

+    /// Remove an active transaction and return its buffered state for an
+    /// external committer that needs to update secondary indexes and oplogs.
+    pub fn take_transaction(&self, txn_id: &str) -> TransactionResult<TransactionState> {
+        let state = self
+            .transactions
+            .remove(txn_id)
+            .map(|(_, s)| s)
+            .ok_or_else(|| TransactionError::NotFound(txn_id.to_string()))?;
+
+        if state.status != TransactionStatus::Active {
+            return Err(TransactionError::InvalidState(format!(
+                "transaction {} is {:?}, cannot commit",
+                txn_id, state.status
+            )));
+        }
+
+        Ok(state)
+    }
+
    /// Abort a transaction, discarding all buffered writes.
    pub fn abort_transaction(&self, txn_id: &str) -> TransactionResult<()> {
        let mut state = self
@@ -191,19 +210,32 @@ impl TransactionEngine {
        original: Option<Document>,
    ) {
        if let Some(mut state) = self.transactions.get_mut(txn_id) {
-            let entry = WriteEntry {
-                op,
-                doc,
-                original_doc: original,
-            };
-            state
-                .write_set
-                .entry(ns.to_string())
-                .or_default()
-                .insert(doc_id.to_string(), entry);
+            let writes = state.write_set.entry(ns.to_string()).or_default();
+            if let Some(existing) = writes.remove(doc_id) {
+                if let Some(merged) = merge_write_entry(existing, op, doc, original) {
+                    writes.insert(doc_id.to_string(), merged);
+                }
+            } else {
+                writes.insert(
+                    doc_id.to_string(),
+                    WriteEntry {
+                        op,
+                        doc,
+                        original_doc: original,
+                    },
+                );
+            }
        }
    }

+    /// Return true if the transaction already has a base snapshot for a namespace.
+    pub fn has_snapshot(&self, txn_id: &str, ns: &str) -> bool {
+        self.transactions
+            .get(txn_id)
+            .map(|state| state.snapshots.contains_key(ns))
+            .unwrap_or(false)
+    }
+
    /// Get a snapshot of documents for a namespace within a transaction,
    /// applying the write overlay (inserts, updates, deletes) on top.
    pub fn get_snapshot(&self, txn_id: &str, ns: &str) -> Option<Vec<Document>> {
@@ -270,6 +302,67 @@ impl TransactionEngine {
            state.snapshots.insert(ns.to_string(), docs);
        }
    }
+
+    /// Number of currently active transactions.
+    pub fn len(&self) -> usize {
+        self.transactions.len()
+    }
+
+    /// Whether there are no active transactions.
+    pub fn is_empty(&self) -> bool {
+        self.transactions.is_empty()
+    }
+}
+
+fn merge_write_entry(
+    existing: WriteEntry,
+    next_op: WriteOp,
+    next_doc: Option<Document>,
+    next_original: Option<Document>,
+) -> Option<WriteEntry> {
+    match (existing.op, next_op) {
+        (WriteOp::Insert, WriteOp::Update) => Some(WriteEntry {
+            op: WriteOp::Insert,
+            doc: next_doc,
+            original_doc: None,
+        }),
+        (WriteOp::Insert, WriteOp::Delete) => None,
+        (WriteOp::Insert, WriteOp::Insert) => Some(WriteEntry {
+            op: WriteOp::Insert,
+            doc: next_doc,
+            original_doc: None,
+        }),
+        (WriteOp::Update, WriteOp::Update) => Some(WriteEntry {
+            op: WriteOp::Update,
+            doc: next_doc,
+            original_doc: existing.original_doc,
+        }),
+        (WriteOp::Update, WriteOp::Delete) => Some(WriteEntry {
+            op: WriteOp::Delete,
+            doc: None,
+            original_doc: existing.original_doc,
+        }),
+        (WriteOp::Update, WriteOp::Insert) => Some(WriteEntry {
+            op: WriteOp::Update,
+            doc: next_doc,
+            original_doc: existing.original_doc,
+        }),
+        (WriteOp::Delete, WriteOp::Insert) => Some(WriteEntry {
+            op: if existing.original_doc.is_some() {
+                WriteOp::Update
+            } else {
+                WriteOp::Insert
+            },
+            doc: next_doc,
+            original_doc: existing.original_doc,
+        }),
+        (WriteOp::Delete, WriteOp::Update) => Some(WriteEntry {
+            op: WriteOp::Update,
+            doc: next_doc,
+            original_doc: existing.original_doc.or(next_original),
+        }),
+        (WriteOp::Delete, WriteOp::Delete) => Some(existing),
+    }
 }

 impl Default for TransactionEngine {
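`merge_write_entry` keeps at most one buffered entry per `_id`: successive operations on the same document collapse to their net effect. A simplified sketch of that collapse table using just the op kinds (the real merge also carries the document payloads, and treats delete-then-insert as an insert when the document did not exist before the transaction):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Op {
    Insert,
    Update,
    Delete,
}

// Collapse two successive buffered ops on the same _id to their net effect.
// None means the pair cancels out and no write needs to be committed.
fn merge(first: Op, second: Op) -> Option<Op> {
    match (first, second) {
        (Op::Insert, Op::Delete) => None,           // insert then delete: nothing happened
        (Op::Insert, _) => Some(Op::Insert),        // still a brand-new document
        (Op::Update, Op::Delete) => Some(Op::Delete),
        (Op::Update, _) => Some(Op::Update),
        (Op::Delete, Op::Delete) => Some(Op::Delete),
        (Op::Delete, _) => Some(Op::Update),        // delete then re-insert acts as an update
    }
}

fn main() {
    assert_eq!(merge(Op::Insert, Op::Delete), None);
    assert_eq!(merge(Op::Insert, Op::Update), Some(Op::Insert));
    assert_eq!(merge(Op::Update, Op::Delete), Some(Op::Delete));
    assert_eq!(merge(Op::Delete, Op::Insert), Some(Op::Update));
    println!("ok");
}
```

Collapsing here keeps the commit path simple: `apply_transaction` only ever applies one storage operation per document.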
@@ -255,6 +255,10 @@ async fn handle_get_metrics(
            "collections": total_collections,
            "oplogEntries": oplog_stats.total_entries,
            "oplogCurrentSeq": oplog_stats.current_seq,
+            "sessions": ctx.sessions.len(),
+            "activeTransactions": ctx.transactions.len(),
+            "authEnabled": ctx.auth.enabled(),
+            "authUsers": ctx.auth.user_count(),
            "uptimeSeconds": uptime_secs,
        }),
    )
@@ -88,6 +88,11 @@ tap.test('auth: should authenticate valid credentials', async () => {
   await authedClient.connect();
   const result = await authedClient.db('admin').command({ ping: 1 });
   expect(result.ok).toEqual(1);
+
+  const status = await authedClient.db('admin').command({ connectionStatus: 1 });
+  expect(status.ok).toEqual(1);
+  expect(status.authInfo.authenticatedUsers[0]).toEqual({ user: 'root', db: 'admin' });
+  expect(status.authInfo.authenticatedUserRoles[0]).toEqual({ role: 'root', db: 'admin' });
 });

 tap.test('auth: should allow CRUD after authentication', async () => {
|||||||
+60
-15
@@ -44,7 +44,7 @@ tap.test('transactions: should still support explicit sessions', async () => {
|
|||||||
expect(end.ok).toEqual(1);
|
expect(end.ok).toEqual(1);
|
||||||
});
|
});
|
||||||
|
|
||||||
tap.test('transactions: should reject raw transaction-scoped writes before mutation', async () => {
|
tap.test('transactions: should reject transaction-scoped writes without txnNumber before mutation', async () => {
|
||||||
const db = client.db('txntest');
|
const db = client.db('txntest');
|
||||||
const coll = db.collection('docs');
|
const coll = db.collection('docs');
|
||||||
await coll.insertOne({ key: 'outside', value: 1 });
|
await coll.insertOne({ key: 'outside', value: 1 });
|
||||||
@@ -59,8 +59,8 @@ tap.test('transactions: should reject raw transaction-scoped writes before mutat
|
|||||||
});
|
});
|
||||||
} catch (err: any) {
|
} catch (err: any) {
|
||||||
threw = true;
|
threw = true;
|
||||||
expect(err.code).toEqual(20);
|
expect(err.code).toEqual(14);
|
||||||
expect(err.codeName).toEqual('IllegalOperation');
|
expect(err.codeName).toEqual('TypeMismatch');
|
||||||
}
|
}
|
||||||
expect(threw).toBeTrue();
|
expect(threw).toBeTrue();
|
||||||
|
|
||||||
@@ -68,44 +68,89 @@ tap.test('transactions: should reject raw transaction-scoped writes before mutat
|
|||||||
expect(await coll.countDocuments({ key: 'outside' })).toEqual(1);
|
expect(await coll.countDocuments({ key: 'outside' })).toEqual(1);
|
||||||
});
|
});
|
||||||
|
|
||||||
tap.test('transactions: official driver transaction should fail without committing writes', async () => {
|
tap.test('transactions: official driver transaction should commit buffered writes', async () => {
|
||||||
const coll = client.db('txntest').collection('driverdocs');
|
const coll = client.db('txntest').collection('driverdocs');
|
||||||
await coll.insertOne({ key: 'outside-driver', value: 0 });
|
await coll.insertOne({ key: 'outside-driver', value: 0 });
|
||||||
const session = client.startSession();
|
const session = client.startSession();
|
||||||
|
|
||||||
let threw = false;
|
|
||||||
try {
|
try {
|
||||||
session.startTransaction();
|
session.startTransaction();
|
||||||
await coll.insertOne({ key: 'inside-driver', value: 1 }, { session });
|
await coll.insertOne({ key: 'inside-driver', value: 1 }, { session });
|
||||||
|
const inTxn = await coll.findOne({ key: 'inside-driver' }, { session });
|
||||||
|
expect(inTxn).toBeTruthy();
|
||||||
|
expect(await coll.countDocuments({ key: 'inside-driver' })).toEqual(0);
|
||||||
await session.commitTransaction();
|
await session.commitTransaction();
|
||||||
} catch (err: any) {
|
|
||||||
threw = true;
|
|
||||||
expect(err.code).toEqual(20);
|
|
||||||
expect(err.codeName).toEqual('IllegalOperation');
|
|
||||||
await session.abortTransaction().catch(() => undefined);
|
|
||||||
} finally {
|
} finally {
|
||||||
await session.endSession();
|
await session.endSession();
|
||||||
}
|
}
|
||||||
|
|
||||||
expect(threw).toBeTrue();
|
expect(await coll.countDocuments({ key: 'inside-driver' })).toEqual(1);
|
||||||
expect(await coll.countDocuments({ key: 'inside-driver' })).toEqual(0);
|
|
||||||
expect(await coll.countDocuments({ key: 'outside-driver' })).toEqual(1);
|
expect(await coll.countDocuments({ key: 'outside-driver' })).toEqual(1);
|
||||||
});
|
});
|
||||||
|
|
||||||
tap.test('transactions: commit and abort commands should be explicit unsupported errors', async () => {
|
tap.test('transactions: abort should discard buffered writes', async () => {
|
||||||
|
const coll = client.db('txntest').collection('abortdocs');
|
||||||
|
const session = client.startSession();
|
||||||
|
|
||||||
|
try {
|
||||||
|
session.startTransaction();
|
||||||
|
await coll.insertOne({ key: 'abort-me', value: 1 }, { session });
|
||||||
|
expect(await coll.findOne({ key: 'abort-me' }, { session })).toBeTruthy();
|
||||||
|
await session.abortTransaction();
|
||||||
|
} finally {
|
||||||
|
await session.endSession();
|
||||||
|
}
|
||||||
|
|
||||||
|
expect(await coll.findOne({ key: 'abort-me' })).toBeNull();
|
||||||
|
});
|
||||||
|
|
||||||
```diff
+
+tap.test('transactions: update and delete should commit atomically', async () => {
+  const coll = client.db('txntest').collection('mutations');
+  await coll.insertMany([
+    { key: 'update-me', value: 1 },
+    { key: 'delete-me', value: 2 },
+  ]);
+  const session = client.startSession();
+
+  try {
+    session.startTransaction();
+    await coll.updateOne({ key: 'update-me' }, { $set: { value: 10 } }, { session });
+    await coll.deleteOne({ key: 'delete-me' }, { session });
+    expect((await coll.findOne({ key: 'update-me' }, { session }))!.value).toEqual(10);
+    expect(await coll.findOne({ key: 'delete-me' }, { session })).toBeNull();
+    expect((await coll.findOne({ key: 'update-me' }))!.value).toEqual(1);
+    expect(await coll.findOne({ key: 'delete-me' })).toBeTruthy();
+    await session.commitTransaction();
+  } finally {
+    await session.endSession();
+  }
+
+  expect((await coll.findOne({ key: 'update-me' }))!.value).toEqual(10);
+  expect(await coll.findOne({ key: 'delete-me' })).toBeNull();
+});
```
```diff
+
+tap.test('transactions: commit and abort without active transaction should be explicit errors', async () => {
   for (const command of [{ commitTransaction: 1 }, { abortTransaction: 1 }]) {
     let threw = false;
     try {
       await client.db('admin').command(command);
     } catch (err: any) {
       threw = true;
-      expect(err.code).toEqual(20);
-      expect(err.codeName).toEqual('IllegalOperation');
+      expect(err.code).toEqual(251);
+      expect(err.codeName).toEqual('NoSuchTransaction');
     }
     expect(threw).toBeTrue();
   }
 });
```
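The updated assertions use MongoDB's public error codes for transaction failures: `NoSuchTransaction` is 251, and the `WriteConflict` error the commit message mentions is 112 in MongoDB's error code list. A small client-side sketch of telling the two apart (the table and helper names are illustrative, not SmartDB APIs):

```typescript
// Error codes match MongoDB's published list; names here are assumptions
// for this sketch only.
const TXN_ERROR_NAMES: Record<number, string> = {
  251: 'NoSuchTransaction', // no open transaction on this session
  112: 'WriteConflict',     // another transaction touched the same document
};

// WriteConflict is transient: the usual response is to retry the whole
// transaction. NoSuchTransaction is a lifecycle error and is not retryable.
function isRetryableTxnError(code: number): boolean {
  return code === 112;
}

console.log(TXN_ERROR_NAMES[251]); // NoSuchTransaction
console.log(isRetryableTxnError(112), isRetryableTxnError(251)); // true false
```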
```diff
+
+tap.test('transactions: serverStatus should expose transaction and oplog metrics', async () => {
+  const status = await client.db('admin').command({ serverStatus: 1 });
+  expect(status.ok).toEqual(1);
+  expect(status.transactions.currentActive).toEqual(0);
+  expect(status.logicalSessionRecordCache.activeSessionsCount).toBeGreaterThanOrEqual(0);
+  expect(status.oplog.totalEntries).toBeGreaterThan(0);
+});
+
```
```diff
 tap.test('transactions: cleanup', async () => {
   await client.close();
   await server.stop();
```
```diff
@@ -3,6 +3,6 @@
  */
 export const commitinfo = {
   name: '@push.rocks/smartdb',
-  version: '2.7.1',
+  version: '2.8.0',
   description: 'A MongoDB-compatible embedded database server with wire protocol support, backed by a high-performance Rust engine.'
 }
```
```diff
@@ -76,6 +76,10 @@ export interface ISmartDbMetrics {
   collections: number;
   oplogEntries: number;
   oplogCurrentSeq: number;
+  sessions: number;
+  activeTransactions: number;
+  authEnabled: boolean;
+  authUsers: number;
   uptimeSeconds: number;
 }
```