2 Commits (v5.2.0...main)
8 changed files with 73 additions and 86 deletions

| SHA | Message | Date |
|------------|---------|------|
| d437ffc226 | v5.3.0 | 2026-02-17 16:50:04 +00:00 |
| e36758f183 | feat(auth): add AWS SigV4 authentication and bucket policy support | 2026-02-17 16:50:04 +00:00 |

CI for d437ffc226 (Default (tags) workflow): some checks failed. security (push) successful in 37s; test (push) failing after 26s; release (push) and metadata (push) skipped.

View File

@@ -1,5 +1,13 @@
# Changelog
## 2026-02-17 - 5.3.0 - feat(auth)
add AWS SigV4 authentication and bucket policy support
- Implement AWS SigV4 full verification (constant-time comparison, 15-minute clock skew enforcement) and expose default signing region (server.region = 'us-east-1').
- Add IAM-style bucket policy engine with Put/Get/Delete policy APIs (GetBucketPolicy/PutBucketPolicy/DeleteBucketPolicy), wildcard action/resource matching, Allow/Deny evaluation, and on-disk persistence under .policies/{bucket}.policy.json.
- Documentation and README expanded with policy usage, examples, API table entries, and notes about policy CRUD and behavior for anonymous/authenticated requests.
- Rust code refactors: simplify storage/server result structs and multipart handling (removed several unused size/key/bucket fields), remove S3Error::to_response and error_xml helpers, and other internal cleanup to support new auth/policy features.
## 2026-02-17 - 5.2.0 - feat(auth,policy)
add AWS SigV4 authentication and S3 bucket policy support

View File

@@ -1,6 +1,6 @@
{
"name": "@push.rocks/smarts3",
"version": "5.2.0",
"version": "5.3.0",
"private": false,
"description": "A Node.js TypeScript package to create a local S3 endpoint for simulating AWS S3 operations using mapped local directories for development and testing purposes.",
"main": "dist_ts/index.js",

View File

@@ -16,7 +16,8 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
| Range requests | ✅ Seek-based | ✅ | ❌ Full read |
| Language | Rust + TypeScript | Go | JavaScript |
| Multipart uploads | ✅ Full support | ✅ | ❌ |
| Auth | AWS v2/v4 key extraction | Full IAM | Basic |
| Auth | AWS SigV4 (full verification) | Full IAM | Basic |
| Bucket policies | ✅ IAM-style evaluation | ✅ | ❌ |
### Core Features
@@ -25,7 +26,8 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
- 📂 **Filesystem-backed storage** — buckets map to directories, objects to files
- 📤 **Streaming multipart uploads** — large files without memory pressure
- 🎯 **Byte-range requests** — `seek()` directly to the requested byte offset
- 🔐 **Authentication** — AWS v2/v4 signature key extraction
- 🔐 **AWS SigV4 authentication** — full signature verification with constant-time comparison and 15-min clock skew enforcement
- 📜 **Bucket policies** — IAM-style JSON policies with Allow/Deny evaluation, wildcard matching, and anonymous access support
- 🌐 **CORS middleware** — configurable cross-origin support
- 📊 **Structured logging** — tracing-based, error through debug levels
- 🧹 **Clean slate mode** — wipe storage on startup for test isolation
@@ -73,6 +75,7 @@ const config: ISmarts3Config = {
port: 3000, // Default: 3000
address: '0.0.0.0', // Default: '0.0.0.0'
silent: false, // Default: false
region: 'us-east-1', // Default: 'us-east-1' — used for SigV4 signing
},
storage: {
directory: './my-data', // Default: .nogit/bucketsDir
@@ -241,6 +244,56 @@ await client.send(new CompleteMultipartUploadCommand({
}));
```
## 📜 Bucket Policies
smarts3 supports AWS-style bucket policies for fine-grained access control. Policies use the same IAM JSON format as real S3 — so you can develop and test your policy logic locally before deploying.
When `auth.enabled` is `true`, the auth pipeline works as follows:
1. **Authenticate** — verify the AWS SigV4 signature (anonymous requests skip this step)
2. **Authorize** — evaluate bucket policies against the request action, resource, and caller identity
3. **Default** — authenticated users get full access; anonymous requests are denied unless a policy explicitly allows them
### Setting a Bucket Policy
Use the S3 `PutBucketPolicy` API (or any S3 client that supports it):
```typescript
import { PutBucketPolicyCommand } from '@aws-sdk/client-s3';
// Allow anonymous read access to all objects in a bucket
await client.send(new PutBucketPolicyCommand({
Bucket: 'public-assets',
Policy: JSON.stringify({
Version: '2012-10-17',
Statement: [{
Sid: 'PublicRead',
Effect: 'Allow',
Principal: '*',
Action: ['s3:GetObject'],
Resource: ['arn:aws:s3:::public-assets/*'],
}],
}),
}));
```
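With that policy in place, unauthenticated reads against the bucket succeed. A minimal sketch of an anonymous request using plain `fetch` against the local path-style endpoint (the port and object key are illustrative assumptions, not part of the policy above):

```typescript
// Anonymous GET: no SigV4 signature is attached, so only the bucket policy
// can grant access. Assumes the server listens on the default port 3000 and
// that an object named 'logo.png' was previously uploaded to 'public-assets'.
const res = await fetch('http://localhost:3000/public-assets/logo.png');
console.log(res.status); // 200 with the PublicRead policy; denied without it
```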
### Policy Features
- **Effect**: `Allow` and `Deny` (explicit Deny always wins; see the example below)
- **Principal**: `"*"` (everyone) or `{ "AWS": ["arn:..."] }` for specific identities
- **Action**: IAM-style actions like `s3:GetObject`, `s3:PutObject`, `s3:*`, or prefix wildcards like `s3:Get*`
- **Resource**: ARN patterns with `*` and `?` wildcards (e.g. `arn:aws:s3:::my-bucket/*`)
- **Persistence**: Policies survive server restarts — stored as JSON on disk alongside your data
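As noted above, a matching `Deny` statement overrides any matching `Allow`. A short sketch combining the two, reusing the `client` and import from the earlier example (bucket and prefix names are illustrative):

```typescript
// Everyone may read the bucket, but nothing under 'private/' is readable:
// the Deny statement wins even though the Allow also matches those keys.
await client.send(new PutBucketPolicyCommand({
  Bucket: 'public-assets',
  Policy: JSON.stringify({
    Version: '2012-10-17',
    Statement: [
      { Sid: 'AllowAllReads', Effect: 'Allow', Principal: '*',
        Action: ['s3:GetObject'], Resource: ['arn:aws:s3:::public-assets/*'] },
      { Sid: 'DenyPrivatePrefix', Effect: 'Deny', Principal: '*',
        Action: ['s3:GetObject'], Resource: ['arn:aws:s3:::public-assets/private/*'] },
    ],
  }),
}));
```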
### Policy CRUD Operations
| Operation | AWS SDK Command | HTTP |
|-----------|----------------|------|
| Get policy | `GetBucketPolicyCommand` | `GET /{bucket}?policy` |
| Set policy | `PutBucketPolicyCommand` | `PUT /{bucket}?policy` |
| Delete policy | `DeleteBucketPolicyCommand` | `DELETE /{bucket}?policy` |
Deleting a bucket automatically removes its associated policy.
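Reading a policy back and removing it follow the same pattern. A short sketch (error handling for a bucket without a policy is omitted):

```typescript
import {
  GetBucketPolicyCommand,
  DeleteBucketPolicyCommand,
} from '@aws-sdk/client-s3';

// Fetch the current policy document; the SDK returns it as a JSON string
const { Policy } = await client.send(
  new GetBucketPolicyCommand({ Bucket: 'public-assets' }),
);
console.log(JSON.parse(Policy ?? '{}'));

// Remove the policy; the bucket reverts to the default behavior described above
await client.send(new DeleteBucketPolicyCommand({ Bucket: 'public-assets' }));
```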
## 🧪 Testing Integration
```typescript
@@ -314,7 +367,8 @@ smarts3 uses a **hybrid Rust + TypeScript** architecture:
│ ├─ S3 path-style routing │
│ ├─ Streaming storage layer │
│ ├─ Multipart manager │
│ ├─ CORS / Auth middleware │
│ ├─ SigV4 auth + policy engine │
│ ├─ CORS middleware │
│ └─ S3 XML response builder │
├─────────────────────────────────┤
│ TypeScript (thin IPC wrapper) │
@@ -347,6 +401,9 @@ smarts3 uses a **hybrid Rust + TypeScript** architecture:
| CompleteMultipartUpload | `POST /{bucket}/{key}?uploadId` | |
| AbortMultipartUpload | `DELETE /{bucket}/{key}?uploadId` | |
| ListMultipartUploads | `GET /{bucket}?uploads` | |
| GetBucketPolicy | `GET /{bucket}?policy` | |
| PutBucketPolicy | `PUT /{bucket}?policy` | |
| DeleteBucketPolicy | `DELETE /{bucket}?policy` | |
### On-Disk Format
@@ -362,6 +419,8 @@ smarts3 uses a **hybrid Rust + TypeScript** architecture:
part-1 # Part data files
part-2
...
.policies/
{bucket}.policy.json # Bucket policy (IAM JSON format)
```
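Since policies are plain JSON files, a test can inspect them directly on disk. A minimal sketch, assuming the default storage directory (`.nogit/bucketsDir`) and a bucket named `public-assets`:

```typescript
import { promises as fs } from 'node:fs';

// The policy file sits under <storage directory>/.policies/<bucket>.policy.json
const raw = await fs.readFile(
  '.nogit/bucketsDir/.policies/public-assets.policy.json',
  'utf8',
);
console.log(JSON.parse(raw).Statement);
```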
## 🔗 Related Packages

View File

@@ -1,6 +1,4 @@
use hyper::{Response, StatusCode};
use http_body_util::Full;
use bytes::Bytes;
use hyper::StatusCode;
#[derive(Debug, thiserror::Error)]
#[error("S3Error({code}): {message}")]
@@ -105,14 +103,4 @@ impl S3Error {
self.code, self.message
)
}
pub fn to_response(&self, request_id: &str) -> Response<Full<Bytes>> {
let xml = self.to_xml();
Response::builder()
.status(self.status)
.header("content-type", "application/xml")
.header("x-amz-request-id", request_id)
.body(Full::new(Bytes::from(xml)))
.unwrap()
}
}

View File

@@ -28,7 +28,6 @@ use crate::xml_response;
pub struct S3Server {
store: Arc<FileStore>,
config: S3Config,
shutdown_tx: watch::Sender<bool>,
server_handle: tokio::task::JoinHandle<()>,
}
@@ -110,7 +109,6 @@ impl S3Server {
Ok(Self {
store,
config,
shutdown_tx,
server_handle,
})

View File

@@ -17,12 +17,10 @@ use crate::s3_error::S3Error;
// ============================
pub struct PutResult {
pub size: u64,
pub md5: String,
}
pub struct GetResult {
pub key: String,
pub size: u64,
pub last_modified: DateTime<Utc>,
pub md5: String,
@@ -32,7 +30,6 @@ pub struct GetResult {
}
pub struct HeadResult {
pub key: String,
pub size: u64,
pub last_modified: DateTime<Utc>,
pub md5: String,
@@ -40,7 +37,6 @@ pub struct HeadResult {
}
pub struct CopyResult {
pub size: u64,
pub md5: String,
pub last_modified: DateTime<Utc>,
}
@@ -69,14 +65,12 @@ pub struct BucketInfo {
pub struct MultipartUploadInfo {
pub upload_id: String,
pub bucket: String,
pub key: String,
pub initiated: DateTime<Utc>,
}
pub struct CompleteMultipartResult {
pub etag: String,
pub size: u64,
}
// ============================
@@ -126,10 +120,6 @@ impl FileStore {
self.root_dir.join(".policies")
}
pub fn policy_path(&self, bucket: &str) -> PathBuf {
self.policies_dir().join(format!("{}.policy.json", bucket))
}
pub async fn reset(&self) -> Result<()> {
if self.root_dir.exists() {
fs::remove_dir_all(&self.root_dir).await?;
@@ -220,7 +210,6 @@ impl FileStore {
let file = fs::File::create(&object_path).await?;
let mut writer = BufWriter::new(file);
let mut hasher = Md5::new();
let mut total_size: u64 = 0;
// Stream body frames directly to file
let mut body = body;
@@ -229,7 +218,6 @@ impl FileStore {
Some(Ok(frame)) => {
if let Ok(data) = frame.into_data() {
hasher.update(&data);
total_size += data.len() as u64;
writer.write_all(&data).await?;
}
}
@@ -255,44 +243,6 @@ impl FileStore {
fs::write(&metadata_path, metadata_json).await?;
Ok(PutResult {
size: total_size,
md5: md5_hex,
})
}
pub async fn put_object_bytes(
&self,
bucket: &str,
key: &str,
data: &[u8],
metadata: HashMap<String, String>,
) -> Result<PutResult> {
if !self.bucket_exists(bucket).await {
return Err(S3Error::no_such_bucket().into());
}
let object_path = self.object_path(bucket, key);
if let Some(parent) = object_path.parent() {
fs::create_dir_all(parent).await?;
}
let mut hasher = Md5::new();
hasher.update(data);
let md5_hex = format!("{:x}", hasher.finalize());
fs::write(&object_path, data).await?;
// Write MD5 sidecar
let md5_path = format!("{}.md5", object_path.display());
fs::write(&md5_path, &md5_hex).await?;
// Write metadata sidecar
let metadata_path = format!("{}.metadata.json", object_path.display());
let metadata_json = serde_json::to_string_pretty(&metadata)?;
fs::write(&metadata_path, metadata_json).await?;
Ok(PutResult {
size: data.len() as u64,
md5: md5_hex,
})
}
@@ -326,7 +276,6 @@ impl FileStore {
};
Ok(GetResult {
key: key.to_string(),
size,
last_modified,
md5,
@@ -352,7 +301,6 @@ impl FileStore {
let metadata = self.read_metadata(&object_path).await;
Ok(HeadResult {
key: key.to_string(),
size,
last_modified,
md5,
@@ -439,7 +387,6 @@ impl FileStore {
let last_modified: DateTime<Utc> = file_meta.modified()?.into();
Ok(CopyResult {
size: file_meta.len(),
md5,
last_modified,
})
@@ -672,7 +619,6 @@ impl FileStore {
let dest_file = fs::File::create(&object_path).await?;
let mut writer = BufWriter::new(dest_file);
let mut hasher = Md5::new();
let mut total_size: u64 = 0;
for (part_number, _etag) in parts {
let part_path = upload_dir.join(format!("part-{}", part_number));
@@ -689,7 +635,6 @@ impl FileStore {
}
hasher.update(&buf[..n]);
writer.write_all(&buf[..n]).await?;
total_size += n as u64;
}
}
@@ -712,7 +657,6 @@ impl FileStore {
Ok(CompleteMultipartResult {
etag,
size: total_size,
})
}
@@ -752,7 +696,6 @@ impl FileStore {
uploads.push(MultipartUploadInfo {
upload_id: meta.upload_id,
bucket: meta.bucket,
key: meta.key,
initiated,
});

View File

@@ -132,15 +132,6 @@ pub fn list_objects_v2_xml(bucket: &str, result: &ListObjectsResult) -> String {
xml
}
pub fn error_xml(code: &str, message: &str) -> String {
format!(
"{}\n<Error><Code>{}</Code><Message>{}</Message></Error>",
XML_DECL,
xml_escape(code),
xml_escape(message)
)
}
pub fn copy_object_result_xml(etag: &str, last_modified: &str) -> String {
format!(
"{}\n<CopyObjectResult>\

View File

@@ -3,6 +3,6 @@
*/
export const commitinfo = {
name: '@push.rocks/smarts3',
version: '5.2.0',
version: '5.3.0',
description: 'A Node.js TypeScript package to create a local S3 endpoint for simulating AWS S3 operations using mapped local directories for development and testing purposes.'
}