# @push.rocks/smartmetrics

Powerful system metrics collection for Node.js applications with Prometheus integration.

## What is SmartMetrics?
SmartMetrics is a comprehensive metrics collection library that monitors your Node.js application's resource usage in real-time. It tracks CPU usage, memory consumption, and system metrics across your main process and all child processes, providing insights through both JSON and Prometheus formats.
## Key Features
- 📊 Real-time Metrics Collection - Monitor CPU and memory usage across all processes
- 🔄 Automatic Child Process Tracking - Aggregates metrics from main and child processes
- 🐳 Docker-Aware - Detects container memory limits automatically
- 📈 Prometheus Integration - Built-in HTTP endpoint for Prometheus scraping
- 🔧 Flexible Output Formats - Get metrics as JSON objects or Prometheus text
- 📝 Automatic Heartbeat Logging - Optional periodic metrics logging
- 🚀 Zero Configuration - Works out of the box with sensible defaults
## Installation

```bash
npm install @push.rocks/smartmetrics
```
## Quick Start

```typescript
import { SmartMetrics } from '@push.rocks/smartmetrics';
import { Smartlog } from '@push.rocks/smartlog';

// Create a logger instance
const logger = new Smartlog({
  logContext: 'my-app',
  minimumLogLevel: 'info'
});

// Initialize SmartMetrics
const metrics = new SmartMetrics(logger, 'my-service');

// Get metrics on demand
const currentMetrics = await metrics.getMetrics();
console.log(`CPU Usage: ${currentMetrics.cpuUsageText}`);
console.log(`Memory: ${currentMetrics.memoryUsageText}`);

// Enable automatic heartbeat logging (every 20 seconds)
metrics.start();

// Enable the Prometheus endpoint
metrics.enablePrometheusEndpoint(9090); // Metrics available at http://localhost:9090/metrics

// Clean shutdown
metrics.stop();
```
## Core Concepts

### Process Aggregation

SmartMetrics doesn't just monitor your main process: it automatically discovers and aggregates metrics from all child processes spawned by your application, giving you a complete picture of your application's resource footprint.
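Conceptually, the aggregation sums per-process snapshots into one application-wide total. The sketch below illustrates that idea only; `ProcSnapshot` and `aggregate` are hypothetical names, not part of the library's API:

```typescript
// Hypothetical per-process snapshot; NOT part of the smartmetrics API.
interface ProcSnapshot {
  pid: number;
  cpuPercent: number;  // CPU usage of this process
  memoryBytes: number; // resident memory of this process
}

// Sum CPU and memory across the main process and all of its children.
function aggregate(snapshots: ProcSnapshot[]): { cpuPercent: number; memoryBytes: number } {
  return snapshots.reduce(
    (acc, s) => ({
      cpuPercent: acc.cpuPercent + s.cpuPercent,
      memoryBytes: acc.memoryBytes + s.memoryBytes,
    }),
    { cpuPercent: 0, memoryBytes: 0 },
  );
}

const total = aggregate([
  { pid: 100, cpuPercent: 12, memoryBytes: 200 * 1024 * 1024 }, // main process
  { pid: 101, cpuPercent: 8, memoryBytes: 150 * 1024 * 1024 },  // forked worker
]);
// total now reflects the whole process tree, which is what getMetrics() reports.
```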
### Memory Limit Detection

The library automatically detects available memory whether it is running on bare metal, in a Docker container, or under a Node.js heap restriction. It uses the most restrictive of these limits so that percentage calculations stay accurate.
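The "most restrictive limit" idea can be sketched in plain Node.js. This is illustrative only; the cgroup v2 path and the helper name are assumptions, not the library's internals:

```typescript
import * as fs from 'node:fs';
import * as os from 'node:os';
import v8 from 'node:v8';

// Pick the smallest of: host RAM, the container limit (cgroup v2), and the V8 heap cap.
function effectiveMemoryLimit(): number {
  const candidates: number[] = [os.totalmem()];

  // Docker/cgroup v2 exposes the container limit here; "max" means unlimited.
  try {
    const raw = fs.readFileSync('/sys/fs/cgroup/memory.max', 'utf8').trim();
    if (raw !== 'max') candidates.push(Number(raw));
  } catch {
    // Not running under cgroup v2; ignore.
  }

  // Node's own heap restriction (e.g. set via --max-old-space-size).
  candidates.push(v8.getHeapStatistics().heap_size_limit);

  return Math.min(...candidates.filter((n) => Number.isFinite(n) && n > 0));
}

const limit = effectiveMemoryLimit();
```

Using the minimum means a 512 MB container limit wins over 32 GB of host RAM, so `memoryPercentage` reflects what the process can actually use.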
### Dual Output Formats

- JSON Format: Ideal for application monitoring, custom dashboards, and programmatic access
- Prometheus Format: Perfect for integration with Prometheus/Grafana monitoring stacks
## API Reference

### Constructor

```typescript
new SmartMetrics(logger: Smartlog, sourceName: string)
```

- `logger`: Smartlog instance for output
- `sourceName`: Identifier for your service/application
### Methods

#### `async getMetrics(): Promise<IMetricsSnapshot>`

Retrieves current system metrics as a JSON object.

Returns:

```typescript
{
  process_cpu_seconds_total: number;    // Total CPU time in seconds
  nodejs_active_handles_total: number;  // Active handles count
  nodejs_active_requests_total: number; // Active requests count
  nodejs_heap_size_total_bytes: number; // Heap size in bytes
  cpuPercentage: number;                // Current CPU usage (0-100)
  cpuUsageText: string;                 // Human-readable CPU usage
  memoryPercentage: number;             // Memory usage percentage
  memoryUsageBytes: number;             // Memory usage in bytes
  memoryUsageText: string;              // Human-readable memory usage
}
```
Example:

```typescript
const metrics = await smartMetrics.getMetrics();
if (metrics.cpuPercentage > 80) {
  console.warn('High CPU usage detected!');
}
```
#### `start(): void`

Starts automatic metrics collection and heartbeat logging. Logs metrics every 20 seconds.

Example:

```typescript
smartMetrics.start();
// Logs: "sending heartbeat for my-service with metrics" every 20 seconds
```
#### `stop(): void`

Stops automatic metrics collection and closes any open endpoints.
#### `async getPrometheusFormattedMetrics(): Promise<string>`

Returns metrics in the Prometheus text exposition format.

Example:

```typescript
const promMetrics = await smartMetrics.getPrometheusFormattedMetrics();
// Returns:
// # HELP smartmetrics_cpu_percentage Current CPU usage percentage
// # TYPE smartmetrics_cpu_percentage gauge
// smartmetrics_cpu_percentage 15.2
// ...
```
#### `enablePrometheusEndpoint(port?: number): void`

Starts an HTTP server that exposes metrics for Prometheus scraping.

Parameters:

- `port`: Port number (default: 9090)

Example:

```typescript
smartMetrics.enablePrometheusEndpoint(3000);
// Metrics now available at http://localhost:3000/metrics
```
#### `disablePrometheusEndpoint(): void`

Stops the Prometheus HTTP endpoint.
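On shutdown it is sensible to tear down the endpoint before stopping collection, so that Prometheus scrapes never hit a half-stopped collector. The sketch below shows that ordering against a minimal stand-in; the `MetricsLike` interface and `shutdownMetrics` helper are illustrative, not part of the package:

```typescript
// Minimal shape matching the methods documented above, so the wiring
// can be shown without importing the package.
interface MetricsLike {
  disablePrometheusEndpoint(): void;
  stop(): void;
}

// Close the scrape endpoint first, then stop collection.
function shutdownMetrics(metrics: MetricsLike): void {
  metrics.disablePrometheusEndpoint();
  metrics.stop();
}

// In a real app you would wire this to a signal, e.g.:
// process.once('SIGTERM', () => { shutdownMetrics(smartMetrics); process.exit(0); });

// Stub that records the call order, to demonstrate the sequence.
const calls: string[] = [];
const stub: MetricsLike = {
  disablePrometheusEndpoint: () => calls.push('disable'),
  stop: () => calls.push('stop'),
};
shutdownMetrics(stub);
```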
## Use Cases

### 1. Application Performance Monitoring

```typescript
// Monitor resource consumption during critical operations
const metricsBefore = await smartMetrics.getMetrics();
await performHeavyOperation();
const metricsAfter = await smartMetrics.getMetrics();

console.log(`Operation consumed ${
  metricsAfter.process_cpu_seconds_total - metricsBefore.process_cpu_seconds_total
} CPU seconds`);
```
### 2. Resource Limit Enforcement

```typescript
// Refuse or throttle work when resources are constrained
async function checkResources() {
  const metrics = await smartMetrics.getMetrics();

  if (metrics.memoryPercentage > 90) {
    throw new Error('Memory usage too high, refusing new operations');
  }

  if (metrics.cpuPercentage > 95) {
    await delay(1000); // Back off when the CPU is stressed
  }
}
```
### 3. Prometheus + Grafana Monitoring

```typescript
// Expose metrics for Prometheus
smartMetrics.enablePrometheusEndpoint();

// In your Prometheus config:
// scrape_configs:
//   - job_name: 'my-app'
//     static_configs:
//       - targets: ['localhost:9090']
```
### 4. Development and Debugging

```typescript
// Watch for memory leaks during development
setInterval(async () => {
  const metrics = await smartMetrics.getMetrics();
  console.log(`Heap: ${(metrics.nodejs_heap_size_total_bytes / 1024 / 1024).toFixed(1)} MB`);
}, 5000);
```
### 5. Container Resource Monitoring

```typescript
// Container limits are detected automatically
const metrics = await smartMetrics.getMetrics();
console.log(metrics.memoryUsageText);
// Output: "45% | 920 MB / 2 GB" (reflects the detected container limit)
```
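For reference, a string in that shape can be produced with a small formatter like the one below (illustrative; the library's exact rounding and unit thresholds may differ):

```typescript
// Render bytes as MB below 1 GiB, otherwise as GB (rounded to whole units).
function formatBytes(bytes: number): string {
  const GB = 1024 ** 3;
  const MB = 1024 ** 2;
  return bytes >= GB ? `${Math.round(bytes / GB)} GB` : `${Math.round(bytes / MB)} MB`;
}

// Build a "percent | used / limit" string like memoryUsageText.
function formatMemoryUsage(usedBytes: number, limitBytes: number): string {
  const percent = Math.round((usedBytes / limitBytes) * 100);
  return `${percent}% | ${formatBytes(usedBytes)} / ${formatBytes(limitBytes)}`;
}

const text = formatMemoryUsage(920 * 1024 ** 2, 2 * 1024 ** 3);
// → "45% | 920 MB / 2 GB"
```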
## Integration Examples

### With Express

```typescript
import express from 'express';

const app = express();

app.get('/health', async (req, res) => {
  const metrics = await smartMetrics.getMetrics();
  res.json({
    status: metrics.memoryPercentage < 90 ? 'healthy' : 'degraded',
    metrics: {
      cpu: metrics.cpuUsageText,
      memory: metrics.memoryUsageText
    }
  });
});
```
### With PM2

```typescript
// Request a restart when memory runs high
setInterval(async () => {
  const metrics = await smartMetrics.getMetrics();
  if (metrics.memoryPercentage > 95) {
    console.error('Memory limit reached, requesting restart');
    process.exit(0); // PM2 will restart the process
  }
}, 10000);
```
### With Custom Dashboards

```typescript
// Stream metrics to your monitoring service
setInterval(async () => {
  const metrics = await smartMetrics.getMetrics();
  await sendToMonitoringService({
    timestamp: Date.now(),
    service: 'my-service',
    cpu: metrics.cpuPercentage,
    memory: metrics.memoryUsageBytes,
    // Back-calculate the detected limit from usage and percentage
    memoryLimit: metrics.memoryUsageBytes / (metrics.memoryPercentage / 100)
  });
}, 60000);
```
## License and Legal Information
This repository contains open-source code that is licensed under the MIT License. A copy of the MIT License can be found in the license file within this repository.
Please note: The MIT License does not grant permission to use the trade names, trademarks, service marks, or product names of the project, except as required for reasonable and customary use in describing the origin of the work and reproducing the content of the NOTICE file.
### Trademarks
This project is owned and maintained by Task Venture Capital GmbH. The names and logos associated with Task Venture Capital GmbH and any related products or services are trademarks of Task Venture Capital GmbH and are not included within the scope of the MIT license granted herein. Use of these trademarks must comply with Task Venture Capital GmbH's Trademark Guidelines, and any usage must be approved in writing by Task Venture Capital GmbH.
### Company Information

Task Venture Capital GmbH
Registered at District Court Bremen HRB 35230 HB, Germany
For any legal inquiries or if you require further information, please contact us via email at hello@task.vc.
By using this repository, you acknowledge that you have read this section, agree to comply with its terms, and understand that the licensing of the code does not imply endorsement by Task Venture Capital GmbH of any derivative works.