Compare commits

No commits in common. "master" and "v1.0.110" have entirely different histories.

changelog.md (164 lines)
@@ -1,164 +0,0 @@
# Changelog

## 2024-12-23 - 1.3.0 - feat(core)

Initial release of Docker client with TypeScript support

- Provides easy communication with Docker's remote API from Node.js
- Includes implementations for managing Docker services, networks, secrets, containers, and images
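The tag normalization the client applies when pulling images (visible in `DockerImage.createFromRegistry` further down in this diff) can be sketched as a small pure function. This is an illustrative sketch only; `parseImageRef` is a hypothetical name, not part of the published API:

```typescript
// Sketch of the image-reference normalization used when pulling images.
// `parseImageRef` is a hypothetical helper name, for illustration only.
interface IImageRef {
  imageUrl: string;
  imageTag: string;
}

function parseImageRef(refArg: string, explicitTag?: string): IImageRef {
  if (refArg.includes(':')) {
    // the reference already carries a tag, e.g. "hosttoday/ht-docker-node:alpine"
    const [imageUrl, imageTag] = refArg.split(':');
    if (explicitTag) {
      throw new Error(`${refArg} is already tagged; cannot retag with ${explicitTag}`);
    }
    return { imageUrl, imageTag };
  }
  // untagged references default to "latest"
  return { imageUrl: refArg, imageTag: explicitTag ?? 'latest' };
}
```

References that carry no explicit tag default to `latest`, matching the behavior of the class shown later in the diff.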
## 2024-12-23 - 1.2.8 - fix(core)

Improved the image creation process from tar stream in the DockerImage class.

- Enhanced `DockerImage.createFromTarStream` method to handle the streamed response and parse imported image details.
- Fixed the dependency version for `@push.rocks/smartarchive` in package.json.
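The streamed-response handling mentioned above centers on parsing Docker's newline-delimited JSON output from `POST /images/load`. A self-contained sketch of that parsing step (`parseLoadedImageTag` is an illustrative, hypothetical helper name):

```typescript
// Docker emits newline-delimited JSON such as
//   {"stream":"Loaded image: myrepo/myimage:latest"}
// This sketch scans the raw output and extracts the final image tag or ID.
function parseLoadedImageTag(rawOutput: string): string | undefined {
  let loadedImageTag: string | undefined;
  for (const line of rawOutput.trim().split('\n').filter(Boolean)) {
    try {
      const jsonLine = JSON.parse(line);
      if (
        typeof jsonLine.stream === 'string' &&
        (jsonLine.stream.startsWith('Loaded image:') ||
          jsonLine.stream.startsWith('Loaded image ID:'))
      ) {
        loadedImageTag = jsonLine.stream
          .replace('Loaded image: ', '')
          .replace('Loaded image ID: ', '')
          .trim();
      }
    } catch {
      // not valid JSON; ignore progress noise
    }
  }
  return loadedImageTag;
}
```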
## 2024-10-13 - 1.2.7 - fix(core)

Prepare patch release with minor fixes and improvements

## 2024-10-13 - 1.2.6 - fix(core)

Minor refactoring and code quality improvements.

## 2024-10-13 - 1.2.5 - fix(dependencies)

Update dependencies for stability improvements

- Updated @push.rocks/smartstream to version ^3.0.46
- Updated @push.rocks/tapbundle to version ^5.3.0
- Updated @types/node to version 22.7.5

## 2024-10-13 - 1.2.4 - fix(core)

Refactored DockerImageStore constructor to remove DockerHost dependency

- Adjusted DockerImageStore constructor to remove dependency on DockerHost
- Updated ts/classes.host.ts to align with DockerImageStore's new constructor signature
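The refactor described above is plain constructor injection: instead of receiving a `DockerHost` and reaching back into it, the store takes an options object with just the paths and storage targets it needs. A simplified sketch (the shapes here are assumed for illustration and omit most of the real class):

```typescript
// Simplified illustration of the decoupled constructor — the real
// DockerImageStore carries more state and behavior than shown here.
interface IImageStoreOptions {
  localDirPath: string;
  bucketDir: object | null; // set later, e.g. via DockerHost.addS3Storage
}

class ImageStoreSketch {
  constructor(public options: IImageStoreOptions) {}
}

// The host now constructs the store, rather than the store
// reaching back into the host:
const store = new ImageStoreSketch({
  localDirPath: '/tmp/image-store',
  bucketDir: null,
});
```

This keeps the store testable on its own and lets the S3 target be attached after construction.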
## 2024-08-21 - 1.2.3 - fix(dependencies)

Update dependencies to the latest versions and fix image export test

- Updated several dependencies to their latest versions in package.json.
- Enabled the previously skipped 'should export images' test.

## 2024-06-10 - 1.2.1-1.2.2 - Core/General

General updates and fixes.

- Fix core update

## 2024-06-10 - 1.2.0 - Core

Core updates and bug fixes.

- Fix core update

## 2024-06-08 - 1.2.0 - General/Core

Major release with core enhancements.

- Processing images with extraction, retagging, repackaging, and long-term storage

## 2024-06-06 - 1.1.4 - General/Imagestore

Significant feature addition.

- Add feature to process images with extraction, retagging, repackaging, and long-term storage
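The pipeline named above (extract, retag, repackage, store) includes one purely data-level step that can be shown in isolation: renaming the image's entry in the extracted layout's `repositories` file. A minimal sketch of that retagging step (simplified shapes; the real implementation in `DockerImageStore.storeImage`, shown later in this diff, also rewrites `index.json` and `manifest.json`):

```typescript
// Renames the first entry of an extracted image's `repositories` map
// so the stored image is addressable under the caller-supplied name.
function retagRepositories(
  repositories: Record<string, unknown>,
  newName: string
): Record<string, unknown> {
  const firstKey = Object.keys(repositories)[0];
  const result: Record<string, unknown> = { ...repositories };
  result[newName] = result[firstKey];
  if (firstKey !== newName) {
    delete result[firstKey];
  }
  return result;
}
```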
## 2024-05-08 - 1.0.112 - Images

Add new functionality for image handling.

- Can now import and export images
- Start work on local 100% JS OCI image registry

## 2024-06-05 - 1.1.0-1.1.3 - Core

Regular updates and fixes.

- Fix core update

## 2024-02-02 - 1.0.105-1.0.110 - Core

Routine core updates and fixes.

- Fix core update

## 2022-10-17 - 1.0.103-1.0.104 - Core

Routine core updates.

- Fix core update

## 2020-10-01 - 1.0.99-1.0.102 - Core

Routine core updates.

- Fix core update

## 2019-09-22 - 1.0.73-1.0.78 - Core

Routine updates and core fixes.

- Fix core update

## 2019-09-13 - 1.0.60-1.0.72 - Core

Routine updates and core fixes.

- Fix core update

## 2019-08-16 - 1.0.43-1.0.59 - Core

Routine updates and core fixes.

- Fix core update

## 2019-08-15 - 1.0.37-1.0.42 - Core

Routine updates and core fixes.

- Fix core update

## 2019-08-14 - 1.0.31-1.0.36 - Core

Routine updates and core fixes.

- Fix core update

## 2019-01-10 - 1.0.27-1.0.30 - Core

Routine updates and core fixes.

- Fix core update

## 2018-07-16 - 1.0.23-1.0.24 - Core

Routine updates and core fixes.

- Fix core shift to new style

## 2017-07-16 - 1.0.20-1.0.22 - General

Routine updates and fixes.

- Update node_modules within npmdocker

## 2017-04-02 - 1.0.18-1.0.19 - General

Routine updates and fixes.

- Work with npmdocker and npmts 7.x.x
- CI updates

## 2016-07-31 - 1.0.17 - General

Enhancements and fixes.

- Now waiting for response to be stored before ending streaming request
- Cosmetic fix

## 2016-07-29 - 1.0.14-1.0.16 - General

Multiple updates and features added.

- Fix request for change observable and add npmdocker
- Add request typings

## 2016-07-28 - 1.0.13 - Core

Fixes and preparations.

- Fixed request for newer docker
- Prepare for npmdocker

## 2016-06-16 - 1.0.0-1.0.2 - General

Initial sequence of releases, significant feature additions and CI setups.

- Implement container start and stop
- Implement list containers and related functions
- Add tests with in-docker environment

## 2016-04-12 - unknown - Initial Commit

Initial project setup.

- Initial commit
package.json (37 lines)
@@ -1,6 +1,6 @@
{
  "name": "@apiclient.xyz/docker",
  "version": "1.3.0",
  "version": "1.0.110",
  "description": "Provides easy communication with Docker remote API from Node.js, with TypeScript support.",
  "private": false,
  "main": "dist_ts/index.js",
@@ -33,30 +33,25 @@
  },
  "homepage": "https://gitlab.com/mojoio/docker#readme",
  "dependencies": {
    "@push.rocks/lik": "^6.0.15",
    "@push.rocks/smartarchive": "^4.0.39",
    "@push.rocks/smartbucket": "^3.0.22",
    "@push.rocks/smartfile": "^11.0.21",
    "@push.rocks/smartjson": "^5.0.20",
    "@push.rocks/smartlog": "^3.0.7",
    "@push.rocks/lik": "^6.0.0",
    "@push.rocks/smartfile": "^11.0.4",
    "@push.rocks/smartjson": "^5.0.2",
    "@push.rocks/smartlog": "^3.0.1",
    "@push.rocks/smartnetwork": "^3.0.0",
    "@push.rocks/smartpath": "^5.0.18",
    "@push.rocks/smartpromise": "^4.0.4",
    "@push.rocks/smartrequest": "^2.0.22",
    "@push.rocks/smartstream": "^3.0.46",
    "@push.rocks/smartstring": "^4.0.15",
    "@push.rocks/smartunique": "^3.0.9",
    "@push.rocks/smartversion": "^3.0.5",
    "@tsclass/tsclass": "^4.1.2",
    "@push.rocks/smartpath": "^5.0.5",
    "@push.rocks/smartpromise": "^4.0.3",
    "@push.rocks/smartrequest": "^2.0.11",
    "@push.rocks/smartstring": "^4.0.5",
    "@push.rocks/smartversion": "^3.0.2",
    "@tsclass/tsclass": "^4.0.24",
    "rxjs": "^7.5.7"
  },
  "devDependencies": {
    "@git.zone/tsbuild": "^2.1.84",
    "@git.zone/tsrun": "^1.2.49",
    "@git.zone/tstest": "^1.0.90",
    "@push.rocks/qenv": "^6.0.5",
    "@push.rocks/tapbundle": "^5.3.0",
    "@types/node": "22.7.5"
    "@git.zone/tsbuild": "^2.1.25",
    "@git.zone/tsrun": "^1.2.12",
    "@git.zone/tstest": "^1.0.52",
    "@push.rocks/tapbundle": "^5.0.4",
    "@types/node": "^20.11.16"
  },
  "files": [
    "ts/**/*",
pnpm-lock.yaml (generated, 8833 lines)
File diff suppressed because it is too large.
qenv.yml (6 lines)
@@ -1,6 +0,0 @@
required:
  - S3_ENDPOINT
  - S3_ACCESSKEY
  - S3_ACCESSSECRET
  - S3_BUCKET
@@ -1,18 +1,10 @@
import { expect, tap } from '@push.rocks/tapbundle';
import { Qenv } from '@push.rocks/qenv';

const testQenv = new Qenv('./', './.nogit/');

import * as plugins from '../ts/plugins.js';
import * as paths from '../ts/paths.js';

import * as docker from '../ts/index.js';

let testDockerHost: docker.DockerHost;

tap.test('should create a new Dockersock instance', async () => {
  testDockerHost = new docker.DockerHost({});
  await testDockerHost.start();
  testDockerHost = new docker.DockerHost();
  return expect(testDockerHost).toBeInstanceOf(docker.DockerHost);
});

@@ -48,10 +40,8 @@ tap.test('should remove a network', async () => {
// Images
tap.test('should pull an image from imagetag', async () => {
  const image = await docker.DockerImage.createFromRegistry(testDockerHost, {
    creationObject: {
      imageUrl: 'hosttoday/ht-docker-node',
      imageTag: 'alpine',
    },
  });
  expect(image).toBeInstanceOf(docker.DockerImage);
  console.log(image);

@@ -103,9 +93,7 @@ tap.test('should create a service', async () => {
    contentArg: '{"hi": "wow"}',
  });
  const testImage = await docker.DockerImage.createFromRegistry(testDockerHost, {
    creationObject: {
      imageUrl: 'code.foss.global/host.today/ht-docker-node:latest',
    }
    imageUrl: 'registry.gitlab.com/hosttoday/ht-docker-static',
  });
  const testService = await docker.DockerService.createService(testDockerHost, {
    image: testImage,

@@ -122,48 +110,4 @@ tap.test('should create a service', async () => {
  await testSecret.remove();
});

tap.test('should export images', async (toolsArg) => {
  const done = toolsArg.defer();
  const testImage = await docker.DockerImage.createFromRegistry(testDockerHost, {
    creationObject: {
      imageUrl: 'code.foss.global/host.today/ht-docker-node:latest',
    }
  });
  const fsWriteStream = plugins.smartfile.fsStream.createWriteStream(
    plugins.path.join(paths.nogitDir, 'testimage.tar')
  );
  const exportStream = await testImage.exportToTarStream();
  exportStream.pipe(fsWriteStream).on('finish', () => {
    done.resolve();
  });
  await done.promise;
});

tap.test('should import images', async (toolsArg) => {
  const done = toolsArg.defer();
  const fsReadStream = plugins.smartfile.fsStream.createReadStream(
    plugins.path.join(paths.nogitDir, 'testimage.tar')
  );
  await docker.DockerImage.createFromTarStream(testDockerHost, {
    tarStream: fsReadStream,
    creationObject: {
      imageUrl: 'code.foss.global/host.today/ht-docker-node:latest',
    }
  })
});

tap.test('should expose a working DockerImageStore', async () => {
  // lets first add an s3 target
  const s3Descriptor = {
    endpoint: await testQenv.getEnvVarOnDemand('S3_ENDPOINT'),
    accessKey: await testQenv.getEnvVarOnDemand('S3_ACCESSKEY'),
    accessSecret: await testQenv.getEnvVarOnDemand('S3_ACCESSSECRET'),
    bucketName: await testQenv.getEnvVarOnDemand('S3_BUCKET'),
  };
  await testDockerHost.addS3Storage(s3Descriptor);

  //
  await testDockerHost.imageStore.storeImage('hello', plugins.smartfile.fsStream.createReadStream(plugins.path.join(paths.nogitDir, 'testimage.tar')));
})

export default tap.start();
tap.start();
@@ -1,8 +1,8 @@
/**
 * autocreated commitinfo by @push.rocks/commitinfo
 * autocreated commitinfo by @pushrocks/commitinfo
 */
export const commitinfo = {
  name: '@apiclient.xyz/docker',
  version: '1.3.0',
  version: '1.0.110',
  description: 'Provides easy communication with Docker remote API from Node.js, with TypeScript support.'
}
@ -1,275 +0,0 @@
|
||||
import * as plugins from './plugins.js';
|
||||
import * as interfaces from './interfaces/index.js';
|
||||
import { DockerHost } from './classes.host.js';
|
||||
import { logger } from './logger.js';
|
||||
|
||||
/**
|
||||
* represents a docker image on the remote docker host
|
||||
*/
|
||||
export class DockerImage {
|
||||
// STATIC
|
||||
public static async getImages(dockerHost: DockerHost) {
|
||||
const images: DockerImage[] = [];
|
||||
const response = await dockerHost.request('GET', '/images/json');
|
||||
for (const imageObject of response.body) {
|
||||
images.push(new DockerImage(dockerHost, imageObject));
|
||||
}
|
||||
return images;
|
||||
}
|
||||
|
||||
public static async getImageByName(dockerHost: DockerHost, imageNameArg: string) {
|
||||
const images = await this.getImages(dockerHost);
|
||||
const result = images.find((image) => {
|
||||
if (image.RepoTags) {
|
||||
return image.RepoTags.includes(imageNameArg);
|
||||
} else {
|
||||
return false;
|
||||
}
|
||||
});
|
||||
return result;
|
||||
}
|
||||
|
||||
public static async createFromRegistry(
|
||||
dockerHostArg: DockerHost,
|
||||
optionsArg: {
|
||||
creationObject: interfaces.IImageCreationDescriptor
|
||||
}
|
||||
): Promise<DockerImage> {
|
||||
// lets create a sanatized imageUrlObject
|
||||
const imageUrlObject: {
|
||||
imageUrl: string;
|
||||
imageTag: string;
|
||||
imageOriginTag: string;
|
||||
} = {
|
||||
imageUrl: optionsArg.creationObject.imageUrl,
|
||||
imageTag: optionsArg.creationObject.imageTag,
|
||||
imageOriginTag: null,
|
||||
};
|
||||
if (imageUrlObject.imageUrl.includes(':')) {
|
||||
const imageUrl = imageUrlObject.imageUrl.split(':')[0];
|
||||
const imageTag = imageUrlObject.imageUrl.split(':')[1];
|
||||
if (imageUrlObject.imageTag) {
|
||||
throw new Error(
|
||||
`imageUrl ${imageUrlObject.imageUrl} can't be tagged with ${imageUrlObject.imageTag} because it is already tagged with ${imageTag}`
|
||||
);
|
||||
} else {
|
||||
imageUrlObject.imageUrl = imageUrl;
|
||||
imageUrlObject.imageTag = imageTag;
|
||||
}
|
||||
} else if (!imageUrlObject.imageTag) {
|
||||
imageUrlObject.imageTag = 'latest';
|
||||
}
|
||||
imageUrlObject.imageOriginTag = `${imageUrlObject.imageUrl}:${imageUrlObject.imageTag}`;
|
||||
|
||||
// lets actually create the image
|
||||
const response = await dockerHostArg.request(
|
||||
'POST',
|
||||
`/images/create?fromImage=${encodeURIComponent(
|
||||
imageUrlObject.imageUrl
|
||||
)}&tag=${encodeURIComponent(imageUrlObject.imageTag)}`
|
||||
);
|
||||
if (response.statusCode < 300) {
|
||||
logger.log('info', `Successfully pulled image ${imageUrlObject.imageUrl} from the registry`);
|
||||
const image = await DockerImage.getImageByName(dockerHostArg, imageUrlObject.imageOriginTag);
|
||||
return image;
|
||||
} else {
|
||||
logger.log('error', `Failed at the attempt of creating a new image`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
*
|
||||
* @param dockerHostArg
|
||||
* @param tarStreamArg
|
||||
*/
|
||||
public static async createFromTarStream(
|
||||
dockerHostArg: DockerHost,
|
||||
optionsArg: {
|
||||
creationObject: interfaces.IImageCreationDescriptor;
|
||||
tarStream: plugins.smartstream.stream.Readable;
|
||||
}
|
||||
): Promise<DockerImage> {
|
||||
// Start the request for importing an image
|
||||
const response = await dockerHostArg.requestStreaming(
|
||||
'POST',
|
||||
'/images/load',
|
||||
optionsArg.tarStream
|
||||
);
|
||||
|
||||
/**
|
||||
* Docker typically returns lines like:
|
||||
* {"stream":"Loaded image: myrepo/myimage:latest"}
|
||||
*
|
||||
* So we will collect those lines and parse out the final image name.
|
||||
*/
|
||||
let rawOutput = '';
|
||||
response.on('data', (chunk) => {
|
||||
rawOutput += chunk.toString();
|
||||
});
|
||||
|
||||
// Wrap the end event in a Promise for easier async/await usage
|
||||
await new Promise<void>((resolve, reject) => {
|
||||
response.on('end', () => {
|
||||
resolve();
|
||||
});
|
||||
response.on('error', (err) => {
|
||||
reject(err);
|
||||
});
|
||||
});
|
||||
|
||||
// Attempt to parse each line to find something like "Loaded image: ..."
|
||||
let loadedImageTag: string | undefined;
|
||||
const lines = rawOutput.trim().split('\n').filter(Boolean);
|
||||
|
||||
for (const line of lines) {
|
||||
try {
|
||||
const jsonLine = JSON.parse(line);
|
||||
if (
|
||||
jsonLine.stream &&
|
||||
(jsonLine.stream.startsWith('Loaded image:') ||
|
||||
jsonLine.stream.startsWith('Loaded image ID:'))
|
||||
) {
|
||||
// Examples:
|
||||
// "Loaded image: your-image:latest"
|
||||
// "Loaded image ID: sha256:...."
|
||||
loadedImageTag = jsonLine.stream
|
||||
.replace('Loaded image: ', '')
|
||||
.replace('Loaded image ID: ', '')
|
||||
.trim();
|
||||
}
|
||||
} catch {
|
||||
// not valid JSON, ignore
|
||||
}
|
||||
}
|
||||
|
||||
if (!loadedImageTag) {
|
||||
throw new Error(
|
||||
`Could not parse the loaded image info from Docker response.\nResponse was:\n${rawOutput}`
|
||||
);
|
||||
}
|
||||
|
||||
// Now try to look up that image by the "loadedImageTag".
|
||||
// Depending on Docker’s response, it might be something like:
|
||||
// "myrepo/myimage:latest" OR "sha256:someHash..."
|
||||
// If Docker gave you an ID (e.g. "sha256:..."), you may need a separate
|
||||
// DockerImage.getImageById method; or if you prefer, you can treat it as a name.
|
||||
const newlyImportedImage = await DockerImage.getImageByName(dockerHostArg, loadedImageTag);
|
||||
|
||||
if (!newlyImportedImage) {
|
||||
throw new Error(
|
||||
`Image load succeeded, but no local reference found for "${loadedImageTag}".`
|
||||
);
|
||||
}
|
||||
|
||||
logger.log(
|
||||
'info',
|
||||
`Successfully imported image "${loadedImageTag}".`
|
||||
);
|
||||
|
||||
return newlyImportedImage;
|
||||
}
|
||||
|
||||
|
||||
public static async tagImageByIdOrName(
|
||||
dockerHost: DockerHost,
|
||||
idOrNameArg: string,
|
||||
newTagArg: string
|
||||
) {
|
||||
const response = await dockerHost.request(
|
||||
'POST',
|
||||
`/images/${encodeURIComponent(idOrNameArg)}/${encodeURIComponent(newTagArg)}`
|
||||
);
|
||||
|
||||
|
||||
}
|
||||
|
||||
public static async buildImage(dockerHostArg: DockerHost, dockerImageTag) {
|
||||
// TODO: implement building an image
|
||||
}
|
||||
|
||||
// INSTANCE
|
||||
// references
|
||||
public dockerHost: DockerHost;
|
||||
|
||||
// properties
|
||||
/**
|
||||
* the tags for an image
|
||||
*/
|
||||
public Containers: number;
|
||||
public Created: number;
|
||||
public Id: string;
|
||||
public Labels: interfaces.TLabels;
|
||||
public ParentId: string;
|
||||
public RepoDigests: string[];
|
||||
public RepoTags: string[];
|
||||
public SharedSize: number;
|
||||
public Size: number;
|
||||
public VirtualSize: number;
|
||||
|
||||
constructor(dockerHostArg, dockerImageObjectArg: any) {
|
||||
this.dockerHost = dockerHostArg;
|
||||
Object.keys(dockerImageObjectArg).forEach((keyArg) => {
|
||||
this[keyArg] = dockerImageObjectArg[keyArg];
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* tag an image
|
||||
* @param newTag
|
||||
*/
|
||||
public async tagImage(newTag) {
|
||||
throw new Error('.tagImage is not yet implemented');
|
||||
}
|
||||
|
||||
/**
|
||||
* pulls the latest version from the registry
|
||||
*/
|
||||
public async pullLatestImageFromRegistry(): Promise<boolean> {
|
||||
const updatedImage = await DockerImage.createFromRegistry(this.dockerHost, {
|
||||
creationObject: {
|
||||
imageUrl: this.RepoTags[0],
|
||||
},
|
||||
});
|
||||
Object.assign(this, updatedImage);
|
||||
// TODO: Compare image digists before and after
|
||||
return true;
|
||||
}
|
||||
|
||||
// get stuff
|
||||
public async getVersion() {
|
||||
if (this.Labels && this.Labels.version) {
|
||||
return this.Labels.version;
|
||||
} else {
|
||||
return '0.0.0';
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* exports an image to a tar ball
|
||||
*/
|
||||
public async exportToTarStream(): Promise<plugins.smartstream.stream.Readable> {
|
||||
logger.log('info', `Exporting image ${this.RepoTags[0]} to tar stream.`);
|
||||
const response = await this.dockerHost.requestStreaming('GET', `/images/${encodeURIComponent(this.RepoTags[0])}/get`);
|
||||
let counter = 0;
|
||||
const webduplexStream = new plugins.smartstream.SmartDuplex({
|
||||
writeFunction: async (chunk, tools) => {
|
||||
if (counter % 1000 === 0)
|
||||
console.log(`Got chunk: ${counter}`);
|
||||
counter++;
|
||||
return chunk;
|
||||
}
|
||||
});
|
||||
response.on('data', (chunk) => {
|
||||
if (!webduplexStream.write(chunk)) {
|
||||
response.pause();
|
||||
webduplexStream.once('drain', () => {
|
||||
response.resume();
|
||||
})
|
||||
};
|
||||
});
|
||||
response.on('end', () => {
|
||||
webduplexStream.end();
|
||||
})
|
||||
return webduplexStream;
|
||||
}
|
||||
}
|
@@ -1,114 +0,0 @@
import * as plugins from './plugins.js';
import * as paths from './paths.js';
import { logger } from './logger.js';
import type { DockerHost } from './classes.host.js';

export interface IDockerImageStoreConstructorOptions {
  /**
   * used for preparing images for longer term storage
   */
  localDirPath: string;
  /**
   * a smartbucket dir for longer term storage.
   */
  bucketDir: plugins.smartbucket.Directory;
}

export class DockerImageStore {
  public options: IDockerImageStoreConstructorOptions;

  constructor(optionsArg: IDockerImageStoreConstructorOptions) {
    this.options = optionsArg;
  }

  // Method to store tar stream
  public async storeImage(imageName: string, tarStream: plugins.smartstream.stream.Readable): Promise<void> {
    logger.log('info', `Storing image ${imageName}...`);
    const uniqueProcessingId = plugins.smartunique.shortId();

    const initialTarDownloadPath = plugins.path.join(this.options.localDirPath, `${uniqueProcessingId}.tar`);
    const extractionDir = plugins.path.join(this.options.localDirPath, uniqueProcessingId);
    // Create a write stream to store the tar file
    const writeStream = plugins.smartfile.fsStream.createWriteStream(initialTarDownloadPath);

    // lets wait for the write stream to finish
    await new Promise((resolve, reject) => {
      tarStream.pipe(writeStream);
      writeStream.on('finish', resolve);
      writeStream.on('error', reject);
    });
    logger.log('info', `Image ${imageName} stored locally for processing. Extracting...`);

    // lets process the image
    const tarArchive = await plugins.smartarchive.SmartArchive.fromArchiveFile(initialTarDownloadPath);
    await tarArchive.exportToFs(extractionDir);
    logger.log('info', `Image ${imageName} extracted.`);
    await plugins.smartfile.fs.remove(initialTarDownloadPath);
    logger.log('info', `deleted original tar to save space.`);
    logger.log('info', `now repackaging for s3...`);
    const smartfileIndexJson = await plugins.smartfile.SmartFile.fromFilePath(plugins.path.join(extractionDir, 'index.json'));
    const smartfileManifestJson = await plugins.smartfile.SmartFile.fromFilePath(plugins.path.join(extractionDir, 'manifest.json'));
    const smartfileOciLayoutJson = await plugins.smartfile.SmartFile.fromFilePath(plugins.path.join(extractionDir, 'oci-layout'));
    const smartfileRepositoriesJson = await plugins.smartfile.SmartFile.fromFilePath(plugins.path.join(extractionDir, 'repositories'));
    const indexJson = JSON.parse(smartfileIndexJson.contents.toString());
    const manifestJson = JSON.parse(smartfileManifestJson.contents.toString());
    const ociLayoutJson = JSON.parse(smartfileOciLayoutJson.contents.toString());
    const repositoriesJson = JSON.parse(smartfileRepositoriesJson.contents.toString());

    indexJson.manifests[0].annotations['io.containerd.image.name'] = imageName;
    manifestJson[0].RepoTags[0] = imageName;
    const repoFirstKey = Object.keys(repositoriesJson)[0];
    const repoFirstValue = repositoriesJson[repoFirstKey];
    repositoriesJson[imageName] = repoFirstValue;
    delete repositoriesJson[repoFirstKey];

    smartfileIndexJson.contents = Buffer.from(JSON.stringify(indexJson, null, 2));
    smartfileManifestJson.contents = Buffer.from(JSON.stringify(manifestJson, null, 2));
    smartfileOciLayoutJson.contents = Buffer.from(JSON.stringify(ociLayoutJson, null, 2));
    smartfileRepositoriesJson.contents = Buffer.from(JSON.stringify(repositoriesJson, null, 2));
    await Promise.all([
      smartfileIndexJson.write(),
      smartfileManifestJson.write(),
      smartfileOciLayoutJson.write(),
      smartfileRepositoriesJson.write(),
    ]);

    logger.log('info', 'repackaging archive for s3...');
    const tartools = new plugins.smartarchive.TarTools();
    const newTarPack = await tartools.packDirectory(extractionDir);
    const finalTarName = `${uniqueProcessingId}.processed.tar`;
    const finalTarPath = plugins.path.join(this.options.localDirPath, finalTarName);
    const finalWriteStream = plugins.smartfile.fsStream.createWriteStream(finalTarPath);
    await new Promise((resolve, reject) => {
      newTarPack.finalize();
      newTarPack.pipe(finalWriteStream);
      finalWriteStream.on('finish', resolve);
      finalWriteStream.on('error', reject);
    });
    logger.log('ok', `Repackaged image ${imageName} for s3.`);
    await plugins.smartfile.fs.remove(extractionDir);
    const finalTarReadStream = plugins.smartfile.fsStream.createReadStream(finalTarPath);
    await this.options.bucketDir.fastPutStream({
      stream: finalTarReadStream,
      path: `${imageName}.tar`,
    });
    await plugins.smartfile.fs.remove(finalTarPath);
  }

  public async start() {
    await plugins.smartfile.fs.ensureEmptyDir(this.options.localDirPath);
  }

  public async stop() {}

  // Method to retrieve tar stream
  public async getImage(imageName: string): Promise<plugins.smartstream.stream.Readable> {
    const imagePath = plugins.path.join(this.options.localDirPath, `${imageName}.tar`);

    if (!(await plugins.smartfile.fs.fileExists(imagePath))) {
      throw new Error(`Image ${imageName} does not exist.`);
    }

    return plugins.smartfile.fsStream.createReadStream(imagePath);
  }
}
@@ -1,8 +1,8 @@
import * as plugins from './plugins.js';
import * as plugins from './docker.plugins.js';
import * as interfaces from './interfaces/index.js';

import { DockerHost } from './classes.host.js';
import { logger } from './logger.js';
import { DockerHost } from './docker.classes.host.js';
import { logger } from './docker.logging.js';

export class DockerContainer {
  // STATIC
@ -1,12 +1,9 @@
|
||||
import * as plugins from './plugins.js';
|
||||
import * as paths from './paths.js';
|
||||
import { DockerContainer } from './classes.container.js';
|
||||
import { DockerNetwork } from './classes.network.js';
|
||||
import { DockerService } from './classes.service.js';
|
||||
import { logger } from './logger.js';
|
||||
import * as plugins from './docker.plugins.js';
|
||||
import { DockerContainer } from './docker.classes.container.js';
|
||||
import { DockerNetwork } from './docker.classes.network.js';
|
||||
import { DockerService } from './docker.classes.service.js';
|
||||
import { logger } from './docker.logging.js';
|
||||
import path from 'path';
|
||||
import { DockerImageStore } from './classes.imagestore.js';
|
||||
import { DockerImage } from './classes.image.js';
|
||||
|
||||
export interface IAuthData {
|
||||
serveraddress: string;
|
||||
@ -14,36 +11,21 @@ export interface IAuthData {
|
||||
password: string;
|
||||
}
|
||||
|
||||
export interface IDockerHostConstructorOptions {
|
||||
dockerSockPath?: string;
|
||||
imageStoreDir?: string;
|
||||
}
|
||||
|
||||
export class DockerHost {
|
||||
public options: IDockerHostConstructorOptions;
|
||||
|
||||
/**
|
||||
* the path where the docker sock can be found
|
||||
*/
|
||||
public socketPath: string;
|
||||
private registryToken: string = '';
|
||||
public imageStore: DockerImageStore;
|
||||
public smartBucket: plugins.smartbucket.SmartBucket;
|
||||
|
||||
/**
|
||||
* the constructor to instantiate a new docker sock instance
|
||||
* @param pathArg
|
||||
*/
|
||||
constructor(optionsArg: IDockerHostConstructorOptions) {
|
||||
this.options = {
|
||||
...{
|
||||
imageStoreDir: plugins.path.join(paths.nogitDir, 'temp-docker-image-store'),
|
||||
},
|
||||
...optionsArg,
|
||||
}
|
||||
constructor(pathArg?: string) {
|
||||
let pathToUse: string;
|
||||
if (optionsArg.dockerSockPath) {
|
||||
pathToUse = optionsArg.dockerSockPath;
|
||||
if (pathArg) {
|
||||
pathToUse = pathArg;
|
||||
} else if (process.env.DOCKER_HOST) {
|
||||
pathToUse = process.env.DOCKER_HOST;
|
||||
} else if (process.env.CI) {
|
||||
@ -59,17 +41,6 @@ export class DockerHost {
|
||||
}
|
||||
console.log(`using docker sock at ${pathToUse}`);
|
||||
this.socketPath = pathToUse;
|
||||
this.imageStore = new DockerImageStore({
|
||||
bucketDir: null,
|
||||
localDirPath: this.options.imageStoreDir,
|
||||
})
|
||||
}
|
||||
|
||||
public async start() {
|
||||
await this.imageStore.start();
|
||||
}
|
||||
public async stop() {
|
||||
await this.imageStore.stop();
|
||||
}
|
||||
|
||||
/**
|
||||
@ -90,22 +61,19 @@ export class DockerHost {
|
||||
/**
|
||||
* gets the token from the .docker/config.json file for GitLab registry
|
||||
*/
|
||||
public async getAuthTokenFromDockerConfig(registryUrlArg: string) {
|
||||
public async getGitlabComTokenFromDockerConfig() {
|
||||
const dockerConfigPath = plugins.smartpath.get.home('~/.docker/config.json');
|
||||
const configObject = plugins.smartfile.fs.toObjectSync(dockerConfigPath);
|
||||
const gitlabAuthBase64 = configObject.auths[registryUrlArg].auth;
|
||||
const gitlabAuthBase64 = configObject.auths['registry.gitlab.com'].auth;
|
||||
const gitlabAuth: string = plugins.smartstring.base64.decode(gitlabAuthBase64);
|
||||
const gitlabAuthArray = gitlabAuth.split(':');
|
||||
await this.auth({
|
||||
username: gitlabAuthArray[0],
|
||||
password: gitlabAuthArray[1],
|
||||
serveraddress: registryUrlArg,
|
||||
serveraddress: 'registry.gitlab.com',
|
||||
});
|
||||
}
|
||||
|
||||
// ==============
|
||||
// NETWORKS
|
||||
// ==============
|
||||
/**
|
||||
* gets all networks
|
||||
*/
|
||||
@ -114,23 +82,9 @@ export class DockerHost {
|
||||
}
|
||||
|
||||
/**
|
||||
* create a network
|
||||
*
|
||||
*/
|
||||
public async createNetwork(optionsArg: Parameters<typeof DockerNetwork.createNetwork>[1]) {
|
||||
return await DockerNetwork.createNetwork(this, optionsArg);
|
||||
}
|
||||
|
||||
/**
|
||||
* get a network by name
|
||||
*/
|
||||
public async getNetworkByName(networkNameArg: string) {
|
||||
return await DockerNetwork.getNetworkByName(this, networkNameArg);
|
||||
}
|
||||
|
||||
|
||||
// ==============
|
||||
// CONTAINERS
|
||||
// ==============
|
||||
/**
|
||||
* gets all containers
|
||||
*/
|
||||
@ -139,10 +93,6 @@ export class DockerHost {
|
||||
return containerArray;
|
||||
}
|
||||
|
||||
// ==============
|
||||
// SERVICES
|
||||
// ==============
|
||||
|
||||
/**
|
||||
* gets all services
|
||||
*/
|
||||
@ -151,24 +101,6 @@ export class DockerHost {
|
||||
return serviceArray;
|
||||
}
|
||||
|
||||
// ==============
|
||||
// IMAGES
|
||||
// ==============
|
||||
|
||||
/**
|
||||
* get all images
|
||||
*/
|
||||
public async getImages() {
|
||||
return await DockerImage.getImages(this);
|
||||
}
|
||||
|
||||
/**
|
||||
* get an image by name
|
||||
*/
|
||||
public async getImageByName(imageNameArg: string) {
|
||||
return await DockerImage.getImageByName(this, imageNameArg);
|
||||
}

  /**
   *
   */
@@ -242,7 +174,7 @@ export class DockerHost {
    return response;
  }

-  public async requestStreaming(methodArg: string, routeArg: string, readStream?: plugins.smartstream.stream.Readable) {
+  public async requestStreaming(methodArg: string, routeArg: string, dataArg = {}) {
    const requestUrl = `${this.socketPath}${routeArg}`;
    const response = await plugins.smartrequest.request(
      requestUrl,
@@ -256,40 +188,10 @@ export class DockerHost {
        requestBody: null,
        keepAlive: false,
      },
      true,
-      (readStream ? reqArg => {
-        let counter = 0;
-        const smartduplex = new plugins.smartstream.SmartDuplex({
-          writeFunction: async (chunkArg) => {
-            if (counter % 1000 === 0) {
-              console.log(`posting chunk ${counter}`);
-            }
-            counter++;
-            return chunkArg;
-          }
-        });
-        readStream.pipe(smartduplex).pipe(reqArg);
-      } : null),
      true
    );
    console.log(response.statusCode);
    console.log(response.body);
    return response;
  }
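The removed streaming branch pipes the request body through a pass-through that logs every 1000th chunk. The same idea can be sketched with a plain `node:stream` Transform instead of `SmartDuplex` (the function name and configurable interval are illustrative, not part of the library):

```typescript
import { Transform } from 'node:stream';

// Pass-through that reports progress every `logEvery` chunks,
// analogous to the SmartDuplex writeFunction above.
function makeProgressPassthrough(
  logEvery: number,
  onLog: (chunkIndex: number) => void
): Transform {
  let counter = 0;
  return new Transform({
    transform(chunk, _encoding, callback) {
      if (counter % logEvery === 0) onLog(counter);
      counter++;
      callback(null, chunk); // forward the chunk unchanged
    },
  });
}
```

A readable stream would then be piped through it and on into the request, much like `readStream.pipe(smartduplex).pipe(reqArg)` above.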

  /**
   * add s3 storage
   * @param optionsArg
   */
  public async addS3Storage(optionsArg: plugins.tsclass.storage.IS3Descriptor) {
    this.smartBucket = new plugins.smartbucket.SmartBucket(optionsArg);
    if (!optionsArg.bucketName) {
      throw new Error('bucketName is required');
    }
    const bucket = await this.smartBucket.getBucketByName(optionsArg.bucketName);
    let wantedDirectory = await bucket.getBaseDirectory();
    if (optionsArg.directoryPath) {
      wantedDirectory = await wantedDirectory.getSubDirectoryByName(optionsArg.directoryPath);
    }
    this.imageStore.options.bucketDir = wantedDirectory;
  }
}
144
ts/docker.classes.image.ts
Normal file
@@ -0,0 +1,144 @@
import * as plugins from './docker.plugins.js';
import * as interfaces from './interfaces/index.js';
import { DockerHost } from './docker.classes.host.js';
import { logger } from './docker.logging.js';

export class DockerImage {
  // STATIC
  public static async getImages(dockerHost: DockerHost) {
    const images: DockerImage[] = [];
    const response = await dockerHost.request('GET', '/images/json');
    for (const imageObject of response.body) {
      images.push(new DockerImage(dockerHost, imageObject));
    }
    return images;
  }

  public static async findImageByName(dockerHost: DockerHost, imageNameArg: string) {
    const images = await this.getImages(dockerHost);
    const result = images.find((image) => {
      if (image.RepoTags) {
        return image.RepoTags.includes(imageNameArg);
      } else {
        return false;
      }
    });
    return result;
  }
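`findImageByName` matches on the `RepoTags` array, which Docker reports as `null` for untagged images. The predicate can be sketched as a standalone helper (the helper name is illustrative, not part of the module):

```typescript
// True when the image carries the given repo tag; images without
// RepoTags (e.g. dangling layers) never match.
function imageMatchesName(
  image: { RepoTags?: string[] | null },
  imageName: string
): boolean {
  return image.RepoTags ? image.RepoTags.includes(imageName) : false;
}
```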

  public static async createFromRegistry(
    dockerHostArg: DockerHost,
    creationObject: interfaces.IImageCreationDescriptor
  ): Promise<DockerImage> {
    // let's create a sanitized imageUrlObject
    const imageUrlObject: {
      imageUrl: string;
      imageTag: string;
      imageOriginTag: string;
    } = {
      imageUrl: creationObject.imageUrl,
      imageTag: creationObject.imageTag,
      imageOriginTag: null,
    };
    if (imageUrlObject.imageUrl.includes(':')) {
      const imageUrl = imageUrlObject.imageUrl.split(':')[0];
      const imageTag = imageUrlObject.imageUrl.split(':')[1];
      if (imageUrlObject.imageTag) {
        throw new Error(
          `imageUrl ${imageUrlObject.imageUrl} can't be tagged with ${imageUrlObject.imageTag} because it is already tagged with ${imageTag}`
        );
      } else {
        imageUrlObject.imageUrl = imageUrl;
        imageUrlObject.imageTag = imageTag;
      }
    } else if (!imageUrlObject.imageTag) {
      imageUrlObject.imageTag = 'latest';
    }
    imageUrlObject.imageOriginTag = `${imageUrlObject.imageUrl}:${imageUrlObject.imageTag}`;

    // let's actually create the image
    const response = await dockerHostArg.request(
      'POST',
      `/images/create?fromImage=${encodeURIComponent(
        imageUrlObject.imageUrl
      )}&tag=${encodeURIComponent(imageUrlObject.imageTag)}`
    );
    if (response.statusCode < 300) {
      logger.log('info', `Successfully pulled image ${imageUrlObject.imageUrl} from the registry`);
      const image = await DockerImage.findImageByName(dockerHostArg, imageUrlObject.imageOriginTag);
      return image;
    } else {
      logger.log('error', `Failed at the attempt of creating a new image`);
    }
  }
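The tag-normalization rules in `createFromRegistry` (reject a second tag when the URL already carries one, default to `latest`, then build the origin tag) are self-contained enough to sketch as a pure helper; the function name is illustrative, not part of the module. Note that a registry host with a port (`host:5000/image`) would confuse the bare `split(':')`, a limitation the original shares:

```typescript
// Mirrors the sanitization above: 'nginx' -> nginx:latest,
// 'nginx:1.25' keeps its tag, and 'nginx:1.25' plus an explicit tag is an error.
function normalizeImageRef(imageUrl: string, imageTag?: string) {
  if (imageUrl.includes(':')) {
    const [url, tag] = imageUrl.split(':');
    if (imageTag) {
      throw new Error(
        `imageUrl ${imageUrl} can't be tagged with ${imageTag} because it is already tagged with ${tag}`
      );
    }
    return { imageUrl: url, imageTag: tag, imageOriginTag: `${url}:${tag}` };
  }
  const tag = imageTag ?? 'latest';
  return { imageUrl, imageTag: tag, imageOriginTag: `${imageUrl}:${tag}` };
}
```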

  public static async tagImageByIdOrName(
    dockerHost: DockerHost,
    idOrNameArg: string,
    newTagArg: string
  ) {
    const response = await dockerHost.request(
      'POST',
      `/images/${encodeURIComponent(idOrNameArg)}/${encodeURIComponent(newTagArg)}`
    );
  }

  public static async buildImage(dockerHostArg: DockerHost, dockerImageTag) {
    // TODO: implement building an image
  }
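`tagImageByIdOrName` URL-encodes both path segments before posting. A sketch of just the path construction (the helper name is illustrative; note also that the Docker Engine API documents its tag endpoint as `POST /images/{name}/tag?repo=…&tag=…`, whereas the path here follows the diff as shown):

```typescript
// Both segments are encoded so image refs like 'nginx:latest'
// survive as a single path component.
function tagEndpointPath(idOrName: string, newTag: string): string {
  return `/images/${encodeURIComponent(idOrName)}/${encodeURIComponent(newTag)}`;
}
```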

  // INSTANCE
  // references
  public dockerHost: DockerHost;

  // properties
  /**
   * the tags for an image
   */
  public Containers: number;
  public Created: number;
  public Id: string;
  public Labels: interfaces.TLabels;
  public ParentId: string;
  public RepoDigests: string[];
  public RepoTags: string[];
  public SharedSize: number;
  public Size: number;
  public VirtualSize: number;

  constructor(dockerHostArg, dockerImageObjectArg: any) {
    this.dockerHost = dockerHostArg;
    Object.keys(dockerImageObjectArg).forEach((keyArg) => {
      this[keyArg] = dockerImageObjectArg[keyArg];
    });
  }

  /**
   * tag an image
   * @param newTag
   */
  public async tagImage(newTag) {
    throw new Error('.tagImage is not yet implemented');
  }

  /**
   * pulls the latest version from the registry
   */
  public async pullLatestImageFromRegistry(): Promise<boolean> {
    const updatedImage = await DockerImage.createFromRegistry(this.dockerHost, {
      imageUrl: this.RepoTags[0],
    });
    Object.assign(this, updatedImage);
    // TODO: compare image digests before and after
    return true;
  }

  // get stuff
  public async getVersion() {
    if (this.Labels && this.Labels.version) {
      return this.Labels.version;
    } else {
      return '0.0.0';
    }
  }
}
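`getVersion` falls back to `'0.0.0'` when no `version` label is present. The lookup can be sketched as a pure helper (the name is illustrative, not part of the module):

```typescript
// Version lookup with the same '0.0.0' fallback as getVersion above.
function versionFromLabels(labels?: { version?: string }): string {
  return labels && labels.version ? labels.version : '0.0.0';
}
```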
@@ -1,9 +1,9 @@
-import * as plugins from './plugins.js';
+import * as plugins from './docker.plugins.js';
 import * as interfaces from './interfaces/index.js';

-import { DockerHost } from './classes.host.js';
-import { DockerService } from './classes.service.js';
-import { logger } from './logger.js';
+import { DockerHost } from './docker.classes.host.js';
+import { DockerService } from './docker.classes.service.js';
+import { logger } from './docker.logging.js';

 export class DockerNetwork {
   public static async getNetworks(dockerHost: DockerHost): Promise<DockerNetwork[]> {
@@ -1,5 +1,5 @@
-import * as plugins from './plugins.js';
-import { DockerHost } from './classes.host.js';
+import * as plugins from './docker.plugins.js';
+import { DockerHost } from './docker.classes.host.js';

 // interfaces
 import * as interfaces from './interfaces/index.js';
@@ -1,10 +1,10 @@
-import * as plugins from './plugins.js';
+import * as plugins from './docker.plugins.js';
 import * as interfaces from './interfaces/index.js';

-import { DockerHost } from './classes.host.js';
-import { DockerImage } from './classes.image.js';
-import { DockerSecret } from './classes.secret.js';
-import { logger } from './logger.js';
+import { DockerHost } from './docker.classes.host.js';
+import { DockerImage } from './docker.classes.image.js';
+import { DockerSecret } from './docker.classes.secret.js';
+import { logger } from './docker.logging.js';

 export class DockerService {
   // STATIC
@@ -232,9 +232,7 @@ export class DockerService {

    await this.reReadFromDockerEngine();
    const dockerImage = await DockerImage.createFromRegistry(this.dockerHostRef, {
      creationObject: {
        imageUrl: this.Spec.TaskTemplate.ContainerSpec.Image,
      }
    });

    const imageVersion = new plugins.smartversion.SmartVersion(dockerImage.Labels.version);
3
ts/docker.logging.ts
Normal file
@@ -0,0 +1,3 @@
import * as plugins from './docker.plugins.js';

export const logger = new plugins.smartlog.ConsoleLog();
@@ -5,8 +5,6 @@ export { path };

// @pushrocks scope
import * as lik from '@push.rocks/lik';
import * as smartarchive from '@push.rocks/smartarchive';
import * as smartbucket from '@push.rocks/smartbucket';
import * as smartfile from '@push.rocks/smartfile';
import * as smartjson from '@push.rocks/smartjson';
import * as smartlog from '@push.rocks/smartlog';
@@ -15,14 +13,10 @@ import * as smartpath from '@push.rocks/smartpath';
import * as smartpromise from '@push.rocks/smartpromise';
import * as smartrequest from '@push.rocks/smartrequest';
import * as smartstring from '@push.rocks/smartstring';
import * as smartstream from '@push.rocks/smartstream';
import * as smartunique from '@push.rocks/smartunique';
import * as smartversion from '@push.rocks/smartversion';

export {
  lik,
  smartarchive,
  smartbucket,
  smartfile,
  smartjson,
  smartlog,
@@ -31,8 +25,6 @@ export {
  smartpromise,
  smartrequest,
  smartstring,
  smartstream,
  smartunique,
  smartversion,
};
13
ts/index.ts
@@ -1,7 +1,6 @@
-export * from './classes.host.js';
-export * from './classes.container.js';
-export * from './classes.image.js';
-export * from './classes.imagestore.js';
-export * from './classes.network.js';
-export * from './classes.secret.js';
-export * from './classes.service.js';
+export * from './docker.classes.host.js';
+export * from './docker.classes.container.js';
+export * from './docker.classes.image.js';
+export * from './docker.classes.network.js';
+export * from './docker.classes.secret.js';
+export * from './docker.classes.service.js';

@@ -1,4 +1,4 @@
-import { DockerNetwork } from '../classes.network.js';
+import { DockerNetwork } from '../docker.classes.network.js';

 export interface IContainerCreationDescriptor {
   Hostname: string;

@@ -1,9 +1,9 @@
-import * as plugins from '../plugins.js';
+import * as plugins from '../docker.plugins.js';

 import * as interfaces from './index.js';
-import { DockerNetwork } from '../classes.network.js';
-import { DockerSecret } from '../classes.secret.js';
-import { DockerImage } from '../classes.image.js';
+import { DockerNetwork } from '../docker.classes.network.js';
+import { DockerSecret } from '../docker.classes.secret.js';
+import { DockerImage } from '../docker.classes.image.js';

 export interface IServiceCreationDescriptor {
   name: string;

@@ -1,5 +0,0 @@
-import * as plugins from './plugins.js';
-import { commitinfo } from './00_commitinfo_data.js';
-
-export const logger = plugins.smartlog.Smartlog.createForCommitinfo(commitinfo);
-logger.enableConsole();

@@ -1,9 +0,0 @@
-import * as plugins from './plugins.js';
-
-export const packageDir = plugins.path.resolve(
-  plugins.smartpath.get.dirnameFromImportMetaUrl(import.meta.url),
-  '../'
-);
-
-export const nogitDir = plugins.path.resolve(packageDir, '.nogit/');
-plugins.smartfile.fs.ensureDir(nogitDir);