Compare commits: c8a695663c ... main (11 commits)

| SHA1 |
| --- |
| ca046ef160 |
| 76c166cc01 |
| ed1027e5c9 |
| c843a58ca8 |
| a812cce68d |
| b897cc1002 |
| 50994b567a |
| 8550024659 |
| 9065826e43 |
| d1eb441498 |
| 69164b66e2 |
@@ -1,2 +1,7 @@
.git*
.DS_Store
.idea/
*.log
node_modules/
test/
.vscode
README.md (145 lines)
@@ -1,58 +1,117 @@
# Hexo Deployer for S3-Compatible Services

[](https://www.npmjs.com/package/hexo-deployer-s3-plus)
[](https://www.npmjs.com/package/hexo-deployer-s3-plus)

English | [简体中文](README_zh.md)

This is a deployment plugin for [Hexo](https://hexo.io) that allows you to deploy your static site to any S3-compatible object storage service. It is built on the AWS SDK v3, ensuring modern features and robust performance.

This plugin is perfect for:

* **AWS S3**
* **Tebi.io**
* **MinIO**
* **Cloudflare R2**
* **DigitalOcean Spaces**
* And any other storage provider that exposes an S3-compatible API.

## Features

- **Broad Compatibility**: Deploy to any S3-compatible service by simply providing an endpoint.
- **Concurrent Uploads**: Uses `p-limit` to upload multiple files in parallel, significantly speeding up deployment.
- **Sync with Deletion**: Automatically detects and deletes files from the bucket that are no longer present in your local build (`delete_removed`).
- **Custom Headers**: Set custom HTTP headers (e.g., `Cache-Control`) on your files.
- **Sub-directory Support**: Deploy your site under a specific prefix (sub-directory) within your bucket.
- **Flexible Credential Handling**: Reads credentials from your `_config.yml`, environment variables, or AWS CLI profiles.
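The concurrent-upload feature above boils down to capping how many uploads run at once. The sketch below is a hand-rolled stand-in for the pattern that `p-limit` provides (the plugin itself uses the real `p-limit` package; `makeLimit` and `uploadOne` here are illustrative names only):

```javascript
// Minimal stand-in for the concurrency cap p-limit provides: at most `max`
// tasks run at once, the rest wait in a FIFO queue. Illustrative only.
function makeLimit(max) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= max || queue.length === 0) return;
    active += 1;
    const { fn, resolve, reject } = queue.shift();
    Promise.resolve()
      .then(fn)
      .then(resolve, reject)
      .finally(() => { active -= 1; next(); });
  };
  return fn => new Promise((resolve, reject) => {
    queue.push({ fn, resolve, reject });
    next();
  });
}

// Demo: six fake "uploads", never more than two in flight at a time.
const limit = makeLimit(2);
let inFlight = 0;
let peak = 0;

function uploadOne(name) {
  inFlight += 1;
  peak = Math.max(peak, inFlight);
  return new Promise(resolve => setTimeout(() => {
    inFlight -= 1;
    resolve(name);
  }, 10));
}

Promise.all(['a.html', 'b.css', 'c.js', 'd.png', 'e.xml', 'f.txt'].map(f => limit(() => uploadOne(f))))
  .then(files => console.log(`${files.length} uploaded, peak concurrency ${peak}`)); // peak stays <= 2
```

Each file is wrapped in `limit(() => ...)`, so `Promise.all` still resolves in input order while the limiter controls how many promises are actually executing.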
## Installation

```bash
npm install hexo-deployer-s3-plus --save
```
## Configuration

Add the following configuration to your `_config.yml` file.

### Example for a Generic S3 Service (like Tebi.io, MinIO, R2)

This is the recommended configuration for any non-AWS S3 service.

```yaml
# _config.yml
deploy:
  type: s3
  bucket: your-bucket-name
  endpoint: https://s3.your-service-provider.com
  access_key_id: YOUR_ACCESS_KEY
  secret_access_key: YOUR_SECRET_KEY
  region: us-east-1 # Often required by the SDK, but can be any string for non-AWS services.

  # Optional settings:
  concurrency: 20
  delete_removed: true
  prefix: blog/
```
### Example for AWS S3

```yaml
# _config.yml
deploy:
  type: s3
  bucket: your-aws-s3-bucket-name
  region: your-aws-region # e.g., us-west-2
  endpoint: https://s3.your-aws-region.amazonaws.com # The AWS S3 endpoint for your region

  # Credentials can be omitted if they are set as environment variables
  # (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) or via an AWS profile.
  # access_key_id: YOUR_AWS_ACCESS_KEY_ID
  # secret_access_key: YOUR_AWS_SECRET_ACCESS_KEY

  # Optional settings:
  aws_cli_profile: my-work-profile # Use a specific profile from ~/.aws/credentials
  concurrency: 20
  delete_removed: true
```
## Usage

After configuring, you can deploy your site with the following command:

```bash
hexo clean && hexo generate && hexo deploy
```
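Behind that command, `hexo deploy` looks up a deployer by the `type` field of the `deploy:` block and invokes it with the rest of the block as arguments. The sketch below illustrates that wiring with a stubbed `hexo` object; the plugin's real `index.js` is not shown in this diff, so the registration line is an assumption based on the standard Hexo deployer-plugin pattern:

```javascript
// Stubbed registry standing in for hexo.extend.deployer (assumption: the
// plugin's index.js registers lib/deployer.js under the 's3' type).
const deployers = {};
const hexo = {
  extend: {
    deployer: {
      register(name, fn) { deployers[name] = fn; },
    },
  },
};

// What the plugin's index.js is assumed to do:
hexo.extend.deployer.register('s3', async function deploy(args) {
  // lib/deployer.js would run here, receiving the deploy block of _config.yml.
  return `would deploy to bucket ${args.bucket}`;
});

// `hexo deploy` then dispatches on the configured type:
const deployConfig = { type: 's3', bucket: 'my-bucket' };
deployers[deployConfig.type](deployConfig).then(msg => console.log(msg));
```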
## Options

You can configure this plugin in `_config.yml`.

| Parameter | Required / Optional | Description |
| --- | --- | --- |
| `bucket` | **Required** | The name of your S3 bucket. |
| `endpoint` | **Required** | The S3 API endpoint URL of your storage provider. For AWS, this looks like `https://s3.us-east-1.amazonaws.com`. |
| `access_key_id` | Optional | Your access key. Can also be set via `aws_key`. Omit if using environment variables or an AWS profile. |
| `secret_access_key` | Optional | Your secret key. Can also be set via `aws_secret`. Omit if using environment variables or an AWS profile. |
| `region` | Optional | The region of your bucket. **Crucial for AWS S3**. For other S3 services this can often be a placeholder string like `us-east-1`, but it is still recommended. |
| `prefix` | Optional | A sub-directory inside your bucket where the files will be uploaded, e.g., `blog/`. |
| `concurrency` | Optional | The number of files to upload in parallel. Defaults to `20`. |
| `delete_removed` | Optional | If `true`, files in the bucket that don't exist in your local `public` folder are deleted upon deployment. **Defaults to `true`**. Set to `false` to disable this synchronization. |
| `force_path_style` | Optional | If `true`, the S3 endpoint URL uses path style instead of virtual-host style. **Defaults to `true`**. Set to `false` to disable. |
| `headers` | Optional | A JSON object of HTTP headers to apply to all uploaded files. Useful for setting caching policies, e.g., `headers: {"Cache-Control": "max-age=31536000"}`. |
| `aws_cli_profile` | Optional | The name of a profile in your `~/.aws/credentials` file to use for authentication. Ignored if `access_key_id` and `secret_access_key` are provided directly. |
| `aws_key`, `aws_secret` | Optional | Legacy aliases for `access_key_id` and `secret_access_key`, kept for backward compatibility. |
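The `headers` object is merged into each upload's parameters via an object spread, so any key you set (such as `CacheControl`) rides along with the generated `Bucket`/`Key`/`ContentType` values. A minimal illustration (`buildParams` is an illustrative name, mirroring the `...headers` spread used in `lib/deployer.js`):

```javascript
// How user-supplied headers combine with per-file upload parameters.
const headers = { CacheControl: 'max-age=604800, public' };

function buildParams(bucket, key, contentType) {
  return {
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
    ...headers, // user headers are spread in last, so they win on key clashes
  };
}

const params = buildParams('my-site-bucket', 'index.html', 'text/html');
console.log(params.CacheControl); // 'max-age=604800, public'
```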
``` yaml
# You can use this:
deploy:
  type: s3
  bucket: <S3 bucket>
  aws_key: <AWS id key> # Optional, if the environment variable `AWS_ACCESS_KEY_ID` is set
  aws_secret: <AWS secret key> # Optional, if the environment variable `AWS_SECRET_ACCESS_KEY` is set
  aws_cli_profile: <an AWS CLI profile name, e.g. 'default'> # Optional
  concurrency: <number of connections> # Optional
  region: <region> # Optional, see https://github.com/LearnBoost/knox#region
  headers: <headers in JSON format> # Passed through to S3; useful for setting cache metadata on Hexo assets
  prefix: <prefix> # Optional, prefix ending in /
  delete_removed: <true|false> # If true, removed files are deleted from S3. Default: true
```
## Troubleshooting

If you installed the AWS command-line tool and provided your credentials via `aws configure`,
you can re-use those credentials: specify a value for `aws_cli_profile`, such as "default",
and leave `aws_key`, `aws_secret`, and `region` blank. If you provide the key, secret, and/or
region explicitly or via the environment, they will override what's in your AWS CLI profile.

- **`TypeError: ... is not a function`**: This often happens with dependencies like `chalk` or `p-limit` due to module system conflicts (CommonJS vs. ES Modules). Ensure you are requiring them correctly, for example: `const pLimit = require('p-limit').default;`. If the problem persists, try installing a specific compatible version (e.g., `npm install chalk@4`).
- **`Access Denied` / `403 Forbidden`**: This is almost always a permissions issue. Check that the API key (Access Key) you are using has the required permissions on the bucket:
  - `s3:PutObject` (for uploading)
  - `s3:ListBucket` (for checking which files to delete)
  - `s3:DeleteObject` (for deleting removed files)
  - `s3:GetObject` (for any read operations, though not required for deploy)
- **Connection Errors**: Double-check your `endpoint` URL for typos, and ensure no firewall is blocking the connection to the endpoint.

#### Example: header Cache-Control

``` yaml
deploy:
  type: s3
  bucket: my-site-bucket
  headers: {CacheControl: 'max-age=604800, public'}
```

This sets the "Cache-Control" header of every deployed file to a max-age of one week, which resolves the "Leverage browser caching" warning on most page speed analyzers. For custom metadata use:

``` yaml
headers: {Metadata: {x-amz-meta-mykey: "my value"}}
```

## Contributors

- Josh Strange ([joshstrange](https://github.com/joshstrange); original implementation)
- Josenivaldo Benito Jr. ([JrBenito](https://github.com/jrbenito))

## License

[MIT](https://opensource.org/licenses/MIT)
README_zh.md (113 lines, new file)
@@ -0,0 +1,113 @@
[](https://www.npmjs.com/package/hexo-deployer-s3-plus)
[](https://www.npmjs.com/package/hexo-deployer-s3-plus)

This is a deployment plugin for [Hexo](https://hexo.io) that lets you deploy a static site to any S3-compatible object storage service. It is built on the AWS SDK v3, ensuring modern features and strong performance.

This plugin works well with:

* **AWS S3**
* **Tebi.io**
* **MinIO**
* **Cloudflare R2**
* **DigitalOcean Spaces**
* Any other storage service that offers an S3-compatible API.

## Features

- **Broad compatibility**: Deploy to any S3-compatible service just by providing an endpoint.
- **Concurrent uploads**: Uses `p-limit` to upload multiple files in parallel, significantly speeding up deployment.
- **Sync with deletion**: Automatically detects and deletes files in the bucket that no longer exist in the local build directory (`delete_removed`).
- **Custom headers**: Set custom HTTP headers (such as `Cache-Control`) on your files.
- **Sub-directory support**: Deploy your site under a specific prefix (sub-directory) in the bucket.
- **Flexible credential handling**: Reads credentials from `_config.yml`, environment variables, or AWS CLI profiles.

## Installation

```bash
npm install hexo-deployer-s3-plus --save
```
## Configuration

Add the following configuration to your `_config.yml` file.

### Example for a Generic S3 Service (such as Tebi.io, MinIO, R2)

This configuration is recommended for any non-AWS S3 service.

```yaml
# _config.yml
deploy:
  type: s3
  bucket: your-bucket-name # Your bucket name
  endpoint: https://s3.your-service-provider.com # The S3 endpoint from your provider
  access_key_id: YOUR_ACCESS_KEY # Your access key
  secret_access_key: YOUR_SECRET_KEY # Your secret key
  region: us-east-1 # The SDK usually requires this field; for non-AWS services it can be any string

  # Optional settings:
  concurrency: 20 # Number of concurrent uploads
  delete_removed: true # Whether to delete extra files in the bucket
  prefix: blog/ # Sub-directory in the bucket to upload to
```
### Example for AWS S3

```yaml
# _config.yml
deploy:
  type: s3
  bucket: your-aws-s3-bucket-name # Your AWS S3 bucket name
  region: your-aws-region # Your AWS region, e.g., us-west-2
  endpoint: https://s3.your-aws-region.amazonaws.com # The AWS S3 endpoint for your region

  # If credentials are set as environment variables (AWS_ACCESS_KEY_ID,
  # AWS_SECRET_ACCESS_KEY) or via an AWS profile, they can be omitted here
  # access_key_id: YOUR_AWS_ACCESS_KEY_ID
  # secret_access_key: YOUR_AWS_SECRET_ACCESS_KEY

  # Optional settings:
  aws_cli_profile: my-work-profile # Use a specific profile from ~/.aws/credentials
  concurrency: 20
  delete_removed: true
```

## Usage

Once configured, deploy your site with:

```bash
hexo clean && hexo generate && hexo deploy
```
## Options

| Parameter | Required / Optional | Description |
| --- | --- | --- |
| `bucket` | **Required** | The name of your S3 bucket. |
| `endpoint` | **Required** | The S3 API endpoint URL from your storage provider. For AWS, it looks like `https://s3.us-east-1.amazonaws.com`. |
| `access_key_id` | Optional | Your access key. Can also be set via `aws_key`. May be omitted when using environment variables or an AWS profile. |
| `secret_access_key` | Optional | Your secret key. Can also be set via `aws_secret`. May be omitted when using environment variables or an AWS profile. |
| `region` | Optional | The region your bucket is in. **Crucial for AWS S3**. For other S3 services this can usually be a placeholder string (such as `us-east-1`), but is still recommended. |
| `prefix` | Optional | The sub-directory in the bucket that files are uploaded to, e.g., `blog/`. |
| `concurrency` | Optional | The number of files uploaded in parallel. Defaults to `20`. |
| `delete_removed` | Optional | If `true`, files that exist in the bucket but not in the local `public` folder are deleted on deployment. **Defaults to `true`**. Set to `false` to disable this synchronization. |
| `force_path_style` | Optional | If `true`, the S3 API endpoint URL uses path style instead of virtual-host style. **Defaults to `true`**. Set to `false` to disable. |
| `headers` | Optional | A JSON object of HTTP headers applied to all uploaded files. Useful for setting caching policies, e.g., `headers: {"Cache-Control": "max-age=31536000"}`. |
| `aws_cli_profile` | Optional | The profile name in your `~/.aws/credentials` file to use for authentication. Ignored when `access_key_id` and `secret_access_key` are provided directly. |
| `aws_key`, `aws_secret` | Optional | Legacy aliases for `access_key_id` and `secret_access_key`, kept for backward compatibility. |

## Troubleshooting

- **`TypeError: ... is not a function`**: Usually caused by module-system conflicts (CommonJS vs. ES Modules) in dependencies such as `chalk` or `p-limit`. Make sure you require them correctly, e.g., `const pLimit = require('p-limit').default;`. If the problem persists, try installing a specific compatible version (e.g., `npm install chalk@4`).

- **`Access Denied` / `403 Forbidden`**: This is almost always a permissions issue. Check that the access key you are using has the required permissions on the bucket:
  - `s3:PutObject` (for uploading)
  - `s3:ListBucket` (for checking which files to delete)
  - `s3:DeleteObject` (for deleting files)
  - `s3:GetObject` (for any read operations, though not required for deploying)

- **Connection errors**: Double-check your `endpoint` URL for typos, and make sure no firewall is blocking the connection to the endpoint.

## License

[MIT](https://opensource.org/licenses/MIT)
lib/deployer.js (255 lines)
@@ -1,81 +1,196 @@
const { S3Client, ListObjectsV2Command, DeleteObjectsCommand } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');
const fs = require('fs');
const path = require('path');
const klawSync = require('klaw-sync');
const mime = require('mime-types');
const chalk = require('chalk').default;
const pLimit = require('p-limit').default;

module.exports = async function(args) {
  const log = this.log;
  const publicDir = this.config.public_dir;
  // --- 1. Check the configuration ---
  const {
    bucket,
    region,
    concurrency = 20,
    prefix,
    aws_cli_profile,
    headers,
    delete_removed,
    force_path_style,
    endpoint,
    access_key_id,
    secret_access_key,
    aws_key,
    aws_secret
  } = args;
  if (!bucket) {
    log.error('Bucket and Endpoint must be configured in _config.yml');
    log.info(chalk.bold('--- Generic S3-Compatible Service Example (like Tebi, MinIO, Cloudflare R2) ---'));
    log.info('  deploy:');
    log.info('    type: s3');
    log.info('    bucket: <your-bucket-name>');
    log.info('    endpoint: <your-s3-endpoint>');
    log.info('    access_key_id: <your-access-key>');
    log.info('    secret_access_key: <your-secret-key>');
    log.info('    region: <any-string-is-ok-e.g.-us-east-1>');
    log.info('    [prefix]: <prefix>');
    log.info('    [concurrency]: 20');
    log.info('    [delete_removed]: true');
    log.info('    [force_path_style]: true');
    log.info('');
    log.info(chalk.bold('--- AWS S3 Example ---'));
    log.info('  deploy:');
    log.info('    type: s3');
    log.info('    bucket: <your-aws-bucket-name>');
    log.info('    region: <your-aws-region>');
    log.info('    endpoint: <s3.your-aws-region.amazonaws.com>');
    log.info('    # Credentials can be from env vars, ~/.aws/credentials, or here:');
    log.info('    # access_key_id: <your-aws-key>');
    log.info('    # secret_access_key: <your-aws-secret>');
    return;
  }
  const filledRegion = region || 'us-east-1';
  const filledEndpoint = endpoint || `https://s3.${filledRegion}.amazonaws.com`;
  if (!region) {
    log.warn('No region specified. Using default region: us-east-1');
  }
  if (!endpoint) {
    log.warn(`No endpoint specified. Using default AWS S3 endpoint: ${filledEndpoint}`);
  }

  // --- 2. Create the S3 client ---
  const s3Config = {
    region: filledRegion,
    endpoint: filledEndpoint,
    forcePathStyle: force_path_style !== false, // defaults to true
  };
  const keyId = access_key_id || aws_key;
  const secret = secret_access_key || aws_secret;

  if (keyId && secret) {
    s3Config.credentials = {
      accessKeyId: keyId,
      secretAccessKey: secret
    };
    log.info('Using credentials from _config.yml.');
  } else if (aws_cli_profile) {
    process.env.AWS_PROFILE = aws_cli_profile;
    log.info(`Using AWS profile: ${aws_cli_profile}`);
  } else {
    log.info('Using credentials from environment variables or IAM role.');
  }

  const client = new S3Client(s3Config);

  // --- 3. Build the list of files to upload ---
  // Check that the public folder exists before walking it.
  if (!fs.existsSync(publicDir)) {
    log.error(`Public folder not found: ${publicDir}. Run 'hexo generate' first.`);
    return;
  }

  const filesToUpload = klawSync(publicDir, { nodir: true });
  const remotePrefix = prefix || '';
  const shouldDeleteRemoved = delete_removed !== false;

  log.info(`Found ${filesToUpload.length} files in ${publicDir}`);
  // --- 4. Optionally delete removed files (delete_removed) ---
  if (shouldDeleteRemoved) {
    log.info('Checking for files to delete on S3...');
    try {
      const s3Objects = await listAllObjects(client, bucket, remotePrefix);
      const localFilesSet = new Set(
        filesToUpload.map(file => path.join(remotePrefix, path.relative(publicDir, file.path)).replace(/\\/g, '/'))
      );

      const objectsToDelete = s3Objects
        .filter(obj => !localFilesSet.has(obj.Key))
        .map(obj => ({ Key: obj.Key }));

      if (objectsToDelete.length > 0) {
        log.info(`Deleting ${objectsToDelete.length} removed files from S3...`);
        // DeleteObjects accepts at most 1000 keys per request, so work in chunks.
        for (let i = 0; i < objectsToDelete.length; i += 1000) {
          const chunk = objectsToDelete.slice(i, i + 1000);
          await client.send(new DeleteObjectsCommand({
            Bucket: bucket,
            Delete: { Objects: chunk },
          }));
        }
      } else {
        log.info('No files to delete.');
      }
    } catch (err) {
      log.error('Failed to check/delete removed files. Please check your permissions.');
      log.error(err);
    }
  }
  // --- 5. Upload the files ---
  const limit = pLimit(concurrency);
  log.info(`Uploading to bucket: ${chalk.cyan(bucket)} via endpoint: ${chalk.cyan(filledEndpoint)}`);

  const uploadPromises = filesToUpload.map(file => {
    return limit(() => {
      const key = path.join(remotePrefix, path.relative(publicDir, file.path)).replace(/\\/g, '/');
      const body = fs.createReadStream(file.path);
      const contentType = mime.lookup(file.path) || 'application/octet-stream';

      const upload = new Upload({
        client,
        params: {
          Bucket: bucket,
          Key: key,
          Body: body,
          ContentType: contentType,
          ...headers
        },
      });

      return upload.done().then(() => {
        log.info(`Uploaded: ${key}`);
      });
    });
  });

  try {
    await Promise.all(uploadPromises);
    log.info(chalk.green('All files uploaded successfully!'));
  } catch (err) {
    log.error('An error occurred during upload:');
    log.error(err);
    throw new Error('S3 deployment failed.');
  }
};
/**
 * Helper function to list all objects in an S3 bucket with a given prefix,
 * handling pagination automatically.
 */
async function listAllObjects(client, bucket, prefix) {
  const allObjects = [];
  let isTruncated = true;
  let continuationToken;

  while (isTruncated) {
    const command = new ListObjectsV2Command({
      Bucket: bucket,
      Prefix: prefix,
      ContinuationToken: continuationToken,
    });
    const { Contents, IsTruncated, NextContinuationToken } = await client.send(command);

    if (Contents) {
      allObjects.push(...Contents);
    }
    isTruncated = IsTruncated;
    continuationToken = NextContinuationToken;
  }
  return allObjects;
}
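The pagination loop in `listAllObjects` can be exercised without a real S3 connection by stubbing a client whose `send` returns `ListObjectsV2`-shaped pages. The sketch below restates the loop compactly (`listAll` and `fakeClient` are illustrative names, not part of the plugin):

```javascript
// Compact restatement of the pagination loop, driven by a fake client that
// serves two pages linked via ContinuationToken.
async function listAll(client, bucket, prefix) {
  const all = [];
  let token;
  let truncated = true;
  while (truncated) {
    const page = await client.send({ Bucket: bucket, Prefix: prefix, ContinuationToken: token });
    if (page.Contents) all.push(...page.Contents);
    truncated = page.IsTruncated;
    token = page.NextContinuationToken;
  }
  return all;
}

const fakeClient = {
  send(cmd) {
    if (!cmd.ContinuationToken) {
      // First page: truncated, points at page 2.
      return Promise.resolve({
        Contents: [{ Key: 'a' }, { Key: 'b' }],
        IsTruncated: true,
        NextContinuationToken: 'page-2',
      });
    }
    // Second and final page.
    return Promise.resolve({ Contents: [{ Key: 'c' }], IsTruncated: false });
  },
};

listAll(fakeClient, 'bucket', '').then(objs => {
  console.log(objs.map(o => o.Key)); // [ 'a', 'b', 'c' ]
});
```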
package-lock.json (3094 lines, generated, new file)
(Diff suppressed because the file is too large.)
package.json (39 lines)
@@ -1,7 +1,7 @@
{
  "name": "hexo-deployer-s3-plus",
  "version": "1.0.1",
  "description": "A flexible Hexo deployer for AWS S3 and other S3-compatible services like Tebi.io, MinIO, and Cloudflare R2. Updated for Hexo 7.",
  "main": "index",
  "keywords": [
    "hexo",
@@ -9,35 +9,30 @@
    "aws",
    "deployer"
  ],
  "author": "Yunxiao Xu <xuyunxiao2001@gmail.com>",
  "contributors": [
    {
      "name": "Josh Strange",
      "email": "josh@joshstrange.com"
    },
    {
      "name": "Jack Guy",
      "email": "jack@thatguyjackguy.com"
    },
    {
      "name": "tianjincai",
      "email": "tianjincai@hotmail.com"
    }
  ],
  "repository": {
    "type": "git",
    "url": "git+https://git.yunxiao.xyz/YunxiaoXu/hexo-deployer-s3-plus.git"
  },
  "bugs": {
    "url": "https://git.yunxiao.xyz/YunxiaoXu/hexo-deployer-s3-plus/issues"
  },
  "license": "MIT",
  "peerDependencies": {
    "hexo": "^7.0.0"
  },
  "dependencies": {
    "@aws-sdk/client-s3": "^3.887.0",
    "@aws-sdk/lib-storage": "^3.887.0",
    "chalk": "^5.6.2",
    "klaw-sync": "^7.0.0",
    "mime-types": "^3.0.1",
    "p-limit": "^7.1.1"
  }
}