Overview
AFFiNE’s blob storage system handles binary assets like images, attachments, and files. It supports multiple storage backends including S3-compatible services, Cloudflare R2, and local filesystem storage.
Storage Architecture
Blobs are stored with SHA-256 checksums as keys, ensuring content-addressable storage and automatic deduplication.
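Content addressing can be illustrated with Node's built-in crypto module. This is a sketch only — it uses hex encoding for readability, whereas AFFiNE's actual keys use base64url (see Checksum Verification below):

```typescript
import { createHash } from 'node:crypto';

// Derive a content-addressed key: the SHA-256 digest of the blob's bytes.
// Identical content always yields an identical key, so uploading the same
// bytes twice stores only one copy (deduplication).
function contentKey(data: Buffer): string {
  return createHash('sha256').update(data).digest('hex');
}

const a = contentKey(Buffer.from('same bytes'));
const b = contentKey(Buffer.from('same bytes'));
const c = contentKey(Buffer.from('different bytes'));
// a === b (deduplicated), a !== c
```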
Supported Backends
- S3-Compatible Storage: AWS S3, MinIO, DigitalOcean Spaces
- Cloudflare R2: Low-cost object storage
- Local Filesystem: Development and self-hosted deployments
Blob Upload
Direct Upload
Small files can be uploaded directly via a POST request:
interface BlobUploadRequest {
workspaceId: string;
blob: Buffer;
}
interface BlobUploadResponse {
key: string; // SHA-256 hash
size: number;
mime: string;
}
Example:
curl -X POST \
https://app.affine.pro/api/workspaces/{workspaceId}/blobs \
-H "Authorization: Bearer YOUR_TOKEN" \
-F "blob=@image.png"
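The same request can be issued from TypeScript. This is a sketch assuming the endpoint shape shown in the curl example; the `blobUploadUrl` and `uploadBlob` helpers are illustrative names, not part of the SDK:

```typescript
// Build the direct-upload endpoint for a workspace (URL shape taken from
// the curl example above).
function blobUploadUrl(baseUrl: string, workspaceId: string): string {
  return `${baseUrl}/api/workspaces/${encodeURIComponent(workspaceId)}/blobs`;
}

// Upload a small blob as multipart form data, mirroring `curl -F "blob=@..."`.
// Requires a runtime with global fetch/FormData/Blob (Node 18+, browsers).
async function uploadBlob(
  baseUrl: string,
  workspaceId: string,
  token: string,
  blob: Blob
) {
  const form = new FormData();
  form.append('blob', blob);
  return fetch(blobUploadUrl(baseUrl, workspaceId), {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
    body: form
  });
}
```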
Presigned Upload
For larger files, use presigned URLs to upload directly to storage:
import { AFFiNEClient } from '@affine/sdk';
const client = new AFFiNEClient({ token: 'YOUR_TOKEN' });
// Request presigned upload URL
const { url, headers } = await client.blobs.presignPut(
workspaceId,
blobKey,
{
contentType: 'image/png',
contentLength: fileSize
}
);
// Upload directly to storage
await fetch(url, {
method: 'PUT',
headers,
body: fileBuffer
});
// Complete the upload
await client.blobs.complete(workspaceId, blobKey, {
size: fileSize,
mime: 'image/png'
});
Multipart Upload
For very large files (>5MB), use multipart upload:
Upload Flow:
// 1. Create multipart upload
const { uploadId } = await client.blobs.createMultipartUpload(
workspaceId,
blobKey,
{ contentType: 'video/mp4' }
);
// 2. Upload parts (5MB minimum per part, except the last)
const PART_SIZE = 5 * 1024 * 1024;
const totalParts = Math.ceil(fileBuffer.length / PART_SIZE);
const parts = [];
for (let i = 0; i < totalParts; i++) {
const partBuffer = fileBuffer.subarray(i * PART_SIZE, (i + 1) * PART_SIZE);
const { url } = await client.blobs.presignUploadPart(
workspaceId,
blobKey,
uploadId,
i + 1
);
const response = await fetch(url, {
method: 'PUT',
body: partBuffer
});
parts.push({
partNumber: i + 1,
etag: response.headers.get('etag')
});
}
// 3. Complete multipart upload
await client.blobs.completeMultipartUpload(
workspaceId,
blobKey,
uploadId,
parts
);
Multipart uploads require a minimum part size of 5MB (except the last part). The maximum number of parts is 10,000.
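These two limits can be checked client-side before starting an upload. A sketch — the 5MB and 10,000-part figures come from the note above; the helper itself is not an SDK function:

```typescript
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5MB minimum (except the last part)
const MAX_PARTS = 10_000;

// Return how many parts a file needs, or throw if the chosen part size
// would violate the multipart limits.
function partCount(fileSize: number, partSize: number): number {
  if (partSize < MIN_PART_SIZE) {
    throw new Error(`part size must be at least ${MIN_PART_SIZE} bytes`);
  }
  const parts = Math.max(1, Math.ceil(fileSize / partSize));
  if (parts > MAX_PARTS) {
    throw new Error(`file needs ${parts} parts; maximum is ${MAX_PARTS}`);
  }
  return parts;
}
```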
Blob Download
Direct Download
curl -X GET \
https://app.affine.pro/api/workspaces/{workspaceId}/blobs/{blobKey} \
-H "Authorization: Bearer YOUR_TOKEN" \
-o downloaded-file.png
Signed URL Download
Generate temporary signed URLs for direct browser access:
const { redirectUrl } = await client.blobs.get(
workspaceId,
blobKey,
{ signedUrl: true }
);
// Use in browser
window.location.href = redirectUrl;
Signed URLs expire after 3600 seconds (1 hour) by default. Configure expiration via presign.expiresInSeconds in storage config.
Blob Management
List Blobs
Retrieve all blobs in a workspace:
const blobs = await client.blobs.list(workspaceId);
for (const blob of blobs) {
console.log(blob.key, blob.size, blob.mime, blob.createdAt);
}
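The listing can be aggregated client-side, for example to compute total usage or find the largest blobs. A sketch against the blob shape shown above (`key`, `size`, `mime`):

```typescript
interface BlobInfo {
  key: string;
  size: number;
  mime: string;
}

// Sum sizes and pick the n largest blobs from a workspace listing.
function summarize(blobs: BlobInfo[], n: number) {
  const totalBytes = blobs.reduce((sum, b) => sum + b.size, 0);
  const largest = [...blobs].sort((a, b) => b.size - a.size).slice(0, n);
  return { totalBytes, largest };
}
```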
Delete Blob
// Soft delete (mark as deleted)
await client.blobs.delete(workspaceId, blobKey);
// Permanent delete
await client.blobs.delete(workspaceId, blobKey, { permanently: true });
Blob Metadata
Fetch a blob's metadata without downloading its content:
const metadata = await client.blobs.head(workspaceId, blobKey);
console.log(metadata.contentType);
console.log(metadata.contentLength);
console.log(metadata.lastModified);
console.log(metadata.checksumCRC32);
Storage Providers
S3 Configuration
interface S3StorageConfig {
endpoint?: string;
region: string;
credentials: {
accessKeyId: string;
secretAccessKey: string;
};
forcePathStyle?: boolean;
requestTimeoutMs?: number; // Default: 60000
minPartSize?: number; // Minimum size for multipart parts
presign?: {
expiresInSeconds?: number; // Default: 3600
signContentTypeForPut?: boolean; // Default: true
};
usePresignedURL?: {
enabled: boolean; // Enable presigned URL for downloads
};
}
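For example, a configuration targeting a local MinIO instance might look like this (the endpoint and credentials are placeholders, not real values):

```typescript
// Example S3-compatible configuration for a local MinIO instance,
// following the S3StorageConfig shape above.
const minioConfig = {
  endpoint: 'http://localhost:9000',
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin'
  },
  // MinIO requires path-style bucket addressing
  forcePathStyle: true,
  presign: {
    expiresInSeconds: 3600
  }
};
```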
Filesystem Configuration
interface FsStorageConfig {
path: string; // Absolute path or ~/relative/path
}
Example:
# Store blobs in ~/affine-data/blobs
STORAGE_PATH="~/affine-data"
R2 Configuration
interface R2StorageConfig extends S3StorageConfig {
accountId: string;
// Cloudflare R2 uses the S3-compatible API
endpoint: `https://${string}.r2.cloudflarestorage.com`;
}
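Since the endpoint is derived from the account ID, it can be constructed mechanically (the helper name is illustrative, not an SDK function):

```typescript
// Build the R2 S3-compatible endpoint from a Cloudflare account ID.
function r2Endpoint(accountId: string): string {
  return `https://${accountId}.r2.cloudflarestorage.com`;
}
```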
Checksum Verification
All uploads are verified using SHA-256 checksums:
import { createHash } from 'crypto';
function calculateBlobKey(buffer: Buffer): string {
const hash = createHash('sha256');
hash.update(buffer);
const base64 = hash.digest('base64');
// Convert to base64url format: replace '+' and '/', strip '=' padding
return base64.replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}
// Verify after upload
const expectedKey = calculateBlobKey(fileBuffer);
await client.blobs.complete(workspaceId, expectedKey, { ... });
If the computed checksum doesn't match the blob key, the blob is automatically deleted and the upload fails with a checksum_mismatch error.
Error Handling
Upload Errors
try {
await client.blobs.complete(workspaceId, blobKey, {
size: fileSize,
mime: 'image/png'
});
} catch (error) {
if (error.reason === 'checksum_mismatch') {
console.error('File corrupted during upload');
} else if (error.reason === 'size_mismatch') {
console.error('Size does not match expected value');
} else if (error.reason === 'not_found') {
console.error('Blob not found in storage');
}
}
Download Errors
const result = await client.blobs.get(workspaceId, blobKey);
if (!result.body) {
console.error('Blob not found');
}
Events
The blob storage system emits events for monitoring:
eventBus.on('workspace.blob.sync', ({ workspaceId, key }) => {
console.log(`Syncing blob metadata: ${key}`);
});
eventBus.on('workspace.blob.delete', ({ workspaceId, key }) => {
console.log(`Deleting blob: ${key}`);
});
Best Practices
- Use multipart upload for files >5MB to improve reliability and enable resume capability
- Calculate checksums client-side before upload to detect corruption early
- Enable presigned URLs for better download performance (reduces server load)
- Set appropriate MIME types for proper browser handling
- Implement retry logic with exponential backoff for transient failures
- Use soft delete by default to enable recovery of accidentally deleted blobs
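The retry recommendation above can be sketched as a small generic wrapper (illustrative, not part of the SDK):

```typescript
// Retry an async operation with exponential backoff plus random jitter.
// Gives up after maxAttempts and rethrows the last error.
async function withRetry<T>(
  op: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      // Backoff schedule: base, 2x base, 4x base, ... plus up to 100ms jitter
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

For example, `await withRetry(() => client.blobs.complete(workspaceId, blobKey, meta))` retries transient completion failures before giving up.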
Rate Limits
- Upload: 100 requests/minute per workspace
- Download: 500 requests/minute per workspace
- List: 20 requests/minute per workspace
- Multipart operations: 1000 parts per upload
Storage Quotas
Storage limits vary by plan. Check your workspace settings for current quota and usage.
Query current usage programmatically:
const totalSize = await client.blobs.totalSize(workspaceId);
console.log(`Current usage: ${totalSize} bytes`);
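The raw byte count can be made human-readable with a small formatter (an illustrative helper, not an SDK function):

```typescript
// Format a byte count using binary units (1 KiB = 1024 bytes).
function formatBytes(bytes: number): string {
  const units = ['B', 'KiB', 'MiB', 'GiB', 'TiB'];
  let value = bytes;
  let i = 0;
  while (value >= 1024 && i < units.length - 1) {
    value /= 1024;
    i++;
  }
  // One decimal place for small scaled values, whole numbers otherwise
  return `${value.toFixed(value < 10 && i > 0 ? 1 : 0)} ${units[i]}`;
}
```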