Compare commits

...

30 Commits

Author SHA1 Message Date
Mauricio Siu
fa6baa0c1a Merge pull request #1786 from Dokploy/feat/add-flush-redis
Add Redis management actions to server settings
2025-04-27 00:18:40 -06:00
Mauricio Siu
5b43df92c1 Add Redis management actions to server settings
Implement 'Clean Redis' and 'Reload Redis' actions in the dashboard settings. These actions allow users to flush all data in Redis and restart the Redis service, respectively. Update the API to handle these new mutations with appropriate error handling and success notifications.
2025-04-27 00:18:25 -06:00
Mauricio Siu
f3032bc94f Update dokploy version to v0.21.8 in package.json 2025-04-26 23:38:43 -06:00
Mauricio Siu
eef874ecd4 Merge pull request #1784 from Dokploy/1782-valid-certificate-shows-how-expired
Refactor expiration date extraction logic in certificate utility to i…
2025-04-26 23:38:09 -06:00
Mauricio Siu
d6daa5677a Refactor expiration date extraction logic in certificate utility to improve handling of ASN.1 date formats. Update to correctly identify and parse "not after" dates for both UTCTime and GeneralizedTime formats. 2025-04-26 23:37:49 -06:00
Mauricio Siu
dc03ba73b3 Merge branch 'main' into canary 2025-04-26 23:29:51 -06:00
Mauricio Siu
5c2159f7b2 Merge pull request #1783 from Dokploy/1747-backup-issues-doesnt-list-all-files
Enhance backup restoration UI and API by adding file size formatting,…
2025-04-26 23:26:35 -06:00
Mauricio Siu
ffcdbcf046 Enhance backup restoration UI and API by adding file size formatting, improving search debounce timing, and updating file listing to include additional metadata. Refactor file handling to ensure proper path resolution and error handling during JSON parsing. 2025-04-26 23:23:51 -06:00
Mauricio Siu
c0b35efaca Merge pull request #1781 from Dokploy/1760-docker-compose-raw-editor-autocomplete-appending-to-previously-typed-characters
Refactor code editor completion logic to use explicit from/to paramet…
2025-04-26 19:25:27 -06:00
Mauricio Siu
22dee88e51 Refactor code editor completion logic to use explicit from/to parameters for insertion and selection handling 2025-04-26 19:25:05 -06:00
Mauricio Siu
79796185d6 Merge pull request #1744 from barbarbar338/fix/web-server-pg-backup
fix(backup): handle multiple container IDs in backup command
2025-04-26 16:08:46 -06:00
Mauricio Siu
461d7c530a fix(restore): streamline container ID retrieval for database operations
Refactor the database restore process to consistently use a single container ID for the PostgreSQL container. This change enhances reliability by ensuring that commands are executed against the correct container, preventing potential errors from multiple matches.

Co-authored-by: Merloss &lt;54235902+Merloss@users.noreply.github.com&gt;
2025-04-26 16:07:50 -06:00
Barış DEMİRCİ
8d28a50a17 fix(backup): handle multiple container IDs in backup command
Ensure only one container ID is used when running `docker exec` for pg_dump to avoid errors caused by multiple matching containers.

Fixes INTERNAL_SERVER_ERROR from backup.manualBackupWebServer path.

Co-authored-by: Merloss &lt;54235902+Merloss@users.noreply.github.com&gt;
2025-04-20 12:14:41 +00:00
Mauricio Siu
da60c4f3a8 Merge pull request #1707 from Dokploy/canary
🚀 Release v0.21.7
2025-04-17 02:30:26 -06:00
Mauricio Siu
764f8ec993 Merge pull request #1695 from Dokploy/canary
🚀 Release v0.21.6
2025-04-13 00:05:04 -06:00
Mauricio Siu
ef7918a33a Merge pull request #1665 from Dokploy/canary
🚀 Release v0.21.5
2025-04-08 23:24:19 -06:00
Mauricio Siu
af4511040f Merge pull request #1645 from Dokploy/canary
🚀 Release v0.21.4
2025-04-06 17:07:04 -06:00
Mauricio Siu
1bbbdfba60 Merge pull request #1618 from Dokploy/canary
🚀 Release v0.21.3
2025-04-03 00:24:36 -06:00
Mauricio Siu
116e33ce37 Merge pull request #1609 from Dokploy/canary
🚀 Release v0.21.2
2025-04-02 07:22:22 -06:00
Mauricio Siu
e9b92d2641 Merge pull request #1600 from Dokploy/canary
🚀 Release v0.21.1
2025-04-02 00:39:10 -06:00
Mauricio Siu
3e07be38df Merge pull request #1583 from Dokploy/canary
🚀 Release v0.21.0
2025-03-30 04:01:46 -06:00
Mauricio Siu
b1d1763988 Merge pull request #1535 from Dokploy/canary
🚀 Release v0.20.8
2025-03-19 00:57:24 -06:00
Mauricio Siu
b5d199057d Merge pull request #1532 from Dokploy/canary
🚀 Release v0.20.7
2025-03-18 21:38:47 -06:00
Mauricio Siu
bfb6baf572 Merge pull request #1529 from Dokploy/canary
🚀 Release v0.20.6
2025-03-18 01:01:21 -06:00
Mauricio Siu
1f81794904 Merge pull request #1517 from Dokploy/canary
refactor: improve button structure and tooltip integration across das…
2025-03-16 20:54:15 -06:00
Mauricio Siu
d5d3831d54 Merge pull request #1515 from Dokploy/canary
🚀 Release v0.20.5
2025-03-16 20:32:45 -06:00
Mauricio Siu
856399550a Merge pull request #1509 from Dokploy/canary
🚀 Release v0.20.4
2025-03-16 03:34:37 -06:00
Mauricio Siu
86b8b0987b Merge pull request #1505 from Dokploy/canary
🚀 Release v0.20.3
2025-03-16 00:18:54 -06:00
Mauricio Siu
0dac1fefe6 Merge pull request #1460 from Dokploy/canary
🚀 Release v0.20.2
2025-03-11 00:51:50 -06:00
Mauricio Siu
633ba899e0 Merge pull request #1454 from Dokploy/canary
🚀 Release v0.20.1
2025-03-10 03:26:01 -06:00
9 changed files with 243 additions and 90 deletions

View File

@@ -77,6 +77,14 @@ const RestoreBackupSchema = z.object({
type RestoreBackup = z.infer<typeof RestoreBackupSchema>;
const formatBytes = (bytes: number): string => {
if (bytes === 0) return "0 Bytes";
const k = 1024;
const sizes = ["Bytes", "KB", "MB", "GB", "TB"];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return `${Number.parseFloat((bytes / k ** i).toFixed(2))} ${sizes[i]}`;
};
export const RestoreBackup = ({
databaseId,
databaseType,
@@ -101,7 +109,7 @@ export const RestoreBackup = ({
const debouncedSetSearch = debounce((value: string) => {
setDebouncedSearchTerm(value);
-}, 150);
+}, 350);
const handleSearchChange = (value: string) => {
setSearch(value);
@@ -271,7 +279,7 @@ export const RestoreBackup = ({
</Badge>
)}
</FormLabel>
-<Popover>
+<Popover modal>
<PopoverTrigger asChild>
<FormControl>
<Button
@@ -308,28 +316,51 @@ export const RestoreBackup = ({
</div>
) : (
<ScrollArea className="h-64">
-<CommandGroup>
-{files.map((file) => (
+<CommandGroup className="w-96">
+{files?.map((file) => (
<CommandItem
-value={file}
-key={file}
+value={file.Path}
+key={file.Path}
onSelect={() => {
-form.setValue("backupFile", file);
-setSearch(file);
-setDebouncedSearchTerm(file);
+form.setValue("backupFile", file.Path);
+if (file.IsDir) {
+setSearch(`${file.Path}/`);
+setDebouncedSearchTerm(`${file.Path}/`);
+} else {
+setSearch(file.Path);
+setDebouncedSearchTerm(file.Path);
+}
}}
>
-<div className="flex w-full justify-between">
-<span>{file}</span>
+<div className="flex w-full flex-col gap-1">
+<div className="flex w-full justify-between">
+<span className="font-medium">
+{file.Path}
+</span>
+<CheckIcon
+className={cn(
+"ml-auto h-4 w-4",
+file.Path === field.value
+? "opacity-100"
+: "opacity-0",
+)}
+/>
+</div>
+<div className="flex items-center gap-4 text-xs text-muted-foreground">
+<span>
+Size: {formatBytes(file.Size)}
+</span>
+{file.IsDir && (
+<span className="text-blue-500">
+Directory
+</span>
+)}
+{file.Hashes?.MD5 && (
+<span>MD5: {file.Hashes.MD5}</span>
+)}
+</div>
+</div>
-<CheckIcon
-className={cn(
-"ml-auto h-4 w-4",
-file === field.value
-? "opacity-100"
-: "opacity-0",
-)}
-/>
</CommandItem>
))}
</CommandGroup>
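The new `formatBytes` helper in this diff is pure and easy to sanity-check in isolation; the sketch below copies its logic verbatim and exercises a few sizes (a Node/TypeScript environment is assumed):

```typescript
// Mirrors the formatBytes helper added in the diff above.
const formatBytes = (bytes: number): string => {
	if (bytes === 0) return "0 Bytes";
	const k = 1024;
	const sizes = ["Bytes", "KB", "MB", "GB", "TB"];
	const i = Math.floor(Math.log(bytes) / Math.log(k));
	return `${Number.parseFloat((bytes / k ** i).toFixed(2))} ${sizes[i]}`;
};

console.log(formatBytes(0)); // "0 Bytes"
console.log(formatBytes(500)); // "500 Bytes"
console.log(formatBytes(1536)); // "1.5 KB"
```

Note the `bytes === 0` guard is load-bearing: `Math.log(0)` is `-Infinity`, which would otherwise produce a nonsense index.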

View File

@@ -13,53 +13,65 @@ export const extractExpirationDate = (certData: string): Date | null => {
bytes[i] = binaryStr.charCodeAt(i);
}
-let dateFound = 0;
+// ASN.1 tag for UTCTime is 0x17, GeneralizedTime is 0x18
+// We need to find the second occurrence of either tag as it's the "not after" (expiration) date
+let dateFound = false;
for (let i = 0; i < bytes.length - 2; i++) {
-if (bytes[i] === 0x17 || bytes[i] === 0x18) {
-const dateType = bytes[i];
-const dateLength = bytes[i + 1];
-if (typeof dateLength === "undefined") continue;
+// Look for sequence containing validity period (0x30)
+if (bytes[i] === 0x30) {
+// Check next bytes for UTCTime or GeneralizedTime
+let j = i + 1;
+while (j < bytes.length - 2) {
+if (bytes[j] === 0x17 || bytes[j] === 0x18) {
+const dateType = bytes[j];
+const dateLength = bytes[j + 1];
+if (typeof dateLength === "undefined") break;
-if (dateFound === 0) {
-dateFound++;
-i += dateLength + 1;
-continue;
+if (!dateFound) {
+// Skip "not before" date
+dateFound = true;
+j += dateLength + 2;
+continue;
}
+// Found "not after" date
+let dateStr = "";
+for (let k = 0; k < dateLength; k++) {
+const charCode = bytes[j + 2 + k];
+if (typeof charCode === "undefined") continue;
+dateStr += String.fromCharCode(charCode);
+}
+if (dateType === 0x17) {
+// UTCTime (YYMMDDhhmmssZ)
+const year = Number.parseInt(dateStr.slice(0, 2));
+const fullYear = year >= 50 ? 1900 + year : 2000 + year;
+return new Date(
+Date.UTC(
+fullYear,
+Number.parseInt(dateStr.slice(2, 4)) - 1,
+Number.parseInt(dateStr.slice(4, 6)),
+Number.parseInt(dateStr.slice(6, 8)),
+Number.parseInt(dateStr.slice(8, 10)),
+Number.parseInt(dateStr.slice(10, 12)),
+),
+);
+}
+// GeneralizedTime (YYYYMMDDhhmmssZ)
+return new Date(
+Date.UTC(
+Number.parseInt(dateStr.slice(0, 4)),
+Number.parseInt(dateStr.slice(4, 6)) - 1,
+Number.parseInt(dateStr.slice(6, 8)),
+Number.parseInt(dateStr.slice(8, 10)),
+Number.parseInt(dateStr.slice(10, 12)),
+Number.parseInt(dateStr.slice(12, 14)),
+),
+);
+}
+j++;
+}
-let dateStr = "";
-for (let j = 0; j < dateLength; j++) {
-const charCode = bytes[i + 2 + j];
-if (typeof charCode === "undefined") continue;
-dateStr += String.fromCharCode(charCode);
-}
-if (dateType === 0x17) {
-// UTCTime (YYMMDDhhmmssZ)
-const year = Number.parseInt(dateStr.slice(0, 2));
-const fullYear = year >= 50 ? 1900 + year : 2000 + year;
-return new Date(
-Date.UTC(
-fullYear,
-Number.parseInt(dateStr.slice(2, 4)) - 1,
-Number.parseInt(dateStr.slice(4, 6)),
-Number.parseInt(dateStr.slice(6, 8)),
-Number.parseInt(dateStr.slice(8, 10)),
-Number.parseInt(dateStr.slice(10, 12)),
-),
-);
-}
-// GeneralizedTime (YYYYMMDDhhmmssZ)
-return new Date(
-Date.UTC(
-Number.parseInt(dateStr.slice(0, 4)),
-Number.parseInt(dateStr.slice(4, 6)) - 1,
-Number.parseInt(dateStr.slice(6, 8)),
-Number.parseInt(dateStr.slice(8, 10)),
-Number.parseInt(dateStr.slice(10, 12)),
-Number.parseInt(dateStr.slice(12, 14)),
-),
-);
-}
}
return null;
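The two ASN.1 time formats this refactor distinguishes can be verified directly; the sketch below isolates just the string-to-Date step from the diff above (the tag-scanning loop is omitted), including the two-digit-year pivot at 50 used for UTCTime:

```typescript
// Parse the body of an ASN.1 UTCTime (YYMMDDhhmmssZ) or
// GeneralizedTime (YYYYMMDDhhmmssZ) into a UTC Date, mirroring
// the two parsing arms of the certificate utility.
const parseAsn1Time = (dateStr: string, isUtcTime: boolean): Date => {
	if (isUtcTime) {
		// Two-digit year: 50-99 => 19xx, 00-49 => 20xx.
		const year = Number.parseInt(dateStr.slice(0, 2));
		const fullYear = year >= 50 ? 1900 + year : 2000 + year;
		return new Date(
			Date.UTC(
				fullYear,
				Number.parseInt(dateStr.slice(2, 4)) - 1,
				Number.parseInt(dateStr.slice(4, 6)),
				Number.parseInt(dateStr.slice(6, 8)),
				Number.parseInt(dateStr.slice(8, 10)),
				Number.parseInt(dateStr.slice(10, 12)),
			),
		);
	}
	// Four-digit year.
	return new Date(
		Date.UTC(
			Number.parseInt(dateStr.slice(0, 4)),
			Number.parseInt(dateStr.slice(4, 6)) - 1,
			Number.parseInt(dateStr.slice(6, 8)),
			Number.parseInt(dateStr.slice(8, 10)),
			Number.parseInt(dateStr.slice(10, 12)),
			Number.parseInt(dateStr.slice(12, 14)),
		),
	);
};

console.log(parseAsn1Time("250427001840Z", true).toISOString());
// 2025-04-27T00:18:40.000Z
console.log(parseAsn1Time("20491231235959Z", false).getUTCFullYear()); // 2049
```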

View File

@@ -22,6 +22,9 @@ export const ShowDokployActions = () => {
const { mutateAsync: reloadServer, isLoading } =
api.settings.reloadServer.useMutation();
+const { mutateAsync: cleanRedis } = api.settings.cleanRedis.useMutation();
+const { mutateAsync: reloadRedis } = api.settings.reloadRedis.useMutation();
return (
<DropdownMenu>
<DropdownMenuTrigger asChild disabled={isLoading}>
@@ -69,6 +72,36 @@ export const ShowDokployActions = () => {
{t("settings.server.webServer.updateServerIp")}
</DropdownMenuItem>
</UpdateServerIp>
+<DropdownMenuItem
+className="cursor-pointer"
+onClick={async () => {
+await cleanRedis()
+.then(async () => {
+toast.success("Redis cleaned");
+})
+.catch(() => {
+toast.error("Error cleaning Redis");
+});
+}}
+>
+Clean Redis
+</DropdownMenuItem>
+<DropdownMenuItem
+className="cursor-pointer"
+onClick={async () => {
+await reloadRedis()
+.then(async () => {
+toast.success("Redis reloaded");
+})
+.catch(() => {
+toast.error("Error reloading Redis");
+});
+}}
+>
+Reload Redis
+</DropdownMenuItem>
</DropdownMenuGroup>
</DropdownMenuContent>
</DropdownMenu>
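The two menu items above call settings mutations that shell out to Docker on the server; a sketch of the command strings involved, using `dokploy-redis` as the service name that appears throughout these diffs (the real handlers run them via `execAsync`):

```typescript
// Commands behind "Clean Redis" and "Reload Redis" (sketch only).
const redisService = "dokploy-redis";

// Clean: resolve the single running Redis container, then FLUSHALL inside it.
const findRedis = `docker ps --filter "name=${redisService}" --filter "status=running" -q | head -n 1`;
const flushAll = (containerId: string) =>
	`docker exec -i ${containerId} redis-cli flushall`;

// Reload: scale the swarm service to 0 and back to 1 to restart it.
const reloadCommands = [
	`docker service scale ${redisService}=0`,
	`docker service scale ${redisService}=1`,
];

console.log(findRedis);
console.log(flushAll("abc123"));
console.log(reloadCommands.join(" && "));
```

The `head -n 1` is the same single-container guard used by the backup fixes further down this page.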

View File

@@ -26,15 +26,20 @@ const dockerComposeServices = [
{ label: "secrets", type: "keyword", info: "Define secrets" },
].map((opt) => ({
...opt,
-apply: (view: EditorView, completion: Completion) => {
+apply: (
+view: EditorView,
+completion: Completion,
+from: number,
+to: number,
+) => {
const insert = `${completion.label}:`;
view.dispatch({
changes: {
-from: view.state.selection.main.from,
-to: view.state.selection.main.to,
+from,
+to,
insert,
},
-selection: { anchor: view.state.selection.main.from + insert.length },
+selection: { anchor: from + insert.length },
});
},
}));
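The fix matters because CodeMirror passes the completion's own `from`/`to` range to `apply`, whereas `view.state.selection.main` is only the collapsed cursor; splicing at the cursor left the partially typed word in place, producing the "appending to previously typed characters" bug named in the PR. A pure sketch of the difference, with plain string splicing standing in for the editor dispatch:

```typescript
// Simulate inserting a completion into a document string.
// `from`/`to` span the partially typed word; the caret sits at `to`.
const doc = "services:\n  im"; // user typed "im", expecting "image: "
const from = doc.length - 2; // start of "im"
const to = doc.length; // end of "im" (also the caret position)
const insert = "image: ";

// Old behavior: splice at the caret only, leaving "im" behind.
const buggy = doc.slice(0, to) + insert + doc.slice(to);
// Fixed behavior: replace the whole [from, to) completion range.
const fixed = doc.slice(0, from) + insert + doc.slice(to);

console.log(JSON.stringify(buggy)); // "services:\n  imimage: "
console.log(JSON.stringify(fixed)); // "services:\n  image: "
```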
@@ -74,15 +79,20 @@ const dockerComposeServiceOptions = [
{ label: "networks", type: "keyword", info: "Networks to join" },
].map((opt) => ({
...opt,
-apply: (view: EditorView, completion: Completion) => {
+apply: (
+view: EditorView,
+completion: Completion,
+from: number,
+to: number,
+) => {
const insert = `${completion.label}: `;
view.dispatch({
changes: {
-from: view.state.selection.main.from,
-to: view.state.selection.main.to,
+from,
+to,
insert,
},
-selection: { anchor: view.state.selection.main.from + insert.length },
+selection: { anchor: from + insert.length },
});
},
}));
@@ -99,6 +109,7 @@ function dockerComposeComplete(
const line = context.state.doc.lineAt(context.pos);
const indentation = /^\s*/.exec(line.text)?.[0].length || 0;
+// If we're at the root level
if (indentation === 0) {
return {
from: word.from,

View File

@@ -1,6 +1,6 @@
{
"name": "dokploy",
-"version": "v0.21.7",
+"version": "v0.21.8",
"private": true,
"license": "Apache-2.0",
"type": "module",

View File

@@ -50,6 +50,18 @@ import { TRPCError } from "@trpc/server";
import { observable } from "@trpc/server/observable";
import { z } from "zod";
+interface RcloneFile {
+Path: string;
+Name: string;
+Size: number;
+IsDir: boolean;
+Tier?: string;
+Hashes?: {
+MD5?: string;
+SHA1?: string;
+};
+}
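Switching from `rclone lsf` to `rclone lsjson` is what makes this interface possible: `lsjson` emits a single JSON array of objects in roughly this shape instead of bare lines. A sketch of the new parse / prefix / filter / limit flow against a canned payload (the sample entries are made up):

```typescript
interface RcloneFile {
	Path: string;
	Name: string;
	Size: number;
	IsDir: boolean;
	Tier?: string;
	Hashes?: { MD5?: string; SHA1?: string };
}

// Canned stand-in for `rclone lsjson ... --no-mimetype --no-modtime` output.
const stdout = JSON.stringify([
	{ Path: "db-2025-04-26.sql", Name: "db-2025-04-26.sql", Size: 2048, IsDir: false },
	{ Path: "archive", Name: "archive", Size: 0, IsDir: true },
]);

let files: RcloneFile[] = [];
try {
	files = JSON.parse(stdout) as RcloneFile[];
} catch (error) {
	// Surface malformed rclone output instead of silently returning nothing.
	throw new Error("Failed to parse backup files list");
}

// Re-prefix paths when listing inside a base directory, then search and cap at 100.
const baseDir = "backups/";
const searchTerm = "sql";
const results = files
	.map((file) => ({ ...file, Path: `${baseDir}${file.Path}` }))
	.filter((file) => file.Path.toLowerCase().includes(searchTerm.toLowerCase()))
	.slice(0, 100);

console.log(results.map((f) => f.Path)); // ["backups/db-2025-04-26.sql"]
```

Note the 100-entry cap moved from the shell (`| head -n 100`) into TypeScript (`.slice(0, 100)`) so it applies after filtering rather than before.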
export const backupRouter = createTRPCRouter({
create: protectedProcedure
.input(apiCreateBackup)
@@ -268,7 +280,7 @@ export const backupRouter = createTRPCRouter({
: input.search;
const searchPath = baseDir ? `${bucketPath}/${baseDir}` : bucketPath;
-const listCommand = `rclone lsf ${rcloneFlags.join(" ")} "${searchPath}" | head -n 100`;
+const listCommand = `rclone lsjson ${rcloneFlags.join(" ")} "${searchPath}" --no-mimetype --no-modtime 2>/dev/null`;
let stdout = "";
@@ -280,20 +292,35 @@ export const backupRouter = createTRPCRouter({
stdout = result.stdout;
}
-const files = stdout.split("\n").filter(Boolean);
+let files: RcloneFile[] = [];
+try {
+files = JSON.parse(stdout) as RcloneFile[];
+} catch (error) {
+console.error("Error parsing JSON response:", error);
+console.error("Raw stdout:", stdout);
+throw new Error("Failed to parse backup files list");
+}
+// Limit to first 100 files
const results = baseDir
-? files.map((file) => `${baseDir}${file}`)
+? files.map((file) => ({
+...file,
+Path: `${baseDir}${file.Path}`,
+}))
: files;
if (searchTerm) {
-return results.filter((file) =>
-file.toLowerCase().includes(searchTerm.toLowerCase()),
-);
+return results
+.filter((file) =>
+file.Path.toLowerCase().includes(searchTerm.toLowerCase()),
+)
+.slice(0, 100);
}
-return results;
+return results.slice(0, 100);
+} catch (error) {
+console.error("Error in listBackupFiles:", error);
+throw new TRPCError({
+code: "BAD_REQUEST",
+message:

View File

@@ -79,6 +79,33 @@ export const settingsRouter = createTRPCRouter({
await execAsync(`docker service update --force ${stdout.trim()}`);
return true;
}),
+cleanRedis: adminProcedure.mutation(async () => {
+if (IS_CLOUD) {
+return true;
+}
+const { stdout: containerId } = await execAsync(
+`docker ps --filter "name=dokploy-redis" --filter "status=running" -q | head -n 1`,
+);
+if (!containerId) {
+throw new Error("Redis container not found");
+}
+const redisContainerId = containerId.trim();
+await execAsync(`docker exec -i ${redisContainerId} redis-cli flushall`);
+return true;
+}),
+reloadRedis: adminProcedure.mutation(async () => {
+if (IS_CLOUD) {
+return true;
+}
+await execAsync("docker service scale dokploy-redis=0");
+await execAsync("docker service scale dokploy-redis=1");
+return true;
+}),
reloadTraefik: adminProcedure
.input(apiServerSchema)
.mutation(async ({ input }) => {

View File

@@ -25,21 +25,23 @@ export const runWebServerBackup = async (backup: BackupSchedule) => {
// First get the container ID
const { stdout: containerId } = await execAsync(
-"docker ps --filter 'name=dokploy-postgres' -q",
+`docker ps --filter "name=dokploy-postgres" --filter "status=running" -q | head -n 1`,
);
if (!containerId) {
throw new Error("PostgreSQL container not found");
}
// Then run pg_dump with the container ID
-const postgresCommand = `docker exec ${containerId.trim()} pg_dump -v -Fc -U dokploy -d dokploy > '${tempDir}/database.sql'`;
+const postgresContainerId = containerId.trim();
+const postgresCommand = `docker exec ${postgresContainerId} pg_dump -v -Fc -U dokploy -d dokploy > '${tempDir}/database.sql'`;
await execAsync(postgresCommand);
await execAsync(`cp -r ${BASE_PATH}/* ${tempDir}/filesystem/`);
await execAsync(
-`cd ${tempDir} && zip -r ${backupFileName} database.sql filesystem/ > /dev/null 2>&1`,
+// Zip all .sql files since we created more than one
+`cd ${tempDir} && zip -r ${backupFileName} *.sql filesystem/ > /dev/null 2>&1`,
);
const uploadCommand = `rclone copyto ${rcloneFlags.join(" ")} "${tempDir}/${backupFileName}" "${s3Path}"`;
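The `| head -n 1` plus `.trim()` pair is what guards against multiple matches here: `docker ps -q` returns one ID per line, and a name filter like `dokploy-postgres` can match more than one container. The same normalization can be done in process, as a sketch:

```typescript
// docker ps -q may return several IDs, one per line; backup commands must
// target exactly one container. Mirror `| head -n 1` + .trim() in TypeScript.
const pickFirstContainerId = (stdout: string): string => {
	const first = stdout
		.split("\n")
		.map((line) => line.trim())
		.filter(Boolean)[0];
	if (!first) throw new Error("PostgreSQL container not found");
	return first;
};

const multiMatch = "a1b2c3d4e5f6\nf6e5d4c3b2a1\n";
console.log(pickFirstContainerId(multiMatch)); // "a1b2c3d4e5f6"
```

Without this, interpolating the raw multi-line stdout into `docker exec` produced the INTERNAL_SERVER_ERROR described in PR #1744.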

View File

@@ -83,44 +83,54 @@ export const restoreWebServerBackup = async (
throw new Error("Database file not found after extraction");
}
+const { stdout: postgresContainer } = await execAsync(
+`docker ps --filter "name=dokploy-postgres" --filter "status=running" -q | head -n 1`,
+);
+if (!postgresContainer) {
+throw new Error("Dokploy Postgres container not found");
+}
+const postgresContainerId = postgresContainer.trim();
// Drop and recreate database
emit("Disconnecting all users from database...");
await execAsync(
-`docker exec $(docker ps --filter "name=dokploy-postgres" -q) psql -U dokploy postgres -c "SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'dokploy' AND pid <> pg_backend_pid();"`,
+`docker exec ${postgresContainerId} psql -U dokploy postgres -c "SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'dokploy' AND pid <> pg_backend_pid();"`,
);
emit("Dropping existing database...");
await execAsync(
-`docker exec $(docker ps --filter "name=dokploy-postgres" -q) psql -U dokploy postgres -c "DROP DATABASE IF EXISTS dokploy;"`,
+`docker exec ${postgresContainerId} psql -U dokploy postgres -c "DROP DATABASE IF EXISTS dokploy;"`,
);
emit("Creating fresh database...");
await execAsync(
-`docker exec $(docker ps --filter "name=dokploy-postgres" -q) psql -U dokploy postgres -c "CREATE DATABASE dokploy;"`,
+`docker exec ${postgresContainerId} psql -U dokploy postgres -c "CREATE DATABASE dokploy;"`,
);
// Copy the backup file into the container
emit("Copying backup file into container...");
await execAsync(
-`docker cp ${tempDir}/database.sql $(docker ps --filter "name=dokploy-postgres" -q):/tmp/database.sql`,
+`docker cp ${tempDir}/database.sql ${postgresContainerId}:/tmp/database.sql`,
);
// Verify file in container
emit("Verifying file in container...");
await execAsync(
-`docker exec $(docker ps --filter "name=dokploy-postgres" -q) ls -l /tmp/database.sql`,
+`docker exec ${postgresContainerId} ls -l /tmp/database.sql`,
);
// Restore from the copied file
emit("Running database restore...");
await execAsync(
-`docker exec $(docker ps --filter "name=dokploy-postgres" -q) pg_restore -v -U dokploy -d dokploy /tmp/database.sql`,
+`docker exec ${postgresContainerId} pg_restore -v -U dokploy -d dokploy /tmp/database.sql`,
);
// Cleanup the temporary file in the container
emit("Cleaning up container temp file...");
await execAsync(
-`docker exec $(docker ps --filter "name=dokploy-postgres" -q) rm /tmp/database.sql`,
+`docker exec ${postgresContainerId} rm /tmp/database.sql`,
);
emit("Restore completed successfully!");
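Resolving the container ID once and interpolating it also guarantees every restore step hits the same container, which inline `$(docker ps ...)` substitution could not promise if containers churned mid-restore. A sketch of the ordered command list with a placeholder ID (the real code resolves it via `docker ps ... | head -n 1`):

```typescript
// Build the restore pipeline against a single, pre-resolved container ID.
const buildRestoreCommands = (id: string, tempDir: string): string[] => [
	`docker exec ${id} psql -U dokploy postgres -c "SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity WHERE pg_stat_activity.datname = 'dokploy' AND pid <> pg_backend_pid();"`,
	`docker exec ${id} psql -U dokploy postgres -c "DROP DATABASE IF EXISTS dokploy;"`,
	`docker exec ${id} psql -U dokploy postgres -c "CREATE DATABASE dokploy;"`,
	`docker cp ${tempDir}/database.sql ${id}:/tmp/database.sql`,
	`docker exec ${id} pg_restore -v -U dokploy -d dokploy /tmp/database.sql`,
	`docker exec ${id} rm /tmp/database.sql`,
];

const commands = buildRestoreCommands("abc123", "/tmp/restore");
// Every step targets the same container.
console.log(commands.every((c) => c.includes("abc123"))); // true
```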