
Enhancement: Implement Per-Storage Concurrency Limits and Maintenance Task Priority #154

@sainf

Description


Currently, the Concurrent Jobs limit appears to be a global setting. This presents two major efficiency issues:

  1. HDD Performance Degradation: When multiple backup jobs run simultaneously on the same physical HDD, the disk head must constantly seek between different sectors. This leads to heavy I/O contention, increased fragmentation, and significantly slower overall throughput compared to running the jobs sequentially.
  2. Maintenance Blockage: Essential "light" tasks (e.g., catalog rebuilds, purges, or software updates) are often queued behind heavy backup jobs if the global concurrency limit is reached, even though these tasks may use different resources (CPU/Network vs. Disk).

Use Case

Proposed Solution

  1. Concurrency Limits per Storage Location
    Instead of (or in addition to) a global limit, allow users to define a Max Concurrent Jobs setting for each specific Storage Location/Mount Point.
  • Mechanical HDDs: could be set to 1 to ensure sequential writes.
  • SSD/NVMe or high-end NAS: could be set to 4 or more to use the available bandwidth.
  2. Task Categorization
    Differentiate between Data-Intensive Jobs (Backups/Restores) and Administrative Tasks (Purges, Catalog Rebuilds, Updates).
  • Suggestion: allow Administrative Tasks to bypass the global concurrency queue, provide a separate "Maintenance Slot" so they aren't stuck behind a 2 TB backup job, or, if possible, pause the current queue temporarily.
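The two ideas above could be combined as one concurrency cap per storage location, with administrative tasks allowed to bypass the caps. A minimal sketch, assuming hypothetical names (`StorageScheduler`, `run_job`, the storage keys) that are not the project's actual API:

```python
import threading

class StorageScheduler:
    """Illustrative only: per-storage semaphores cap concurrent
    data-intensive jobs on each disk; administrative tasks bypass them."""

    def __init__(self, limits):
        # limits maps a storage location to its Max Concurrent Jobs value.
        self._sems = {name: threading.Semaphore(n) for name, n in limits.items()}

    def run_job(self, storage, job, administrative=False):
        if administrative:
            # Purges, catalog rebuilds, updates: skip the per-storage queue.
            return job()
        # Data job: blocks until a slot on this specific storage frees up.
        with self._sems[storage]:
            return job()

# Mechanical HDDs serialized (limit 1); NVMe may run several jobs at once.
scheduler = StorageScheduler({"hdd1": 1, "hdd2": 1, "nvme": 4})
print(scheduler.run_job("hdd1", lambda: "backup finished"))
print(scheduler.run_job("nvme", lambda: "catalog rebuilt", administrative=True))
```

Under a scheme like this, each HDD processes its backups sequentially (avoiding head thrashing) while jobs targeting different disks still run in parallel, and maintenance never waits behind a large backup.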

I currently have two large HDDs and one NVMe for the DBs, and I'm forced to set the queue to 1, so the two HDDs can never run backups in parallel.

Alternatives Considered

No response

Metadata

Labels: enhancement (New feature or request)