Chapter 18. Project: Generic Priority Queue

Project

Overview

Generic APIs in Zig let us describe capabilities at compile time; priority queues are where those capabilities meet the realities of time-sensitive scheduling. In this project we wrap std.PriorityQueue with rich comparators and context-aware policies that can be tested and tuned without giving up zero-cost abstraction (see Chapter 17; priority_queue.zig).

We will build three artifacts: a basic dispatcher that encodes ordering rules in its comparator, a fairness simulator that reuses the same queue while swapping policy contexts, and an analytics wrapper that tracks the worst offenders in a stream. Along the way we revisit allocator choice and weigh strategies for draining, reprioritizing, and introspecting the heap (see Chapter 10; sort.zig).

Learning Objectives

  • Translate business rules into compile-time comparator contracts that drive the ordering of std.PriorityQueue.
  • Model dynamic scheduling heuristics with the queue's Context parameter while keeping memory churn predictable (see Chapter 10).
  • Derive streaming analytics (Top-K, rolling statistics) from the same heap without copy-pasting logic or sacrificing stability (see Chapter 47).

Architecting a Reusable Queue Core

The priority queue API accepts a value type, a user-defined context, and a comparator that returns std.math.Order. That function decides which element bubbles to the front, so we treat it as a contract backed by tests.

Comparator Design as an API Surface

Our first example builds a simple build-and-release dispatcher. Urgency is the primary key; submission time breaks ties so that older tasks are not starved. The comparator is a pure function wired in entirely at compile time when the queue type is instantiated, yet it is expressive enough to capture nuanced ordering logic (see math.zig).

Zig
/// Demo: Using std.PriorityQueue to dispatch tasks by priority.
/// Lower urgency values mean higher priority; ties are broken by earlier submission time.
/// This example prints the order in which tasks would be processed.
///
/// Notes:
/// - The comparator returns `.lt` when `a` should be dispatched before `b`.
/// - We also order by `submitted_at_ms` to ensure deterministic order among equal urgencies.
const std = @import("std");
const Order = std.math.Order;

/// A single work item to schedule.
const Task = struct {
    /// Display name for the task.
    name: []const u8,
    /// Priority indicator: lower value = more urgent.
    urgency: u8,
    /// Monotonic timestamp in milliseconds used to break ties (earlier wins).
    submitted_at_ms: u64,
};

/// Comparator for the priority queue:
/// - Primary key: urgency (lower is dispatched first)
/// - Secondary key: submitted_at_ms (earlier is dispatched first)
fn taskOrder(_: void, a: Task, b: Task) Order {
    // Compare by urgency first.
    if (a.urgency < b.urgency) return .lt;
    if (a.urgency > b.urgency) return .gt;

    // Tie-breaker: earlier submission is higher priority.
    return std.math.order(a.submitted_at_ms, b.submitted_at_ms);
}

/// Program entry: builds a priority queue and prints dispatch order.
/// 程序入口:构建优先级队列并打印分发顺序。
pub fn main() !void {
    // Use the General Purpose Allocator (GPA) for simplicity in examples.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Instantiate a priority queue of Task:
    // - Context type is `void` (no extra state needed by the comparator)
    // - `taskOrder` defines the ordering.
    var queue = std.PriorityQueue(Task, void, taskOrder).init(allocator, {});
    defer queue.deinit();

    // Enqueue tasks with varying urgency and submission times.
    // Expectation (by our ordering): lower urgency processed first;
    // within same urgency, earlier submitted_at_ms processed first.
    try queue.add(.{ .name = "compile pointer.zig", .urgency = 0, .submitted_at_ms = 1 });
    try queue.add(.{ .name = "run tests", .urgency = 1, .submitted_at_ms = 2 });
    try queue.add(.{ .name = "deploy preview", .urgency = 2, .submitted_at_ms = 3 });
    try queue.add(.{ .name = "prepare changelog", .urgency = 1, .submitted_at_ms = 4 });

    std.debug.print("Dispatch order:\n", .{});

    // Remove tasks in priority order until the queue is empty.
    // removeOrNull() yields the next Task or null when empty.
    while (queue.removeOrNull()) |task| {
        std.debug.print("  - {s} (urgency {d})\n", .{ task.name, task.urgency });
    }
}
Run
Shell
$ zig run task_queue_basics.zig
Output
Shell
Dispatch order:
  - compile pointer.zig (urgency 0)
  - run tests (urgency 1)
  - prepare changelog (urgency 1)
  - deploy preview (urgency 2)

Because the comparator returns std.math.Order, we can layer additional secondary keys without changing the queue type; the heap simply follows the contract you encode.
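
Because the comparator is a standalone pure function, it can be unit-tested without constructing a queue. The test below is a minimal sketch of such a contract test; it assumes the Task, Order, and taskOrder definitions from the listing above (for example, appended to task_queue_basics.zig and run with zig test).

Zig
test "urgency dominates, submission time breaks ties" {
    const testing = std.testing;

    const urgent = Task{ .name = "a", .urgency = 0, .submitted_at_ms = 9 };
    const early = Task{ .name = "b", .urgency = 1, .submitted_at_ms = 1 };
    const late = Task{ .name = "c", .urgency = 1, .submitted_at_ms = 9 };

    // Lower urgency wins regardless of submission time.
    try testing.expectEqual(Order.lt, taskOrder({}, urgent, late));
    // Equal urgency: the earlier submission is dispatched first.
    try testing.expectEqual(Order.lt, taskOrder({}, early, late));
    // Identical keys compare as equal.
    try testing.expectEqual(Order.eq, taskOrder({}, early, early));
}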

Growth and Allocation Strategy

Every call to add may reallocate if the underlying slice needs more capacity. For hot paths, reserve space with ensureUnusedCapacity or initialize from a pre-sized slice and drain it, amortizing allocations. The queue's deinit is cheap as long as you are explicit about allocator lifetimes, matching the memory-hygiene practices from our allocator deep dive (see Chapter 10).
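
A minimal sketch of the reservation pattern, assuming the Task queue from the first listing and a hypothetical incoming_tasks slice:

Zig
// Reserve room for a burst of up to 64 tasks before the hot loop.
try queue.ensureUnusedCapacity(64);
for (incoming_tasks) |task| {
    // These adds will not reallocate until the reserved capacity is exhausted.
    try queue.add(task);
}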

Policy-Driven Reprioritization

Next we feed richer data into the same kind of queue: service requests with SLAs, a time context, and VIP hints. The queue itself stays agnostic; all of the nuance lives in the policy struct and the comparator. This design keeps the heap reusable even as we layer on fairness rules (see Chapter 17).

Aging and VIP Weighting

The comparator computes a scalar "score" by measuring slack (time remaining until the deadline), multiplying overdue requests by a penalty factor to escalate them, and subtracting a VIP bonus. Because Context is just a struct, the policy is compiled into the queue and can be swapped by constructing a new instance with different weights. We pull the scoring into helper functions declared ahead of the comparator to keep it readable and testable.

Simulating Operational Modes

We run two scenarios: mid-shift triage and a late escalation window. The only difference is the policy struct we pass to init; everything else (the tasks, the queue type) stays identical. The printed order shows how the overdue multiplier and the VIP boost reshape the pop sequence.

Zig
const std = @import("std");
const Order = std.math.Order;

/// Represents an incoming support request with SLA constraints.
const Request = struct {
    ticket: []const u8,
    submitted_at_ms: u64,
    sla_ms: u32,
    work_estimate_ms: u32,
    vip: bool,
};

/// Scheduling policy parameters that influence prioritization.
const Policy = struct {
    now_ms: u64,             // Current time reference for slack calculation
    vip_boost: i64,          // Score reduction (boost) for VIP requests
    overdue_multiplier: i64, // Penalty multiplier for overdue requests
};

/// Computes the time slack for a request: positive means time remaining, negative means overdue.
/// Overdue requests are amplified by the policy's overdue_multiplier to increase urgency.
fn slack(policy: Policy, request: Request) i64 {
    // Calculate absolute deadline from submission time + SLA window
    const deadline = request.submitted_at_ms + request.sla_ms;

    // Compute slack as deadline - now; use i128 to prevent overflow on subtraction
    const slack_signed = @as(i64, @intCast(@as(i128, deadline) - @as(i128, policy.now_ms)));

    if (slack_signed >= 0) {
        // Positive slack: request is still within SLA
        return slack_signed;
    }

    // Negative slack: request is overdue; amplify urgency by multiplying
    return slack_signed * policy.overdue_multiplier;
}

/// Computes a weighted score for prioritization.
/// Lower scores = higher priority (processed first by min-heap).
fn weightedScore(policy: Policy, request: Request) i64 {
    // Start with slack: negative (overdue) or positive (time remaining)
    var score = slack(policy, request);

    // Add work estimate: longer tasks get slightly lower priority (higher score)
    score += @as(i64, @intCast(request.work_estimate_ms));

    // VIP boost: reduce score to increase priority
    if (request.vip) score -= policy.vip_boost;

    return score;
}

/// Comparison function for the priority queue.
/// Returns Order.lt if 'a' should be processed before 'b' (lower score = higher priority).
fn requestOrder(policy: Policy, a: Request, b: Request) Order {
    const score_a = weightedScore(policy, a);
    const score_b = weightedScore(policy, b);
    return std.math.order(score_a, score_b);
}

/// Simulates a scheduling scenario by inserting all tasks into a priority queue,
/// then dequeuing and printing them in priority order.
fn simulateScenario(allocator: std.mem.Allocator, policy: Policy, label: []const u8) !void {
    // Define a set of incoming requests with varying SLA constraints and characteristics
    const tasks = [_]Request{
        .{ .ticket = "INC-482", .submitted_at_ms = 0, .sla_ms = 500, .work_estimate_ms = 120, .vip = false },
        .{ .ticket = "INC-993", .submitted_at_ms = 120, .sla_ms = 400, .work_estimate_ms = 60, .vip = true },
        .{ .ticket = "INC-511", .submitted_at_ms = 200, .sla_ms = 200, .work_estimate_ms = 45, .vip = false },
        .{ .ticket = "INC-742", .submitted_at_ms = 340, .sla_ms = 120, .work_estimate_ms = 30, .vip = false },
    };

    // Initialize priority queue with the given policy as context for comparison
    var queue = std.PriorityQueue(Request, Policy, requestOrder).init(allocator, policy);
    defer queue.deinit();

    // Add all tasks to the queue; they will be heap-ordered automatically
    try queue.addSlice(&tasks);

    // Print scenario header
    std.debug.print("{s} (now={d}ms)\n", .{ label, policy.now_ms });

    // Dequeue and print requests in priority order (lowest score first)
    while (queue.removeOrNull()) |request| {
        // Recalculate score and deadline for display
        const score = weightedScore(policy, request);
        const deadline = request.submitted_at_ms + request.sla_ms;

        std.debug.print(
            "  -> {s} score={d} deadline={d} vip={}\n",
            .{ request.ticket, score, deadline, request.vip },
        );
    }
    std.debug.print("\n", .{});
}

pub fn main() !void {
    // Set up general-purpose allocator with leak detection
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Scenario 1: Mid-shift with moderate VIP boost and overdue penalty
    try simulateScenario(
        allocator,
        .{ .now_ms = 350, .vip_boost = 250, .overdue_multiplier = 2 },
        "Mid-shift triage"
    );

    // Scenario 2: Escalation window with reduced VIP boost but higher overdue penalty
    try simulateScenario(
        allocator,
        .{ .now_ms = 520, .vip_boost = 100, .overdue_multiplier = 4 },
        "Escalation window"
    );
}
Run
Shell
$ zig run sla_fairness.zig
Output
Shell
Mid-shift triage (now=350ms)
  -> INC-993 score=-20 deadline=520 vip=true
  -> INC-511 score=95 deadline=400 vip=false
  -> INC-742 score=140 deadline=460 vip=false
  -> INC-482 score=270 deadline=500 vip=false

Escalation window (now=520ms)
  -> INC-511 score=-435 deadline=400 vip=false
  -> INC-742 score=-210 deadline=460 vip=false
  -> INC-993 score=-40 deadline=520 vip=true
  -> INC-482 score=40 deadline=500 vip=false
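
Checking one row by hand grounds the formula: INC-993 was submitted at 120 ms with a 400 ms SLA, so its deadline is 520 ms. At now=350 the slack is 170 (still within SLA), the 60 ms work estimate lifts the score to 230, and the 250-point VIP boost brings it down to -20, matching the first line of the mid-shift output.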

Changing the policy after items are already enqueued requires rebuilding the heap: drain into a slice, change the policy, and re-insert, or call fromOwnedSlice to re-heapify under the new comparator (see Chapter 10).
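
A minimal sketch of that rebuild, assuming the Request, Policy, and requestOrder definitions above (withNewPolicy is a hypothetical helper, not part of std):

Zig
const RequestQueue = std.PriorityQueue(Request, Policy, requestOrder);

/// Rebuilds `queue` under `new_policy`, consuming the old queue.
fn withNewPolicy(allocator: std.mem.Allocator, queue: *RequestQueue, new_policy: Policy) !RequestQueue {
    // Copy the live elements; the heap stores them in items[0..count()].
    const live = try allocator.dupe(Request, queue.items[0..queue.count()]);
    queue.deinit();
    // fromOwnedSlice takes ownership of `live` and heapifies it under the new context.
    return RequestQueue.fromOwnedSlice(allocator, live, new_policy);
}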

Analytics and Top-K Reporting

Priority queues also make excellent rolling aggregators. By keeping only the "worst" elements in the heap and pruning aggressively, we can maintain a Top-K view of latency spikes with minimal overhead. Sorting a snapshot of the current heap lets us render results directly for dashboards or logs (see Chapter 47).

A Composable Wrapper

TopK wraps std.PriorityQueue and uses the comparator to form a min-heap of scores. Each insert calls remove whenever the heap grows past its limit, ensuring we keep only the highest-scoring elements. The snapshotDescending helper copies the heap into a temporary buffer and sorts it with std.sort.heap, leaving the queue ready for further inserts (see Chapter 17).

Zig
// Import the Zig standard library for allocator, sorting, debugging, etc.
const std = @import("std");

const Order = std.math.Order;

// A single latency measurement for an endpoint.
// Fields:
//  - endpoint: UTF-8 byte slice identifying the endpoint.
//  - duration_ms: observed latency in milliseconds.
//  - payload_bytes: size of the request/response payload in bytes.
const LatencySample = struct {
    endpoint: []const u8,
    duration_ms: u32,
    payload_bytes: u32,
};

// Compute a score for a latency sample.
// Higher scores represent more severe (worse) samples. The formula favors
// larger durations and applies a small penalty for larger payloads to reduce
// noisy high-latency large-payload samples.
//
// Returns an f64 so scores can be compared with fractional penalties.
fn score(sample: LatencySample) f64 {
    // Convert integers to floating point explicitly to avoid implicit casts.
    // The penalty factor 0.005 was chosen empirically to be small.
    return @as(f64, @floatFromInt(sample.duration_ms)) - (@as(f64, @floatFromInt(sample.payload_bytes)) * 0.005);
}

// TopK is a compile-time generic producer that returns a fixed-capacity,
// score-driven top-K tracker for items of type T.
//
// Parameters:
//  - T: the element type stored in the tracker.
//  - scoreFn: a compile-time function that maps T -> f64 used to rank elements.
fn TopK(comptime T: type, comptime scoreFn: fn (T) f64) type {
    const Error = error{InvalidLimit};

    // Comparator helpers used by the PriorityQueue and for sorting snapshots.
    const Comparators = struct {
        // Comparator used by the PriorityQueue. The first parameter is the
        // user-provided context (unused here), hence the underscore name.
        // Returns an Order (Less/Equal/Greater) based on the score function.
        fn heap(_: void, a: T, b: T) Order {
            return std.math.order(scoreFn(a), scoreFn(b));
        }

        // Boolean comparator used by the heap sort to produce descending order.
        // Returns true when `a` should come before `b` (i.e., a has higher score).
        fn desc(_: void, a: T, b: T) bool {
            return scoreFn(a) > scoreFn(b);
        }
    };

    return struct {
        // A priority queue specialized for T using our heap comparator.
        const Heap = std.PriorityQueue(T, void, Comparators.heap);
        const Self = @This();

        heap: Heap,
        limit: usize,

        // Initialize a TopK tracker with the provided allocator and positive limit.
        // Returns Error.InvalidLimit when limit == 0.
        pub fn init(allocator: std.mem.Allocator, limit: usize) Error!Self {
            if (limit == 0) return Error.InvalidLimit;
            return .{ .heap = Heap.init(allocator, {}), .limit = limit };
        }

        // Deinitialize the underlying heap and free its resources.
        pub fn deinit(self: *Self) void {
            self.heap.deinit();
        }

        // Add a single value into the tracker. If adding causes the internal
        // count to exceed `limit`, the priority queue will evict the item it
        // considers lowest priority according to our comparator, keeping the
        // top-K scored items.
        pub fn add(self: *Self, value: T) !void {
            try self.heap.add(value);
            if (self.heap.count() > self.limit) {
                // Evict the lowest-priority element (as defined by Comparators.heap).
                _ = self.heap.remove();
            }
        }

        // Add multiple values from a slice into the tracker.
        // This simply forwards each element to `add`.
        pub fn addSlice(self: *Self, values: []const T) !void {
            for (values) |value| try self.add(value);
        }

        // Produce a snapshot of the current tracked items in descending score order.
        //
        // The snapshot allocates a new array via `allocator` and copies the
        // internal heap's item storage into it. The result is then sorted
        // descending (highest score first) using Comparators.desc.
        //
        // Caller is responsible for freeing the returned slice.
        pub fn snapshotDescending(self: *Self, allocator: std.mem.Allocator) ![]T {
            const count = self.heap.count();
            const out = try allocator.alloc(T, count);
            // Copy the underlying items buffer into the newly allocated array.
            // This creates an independent snapshot so we can sort without mutating the heap.
            @memcpy(out, self.heap.items[0..count]);
            // Sort in-place so the highest-scored items appear first.
            std.sort.heap(T, out, @as(void, {}), Comparators.desc);
            return out;
        }
    };
}

// Example program demonstrating TopK usage with LatencySample.
pub fn main() !void {
    // Create a general-purpose allocator for example allocations.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Track the top 5 latency samples by computed score.
    var tracker = try TopK(LatencySample, score).init(allocator, 5);
    defer tracker.deinit();

    // Example samples. These are small, stack-allocated literal records.
    const samples = [_]LatencySample{
        .{ .endpoint = "/v1/users", .duration_ms = 122, .payload_bytes = 850 },
        .{ .endpoint = "/v1/orders", .duration_ms = 210, .payload_bytes = 1200 },
        .{ .endpoint = "/v1/users", .duration_ms = 188, .payload_bytes = 640 },
        .{ .endpoint = "/v1/payments", .duration_ms = 305, .payload_bytes = 1500 },
        .{ .endpoint = "/v1/orders", .duration_ms = 154, .payload_bytes = 700 },
        .{ .endpoint = "/v1/ledger", .duration_ms = 420, .payload_bytes = 540 },
        .{ .endpoint = "/v1/users", .duration_ms = 275, .payload_bytes = 980 },
        .{ .endpoint = "/v1/health", .duration_ms = 34, .payload_bytes = 64 },
        .{ .endpoint = "/v1/ledger", .duration_ms = 362, .payload_bytes = 480 },
    };

    // Bulk-add the sample slice into the tracker.
    try tracker.addSlice(&samples);

    // Capture the current top-K samples in descending order and print them.
    const worst = try tracker.snapshotDescending(allocator);
    defer allocator.free(worst);

    std.debug.print("Top latency offenders (descending by score):\n", .{});
    for (worst, 0..) |sample, idx| {
        // Compute the score again for display purposes (identical to the ordering key).
        const computed_score = score(sample);
        std.debug.print(
            "  {d:>2}. {s: <12} latency={d}ms payload={d}B score={d:.2}\n",
            .{ idx + 1, sample.endpoint, sample.duration_ms, sample.payload_bytes, computed_score },
        );
    }
}
Run
Shell
$ zig run topk_latency.zig
Output
Shell
Top latency offenders (descending by score):
   1. /v1/ledger   latency=420ms payload=540B score=417.30
   2. /v1/ledger   latency=362ms payload=480B score=359.60
   3. /v1/payments latency=305ms payload=1500B score=297.50
   4. /v1/users    latency=275ms payload=980B score=270.10
   5. /v1/orders   latency=210ms payload=1200B score=204.00

The snapshot copies the heap so that future inserts stay cheap; in high-volume telemetry jobs, reuse a scratch allocator or an arena to avoid fragmenting the long-lived heap (see Chapter 10).
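
A minimal sketch of that reuse, assuming the tracker from the listing above (emitReport stands in for whatever sink you publish to):

Zig
var arena = std.heap.ArenaAllocator.init(allocator);
defer arena.deinit();

// Each reporting tick: snapshot into the arena, publish, then recycle the memory.
const worst_now = try tracker.snapshotDescending(arena.allocator());
emitReport(worst_now);
_ = arena.reset(.retain_capacity);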

From Queues to Module Boundaries

We now have reusable queue wrappers that could live in modules of their own. The next chapter formalizes that step, showing how to expose the queues as package-level modules and surface policies across @import boundaries (see Chapter 19).

Notes and Caveats

  • Define comparators in dedicated helpers so they can be unit-tested in isolation and reused across queue instantiations (see Chapter 13).
  • Policy structs are value types: changing one after items are enqueued means rebuilding the heap or creating a new queue; otherwise your ordering no longer matches the comparator's assumptions.
  • Copying heap contents for reporting allocates; recycle buffers or use an arena when integrating with telemetry services to keep GC-free Zig code predictable (see Chapter 10).

Exercises

  • Extend the dispatcher to respect a "batch size" hint by accounting for accumulated runtime in the comparator; add a test asserting fairness under mixed priorities (see Chapter 13).
  • Modify the SLA simulator to write audit entries with std.log and compare the output against expectations under several policies (see log.zig).
  • Extend the TopK wrapper so it returns both a snapshot and an aggregate average, and consider how to expose it through an async metrics hook (see Chapter 47).

Alternatives and Edge Cases

  • If you need stable ordering among items with equal scores, wrap the payload in a struct that stores a monotonically increasing sequence number and include it in the comparator (see the sketch after this list).
  • For very large queues, consider sharding or a pairing heap: std.PriorityQueue is a binary heap, and million-item heaps can incur cache misses.
  • When exposing queue factories across module boundaries, document allocator ownership and provide an explicit destroy helper so callers don't leak when they swap policies at runtime (see Chapter 19).
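
A minimal sketch of the stable-ordering wrapper, assuming the Task and taskOrder definitions from the first listing (seq is a hypothetical counter assigned at enqueue time):

Zig
const StableTask = struct {
    task: Task,
    seq: u64, // monotonically increasing insertion counter
};

fn stableTaskOrder(_: void, a: StableTask, b: StableTask) Order {
    const primary = taskOrder({}, a.task, b.task);
    if (primary != .eq) return primary;
    // Equal on every business key: fall back to insertion order.
    return std.math.order(a.seq, b.seq);
}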
