
pass batchSize argument instead of hardcoded #108

Merged
fcostaoliveira merged 3 commits into master from joan-respect-batch-size
Apr 22, 2026
Conversation


@JoanFM JoanFM commented Apr 22, 2026

No description provided.


jit-ci Bot commented Apr 22, 2026

🛡️ Jit Security Scan Results

✅ No security findings were detected in this PR


Security scan by Jit

@fcostaoliveira fcostaoliveira self-requested a review April 22, 2026 15:28

@fcostaoliveira fcostaoliveira left a comment


Intent is right — direction is correct

The hardcoded 100 in scan() silently ignored the batch size the caller configured via GetBenchmarkRunnerWithBatchSize(...). Replacing it with l.batchSize is the right move.

⚠️ Blocking: l.batchSize is currently always 0

The BenchmarkRunner.batchSize field (benchmark_runner/benchmark_runner.go:49) is never assigned anywhere:

  • GetBenchmarkRunnerWithBatchSize(batchSize uint) (line 297) takes the argument but never stores it on loader. The parameter is dead.
  • No -batch-size CLI flag is registered.

Grep confirms: grep -n '\.batchSize\|batchSize =' benchmark_runner/benchmark_runner.go returns nothing.

So after this PR as-is, l.batchSize == 0 in every run, and the first thing scanWithTimeout does is:

if batchSize < 1 {
    panic("--batch-size cannot be less than 1")
}

so every RunBenchmark call panics. The hardcoded 100 was masking the missing assignment.

Proposed follow-up (backward-compatible)

Two small additions alongside the existing one-line change:

1. Register -batch-size as a CLI flag in the constructor

	flag.UintVar(&loader.maxTokenSizeMB, "max-token-size-mb", 1, "Maximum size of token to read from input file in MB. Minimum is 1MB.")
	flag.UintVar(&loader.batchSize, "batch-size", batchSize, "Number of items to batch together per worker channel before dispatch.")
	return loader

Using the constructor argument as the flag default means:

  • The batchSize argument is finally actually honored (the whole point of the PR).
  • Users can override via --batch-size=N without code changes.
  • The panic message ("--batch-size cannot be less than 1") now matches a real flag.

2. Preserve prior effective behavior of ftsb_redisearch

cmd/ftsb_redisearch/main.go:31 currently calls GetBenchmarkRunnerWithBatchSize(10). That 10 has been silently ignored for however long the hardcode lived — the shipped binary has effectively been running at batch size 100. Honoring the arg now would cut effective batch size 10× with no other code change, which will show up as a throughput regression in historical comparisons.

Suggest bumping the literal to 100 so the shipped default matches prior runtime behavior:

loader = benchmark_runner.GetBenchmarkRunnerWithBatchSize(100)

Users who want 10 can now pass --batch-size=10 explicitly.

Verification

With both changes applied:

$ ftsb_redisearch -h | grep -A1 batch-size
  -batch-size uint
    	Number of items to batch together per worker channel before dispatch. (default 100)

go build ./... and go vet ./... both clean.

Minor / non-blocking

  • scanWithIndexer (scan.go:92) has the same hardcoding-risk shape but appears unreferenced — worth deleting as dead code in a follow-up.
  • PR body is empty; a line or two on motivation helps reviewers (this is actually part 1 of wiring a previously-ignored argument through).

Happy to push these as a PR into your branch if easier.

The batchSize argument to GetBenchmarkRunnerWithBatchSize was never
assigned to loader.batchSize, and no --batch-size flag was registered,
so l.batchSize was always 0. On top of #108's scan() fix, that causes
scanWithTimeout to panic ("--batch-size cannot be less than 1").

Register --batch-size with the constructor arg as the default so:
  - the caller-supplied batchSize is finally honored
  - users can override at runtime without code changes

Bump cmd/ftsb_redisearch/main.go's constructor arg from 10 -> 100 to
preserve the prior effective runtime batch size (hardcoded 100 in scan),
so the shipped binary's default behavior is unchanged by #108.
@fcostaoliveira

Pushed 379671d onto this branch with the two suggested follow-ups:

  • Register --batch-size in GetBenchmarkRunnerWithBatchSize, using the constructor argument as the flag default. This actually wires loader.batchSize through — without it, l.batchSize is 0 and scanWithTimeout panics ("--batch-size cannot be less than 1").
  • Bump cmd/ftsb_redisearch/main.go's constructor arg from 10 to 100 so the shipped binary's effective batch size stays what it was under the old hardcoded value. Users who want 10 can now pass --batch-size=10 explicitly.

go build ./... and go vet ./... both clean. ftsb_redisearch -h shows -batch-size uint ... (default 100).

Happy to revert either piece if you'd rather keep the scope tight — just flag it.

Guards against two regressions:
  - "flag provided but not defined: -batch-size" (flag unregistered)
  - zero-batchSize panic in scanWithTimeout

Exec pattern mirrors TestFTSBWithDuration — boots Redis 8.4 in Docker,
runs ftsb_redisearch with --batch-size=50 --duration=3s, asserts the
flag is accepted and the run completes with "Issued" in the output.
@sonarqubecloud

Quality Gate failed

Failed conditions
37.0% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

@fcostaoliveira fcostaoliveira self-requested a review April 22, 2026 15:52
@fcostaoliveira fcostaoliveira merged commit 175e443 into master Apr 22, 2026
3 of 4 checks passed

2 participants