s3cache: Fix data race in readerAtCloser #6675
Open
kuzaxak wants to merge 1 commit into moby:master
Conversation
readerAtCloser implements io.ReaderAt, whose documented contract
explicitly allows concurrent callers:
Clients of ReadAt can execute parallel ReadAt calls on the same
input source.
However the type has no synchronization protecting its mutable state
(rc, ra, offset, closed). When used as the body of an S3 upload via
manager.Uploader, the AWS SDK v2 upload manager spawns
DefaultUploadConcurrency=5 worker goroutines, each given an io.NewSectionReader
slice that shares the same underlying body. Each worker's Read
eventually becomes a concurrent ReadAt on the same *readerAtCloser
at a different offset, racing on the close-then-reopen path.
Under the race detector this produces DATA RACE warnings on rc,
offset and ra. Without the race detector it intermittently crashes
buildkitd with a nil pointer dereference at the hrs.rc.Read(p) call
in the io.ReaderAt fallback branch, when one goroutine nils rc
between a peer's nil-check and Read dispatch. The crash kills the
buildkit daemon and cannot be recovered by the buildctl client.
The bug reliably reproduces whenever a cache layer exceeds the
default 5 MiB part size, i.e. essentially every real-world Docker
image build that exports cache to S3.
PR moby#5597 added an offset parameter to s3Client.getReader and reduced
the frequency of close-reopen on the common sequential-read path,
but did not address the underlying thread-safety violation. This
commit takes the minimum-risk correctness fix: serialize ReadAt and
Close with a sync.Mutex so the io.ReaderAt contract is honoured and
the existing close-reopen logic remains correct under concurrent
callers.
Under the mutex this becomes serialised per readerAtCloser: correct,
but potentially slower for a single large blob that is being
re-exported from S3 back to S3. This is not a global BuildKit lock:
different cache layers can still upload in parallel, but one blob
backed by this reader loses multipart read parallelism and still
pays the reopen churn. A proper follow-up would also eliminate the
shared-state optimisation that causes the thrashing, for example by
having ReaderAt open an independent reader per call (stateless), or
by keeping a small pool of per-offset readers. That optimisation
can be follow-up work after this correctness fix, or part of the
moby#3993 refactor (which would also need a mutex added to the
replacement contentutil.readerAt).
Closes moby#6674
Signed-off-by: Vladimir Kuznichenkov <vova@kuzaxak.dev>