Problem

`grep -r KafkaSink tests/` returns nothing. There is no test anywhere that exercises `KafkaSink.publish_messages`, which is where `service_status_to_x5f2` / `job_status_to_x5f2` are actually invoked in production. Similarly, no test drives a real `confluent_kafka.Message` through the adapter layer.

`FakeMessageSink` skips serialization entirely, so every `LivedataApp`/`OrchestratingProcessor`-based integration test silently avoids the Kafka encoding path. This is how both issues fixed in #847 slipped through CI:

- `ServiceStatusPayload.started_at: int` vs `ServiceStatus.started_at: Timestamp` mismatch — killed every backend service on its first heartbeat on `main` after #829 (Add Timestamp and Duration types for nanosecond values).
- `X5f2ToStatusAdapter` using positional `Timestamp(...)` — killed dashboard status consumption.

Both would have been caught by a test that ran `OrchestratingProcessor.process()` through a sink that performs x5f2 encoding, or an adapter test that fed a real Kafka message object into `X5f2ToStatusAdapter.adapt`.
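For the adapter-level path, note that real `confluent_kafka.Message` objects come from the client's C extension and (as far as I know) cannot be instantiated directly from Python, so the practical option is a fake that is faithful to the interface. The key detail is that `value()`, `key()`, and `topic()` are methods on the real class, not attributes; an attribute-style fake hides exactly the kind of bug described above. A sketch (class name and defaults illustrative):

```python
class FakeKafkaMessage:
    """Interface-faithful stand-in for confluent_kafka.Message.

    Mirrors the accessor style of the real class: value(), key() and
    topic() are methods, not attributes, which is the sort of detail an
    ad-hoc test double can get wrong."""

    def __init__(self, value: bytes, key=None, topic: str = "status"):
        self._value = value
        self._key = key
        self._topic = topic

    def value(self):
        return self._value

    def key(self):
        return self._key

    def topic(self):
        return self._topic

    def error(self):
        return None  # the real Message returns None when there is no error
```

An adapter test could then wrap x5f2-encoded bytes in a `FakeKafkaMessage` and assert that `X5f2ToStatusAdapter.adapt` returns a status whose timestamp fields carry the expected types.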
Suggested approaches
Two options, not mutually exclusive:
Option 1: Serializing test sink. Add a variant of `FakeMessageSink` that mirrors `KafkaSink.publish_messages`' serialization dispatch (`service_status_to_x5f2` / `job_status_to_x5f2` / `self._serializer(msg)`), but stores the resulting bytes instead of producing to Kafka. Existing `LivedataApp` tests can opt in and would catch this entire class of type drift for free.
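A minimal sketch of what such a serializing fake sink could look like. The dispatch mirrors the description above; the `ServiceStatus`/`JobStatus` stand-ins and the encoder bodies are placeholders so the sketch is self-contained, and a real version would import the project's actual types and `service_status_to_x5f2`/`job_status_to_x5f2` instead:

```python
from dataclasses import dataclass

# Placeholder types and encoders; a real test would import the project's
# ServiceStatus/JobStatus and x5f2 encoders instead of these stubs.
@dataclass
class ServiceStatus:
    name: str

@dataclass
class JobStatus:
    job_id: str

def service_status_to_x5f2(status: ServiceStatus) -> bytes:
    return f"x5f2:service:{status.name}".encode()

def job_status_to_x5f2(status: JobStatus) -> bytes:
    return f"x5f2:job:{status.job_id}".encode()

class SerializingFakeSink:
    """Like FakeMessageSink, but runs the same serialization dispatch as
    KafkaSink.publish_messages and stores the encoded bytes instead of
    producing to Kafka, so type drift fails the test."""

    def __init__(self, serializer=bytes):
        self._serializer = serializer
        self.published: list[bytes] = []

    def publish_messages(self, messages) -> None:
        for msg in messages:
            if isinstance(msg, ServiceStatus):
                payload = service_status_to_x5f2(msg)
            elif isinstance(msg, JobStatus):
                payload = job_status_to_x5f2(msg)
            else:
                payload = self._serializer(msg)
            self.published.append(payload)
```

Tests opting in would assert on `sink.published` after driving the app, so an encoder/type mismatch surfaces as an exception or a bad payload rather than passing silently.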
Option 2: Direct `KafkaSink` tests. A small test file using a fake `confluent_kafka.Producer` (just capturing `produce()` calls) to round-trip `ServiceStatus`/`JobStatus` through the real `KafkaSink`. Cheap and focused.
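The fake producer could be as small as the following sketch. Only the slice of the `confluent_kafka.Producer` interface a sink is likely to touch is faked, and how `KafkaSink` receives the producer (constructor injection is assumed here) depends on the codebase:

```python
class FakeProducer:
    """Records confluent_kafka.Producer-style produce() calls instead of
    sending anything to a broker."""

    def __init__(self):
        # (topic, value, key) for every produce() call, in order.
        self.produced: list[tuple] = []

    def produce(self, topic, value=None, key=None, **kwargs):
        self.produced.append((topic, value, key))

    def flush(self, timeout=None) -> int:
        return 0  # nothing is ever buffered in the fake
```

A test would build the real `KafkaSink` around this fake, call `publish_messages` with a `ServiceStatus`/`JobStatus`, then decode `produced[0][1]` with the x5f2 schema and assert on the fields, exactly the class of check that would have caught the issues fixed in #847.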
Option 1 gives broader coverage for free; option 2 is a targeted safety net. I'd suggest starting with option 1.
Related