
fix(cmt): drain FIFO and process all packets per loop iteration #2963

Open
Geoffn-Hub wants to merge 1 commit into tbnobody:master from Geoffn-Hub:fix/cmt-fifo-burst-drain

Conversation

@Geoffn-Hub

Problem

The CMT2300A receive loop uses an either/or structure: when a packet interrupt fires, it drains the hardware FIFO into the software ring buffer but does no processing. When no interrupt is pending, it processes only one buffered packet. This creates a bottleneck with inverters that send multiple response fragments in rapid succession (e.g. MIT series inverters send 6 back-to-back fragments).

The 64-byte merged FIFO can hold approximately 2 Hoymiles packets (~27 bytes each). During a burst of 6 fragments, the FIFO overflows before loop() can drain it, causing packet loss and triggering retransmits or timeouts.
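As a quick sanity check on those numbers (sizes taken from the description above, not measured here):

```cpp
#include <cassert>

// Back-of-envelope check of the overflow claim. All sizes are from the
// PR text: 64-byte merged FIFO, ~27 bytes per Hoymiles packet, 6-fragment
// MIT burst.
constexpr int kFifoBytes = 64;      // CMT2300A merged RX FIFO
constexpr int kPacketBytes = 27;    // approximate Hoymiles packet size
constexpr int kBurstFragments = 6;  // MIT series response burst

constexpr int packetsThatFit = kFifoBytes / kPacketBytes;   // ~2 packets
constexpr int burstBytes = kBurstFragments * kPacketBytes;  // 162 bytes
constexpr bool burstOverflows = burstBytes > kFifoBytes;    // true
```

So a 6-fragment burst carries roughly 2.5x the FIFO's capacity, which is why it must be drained promptly rather than waiting for a later loop() pass.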

Additionally, CMT2300A::available() checks PREAM_OK | SYNC_OK | CRC_OK | PKT_OK flags, which can trigger reads before a packet is fully received (e.g. on preamble detection alone).

Changes

1. Remove either/or structure in loop()

Previously:

if (_packetReceived) {
    // drain FIFO into buffer
} else {
    // process ONE packet from buffer
}

Now:

if (_packetReceived) {
    // drain FIFO into buffer
}
// process ALL packets from buffer

FIFO drain and packet processing now happen sequentially in the same iteration, not as mutually exclusive branches. Packets are processed immediately after being read rather than waiting for the next loop cycle.
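A toy model (not the real driver code) illustrating why this matters: it counts how many loop() iterations each structure needs to fully handle a burst that is already sitting in the FIFO.

```cpp
#include <cassert>
#include <deque>

// Old either/or structure: an iteration either drains the FIFO into the
// software buffer OR processes one buffered packet, never both.
static int iterationsOldStyle(int fragments) {
    std::deque<int> fifo, buf;
    for (int i = 0; i < fragments; ++i) fifo.push_back(i);
    int iterations = 0;
    while (!fifo.empty() || !buf.empty()) {
        ++iterations;
        if (!fifo.empty()) {        // interrupt pending: drain only
            while (!fifo.empty()) { buf.push_back(fifo.front()); fifo.pop_front(); }
        } else if (!buf.empty()) {  // otherwise: process ONE packet
            buf.pop_front();
        }
    }
    return iterations;
}

// New structure: drain first, then process ALL buffered packets in the
// same iteration.
static int iterationsNewStyle(int fragments) {
    std::deque<int> fifo, buf;
    for (int i = 0; i < fragments; ++i) fifo.push_back(i);
    int iterations = 0;
    while (!fifo.empty() || !buf.empty()) {
        ++iterations;
        if (!fifo.empty()) {
            while (!fifo.empty()) { buf.push_back(fifo.front()); fifo.pop_front(); }
        }
        while (!buf.empty()) buf.pop_front();  // process everything buffered
    }
    return iterations;
}
```

For a 6-fragment burst the old structure needs 7 iterations (1 drain + 6 single-packet passes); the new one finishes in a single iteration, closing the window in which the hardware FIFO can refill and overflow.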

2. Process all buffered packets per iteration

Changed from if (!_rxBuffer.empty()) (one packet) to while (!_rxBuffer.empty()) (drain the entire buffer). During bursts, all received fragments are processed in one pass.

3. Fix available() to check only PKT_OK

CMT2300A::available() now checks only CMT2300A_MASK_PKT_OK_FLG instead of OR'ing all four flags. This matches the behaviour of rxFifoAvailable() and prevents premature reads on preamble/sync detection.
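A minimal sketch of the two checks. The bit positions below are placeholders for illustration only; the real `CMT2300A_MASK_*_FLG` values come from the driver's register definitions.

```cpp
#include <cassert>
#include <cstdint>

// Placeholder flag bits (illustrative, not the datasheet values).
constexpr uint8_t PREAM_OK = 1 << 0;  // preamble detected
constexpr uint8_t SYNC_OK  = 1 << 1;  // sync word matched
constexpr uint8_t CRC_OK   = 1 << 2;  // CRC passed
constexpr uint8_t PKT_OK   = 1 << 3;  // full packet received

// Old check: any of the four flags triggers a FIFO read.
static bool availableOld(uint8_t flagStatus) {
    return flagStatus & (PREAM_OK | SYNC_OK | CRC_OK | PKT_OK);
}

// New check: only a fully received packet triggers a read.
static bool availableNew(uint8_t flagStatus) {
    return flagStatus & PKT_OK;
}
```

With the old mask, a preamble-only status (packet still in flight) already reports data as available; the new mask waits for PKT_OK.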

4. Minor: continue → break on buffer full

When the software buffer is full, the old code used continue which re-entered the while loop only to flush again. Changed to break to exit immediately.
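A toy reproduction of the hang, assuming (as the follow-up commit note states) that flushing the RX FIFO does not reduce the available() count. The iteration cap stands in for "spins forever".

```cpp
#include <cassert>

// Drain loop model: packets move from the FIFO into a bounded software
// buffer. When the buffer is full, the old code flushed and `continue`d;
// since the flush does not change the available() count, the loop spins.
static int drainIterations(bool useBreak, int fifoPackets, int bufferCapacity) {
    int buffered = 0;
    int iterations = 0;
    while (fifoPackets > 0 && iterations < 1000) {  // cap detects spinning
        ++iterations;
        if (buffered >= bufferCapacity) {
            // flush_rx() would run here but leaves fifoPackets unchanged
            if (useBreak) break;  // new behaviour: exit immediately
            continue;             // old behaviour: re-enter, flush again...
        }
        ++buffered;
        --fifoPackets;
    }
    return iterations;
}
```

With 3 packets pending and room for 2, `break` exits after 3 iterations, while `continue` runs into the cap.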

Impact

  • No behavioural change for single-fragment responses (HM/HMS series) — the buffer is simply drained and processed in one iteration instead of across two.
  • Significant improvement for multi-fragment bursts (MIT/HMT series) — reduces FIFO overflow probability by eliminating the processing gap between drain and next read.
  • available() fix prevents potential partial reads that could corrupt the buffer or waste cycles.

Testing

Tested with an MIT-5000-8T (6-fragment responses over CMT2300A in the 860 MHz band). Previously only ~2 of the 6 fragments were received reliably; with this fix the full burst is captured.

Commit message

The CMT2300A receive loop previously used an either/or structure:
when a packet interrupt fired, it drained the hardware FIFO but did
no processing; when no interrupt was pending, it processed only one
buffered packet. This caused FIFO overflows with MIT inverters that
send 6 response fragments in rapid succession — the 64-byte hardware
FIFO can hold ~2 packets, and the old code couldn't drain fast enough.

Changes:
- Remove either/or: drain FIFO then process all buffered packets in
  the same loop() iteration
- Process entire software buffer (while loop) instead of one packet
- Fix available() to check only PKT_OK flag instead of OR'ing
  PREAM_OK|SYNC_OK|CRC_OK|PKT_OK, which could trigger reads before
  a packet was fully received
Geoffn-Hub pushed a commit to Geoffn-Hub/OpenDTU that referenced this pull request Feb 5, 2026
When buffer is full, break out of the drain loop instead of
continue (which caused infinite loop since flush_rx doesn't
reduce available() count). MIT type support included.
