
feat: ✨ Add auto-discovery and manual reconnect features for Siegenia…#1

Merged
PrzemyslawKlys merged 13 commits into master from AutoDetect
Feb 4, 2026

Conversation

@PrzemyslawKlys
Member

No description provided.

… integration

* Enabled auto-discovery of IP changes by default, allowing the integration to automatically update the host if the controller gets a new IP on the same /24 subnet.
* Introduced a manual reconnect service to update host/credentials even when the device is offline, enhancing user control over the connection settings.
* Updated various components to utilize the new `serial` identifier for device management, ensuring stability across IP changes.
* Improved error handling during connection setup and device updates, allowing for better resilience and user feedback.
* Updated documentation and service definitions to reflect new features and usage instructions.
* Introduced `siegenia.cleanup_devices` service to merge duplicate devices and remove empty legacy ones.
* Enhanced issue management by implementing `_raise_issue` and `_clear_issue` methods for better connectivity error handling.
* Updated documentation and strings for clarity on connection issues.
* Bumped version to 1.1.5.
…ents

* Added a screenshot of the Siegenia device page to enhance documentation.
* Removed obsolete `requirements_test.txt` file as it is no longer needed.
* Introduced `requirements_test.txt` to specify testing dependencies.
* Ensures compatibility with pytest and its plugins for effective testing.

@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +183 to +187
serial = entry.data.get(CONF_SERIAL) or entry.unique_id
# Collect all devices belonging to this domain
devices = [d for d in dev_reg.devices.values() if any(idt[0] == DOMAIN for idt in d.identifiers)]
if not devices:
return


P1 Badge Scope device migration to the current entry

The new _async_migrate_devices routine collects every device whose identifier starts with the Siegenia domain and then merges them into a single primary device. Because it isn’t scoped to the current config entry or serial (see the domain-wide device list built here), adding or reloading one entry when multiple controllers are configured will collapse devices from other entries and move their entities onto the wrong device before deleting the originals. Filter the devices by this entry’s identifiers/serial before merging to avoid cross-device corruption.
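The scoping fix the review asks for could be sketched as follows; the `Device` dataclass and `devices_for_serial` helper are illustrative stand-ins for Home Assistant's device registry types, not the integration's actual code.

```python
from dataclasses import dataclass, field

DOMAIN = "siegenia"

# Hypothetical stand-in for a device registry entry; the real integration
# would use homeassistant.helpers.device_registry instead.
@dataclass
class Device:
    identifiers: set = field(default_factory=set)

def devices_for_serial(devices, serial):
    """Keep only devices whose identifiers match this entry's serial,
    instead of collecting every device in the domain."""
    return [
        d for d in devices
        if any(idt[0] == DOMAIN and idt[1] == serial for idt in d.identifiers)
    ]
```

With this filter in place, merging only ever touches devices belonging to the entry being set up, so reloading one entry cannot collapse devices from other controllers.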

Useful? React with 👍 / 👎.

@claude
Copy link
Copy Markdown

claude bot commented Nov 27, 2025

Pull Request Review: Auto-discovery and Manual Reconnect Features

Summary

This PR introduces significant improvements to the Siegenia integration by adding auto-discovery of IP changes and manual reconnection capabilities. Overall, this is a well-structured enhancement that addresses a real user pain point (devices changing IPs). However, there are several areas that need attention before merging.


🔴 Critical Issues

1. Security: Network Scanning Without Rate Limiting (coordinator.py:196-257)

The _rediscover_host() method scans up to 192 IP addresses with only a 15-minute cooldown period. This could:

  • Cause network congestion on user networks
  • Trigger security alerts or IDS systems
  • Be problematic on shared/corporate networks
  • Drain resources if multiple integrations implement similar patterns

Recommendation:

  • Add exponential backoff (e.g., 15min → 30min → 1hr → 2hr)
  • Reduce the scan scope (currently scans up to 5 /24 subnets!)
  • Add a configuration option to disable auto-discovery
  • Consider using mDNS/zeroconf if the device supports it
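The suggested backoff schedule (15 min → 30 min → 1 h → 2 h) amounts to doubling a base cooldown and capping it; a minimal sketch, with illustrative constant names:

```python
# Illustrative constants, not taken verbatim from the integration.
REDISCOVER_BASE_COOLDOWN = 900   # 15 minutes between first attempts
REDISCOVER_BACKOFF_CAP = 7200    # never wait longer than 2 hours

def next_cooldown(failed_attempts: int) -> int:
    """Double the cooldown after each consecutive failed rediscovery, up to a cap."""
    return min(REDISCOVER_BASE_COOLDOWN * (2 ** failed_attempts), REDISCOVER_BACKOFF_CAP)
```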

2. Broad Exception Handling (Multiple locations)

Extensive use of except Exception: pass or except Exception: # noqa: BLE001:

  • __init__.py:92: Silently suppresses all connection errors
  • __init__.py:189: Ignores all device migration errors
  • __init_services__.py:105: Swallows device removal errors
  • coordinator.py:151, 192, 268: Multiple catch-all handlers

Recommendation:

  • Log exceptions at appropriate levels (warning/error)
  • Only catch specific expected exceptions
  • Let unexpected errors propagate to aid debugging
  • Use logger.exception() to capture full stack traces
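The recommended pattern can be sketched like this; the logger name, the `safe_connect` helper, and the exact exception set are assumptions for illustration:

```python
import asyncio
import logging

logger = logging.getLogger("siegenia.example")

async def safe_connect(connect):
    """Catch only expected transient failures; let anything else propagate."""
    try:
        await connect()
        return True
    except (asyncio.TimeoutError, OSError) as exc:
        # Expected failure modes for a network device: log with context, don't hide.
        logger.warning("Connection setup failed: %s", exc)
        return False
    except Exception:
        # Unexpected errors: logger.exception() records the full stack trace.
        logger.exception("Unexpected error during connection setup")
        raise
```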

3. Race Condition in Issue Management (coordinator.py:171-194)

The _issue_set flag is not thread-safe. If _raise_issue() and _clear_issue() are called concurrently, issues could be created/deleted incorrectly.

Recommendation: Use asyncio.Lock to protect issue creation/deletion.
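A lock-protected version of the flag could look like this sketch; the `IssueTracker` class is illustrative, and the real methods would also call Home Assistant's issue registry:

```python
import asyncio

class IssueTracker:
    """Serialize issue creation/deletion so concurrent calls cannot interleave."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self._issue_set = False

    async def raise_issue(self) -> bool:
        async with self._lock:
            if self._issue_set:
                return False  # already raised; nothing to do
            self._issue_set = True
            return True

    async def clear_issue(self) -> bool:
        async with self._lock:
            if not self._issue_set:
                return False
            self._issue_set = False
            return True
```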


⚠️ High Priority Issues

4. Path Separator Issue in README (README.md:64)

[![Siegenia Device Page](assets\screenshot1.png)](assets\screenshot1.png)

Uses Windows path separators (\) instead of forward slashes. This will break on Linux/macOS and in web viewers.

Fix: Change to assets/screenshot1.png

5. Potential Resource Leak in _probe_host() (coordinator.py:258-284)

If an exception occurs after connect() but before the finally block, the client may not disconnect properly. The finally block only handles one exception type.

Recommendation:

finally:
    try:
        await asyncio.wait_for(client.disconnect(), timeout=2.0)
    except Exception as exc:
        self.logger.debug("Probe cleanup failed for %s: %s", host, exc)

6. Missing DOMAIN Import (const.py)

The new ISSUE_UNREACHABLE constant is used in coordinator.py, which must also import DOMAIN from const.py. Verify both imports are present so the module does not fail at load time.

7. Device Migration Runs on Every Setup (__init__.py:90-91)

_async_migrate_devices() runs on every integration load, not just once. This could slow down startup and is unnecessary after the first migration.

Recommendation: Add a migration version flag to entry.data to run this only once.


💡 Code Quality & Best Practices

8. Inconsistent Serial Number Handling

Multiple fallback chains like:

getattr(coordinator, "serial", None) or info.get("serialnr") or entry.unique_id or entry.data.get("host")

This is repeated across binary_sensor.py, button.py, sensor.py, etc.

Recommendation: Create a helper method on the coordinator:

def get_device_identifier(self) -> str:
    return self.serial or self.entry.unique_id or self.host

9. Unclear Subnetting Logic (coordinator.py:211-227)

The method adds multiple /24 subnets (previous subnet + 4 hardcoded common ones), then limits to 64 hosts per subnet, but the deduplication happens after, making the final count unpredictable.

Recommendation:

  • Clarify intent in comments
  • Consider limiting to just the previous /24 subnet initially
  • Make the common subnets configurable or remove them

10. Magic Numbers

  • 3.0 timeout in _probe_host (coordinator.py:267)
  • 8 concurrent probes (coordinator.py:238)
  • 64 hosts per subnet (coordinator.py:227)
  • 192 total candidates (coordinator.py:236)
  • 900 seconds rediscovery cooldown (coordinator.py:145)

Recommendation: Extract to named constants with explanatory comments.

11. Incomplete Error Context

When _handle_connection_error returns False, the caller (_async_update_data) does not know why rediscovery failed. Consider returning an enum or raising specific exceptions.


🧪 Test Coverage

Missing Tests

No tests were added for the major new features:

  • ❌ Auto-discovery scanning logic
  • ❌ IP change detection and host switching
  • ❌ Device cleanup service
  • ❌ set_connection service
  • ❌ Issue creation/deletion
  • ❌ Migration of duplicate devices

Recommendation: Add tests for at least:

  1. Successful rediscovery scenario
  2. Failed rediscovery with fallback
  3. Device cleanup merging entities
  4. Issue raised when device unreachable
  5. Host switching preserves serial number

🔒 Security Considerations

12. Credentials in Logs

Verify that debug logging never exposes passwords. The _probe_host method passes credentials but uses a shared logger.

13. Unauthenticated Device Detection

The code authenticates with username/password on every probe. If credentials are wrong, it silently skips the device. Consider warning users if auth fails during discovery.


🚀 Performance Considerations

14. Network Scan Impact

  • Scanning 192 IPs with 8 concurrent connections can take 10-80 seconds depending on timeouts
  • This blocks the update coordinator during first failure
  • Could impact Home Assistant responsiveness

Recommendation:

  • Run discovery in a background task, not during _async_update_data
  • Show a persistent notification to user during scan
  • Provide scan progress in logs

15. Redundant Config Entry Updates

_switch_host() and _update_serial() both call async_update_entry(). If both are called in sequence, this writes to disk twice unnecessarily.


📝 Documentation

16. Missing Service Documentation

The new cleanup_devices service is not documented in the translation strings (strings.json services section).

Fix: Add to strings.json services section:

"cleanup_devices": {
  "name": "Cleanup Duplicate Devices",
  "description": "Merge duplicate devices and remove empty legacy entries."
}

17. Unclear Auto-Discovery Behavior

The README states auto-discovery works "on the same /24 subnet" but the code actually scans multiple subnets. Documentation should match implementation.


✅ Positive Aspects

  1. Good separation of concerns with _rediscover_host, _probe_host, and _switch_host
  2. Preserves serial numbers across IP changes for device stability
  3. Issue registry integration provides good user feedback
  4. Backwards compatible with existing installations
  5. Configurable auto-discovery allows users to opt out
  6. Manual reconnect service gives users control when needed

🎯 Recommended Priority for Fixes

  1. Must fix before merge:

    • Critical issues (#1–#3)

  2. Should fix before merge:

    • High-priority issues (#4–#7)

  3. Nice to have:

    • Add test coverage (#11)
    • Extract magic numbers (#10)
    • Performance improvements (#14, #15)

Conclusion

This is a valuable feature addition that addresses a real user need. The implementation shows good architectural thinking with the separation of discovery, probing, and host switching. However, the network scanning approach needs refinement to avoid potential issues in production environments. With the critical fixes applied, this will be a solid enhancement to the integration.

Overall Assessment: Approve with changes requested ⚠️

* Introduced `MIGRATION_DEVICES_V2` constant for device migration tracking.
* Updated `_async_migrate_devices` to merge duplicate devices scoped to a specific entry.
* Improved `cleanup_devices` service to allow optional `entity_id` for targeted cleanup.
* Added new translations and descriptions for cleanup functionality in `strings.json` and `en.json`.
* Bumped version to `1.1.7` in `manifest.json`.
@claude

claude bot commented Nov 27, 2025

Pull Request Review: Auto-discovery and Manual Reconnect Features

Summary

This PR adds significant functionality for handling IP address changes of Siegenia window controllers, including auto-discovery and manual reconnection capabilities. Overall, this is a well-structured enhancement with thoughtful error handling. Below are my findings across code quality, potential issues, performance, security, and testing.


✅ Strengths

Code Quality

  • Well-organized architecture: The separation of concerns between coordinator, services, and config flow is clean
  • Comprehensive error handling: Good use of try-except blocks with specific exception types
  • Good logging: Appropriate debug/info/warning levels throughout
  • Type hints: Consistent use of type annotations
  • Device migration logic: Smart approach to consolidating duplicate devices

Features

  • Intelligent IP discovery: The subnet scanning with concurrency limits and backoff is well-designed
  • Graceful degradation: Allows setup to continue even if initial connection fails
  • User experience: Clear issue registry integration for connection problems

⚠️ Issues & Recommendations

1. Security Concerns (HIGH priority)

Credential Exposure Risk

Location: custom_components/siegenia/__init_services__.py:40-66

The set_connection service allows credentials to be changed via service calls. While this provides flexibility, credentials passed through Home Assistant services are logged in the service call history.

Recommendation:

  • Document that users should use the UI config flow when possible
  • Consider masking password in service call logs
  • Add a note in services.yaml about security implications

Network Scanning Scope

Location: custom_components/siegenia/coordinator.py:227-233

The auto-discovery scans up to 192 hosts across multiple subnets by default, which could be considered aggressive on some networks.

Recommendation:

  • Add a user-facing option to configure scan scope (current subnet only vs. extended)
  • Consider making DEFAULT_AUTO_DISCOVER = False and requiring explicit opt-in
  • Document the network scanning behavior clearly for users

2. Potential Bugs (MEDIUM priority)

Race Condition in Device Migration

Location: custom_components/siegenia/__init__.py:92-97 and __init__.py:182-218

The migration runs once per entry based on MIGRATION_DEVICES_V2 flag, but if two entries load simultaneously, both might attempt migration.

Recommendation:

# Add a lock at the hass data level
MIGRATION_LOCK = "migration_lock"
hass.data.setdefault(MIGRATION_LOCK, asyncio.Lock())
async with hass.data[MIGRATION_LOCK]:
    if not entry.data.get(MIGRATION_DEVICES_V2):
        await _async_migrate_devices(hass, entry)

Incomplete Device Registry Cleanup

Location: custom_components/siegenia/__init__.py:212-214

The safeguard check async_get_device(..., raise_on_missing=False) doesn't actually do anything with its result:

if not dev_reg.async_get_device(dev.identifiers, dev.connections, dev.id, raise_on_missing=False):
    # compatibility safeguard
    pass

Recommendation: Either use the result or remove this dead code block.

Unsafe Dictionary Update

Location: custom_components/siegenia/__init_services__.py:50-51

new_data[CONF_HOST] = call.data.get(CONF_HOST, new_data.get(CONF_HOST))

If CONF_HOST is not provided in the call and not in existing data, this could set it to None.

Recommendation:

if CONF_HOST in call.data:
    new_data[CONF_HOST] = call.data[CONF_HOST]

Potential Infinite Recursion

Location: custom_components/siegenia/coordinator.py:372-378

except Exception as err:
    recovered = await self._handle_connection_error(err)
    if recovered:
        return await self._async_update_data()  # Recursive call

If recovery succeeds but the connection immediately fails again, this could recurse. While the backoff mechanism helps, there's no recursion depth check.

Recommendation: Add a retry counter or use an iterative approach with max attempts.

3. Performance Considerations (MEDIUM priority)

Concurrent Network Probing

Location: custom_components/siegenia/coordinator.py:251-270

The rediscovery creates up to 192 concurrent tasks (with semaphore limiting to 8 concurrent connections). This is generally fine, but the task cancellation could be more efficient.

Recommendation:

  • Consider using asyncio.wait(..., return_when=FIRST_COMPLETED) instead of asyncio.as_completed()
  • This allows immediate cancellation without waiting for tasks to respond to cancellation
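Assuming each probe coroutine returns a host string on success and None otherwise, the `asyncio.wait` pattern could be sketched like this (names are illustrative):

```python
import asyncio

async def first_successful_probe(probes):
    """Run probes concurrently; return the first non-None result and cancel the rest."""
    tasks = {asyncio.ensure_future(p) for p in probes}
    try:
        while tasks:
            done, tasks = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
            for task in done:
                result = task.result()
                if result is not None:
                    return result
        return None
    finally:
        # Cancel remaining probes immediately and let the cancellations settle.
        for task in tasks:
            task.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)
```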

Entity Registry Iteration

Location: Multiple places iterate all entities

for ent in list(ent_reg.entities.values()):
    if ent.device_id == dev.id:

Recommendation: Use er.async_entries_for_device(device_id) for better performance in large installations.

Push Update Handling

Location: custom_components/siegenia/coordinator.py:380-419

The push callback does dictionary merging on every push event, even if unchanged.

Recommendation: Add a simple equality check before merging to avoid unnecessary updates.
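A minimal sketch of that short-circuit; the helper name is hypothetical and the coordinator's real payloads are nested device state rather than flat dicts:

```python
def merge_push_update(current: dict, incoming: dict) -> tuple[dict, bool]:
    """Merge a push payload into current state, skipping work when nothing changed."""
    merged = {**current, **incoming}
    changed = merged != current
    # Return the original object untouched when the push was a no-op,
    # so listeners are only notified on real changes.
    return (merged if changed else current, changed)
```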

4. Code Quality Issues (LOW priority)

Overly Broad Exception Handling

Multiple locations use except Exception: or except Exception as exc: # noqa: BLE001

While sometimes necessary, this can hide unexpected errors. Consider:

  • Being more specific where possible (e.g., ClientConnectorError, TimeoutError)
  • At minimum, always log the exception at debug level

Type: ignore Comments

Location: Multiple service handler locations

entity = hass.data["entity_components"]["cover"].get_entity(entity_id)  # type: ignore[index]

Recommendation: Consider creating a helper function with proper typing to reduce type:ignore comments throughout the codebase.

Dead Code Path

Location: custom_components/siegenia/coordinator.py:94-96

The coordinator initialization sets a push callback with try/except that catches and ignores all exceptions:

try:
    self.client.set_push_callback(self._push_callback)
except Exception:
    pass

This silently fails - consider at least logging if this fails.


🧪 Test Coverage

Current State

  • 307 total lines of test code across 8 test files
  • Tests cover config flow, diagnostics, services, and push/poll behavior

Missing Coverage

High Priority:

  1. No tests for the new auto-discovery functionality (_rediscover_host, _probe_host)
  2. No tests for the set_connection service
  3. No tests for device migration logic (_async_migrate_devices)
  4. No tests for issue registry integration

Medium Priority:

  5. No tests for the cleanup_devices service
  6. No tests for connection error recovery flow

Recommendation: Add integration tests that:

  • Mock network scanning to verify probe behavior
  • Test the device migration runs once and handles duplicates correctly
  • Verify credential updates via service calls
  • Test backoff behavior in rediscovery

📋 Documentation & User Experience

Positive

  • README updated with clear explanation of new features
  • Good use of translations for UI strings
  • Screenshot added to show device page

Suggestions

  • Document the network scanning behavior more explicitly (which subnets, how many hosts)
  • Add troubleshooting section for when auto-discovery doesn't work
  • Consider adding a "Disable auto-discovery" option in UI for security-conscious users
  • Document the cleanup_devices service in README (currently only in code)

🔒 Security Summary

  1. Credentials in service calls - Document best practices
  2. Network scanning - Consider opt-in model and better documentation
  3. SSL verification disabled - Already handled correctly with CERT_NONE for self-signed certs, this is appropriate for local devices

📊 Overall Assessment

Quality: ⭐⭐⭐⭐ (4/5)

  • Well-architected feature with good error handling
  • Minor bugs and improvements needed
  • Security considerations require attention

Recommendation: Approve with changes requested

Must Fix Before Merge:

  1. Fix the unsafe dictionary update in set_connection service
  2. Add recursion depth limit or iterative approach for connection recovery
  3. Remove or fix the dead code path in device cleanup
  4. Document security implications of set_connection service

Should Fix Soon:

  1. Add tests for auto-discovery functionality
  2. Add race condition protection for device migration
  3. Use async_entries_for_device() for better performance
  4. Improve exception specificity where possible

Nice to Have:

  1. Make auto-discovery opt-in rather than default
  2. Add more granular network scanning controls
  3. Create typed helper for entity lookups to reduce type:ignore comments

Great work on this feature! The auto-discovery functionality is well-designed and will significantly improve the user experience for devices with dynamic IPs. The issues identified are manageable and mostly involve hardening edge cases.

* Introduced `CONF_EXTENDED_DISCOVERY` to allow scanning of common home subnets.
* Updated documentation to reflect the optional nature of auto-discover IP changes.
* Enhanced service descriptions and user interface strings for clarity.
@claude

claude bot commented Nov 27, 2025

Pull Request Review

Overview

This PR adds significant new functionality including auto-discovery of IP changes, manual reconnection services, device cleanup, and improved error handling. The changes span 641 additions across 19 files.

✅ Strengths

Code Quality

  • Good use of type hints throughout (list[ipaddress.IPv4Address], str | None)
  • Proper exception handling with specific exception types
  • Well-structured async/await patterns
  • Clear separation of concerns (coordinator, services, config flow)

Architecture

  • Smart exponential backoff for rediscovery (15 min → 2 hour max)
  • Rate limiting with semaphores for concurrent network probing
  • Lock-based migration to prevent race conditions
  • Issue registry integration for user-facing error tracking

Documentation

  • README updated with clear feature descriptions
  • Service descriptions in services.yaml
  • Translation strings for UI elements

🔴 Critical Issues

1. Security: Network Scanning Risk (coordinator.py:219-282)

The auto-discovery feature scans up to 192 IP addresses with concurrent probes. This could:

  • Trigger IDS/IPS systems on corporate networks
  • Violate terms of service for some ISPs
  • Cause unintended authentication attempts on other devices

Recommendation:

  • Add prominent warnings in documentation about network scanning
  • Consider reducing REDISCOVER_MAX_HOSTS from 192 to something more conservative (64)
  • Add user consent checkbox during initial setup
  • Consider mDNS/zeroconf discovery as an alternative

2. Credential Exposure in Logs (coordinator.py:284-314)

async def _probe_host(self, host: str) -> str | None:
    await client.login(self.username, self.password)

Credentials are used for authentication attempts across multiple hosts. If logging is verbose, credentials could leak.

Recommendation:

  • Ensure credentials are never logged during probe attempts
  • Add rate limiting to prevent brute-force appearance
  • Consider using cached session tokens if possible

3. Bare Exception Handlers (Throughout)

Multiple instances of except Exception: without logging or proper handling:

  • __init__.py:104 - Device migration silently fails
  • __init__.py:222 - Device removal errors swallowed
  • __init_services__.py:116-117 - Cleanup errors ignored
  • coordinator.py:109-111 - Push callback errors caught but not logged

Recommendation:

except Exception as exc:  # noqa: BLE001
    self.logger.warning("Device migration failed: %s", exc, exc_info=True)

4. Race Condition in Device Migration (__init__.py:95-105)

While there's a lock, the migration logic runs during setup and could interfere with entity registration:

async with hass.data[_lock_key]:
    if not entry.data.get(MIGRATION_DEVICES_V2):
        await _async_migrate_devices(hass, entry)

Recommendation:

  • Run migration before async_forward_entry_setups to avoid entity registration conflicts
  • Add retry logic if device registry is locked
  • Test with multiple concurrent entry setups

⚠️ Major Concerns

5. Missing Test Coverage

No new tests found for:

  • Auto-discovery feature (_rediscover_host, _probe_host)
  • Device migration logic (_async_migrate_devices)
  • Cleanup service (cleanup_devices)
  • Connection error recovery (_handle_connection_error)
  • Manual reconnect service (set_connection)

Recommendation: Add comprehensive tests covering:

# tests/test_auto_discovery.py
async def test_rediscover_host_finds_device()
async def test_rediscover_respects_backoff()
async def test_probe_rejects_wrong_serial()
async def test_migration_merges_duplicates()

6. Inefficient Device Iteration (__init_services__.py:111-113)

for ent in list(ent_reg.entities.values()):
    if ent.device_id == dev.id:

This iterates all entities in the registry for each duplicate device.

Recommendation: Use the registry helper instead of scanning every entity:

from homeassistant.helpers import entity_registry as er

for ent in er.async_entries_for_device(ent_reg, dev.id, include_disabled_entities=True):
    ent_reg.async_update_entity(ent.entity_id, device_id=primary.id)

7. Duplicate Code (__init__.py:190-223 vs __init_services__.py:82-119)

The device cleanup logic appears twice with minor variations.

Recommendation:

  • Extract to a shared function: async def _merge_duplicate_devices(hass, entry_id) -> int
  • Return count of merged devices for logging
  • Use in both migration and service

8. Missing Validation (config_flow.py:235-250)

The connection step accepts user input without validation:

new_data = {
    CONF_HOST: user_input[CONF_HOST],  # No IP/hostname validation
    CONF_PORT: user_input[CONF_PORT],  # No range check

Recommendation:

import voluptuous as vol
from homeassistant.helpers import config_validation as cv

schema = vol.Schema({
    vol.Required(CONF_HOST): cv.string,  # or cv.matches_regex(IP_REGEX)
    vol.Required(CONF_PORT): vol.All(vol.Coerce(int), vol.Range(min=1, max=65535)),
    # ...
})

📝 Minor Issues

9. Inconsistent Identifier Logic (binary_sensor.py:48-50)

ident = self._entry.unique_id or getattr(self.coordinator, "serial", None) or self._serial

This fallback chain is complex and repeated across multiple entity files.

Recommendation:

  • Add a helper method to coordinator: def get_device_identifier(self) -> str
  • Use consistently across all entities

10. Type Safety (coordinator.py:102-111)

def _on_push(msg: dict[str, Any]) -> None:
    try:
        self._handle_push_update(msg)
    except Exception:  # Catches everything, reduces type safety

Recommendation:

except (KeyError, ValueError, TypeError) as exc:
    self.logger.debug("Push update failed: %s", exc)
except Exception as exc:
    self.logger.error("Unexpected push error: %s", exc, exc_info=True)

11. Magic Numbers (const.py:42-46)

REDISCOVER_COOLDOWN_SECONDS = 900
REDISCOVER_BACKOFF_MAX = 7200
REDISCOVER_MAX_SUBNETS = 5

Good that these are constants, but they lack documentation.

Recommendation:

# Minimum time between rediscovery attempts (15 minutes)
REDISCOVER_COOLDOWN_SECONDS = 900
# Maximum backoff time for failed rediscovery (2 hours)
REDISCOVER_BACKOFF_MAX = 7200

12. Resource Cleanup (coordinator.py:310-314)

try:
    await asyncio.wait_for(client.disconnect(), timeout=2.0)
except Exception as exc:  # noqa: BLE001
    self.logger.debug("Probe cleanup failed for %s: %s", host, exc)

Good that there's a timeout, but 2 seconds might be too long when probing 192 hosts.

Recommendation:

  • Reduce timeout to 0.5-1 second for probe cleanup
  • Consider task groups for better lifecycle management

🎯 Performance Considerations

13. Concurrent Probing Limits

REDISCOVER_CONCURRENCY = 8

This is reasonable, but could still generate significant network traffic.

Recommendation:

  • Make this configurable via options flow
  • Add metrics/logging for discovery performance
  • Consider adaptive concurrency based on response times

14. Polling Interval During Discovery

The coordinator continues polling during discovery attempts, which could compound network load.

Recommendation:

  • Pause coordinator updates during active discovery
  • Or increase interval temporarily during discovery

🧪 Testing Recommendations

High Priority Tests:

  1. Auto-discovery with multiple subnets
  2. Serial number mismatch rejection
  3. Authentication failure during probe
  4. Concurrent entry setup with migration
  5. Cleanup service with no duplicates
  6. Rediscovery backoff behavior
  7. Issue creation/clearing lifecycle

Integration Tests:

  1. Full IP change scenario (device moves, rediscovery finds it)
  2. Multiple devices on same network
  3. Device returns to original IP

📚 Documentation Needs

  1. Security implications of auto-discovery feature
  2. Network requirements (firewalls, mDNS ports)
  3. Troubleshooting guide for discovery failures
  4. Migration guide for existing users
  5. Service examples in README

Summary

This is a substantial and valuable feature addition, but has some important issues that should be addressed:

Must Fix Before Merge:

  • Add test coverage for new features (critical)
  • Address bare exception handlers with proper logging
  • Add validation to config flow inputs
  • Document security implications of network scanning

Should Fix:

  • Deduplicate device cleanup logic
  • Add helper for device identifier resolution
  • Improve exception handling specificity
  • Add inline documentation for magic numbers

Nice to Have:

  • Consider mDNS/zeroconf alternative
  • Make concurrency configurable
  • Add discovery performance metrics

Risk Level: Medium-High (network scanning + credential handling)
Recommendation: Request changes for test coverage and security documentation before merge.


Great work on a complex feature! The architecture is sound and the code quality is generally good. Addressing the testing and security concerns will make this production-ready.

* Introduced an `extended_discovery` parameter in the `SiegeniaDataUpdateCoordinator` constructor.
* This allows for more flexible configuration of the discovery process.
@claude

claude bot commented Nov 27, 2025

Pull Request Review: Auto-Discovery and Manual Reconnect Features

Overview

This PR introduces significant functionality for handling IP address changes in Siegenia window controllers, along with device cleanup and manual reconnect capabilities. The implementation is generally solid but has several areas that need attention.


🔴 Critical Issues

1. Security: Network Scanning Without Rate Limiting

Location: coordinator.py:220-283

The _rediscover_host() method can scan up to 192 hosts across multiple subnets with only basic concurrency limits. This could:

  • Trigger network intrusion detection systems
  • Be perceived as port scanning/network reconnaissance
  • Cause performance issues on constrained networks

Recommendation:

  • Add user-configurable rate limiting
  • Consider exponential backoff between scan attempts
  • Add a configurable timeout between probes
  • Log scanning activity clearly for network administrators

2. Bare Exception Handlers Throughout

Locations: Multiple (e.g., __init__.py:99-101, coordinator.py:146-148, __init_services__.py:116-117)

The codebase extensively uses except Exception: with # noqa: BLE001 to suppress linting warnings. While some broad exception handling is acceptable in fallback scenarios, many of these should be more specific:

# Example from coordinator.py:146
except Exception as exc:  # noqa: BLE001
    self.logger.debug("Unexpected ensure_connected failure: %s", exc)
    raise UpdateFailed(exc) from exc

Recommendation:

  • Use specific exception types where possible
  • Only use broad handlers for true "catch-all" scenarios
  • Remove noqa comments where exceptions can be specified

3. Race Condition in Issue Management

Location: coordinator.py:193-218

While _raise_issue() and _clear_issue() use an asyncio lock, the _issue_set flag is initialized twice (line 101 and line 119), and the flag could become inconsistent if exceptions occur during issue operations.

Recommendation:

  • Remove duplicate initialization (line 119)
  • Ensure flag updates happen within the lock's scope atomically
  • Consider using HA's built-in issue management helpers if available

⚠️ Potential Bugs

4. Device Migration Runs on Every Setup

Location: __init__.py:95-105

The device migration runs on every async_setup_entry call, even though it's guarded by a migration flag. The lock is shared globally across entries, which could cause unnecessary blocking.

Issue:

if not entry.data.get(MIGRATION_DEVICES_V2):
    try:
        await _async_migrate_devices(hass, entry)

If the entry reload fails after migration but before the flag is set, migration will run again.

Recommendation:

  • Set the migration flag before running migration
  • Use entry-specific locking, not global
  • Add version checking to prevent re-running on already-migrated entries
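A sketch of the recommended shape, with a per-entry lock and the flag set only after the migration coroutine succeeds (the key name `migration_devices_v2` mirrors the `MIGRATION_DEVICES_V2` constant; everything else here is hypothetical):

```python
import asyncio

_migration_locks: dict[str, asyncio.Lock] = {}


async def migrate_once(entry_id: str, data: dict, do_migration) -> dict:
    """Run the device migration at most once per entry."""
    lock = _migration_locks.setdefault(entry_id, asyncio.Lock())  # per-entry, not global
    async with lock:
        if data.get("migration_devices_v2"):
            return data  # already migrated, nothing to do
        await do_migration()  # may raise: flag stays unset, so the next setup retries
        return {**data, "migration_devices_v2": True}  # persisted only after success
```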

5. Serial Number Update Race Condition

Location: coordinator.py:183-191

The _update_serial() method updates both instance state and config entry data without coordination. If called multiple times concurrently (e.g., from push updates and discovery), the last writer wins.

Recommendation:

  • Add locking around serial updates
  • Or make serial updates idempotent and queue them

6. Incomplete Error Recovery in _async_update_data

Location: coordinator.py:365-394

The retry logic attempts recovery twice but has unclear semantics:

attempts = 0
while attempts < 2:
    try:
        # ...
        recovered = await self._handle_connection_error(err)
        if recovered:
            attempts += 1
            continue

If recovery succeeds on the first attempt, it increments attempts to 1, then retries. If that fails and recovery succeeds again, it tries one more time. But the loop doesn't clearly communicate this intent.

Recommendation:

  • Rename attempts to retries for clarity
  • Add comments explaining the retry strategy
  • Consider a max retry constant
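All three suggestions combined might look like this sketch (names and the `ConnectionError`/`recover` split are illustrative, not the integration's actual API):

```python
MAX_RECOVERY_RETRIES = 2  # one named constant instead of a bare "while attempts < 2"


async def update_with_recovery(fetch, recover):
    """Fetch data, allowing up to MAX_RECOVERY_RETRIES recovery attempts
    on connection errors before giving up and re-raising."""
    retries = 0
    while True:
        try:
            return await fetch()
        except ConnectionError as err:
            # Stop when the retry budget is spent or recovery itself fails.
            if retries >= MAX_RECOVERY_RETRIES or not await recover(err):
                raise
            retries += 1
```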

🟡 Code Quality & Best Practices

7. Magic Numbers in Constants

Location: const.py:33-46

Several tuning constants lack justification:

  • REDISCOVER_COOLDOWN_SECONDS = 900 (15 min)
  • REDISCOVER_BACKOFF_MAX = 7200 (2 hours)
  • PROBE_TIMEOUT = 3.0

Recommendation: Add comments explaining why these specific values were chosen.

8. Complex Device Identifier Logic

Location: Multiple files (e.g., binary_sensor.py:48, button.py:51)

The device identifier fallback chain is repeated across multiple entity files:

ident = self._entry.unique_id or getattr(self.coordinator, "serial", None) or self._serial

Recommendation:

  • Extract to a helper method on the coordinator
  • Document the precedence order and why it's needed

9. Inconsistent Error Handling in Services

Location: __init_services__.py:40-67

The _handle_set_connection service silently fails if the entity or coordinator isn't found. This provides poor user feedback.

Recommendation:

  • Raise ServiceValidationError for invalid entity_id
  • Log warnings when operations fail silently
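A sketch of the fail-loudly lookup, with `ServiceValidationError` stubbed in for illustration (in Home Assistant it comes from `homeassistant.exceptions`):

```python
class ServiceValidationError(Exception):
    """Stand-in for homeassistant.exceptions.ServiceValidationError."""


def resolve_target(entity_id: str, known_entities: dict):
    """Return the coordinator for an entity, failing loudly instead of
    silently returning when the entity is unknown or unloaded."""
    coordinator = known_entities.get(entity_id)
    if coordinator is None:
        raise ServiceValidationError(f"Unknown or unloaded entity: {entity_id}")
    return coordinator
```

The raised error reaches the service caller as actionable UI feedback, rather than the call appearing to succeed while doing nothing.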

10. Missing Input Validation

Location: config_flow.py:98-106

The config flow doesn't validate that auto_discover and extended_discovery are compatible (e.g., extended requires auto-discover to be enabled).

Recommendation: Add validation in the config flow or set extended to False when auto-discover is disabled.
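The "force extended to False" variant is a one-liner; this sketch uses plain booleans and hypothetical key names rather than the integration's `CONF_*` constants:

```python
def normalize_discovery_options(auto_discover: bool, extended_discovery: bool) -> dict:
    """Extended discovery is meaningless without auto-discovery,
    so silently force it off in that case."""
    return {
        "auto_discover": auto_discover,
        "extended_discovery": extended_discovery and auto_discover,
    }
```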


🔵 Performance Considerations

11. Network Discovery Performance

Location: coordinator.py:220-283

The discovery process:

  • Creates tasks for up to 192 hosts
  • Runs 8 concurrent probes
  • Each probe has a 3-second timeout

Best case: Finds device in first batch (8 probes) = 3 seconds
Worst case: Scans all 192 hosts = 24 batches × 3 seconds = 72 seconds (with cancellation optimizations)

Recommendation:

  • This is reasonable, but document the expected time range
  • Consider reducing default REDISCOVER_MAX_HOSTS if not needed
  • Add progress logging for long scans

12. Redundant Device Registry Queries

Location: __init_services__.py:82-117

The _cleanup_devices service iterates over ALL devices and entities in the registry, then filters by entry ID. For large HA instances, this is inefficient.

Recommendation:

  • Use dev_reg.devices.get_devices_for_config_entry_id(entry_id) if available
  • Or cache the filtered list

🟢 Security Review

13. SSL Certificate Validation Disabled

Location: api.py:62-64

ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE

Assessment: This is acceptable for local IoT devices with self-signed certificates, but:

Recommendation:

  • Add a comment explaining why this is safe for local devices
  • Consider warning users in documentation that this disables SSL verification
  • Ensure the device is only accessible on local networks

14. Credentials in Config Entry

Location: __init_services__.py:40-67

The set_connection service allows updating credentials through a service call. Ensure this doesn't expose credentials in HA's service call history.

Assessment: HA typically redacts password fields in service call logs, so this should be safe.

Recommendation: Verify that CONF_PASSWORD is properly redacted in logs.


📝 Test Coverage

15. Missing Tests for New Features

Based on searching the test directory, there don't appear to be tests for:

  • Auto-discovery functionality
  • cleanup_devices service
  • set_connection service
  • Device migration logic
  • IP change detection

Recommendation:

  • Add integration tests for auto-discovery (can use mock network responses)
  • Add tests for device cleanup with duplicate devices
  • Add tests for the migration logic
  • Test edge cases (no serial, auth failures during discovery, etc.)

📚 Documentation & Maintainability

16. Commit Message Quality

The commit messages are excellent and follow conventional commits format:

  • feat: ✨ Add auto-discovery and manual reconnect features
  • ✅ Clear descriptions in commit bodies
  • ✅ Version bumps included

17. README Updates

The README was updated with clear instructions for the new features. Well done!

18. Missing Docstrings

Several new methods lack docstrings:

  • _update_serial()
  • _raise_issue() / _clear_issue()
  • _switch_host()

Recommendation: Add docstrings explaining when these methods should be called and any side effects.


✅ Positive Highlights

  1. Excellent feature design: Auto-discovery with configurable scope is well thought out
  2. Good user experience: Manual reconnect service provides fallback when auto-discovery fails
  3. Proper use of HA patterns: Issue registry integration, proper coordinator usage
  4. Backward compatibility: Default settings preserve existing behavior
  5. Clean separation of concerns: Discovery logic is well isolated in the coordinator
  6. Concurrency control: Proper use of asyncio primitives (locks, semaphores)

📋 Summary

Overall Assessment: This is a valuable feature addition that solves a real problem (IP changes in IoT devices). The code quality is good, but there are several areas that need attention before merging:

Must Fix (Blocking):

  1. Address bare exception handlers in critical paths
  2. Fix race condition in device migration
  3. Add tests for new functionality

Should Fix (Non-blocking but important):

  1. Add rate limiting documentation for network scanning
  2. Improve error messages in services
  3. Add docstrings to new methods
  4. Consolidate device identifier logic

Nice to Have:

  1. Performance optimizations for device registry queries
  2. Additional logging for debugging
  3. User-configurable discovery timeouts

Recommendation: Request changes for test coverage and critical bug fixes. The feature is well-designed and ready for production after addressing the above issues.

Great work overall! The auto-discovery feature will significantly improve the user experience for this integration. 🎉

…tity identification

* Added `device_identifier` method to `SiegeniaDataUpdateCoordinator` for consistent entity identification.
* Updated various components to utilize the new method for retrieving device identifiers.
* Enhanced error handling in service setup to raise `ServiceValidationError` for better debugging.
* Adjusted configuration flow to ensure `CONF_EXTENDED_DISCOVERY` is set correctly based on `CONF_AUTO_DISCOVER`.
@claude

claude bot commented Nov 27, 2025

Pull Request Review: Auto-Discovery and Manual Reconnect Features

Thank you for this substantial feature addition! I've reviewed the code and have feedback across several categories.


Strengths

  1. Well-structured feature: The auto-discovery mechanism is thoughtfully designed with proper backoff and rate limiting
  2. Comprehensive documentation: README updates clearly explain the new features
  3. Device migration logic: The duplicate device cleanup is a nice UX improvement
  4. Security-conscious: Auto-discovery is opt-in by default (line const.py:33)
  5. Good error handling: Connection errors are caught and handled gracefully

🔴 Critical Issues

1. Security: Network Scanning Without Clear User Consent (coordinator.py:224-287)

The auto-discovery feature scans up to 192 IP addresses across multiple subnets. While opt-in, the implications aren't clearly communicated:

  • Issue: Extended discovery scans common home subnets (192.168.0.0/24, 192.168.1.0/24, 10.0.0.0/24, 172.16.0.0/24) which could trigger security alerts on enterprise networks
  • Recommendation:
    • Add a warning in the UI when enabling extended discovery
    • Consider reducing REDISCOVER_MAX_HOSTS from 192 to something more reasonable (32-64)
    • Document that this feature should not be used on corporate networks

2. Potential Device Hijacking (coordinator.py:289-319)

The _probe_host() method authenticates to ANY device that accepts credentials:

serial = ((info or {}).get("data") or {}).get("serialnr")
if not serial or (self.serial and serial != self.serial):
    return None

Issue: If self.serial is None (possible on first setup), this could connect to a different Siegenia device on the network with the same credentials.

Recommendation: Make serial verification mandatory before accepting a rediscovered host.

3. Broad Exception Handling (Multiple locations)

Excessive use of bare except Exception with # noqa: BLE001:

  • Lines 150, 174, 312, 318, etc.

Issue: This can mask bugs and make debugging difficult. The noqa comments suggest awareness but not resolution.

Recommendation:

  • Catch specific exceptions where possible
  • Log at WARNING level for unexpected exceptions instead of DEBUG
  • Only use broad catches at true boundaries (e.g., push callbacks)

⚠️ Major Concerns

4. Race Condition in Issue Management (coordinator.py:197-222)

The issue lock protects the async operations but not the _issue_set flag check:

async def _raise_issue(self) -> None:
    async with self._issue_lock:
        if self._issue_set:  # ← Not atomic with lock acquisition
            return

Recommendation: Check the flag inside a synchronous lock or use atomic operations.

5. Incomplete Migration Rollback (__init__.py:95-108)

if not entry.data.get(MIGRATION_DEVICES_V2):
    # Set flag optimistically to avoid reruns if HA crashes mid-migration
    hass.config_entries.async_update_entry(entry, data={**entry.data, MIGRATION_DEVICES_V2: True})

Issue: If migration fails after setting the flag, it won't retry on next startup. Corrupted device registry states could persist.

Recommendation:

  • Set the flag AFTER successful migration
  • Add a repair flow to allow manual re-migration if needed

6. Password Exposure in Service Calls (services.yaml:23-59)

The set_connection service accepts passwords, and the description warns they're stored in history, but this is a significant security issue.

Recommendation:

  • Remove password parameter from the service entirely
  • Force users to use the UI config flow for credential changes
  • The UI already supports this via "Configure → Connection"

7. Network Performance Impact (coordinator.py:264-287)

Concurrent probing of up to 192 hosts with only 8 workers (REDISCOVER_CONCURRENCY) could take 192 * 3s / 8 = ~72 seconds in worst case.

Issue: This blocks the update loop and makes the integration unresponsive.

Recommendation:

  • Run discovery in a background task, not in the update loop
  • Add a timeout for the entire discovery process (e.g., 30s)
  • Show progress in the UI or logs
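A sketch of the background-task shape with a hard deadline; `probe_all` and `on_found` are hypothetical callables standing in for the integration's scan and host-switch logic:

```python
import asyncio


async def start_discovery(probe_all, on_found, overall_timeout: float = 30.0) -> asyncio.Task:
    """Run the subnet scan as a background task with an overall deadline,
    so the coordinator's update loop returns immediately instead of
    blocking for up to ~72 seconds."""

    async def _run() -> None:
        try:
            host = await asyncio.wait_for(probe_all(), overall_timeout)
        except asyncio.TimeoutError:
            return  # give up quietly; a later failure cycle can retry
        if host:
            on_found(host)

    return asyncio.create_task(_run())
```

In Home Assistant the task would typically be created via `hass.async_create_background_task` so it is tracked and cancelled on shutdown.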

🟡 Code Quality Issues

8. Duplicate Code in Device Cleanup (__init__.py:186-223, __init_services__.py:83-119)

The device cleanup logic appears twice with slight variations.

Recommendation: Extract to a shared helper function.

9. Type Safety (Multiple files)

Missing type hints on several new methods:

  • _on_push callback (coordinator.py:103)
  • _revert closure (coordinator.py:416)
  • Several service handlers

Recommendation: Add complete type annotations, especially for public APIs.

10. Magic Numbers in Configuration (const.py:41-47)

Hard-coded limits without clear justification:

REDISCOVER_COOLDOWN_SECONDS = 900  # 15 minutes
REDISCOVER_BACKOFF_MAX = 7200      # 2 hours
REDISCOVER_MAX_HOSTS = 192

Recommendation: Document the reasoning for these values or make them configurable.

11. Device Identifier Inconsistency (binary_sensor.py:48, button.py:51, etc.)

Multiple patterns for getting device identifiers:

# Pattern 1:
ident = getattr(self.coordinator, "device_identifier", lambda: None)() or self._serial

# Pattern 2:
getattr(self.coordinator, "device_identifier", lambda: None)() or info.get("serialnr") or self._entry.data.get("host")

Recommendation: Standardize on coordinator.device_identifier() everywhere since it's now a proper method.


🔵 Performance Considerations

12. Unnecessary Device Info Fetching (coordinator.py:300)

During discovery probing, the code fetches full device info:

info = await client.get_device()

Issue: This adds latency to each probe (could be 100ms+).

Recommendation: Cache device info from initial setup and only refresh it when serial changes.

13. Synchronous Device Registry Operations (__init__.py:206-223)

The migration performs multiple synchronous device registry operations in the async context.

Recommendation: Batch operations where possible or use async variants if available.


🟢 Minor Issues & Style

14. Inconsistent String Formatting

Mix of f-strings and format():

f"Siegenia {host}"  # Good
ipaddress.ip_network(f"{self.host}/24", strict=False)  # Could be clearer

Recommendation: Consistently use f-strings for all formatting.

15. Unclear Variable Names

  • nets (coordinator.py:240) → candidate_networks
  • deduped (coordinator.py:257) → unique_candidates
  • semapho (coordinator.py:264) → truncated identifier

Recommendation: Use descriptive names throughout.

16. Missing Docstrings

Several new methods lack docstrings:

  • _update_serial()
  • _switch_host()
  • _ensure_connected()

Recommendation: Add docstrings to all public and complex private methods.


🛡️ Security Summary

  1. ✅ Self-signed cert handling is appropriate for local LAN devices
  2. ✅ Auto-discovery is opt-in by default
  3. ⚠️ Password in service call history is a concern
  4. ⚠️ Network scanning could trigger security alerts
  5. ⚠️ Potential for device hijacking if serial is not checked

📊 Test Coverage

I couldn't run the tests, but I see test files exist. Please verify:

  • Auto-discovery with successful host change
  • Auto-discovery with wrong device (different serial)
  • Auto-discovery timeout/failure scenarios
  • Device migration with multiple devices
  • Device migration failure handling
  • set_connection service with various parameter combinations

📝 Recommendations Summary

Must Fix Before Merge:

  1. Make serial verification mandatory in device probing
  2. Remove password parameter from set_connection service
  3. Run discovery in background, not blocking the update loop
  4. Fix migration flag timing (set after success, not before)

Should Fix:
5. Reduce scanning scope (fewer hosts, add timeout)
6. Improve exception handling (specific types, better logging)
7. Deduplicate device cleanup code
8. Add UI warning for extended discovery

Nice to Have:
9. Complete type annotations
10. Add docstrings to new methods
11. Standardize device identifier access
12. Improve variable naming


Overall, this is a valuable feature addition, but it needs security and reliability improvements before merging. The core logic is sound, but the edge cases and failure modes need more attention.

Please let me know if you'd like me to elaborate on any of these points or if you have questions about the recommendations.

Copilot AI review requested due to automatic review settings February 4, 2026 16:16
@PrzemyslawKlys
Member Author

Updates pushed to address review items:

- rediscovery now refuses to probe if serial is unknown (prevents cross-device hijack)
- migration flag is set only after successful migration
- set_connection description in strings now warns about password history

Tests: python3 -m pytest -q

@claude

claude bot commented Feb 4, 2026

Pull Request Review: Auto-discovery and Reconnect Features

Summary

This PR adds substantial functionality for handling IP address changes in Siegenia window controllers, including auto-discovery, manual reconnection services, and device migration. The overall implementation is well-structured, but there are several areas that need attention.

Code Quality & Best Practices

Strengths ✅

  • Good architectural design with clear separation of concerns between discovery, connection handling, and device management
  • Comprehensive error handling with appropriate exception types
  • Backoff strategy for rediscovery prevents tight loops (coordinator.py:514-590)
  • Push callback preservation when switching hosts (coordinator.py:743-747)
  • Well-documented service definitions in services.yaml and strings.json

Issues to Address ⚠️

1. Broad Exception Handling

Multiple instances use except Exception: pass which can hide serious bugs:

coordinator.py:139-141:

except Exception:
    pass

Recommendation: Be specific about expected exceptions or at least log them for debugging.

__init__.py:96-98, 138-141:

except Exception as exc:  # noqa: BLE001
    coordinator.logger.debug("Device migration skipped: %s", exc)

Better approach: Catch specific exceptions (KeyError, ValueError, etc.) and decide if they warrant a warning vs debug log.

2. Potential Race Condition in Issue Registry

coordinator.py:605-617 and 619-627:

async def _raise_issue(self) -> None:
    async with self._issue_lock:
        if self._issue_set:
            return
        ir.async_create_issue(...)  # Not awaited!
        self._issue_set = True

The issue registry functions appear to be synchronous, but the lock pattern suggests concurrent access concerns. If async_create_issue can raise an exception, self._issue_set won't be updated correctly.

Recommendation: Add try/except around the issue creation/deletion and ensure consistent state.

3. Device Migration Logic Concerns

__init__.py:108-141:

  • The migration runs on every entry setup with a lock, but the lock key is stored in hass.data without cleanup
  • If _async_migrate_devices fails partially, the migration flag is still set
  • The function modifies device registry while potentially still initializing entities

Recommendation: Consider running migration as a separate repair flow or ensuring idempotency.

4. Service Handler Entity Lookup

__init_services__.py:173-176:

entity = hass.data["entity_components"]["cover"].get_entity(entity_id)

This assumes the entity is always a cover and uses internal HA structure directly.

Recommendation: Use entity_registry.async_get(entity_id) to validate, then get the coordinator through proper channels.

Potential Bugs 🐛

Critical

1. Serial Number Stability Issue

Throughout the codebase (binary_sensor.py:259, button.py:305, etc.), the serial is retrieved with:

serial = getattr(coordinator, "serial", None) or (coordinator.device_info or {}).get(...)

If coordinator.serial is "" (empty string), it is falsy in Python, so the or chain silently skips it and falls back to the next value; if every fallback is also empty, entities end up with an empty identifier. This cascades through all entities.

Fix: Distinguish "attribute missing" (None) from "device reported an empty serial" ("") with explicit None checks, and validate that the final identifier is non-empty before using it.

2. Rediscovery Can Match Wrong Device

coordinator.py:707-709:

serial = ((info or {}).get("data") or {}).get("serialnr")
if not serial or (self.serial and serial != self.serial):
    return None

If self.serial is None (possible on first run), ANY device that responds will be accepted.

Fix: Require serial match: if not serial or not self.serial or serial != self.serial:

3. Concurrent Data Update During Host Switch

coordinator.py:728-762:
When switching hosts, there's no guarantee that _async_update_data isn't running concurrently. The coordinator might try to use the old client while it's being disconnected.

Recommendation: Add a coordinator-level lock around client operations or ensure updates are paused during host switches.

Medium

4. Config Entry Update Without Validation

__init_services__.py:196-198:

hass.config_entries.async_update_entry(entry, data=new_data)
await hass.config_entries.async_reload(entry.entry_id)

If the new host/credentials are invalid, the entry is already updated. The reload will fail, leaving the integration in a broken state with wrong config.

Fix: Validate connection BEFORE updating the entry, or handle reload failure by reverting.
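The validate-first ordering can be sketched like this; `probe`, `update_entry`, and `reload_entry` are hypothetical callables standing in for the client probe and config-entry operations:

```python
async def apply_connection_update(probe, update_entry, reload_entry, new_host: str) -> None:
    """Probe the new host BEFORE persisting it, so an unreachable host
    never lands in the config entry."""
    if not await probe(new_host):
        raise ValueError(f"Cannot reach {new_host}; config entry left unchanged")
    update_entry(new_host)
    await reload_entry()
```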

5. Extended Discovery Subnet List Hardcoded

coordinator.py:646-651:

common = ["192.168.0.0/24", "192.168.1.0/24", "10.0.0.0/24", "172.16.0.0/24"]

Missing common ranges like 192.168.2.0/24, 10.0.1.0/24, etc.

Suggestion: Consider using the current host's /16 subnet instead of hardcoded ranges.
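One way to derive candidates from the host's own network rather than a hardcoded list, sketched with the standard-library `ipaddress` module (the prefix and limit defaults are illustrative, and a capped enumeration matters for a /16, which holds ~65k addresses):

```python
import ipaddress
import itertools


def candidate_hosts(current_host: str, prefix: int = 16, limit: int = 64) -> list[str]:
    """Derive probe candidates from the current host's own network
    instead of a hardcoded subnet list, capped at `limit` addresses."""
    net = ipaddress.ip_network(f"{current_host}/{prefix}", strict=False)
    others = (str(h) for h in net.hosts() if str(h) != current_host)
    return list(itertools.islice(others, limit))
```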

6. Device Cleanup Service Can Fail Silently

__init_services__.py:229:

devices = dev_reg.async_entries_for_config_entry(target_entry_id) if hasattr(dev_reg, "async_entries_for_config_entry") else [d for d in dev_reg.devices.values() if target_entry_id in d.config_entries]

The hasattr check suggests compatibility code, but this will silently use different logic on different HA versions, potentially causing inconsistent behavior.

Security Concerns 🔒

Medium Priority

1. Service Passwords Stored in History

services.yaml:980 and strings.json:1068:
The warning is present in the description, but users might miss it. Consider:

password:
  required: false
  selector:
    text:
      type: password
      # Add deprecation notice or remove from service entirely

Recommendation: Strongly consider removing password from the service definition and requiring users to use the UI config flow for credential updates.

2. IP Discovery Port Scanning

The extended discovery feature scans up to 192 IPs across multiple subnets (const.py:430). While limited to port 443 and with proper authentication, this could:

  • Trigger IDS/IPS alerts on some networks
  • Be considered network reconnaissance
  • Cause performance issues on slower networks

Recommendation:

  • Add more prominent warnings in the UI
  • Consider requiring explicit user confirmation for extended discovery
  • Add rate limiting between probes

3. SSL Verification Disabled

While necessary for self-signed certs (api.py:62-64), this is worth noting:

ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE

Status: Acceptable for local IoT devices, but ensure documentation mentions this.

Performance Considerations ⚡

Issues

1. Discovery Blocking

coordinator.py:676-692:
The discovery scans up to 192 hosts with 8 concurrent connections and 3-second timeouts. Worst case: ~72 seconds to complete.

During this time, _async_update_data is blocking, which delays all coordinator updates.

Recommendation:

  • Run discovery in background task instead of blocking update
  • Provide user feedback during discovery (persistent notification)
  • Add discovery progress tracking

2. Device Registry Iteration

__init__.py:115-116 and __init_services__.py:229:

devices = [d for d in dev_reg.devices.values() if entry.entry_id in d.config_entries]

Iterates ALL devices in the registry. For large installations (100+ devices), this is inefficient.

Fix: Use async_entries_for_config_entry() which is indexed (with fallback for older HA versions).

3. Duplicate Serial Updates

coordinator.py:592-600:

def _update_serial(self, serial: str | None) -> None:
    if not serial:
        return
    if serial == self.serial:
        return
    self.serial = serial
    data = dict(self.entry.data)
    data[CONF_SERIAL] = serial
    self.hass.config_entries.async_update_entry(self.entry, data=data)

This updates the config entry on EVERY serial change, even though it should only happen once. Consider adding a flag to track if the serial was persisted.

Test Coverage 📋

Current State

Only one test file change:

# tests/conftest.py:1184
def __init__(self, host, port=DEFAULT_PORT, session=None, logger=None, ws_protocol="wss", **_):

Missing Coverage

The PR adds 653 lines of new code but minimal test updates. Critical areas lacking tests:

  1. IP Discovery Logic (coordinator.py:629-692)

    • Subnet calculation
    • Serial matching
    • Concurrent probe cancellation
    • Backoff behavior
  2. Host Switching (coordinator.py:728-762)

    • Config entry updates
    • Client recreation
    • Push callback preservation
  3. Device Migration (__init__.py:108-141)

    • Device merging logic
    • Entity reassignment
    • Identifier preservation
  4. Service Handlers (__init_services__.py)

    • set_connection service
    • cleanup_devices service
    • Error handling

Recommendation: Add comprehensive unit tests for these critical paths before merging.

Additional Recommendations

Documentation

  1. Add inline comments explaining the device identifier strategy (serial vs host vs unique_id fallback chain)
  2. Document the migration path for existing installations
  3. Add troubleshooting section for discovery failures

Code Organization

  1. Consider extracting discovery logic to a separate module (discovery.py)
  2. The coordinator is now 450+ lines; consider splitting into coordinator + connection manager
  3. Extract device migration to a separate helper module

User Experience

  1. Add a sensor showing discovery status (idle/scanning/found/failed)
  2. Provide a button to manually trigger discovery instead of waiting for failure
  3. Show last successful connection time in device info

Summary

This is a feature-rich PR that addresses real user needs. The core functionality is sound, but there are several reliability and edge-case issues that should be addressed before merging:

Must Fix:

  • Serial number empty string handling
  • Rediscovery device matching when serial is None
  • Service password handling (consider removing)
  • Add tests for critical paths

Should Fix:

  • Exception handling specificity
  • Config entry update validation in service
  • Discovery blocking coordinator updates
  • Device migration error handling

Consider:

  • Breaking discovery into separate module
  • Adding user-facing discovery status
  • Improving performance for large installations

The PR shows good engineering practices overall, with thoughtful error handling and user-facing features. With the fixes above, this will be a solid addition to the integration.


Review completed by Claude Code


Copilot AI left a comment


Pull request overview

This PR enhances the Siegenia integration with IP auto-discovery and manual reconnection capabilities, and improves device identity stability across IP changes and migrations.

Changes:

  • Introduces coordinator-level auto-discovery and host rediscovery logic (with HA repairs issues) and wires it into config flow and options.
  • Adds new services (siegenia.set_connection, siegenia.cleanup_devices) plus corresponding translations, strings, and README documentation for manual reconnection and device cleanup.
  • Normalizes entity unique IDs and device identifiers using a stable serial/identifier across covers, sensors, buttons, numbers, updates, and binary_sensors, and adds a device-migration path.

Reviewed changes

Copilot reviewed 18 out of 19 changed files in this pull request and generated 13 comments.

Show a summary per file
File Description
tests/conftest.py Updates mock_client to accept ws_protocol and extra kwargs, keeping tests compatible with the extended SiegeniaClient usage.
custom_components/siegenia/update.py Uses a stable serial/identifier (via coordinator or entry) for the firmware update entity and devices, aligning with new device identity handling.
custom_components/siegenia/translations/en.json Adds issue translation for connection failures, new options labels for auto-discovery, and service descriptions for connection updates and cleanup.
custom_components/siegenia/strings.json Mirrors English strings for issues, options, and services for the frontend’s internal strings system.
custom_components/siegenia/services.yaml Defines new set_connection and cleanup_devices services, including selectors and fields for connection parameters and cleanup scoping.
custom_components/siegenia/sensor.py Switches sensors to use coordinator/entry-based serial and device_identifier for consistent unique IDs and device identifiers.
custom_components/siegenia/select.py Aligns select entities with the stable serial/identifier scheme and shared device_identifier.
custom_components/siegenia/number.py Adds stable unique ID/serial for the stopover number entity and introduces device_info so it participates in device registry identity.
custom_components/siegenia/manifest.json Bumps integration version from 1.0.0 to 1.1.7.
custom_components/siegenia/cover.py Uses coordinator serial and entry unique_id for cover unique IDs, and device_identifier for device registry identifiers.
custom_components/siegenia/coordinator.py Adds auto-discovery, rediscovery, stable serial storage, HA issue creation/clearing, push callback setup in __init__, and connection retry logic with backoff and host switching.
custom_components/siegenia/const.py Introduces config keys and defaults for auto/extended discovery, issue IDs, and rediscovery tuning constants.
custom_components/siegenia/config_flow.py Extends config and options flows with auto/extended discovery flags and persists serial; also keeps extended discovery disabled when auto-discover is off.
custom_components/siegenia/button.py Normalizes button entity serial/identifier and device_info to use coordinator device_identifier.
custom_components/siegenia/binary_sensor.py Normalizes binary sensors’ serial and device_info.identifiers via coordinator device_identifier.
custom_components/siegenia/__init_services__.py Adds set_connection and cleanup_devices services and wires them into HA, including config entry updates and registry-based device merging.
custom_components/siegenia/__init__.py Passes new flags to the coordinator, softens initial connection failures, and adds a one-time device migration helper to merge duplicate devices.
custom_components/siegenia/services.yaml (Same file as above listing) Documents and structures the new services for HA’s service UI.
custom_components/siegenia/strings.json (Same file as above listing) Provides localized names/descriptions for the new services and options in HA UI.
README.md Documents auto-discover IP changes, manual reconnect via options/service, and duplicate device cleanup workflow.


Comment on lines +56 to +64
def device_info(self):
    info = (self.coordinator.device_info or {}).get("data", {})
    ident = getattr(self.coordinator, "device_identifier", lambda: None)() or self._serial
    return {
        "identifiers": {(DOMAIN, ident)},
        "manufacturer": "Siegenia",
        "name": info.get("devicename") or "Siegenia Device",
        "model": info.get("type"),
    }

Copilot AI Feb 4, 2026

SiegeniaStopoverNumber.device_info uses the raw type field as the model string, whereas other entities (e.g. covers, sensors, binary_sensors, update entities) consistently call resolve_model(info) to expose a friendly model name. Because all of these entities share the same device identifiers, this mismatch can cause the less friendly numeric model to overwrite the nicer resolved model in the device registry. For consistency with the rest of the integration and to avoid clobbering the model, consider using resolve_model(info) here as well.

Copilot uses AI. Check for mistakes.
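A minimal sketch of the suggested fix, assuming `resolve_model` maps the raw `type` code to a friendly name as the other entities do (the mapping and coordinator stub below are illustrative, not the integration's real helpers):

```python
# Sketch only: resolve_model() and the coordinator stub are stand-ins
# for the integration's real helpers.
DOMAIN = "siegenia"

def resolve_model(info: dict) -> str:
    # Hypothetical friendly-name lookup for the raw numeric "type" field.
    return {6: "AEROPAC"}.get(info.get("type"), str(info.get("type")))

class SiegeniaStopoverNumber:
    def __init__(self, coordinator, serial):
        self.coordinator = coordinator
        self._serial = serial

    def device_info(self):
        info = (self.coordinator.device_info or {}).get("data", {})
        ident = getattr(self.coordinator, "device_identifier", lambda: None)() or self._serial
        return {
            "identifiers": {(DOMAIN, ident)},
            "manufacturer": "Siegenia",
            "name": info.get("devicename") or "Siegenia Device",
            "model": resolve_model(info),  # friendly name, matching the other entities
        }
```

Because all entities share the same device identifiers, whichever entity registers last wins in the device registry, so keeping `model` consistent everywhere avoids the clobbering described above.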
Comment on lines 80 to +120
hass.services.async_register(DOMAIN, "set_mode", _handle_set_mode)
hass.services.async_register(DOMAIN, "set_connection", _handle_set_connection)

async def _cleanup_devices(call: ServiceCall) -> None:
"""Merge duplicate devices for a specific entry and remove empty leftovers.

Accepts optional entity_id to scope; otherwise uses the first Siegenia entry.
"""
dev_reg = dr.async_get(hass)
ent_reg = er.async_get(hass)

target_entry_id: str | None = None
entity_id = call.data.get("entity_id")
if entity_id:
ent = ent_reg.async_get(entity_id)
if ent and ent.config_entry_id:
target_entry_id = ent.config_entry_id
if not target_entry_id:
entries = hass.config_entries.async_entries(DOMAIN)
if not entries:
raise ServiceValidationError("No Siegenia entries found for cleanup")
target_entry_id = entries[0].entry_id

devices = dev_reg.async_entries_for_config_entry(target_entry_id) if hasattr(dev_reg, "async_entries_for_config_entry") else [d for d in dev_reg.devices.values() if target_entry_id in d.config_entries]
if not devices:
raise ServiceValidationError("No devices found for this Siegenia entry")

primary = max(devices, key=lambda d: len(d.identifiers))

for dev in devices:
if dev.id == primary.id:
continue
ents = ent_reg.async_entries_for_device(dev.id) if hasattr(ent_reg, "async_entries_for_device") else [e for e in ent_reg.entities.values() if e.device_id == dev.id]
for ent in ents:
ent_reg.async_update_entity(ent.entity_id, device_id=primary.id)
try:
dev_reg.async_remove_device(dev.id)
except Exception:
pass

hass.services.async_register(DOMAIN, "cleanup_devices", _cleanup_devices)

Copilot AI Feb 4, 2026

The new siegenia.set_connection and siegenia.cleanup_devices services introduce non-trivial behavior (updating config entry connection details and merging/removing devices) but currently lack any direct tests in tests/ (existing service tests only cover set_mode, reboot_device, reset_device, and renew_cert in test_cover_and_services.py). Given that the rest of this module’s services are exercised by tests, it would be good to add coverage for these new services to validate typical and error cases (e.g. invalid entity_id, merging devices, and preserving serial/identifiers).

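The merge heuristic itself is easy to exercise outside Home Assistant; a minimal sketch with plain stand-ins for the registries (names hypothetical, not the HA API):

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    id: str
    identifiers: set = field(default_factory=set)

@dataclass
class Entity:
    entity_id: str
    device_id: str

def merge_duplicates(devices, entities):
    """Pick the device with the most identifiers as primary and
    reassign all entities from the duplicates to it."""
    primary = max(devices, key=lambda d: len(d.identifiers))
    removed = []
    for dev in devices:
        if dev.id == primary.id:
            continue
        for ent in entities:
            if ent.device_id == dev.id:
                ent.device_id = primary.id
        removed.append(dev.id)
    return primary, removed
```

A real test would drive `siegenia.cleanup_devices` through the mock device/entity registries instead, but the assertions would mirror this: one surviving device, all entities reattached to it.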
Comment on lines 371 to +399
async def _async_update_data(self) -> dict[str, Any]:
try:
if not self.client.connected:
await self.client.connect()
await self.client.login(self.username, self.password)
await self.client.start_heartbeat(self.heartbeat_interval)
params = await self.client.get_device_params()
self._adjust_interval(params)
# Check warnings on polled data too
self._handle_warnings(params)
# Track last stable states per sash for UX when MOVING without a recent command
attempts = 0
while attempts < 2:
try:
states = ((params or {}).get("data") or {}).get("states") or {}
for k, v in states.items():
if v and v != "MOVING":
self._last_stable_state_by_sash[int(k)] = v
except Exception:
pass
return params
except AuthenticationError as err:
# Trigger reauth flow in HA
raise ConfigEntryAuthFailed from err
except Exception as err: # noqa: BLE001
raise UpdateFailed(err) from err
await self._ensure_connected()
params = await self.client.get_device_params()
self._adjust_interval(params)
# Check warnings on polled data too
self._handle_warnings(params)
await self._clear_issue()
# Track last stable states per sash for UX when MOVING without a recent command
try:
states = ((params or {}).get("data") or {}).get("states") or {}
for k, v in states.items():
if v and v != "MOVING":
self._last_stable_state_by_sash[int(k)] = v
except Exception:
pass
return params
except AuthenticationError as err:
raise ConfigEntryAuthFailed from err
except Exception as err: # noqa: BLE001
recovered = await self._handle_connection_error(err)
if recovered:
attempts += 1
continue
await self._raise_issue()
raise UpdateFailed(err) from err
# Should not reach here

Copilot AI Feb 4, 2026

The new auto-discovery and rediscovery logic in the coordinator (e.g. _handle_connection_error, _rediscover_host, _probe_host, and _switch_host) significantly affects how the integration behaves when the device IP changes, but there are no tests in tests/ that cover these paths. Since other coordinator behaviors (push vs. poll, warning events, etc.) are already tested, it would be valuable to add tests that exercise at least: enabling/disabling auto_discover, backoff behavior, successful vs. failed rediscovery, and updating the config entry’s host/serial fields when a new IP is found.

Comment on lines +271 to +286
done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
found = None
for d in done:
try:
found = d.result()
except Exception as exc: # noqa: BLE001
self.logger.debug("Probe task failed: %s", exc)
if found:
break
# Cancel remaining tasks
for t in pending:
t.cancel()
try:
await asyncio.gather(*pending, return_exceptions=True)
except Exception:
pass

Copilot AI Feb 4, 2026

In _rediscover_host, using asyncio.wait(..., return_when=asyncio.FIRST_COMPLETED) and then cancelling all remaining tasks means you only ever probe the first IP that finishes (often the first candidate), rather than scanning through the candidate list until one of them matches the device. This makes rediscovery much less effective than intended. Consider either awaiting all _probe_host tasks and returning the first non-None result, or iterating with asyncio.as_completed so that you can stop once a positive match is found without aborting other probes prematurely.

Suggested change
done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
found = None
for d in done:
try:
found = d.result()
except Exception as exc: # noqa: BLE001
self.logger.debug("Probe task failed: %s", exc)
if found:
break
# Cancel remaining tasks
for t in pending:
t.cancel()
try:
await asyncio.gather(*pending, return_exceptions=True)
except Exception:
pass
found: str | None = None
try:
for coro in asyncio.as_completed(tasks):
try:
result = await coro
except Exception as exc: # noqa: BLE001
self.logger.debug("Probe task failed: %s", exc)
continue
if result:
found = result
break
finally:
# Cancel any remaining tasks and wait for them to finish
for t in tasks:
if not t.done():
t.cancel()
try:
await asyncio.gather(*tasks, return_exceptions=True)
except Exception:
pass

Comment on lines +43 to +49
entity = hass.data["entity_components"]["cover"].get_entity(entity_id) # type: ignore[index]
if entity is None:
raise ServiceValidationError(f"Entity {entity_id} not found for siegenia.set_connection")
coordinator = getattr(entity, "coordinator", None)
entry = getattr(coordinator, "entry", None) if coordinator else None
if entry is None:
raise ServiceValidationError("Coordinator missing on entity for siegenia.set_connection")

Copilot AI Feb 4, 2026

The siegenia.set_connection service handler currently resolves the entity only from the cover entity component, but the service schema and translations describe entity_id as “Any entity from the device.” If a user passes a non-cover entity (e.g. a sensor from the same device), this will raise ServiceValidationError even though the service UI allows selecting it. To align behavior with the service description, either restrict the selector/schema to domain: cover or update the handler to resolve the config entry via the entity registry/device registry instead of assuming a cover entity.

Suggested change
entity = hass.data["entity_components"]["cover"].get_entity(entity_id) # type: ignore[index]
if entity is None:
raise ServiceValidationError(f"Entity {entity_id} not found for siegenia.set_connection")
coordinator = getattr(entity, "coordinator", None)
entry = getattr(coordinator, "entry", None) if coordinator else None
if entry is None:
raise ServiceValidationError("Coordinator missing on entity for siegenia.set_connection")
# Resolve the config entry from the entity registry (any domain), falling back
# to the device registry if needed. This allows using any entity from the device,
# not just the cover entity.
ent_reg = er.async_get(hass)
ent_entry = ent_reg.async_get(entity_id)
if ent_entry is None:
raise ServiceValidationError(
f"Entity {entity_id} not found for siegenia.set_connection"
)
entry = None
if ent_entry.config_entry_id:
entry = hass.config_entries.async_get_entry(ent_entry.config_entry_id)
# If the entity itself is not directly tied to a config entry, try via its device.
if entry is None and ent_entry.device_id:
dev_reg = dr.async_get(hass)
dev_entry = dev_reg.async_get(ent_entry.device_id)
if dev_entry is not None:
for config_entry_id in dev_entry.config_entries:
candidate = hass.config_entries.async_get_entry(config_entry_id)
if candidate is not None and candidate.domain == DOMAIN:
entry = candidate
break
if entry is None:
raise ServiceValidationError(
f"Config entry not found for entity {entity_id} in siegenia.set_connection"
)

Comment on lines +220 to +221
except Exception:
pass

Copilot AI Feb 4, 2026

'except' clause does nothing but pass and there is no explanatory comment.

Suggested change
except Exception:
pass
except Exception as exc: # noqa: BLE001
# Best-effort cleanup: failing to delete the issue is non-fatal.
self.logger.debug("Failed to delete issue '%s' for host %s: %s", ISSUE_UNREACHABLE, self.host, exc)

    t.cancel()
try:
    await asyncio.gather(*pending, return_exceptions=True)
except Exception:
    pass

Copilot AI Feb 4, 2026

'except' clause does nothing but pass and there is no explanatory comment.

Comment on lines +327 to +328
except Exception:
pass

Copilot AI Feb 4, 2026

'except' clause does nothing but pass and there is no explanatory comment.

Suggested change
except Exception:
pass
except Exception as exc: # noqa: BLE001
self.logger.debug("Error disconnecting Siegenia client during host switch: %s", exc)

Comment on lines +387 to +388
except Exception:
pass

Copilot AI Feb 4, 2026

'except' clause does nothing but pass and there is no explanatory comment.

Suggested change
except Exception:
pass
except Exception as exc: # noqa: BLE001
self.logger.debug(
"Failed to update last stable states from params: %s",
exc,
)

Comment on lines +453 to +454
except Exception:
pass

Copilot AI Feb 4, 2026

'except' clause does nothing but pass and there is no explanatory comment.

Suggested change
except Exception:
pass
except Exception as err:
logging.getLogger(__name__).debug(
"Failed to update serial from push payload: %s", err
)

Fix device automation mappings and add regressions
@PrzemyslawKlys PrzemyslawKlys merged commit 64d9207 into master Feb 4, 2026
5 checks passed
@claude

claude bot commented Feb 4, 2026

Pull Request Review: Auto-Discovery and Manual Reconnect Features

Summary

This PR adds significant enhancements for handling IP changes in Siegenia devices, including auto-discovery, manual reconnection services, and device cleanup utilities. Overall, the implementation is solid with good test coverage and thoughtful features.

Positive Highlights ✅

Code Quality

  • Excellent test coverage: New test files cover config flow, regressions, and edge cases comprehensively
  • Clean architecture: Separation of concerns between coordinator, config flow, and services
  • Good error handling: Proper exception handling with fallbacks throughout
  • Type hints: Good use of type annotations for maintainability

Features

  • Auto-discovery with backoff: Smart exponential backoff prevents network flooding (coordinator.py:171-189)
  • Graceful degradation: Setup continues even if initial connection fails (__init__.py:98-100)
  • Device migration: Automatic cleanup of duplicate devices (__init__.py:197-231)
  • Push updates: Smart interval adjustment based on device state (coordinator.py:407-480)

Issues & Concerns 🔴

1. Security: Broad Exception Catching (High Priority)

Multiple instances of except Exception: pass silently swallow errors that may indicate security issues:

coordinator.py:154-156, 318-325

except Exception as exc:  # noqa: BLE001
    self.logger.debug("Unexpected ensure_connected failure: %s", exc)
    raise UpdateFailed(exc) from exc

Recommendation: Catch specific exceptions. At minimum, distinguish between network errors (ClientConnectorError, TimeoutError) and authentication errors.

coordinator.py:322-325 - Silent disconnect failures during probing could leak connections:

finally:
    try:
        await asyncio.wait_for(client.disconnect(), timeout=2.0)
    except Exception as exc:  # noqa: BLE001
        self.logger.debug("Probe cleanup failed for %s: %s", host, exc)

Recommendation: Log at warning level if cleanup fails to track potential resource leaks.

2. Network Scanning Security Concerns (High Priority)

coordinator.py:228-291 - The rediscovery function scans up to 192 IP addresses:

  • Default cooldown of 900 seconds (15 min) may still be aggressive for some networks
  • Extended discovery probes common home subnets which could trigger IDS alerts
  • No rate limiting beyond concurrent semaphore

Recommendations:

  • Add a user-configurable max scan limit in options
  • Consider adding a random jitter to backoff timing
  • Log at INFO level when starting a rediscovery scan (currently only DEBUG)
  • Document that extended discovery may trigger network security alerts
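The jitter recommendation could look like this (a sketch, not the integration's actual backoff code):

```python
import random

def next_backoff(attempt: int, base: float = 60.0, cap: float = 900.0) -> float:
    """Exponential backoff capped at `cap`, with +/-20% random jitter
    so multiple instances don't rescan the subnet in lockstep."""
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(0.8, 1.2)
```

With these defaults the delay grows 60s, 120s, 240s, ... and settles around the 900-second cooldown, plus or minus the jitter window.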

3. Race Condition in Service Registration (__init__.py:116-120)

marker = f"{DOMAIN}_services_registered"
if not hass.data.get(marker):
    await async_setup_services(hass)
    hass.data[marker] = True

Issue: Not atomic - multiple concurrent setups could register services twice.

Recommendation:

if marker not in hass.data:
    async with hass.data.setdefault(f"{marker}_lock", asyncio.Lock()):
        if marker not in hass.data:
            await async_setup_services(hass)
            hass.data[marker] = True

4. Potential Resource Leak (coordinator.py:274-290)

Task cancellation after FIRST_COMPLETED may not properly clean up all probe connections.

Recommendation: Add timeout to the gather to ensure cleanup:

for t in pending:
    t.cancel()
await asyncio.wait_for(
    asyncio.gather(*pending, return_exceptions=True),
    timeout=5.0
)

5. Private Attribute Injection (__init__.py:89-90)

coordinator._motion_interval = timedelta(seconds=motion_s)  # type: ignore[attr-defined]
coordinator._idle_interval = timedelta(seconds=idle_s)      # type: ignore[attr-defined]

Issue: Setting private attributes from outside the class breaks encapsulation and can cause maintenance issues.

Recommendation: Add proper setter methods or constructor parameters to the coordinator class.
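One way to replace the attribute injection with a supported API (a sketch; the real coordinator's constructor takes many more parameters):

```python
from datetime import timedelta

class SiegeniaCoordinator:
    def __init__(self, *, motion_interval_s: int = 5, idle_interval_s: int = 60):
        # Intervals become explicit constructor parameters instead of
        # private attributes patched in from __init__.py.
        self._motion_interval = timedelta(seconds=motion_interval_s)
        self._idle_interval = timedelta(seconds=idle_interval_s)

    def set_intervals(self, motion_s: int, idle_s: int) -> None:
        """Public setter so callers never touch the private fields."""
        self._motion_interval = timedelta(seconds=motion_s)
        self._idle_interval = timedelta(seconds=idle_s)
```

This also removes the need for the `# type: ignore[attr-defined]` comments, since the attributes are part of the class's declared interface.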

Performance Considerations ⚡

1. Efficient Subnet Scanning (coordinator.py:252-258)

Good: Sorts hosts by distance from last known IP, reducing average discovery time.
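That ordering can be sketched in a few lines (assuming a /24 scan centred on the last known address; names hypothetical):

```python
import ipaddress

def candidates_by_distance(last_ip: str) -> list:
    """All /24 neighbours of last_ip, nearest last-octet first."""
    net = ipaddress.ip_network(f"{last_ip}/24", strict=False)
    last = int(last_ip.rsplit(".", 1)[1])
    hosts = [str(h) for h in net.hosts() if str(h) != last_ip]
    # Stable sort: ties (e.g. .49 vs .51 around .50) keep ascending order.
    return sorted(hosts, key=lambda h: abs(int(h.rsplit(".", 1)[1]) - last))
```

Devices that keep their DHCP lease usually land on a nearby octet, so probing in this order finds the new address after only a handful of attempts.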

2. Push-Based Updates (coordinator.py:407-432)

Excellent: Dynamically adjusts polling interval based on push events, reducing unnecessary polls.

3. Concurrent Probing (coordinator.py:268-273)

Good: Uses semaphore (concurrency=8) to balance speed vs network load.

Suggestion: Make REDISCOVER_CONCURRENCY configurable in advanced options for users with different network capabilities.
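A sketch of semaphore-bounded probing that stops at the first match (illustrative only; the integration's own `_probe_host` and task handling differ):

```python
import asyncio

async def probe_all(hosts, probe, concurrency: int = 8):
    """Run `probe(host)` for every host with at most `concurrency`
    probes in flight; return the first truthy result, else None."""
    sem = asyncio.Semaphore(concurrency)

    async def bounded(host):
        async with sem:
            return await probe(host)

    tasks = [asyncio.ensure_future(bounded(h)) for h in hosts]
    found = None
    try:
        for fut in asyncio.as_completed(tasks):
            result = await fut
            if result:
                found = result
                break
    finally:
        # Cancel whatever is still pending and wait for cleanup.
        for t in tasks:
            if not t.done():
                t.cancel()
        await asyncio.gather(*tasks, return_exceptions=True)
    return found
```

Making `concurrency` a tunable (as suggested above) would just mean threading an options value into this parameter.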

Potential Bugs 🐛

1. Config Flow Extended Discovery Logic (config_flow.py:105-106, 263-264)

if not data.get(CONF_AUTO_DISCOVER, False):
    data[CONF_EXTENDED_DISCOVERY] = False

Issue: Silently forces extended_discovery to False when auto_discover is disabled. This is correct logic but may confuse users if they explicitly set extended_discovery=True.

Recommendation: Add a validation warning in the UI when this happens.

2. Missing Validation (init_services.py:161-169)

def _parse_duration(text: str) -> tuple[int, int]:
    text = str(text).strip()
    if ":" in text:
        hh, mm = text.split(":", 1)
        return int(hh), int(mm)
    mins = int(text)
    return mins // 60, mins % 60

Issue: No bounds checking or error handling. Negative values or very large values could cause issues.

Recommendation:

def _parse_duration(text: str) -> tuple[int, int]:
    text = str(text).strip()
    if ":" in text:
        hh, mm = text.split(":", 1)
        h, m = int(hh), int(mm)
    else:
        mins = int(text)
        h, m = mins // 60, mins % 60
    
    if h < 0 or m < 0 or m >= 60:
        raise ValueError(f"Invalid duration: {text}")
    return h, m

3. Inconsistent State Handling (coordinator.py:388-393)

Only tracks stable states from polled data, not from all update paths. This could cause get_last_stable_state() to return stale data after certain operations.

Test Coverage 📝

Excellent additions:

  • test_config_flow.py: 152 lines covering user flow, auth errors, reauth
  • test_regressions.py: 140 lines covering device triggers, options, intervals

Missing tests:

  • Auto-discovery scenarios (no connection, successful rediscovery, backoff behavior)
  • Manual reconnection service
  • Device cleanup service with multiple duplicate devices
  • Push update handling and interval adjustments
  • Extended discovery mode

Recommendation: Add integration tests for the new auto-discovery feature, especially edge cases like authentication failures during rediscovery.

Documentation 📚

Good:

  • README.md updated with clear instructions for new features
  • services.yaml documents all service parameters
  • Translation files updated (en, de, fr)

Needs Improvement:

  • No inline documentation for complex algorithms (e.g., rediscovery logic)
  • Discovery constants in const.py lack comments explaining tuning rationale
  • No migration guide for users upgrading from previous versions

Best Practices Violations ⚠️

  1. Magic Numbers: Several hardcoded values that should be constants:

    • coordinator.py:82: poll_interval * 6 (push interval multiplier)
    • coordinator.py:84: poll_interval * 10 (idle interval multiplier)
    • coordinator.py:85: 60 (push idle timeout)
    • coordinator.py:530: 10 (motion revert delay)
  2. Testing Environment Detection: __init__.py:127

testing = os.environ.get("PYTEST_CURRENT_TEST") is not None

Better to use a proper test fixture or mock rather than checking environment variables.
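A sketch of the injectable alternative (hypothetical helper name and data key; the real setup code differs):

```python
# Instead of sniffing PYTEST_CURRENT_TEST, make the "are we under test?"
# decision injectable so tests can opt out explicitly.
def should_run_migration(hass_data: dict, *, testing=None) -> bool:
    """Run the one-time device migration unless tests opt out."""
    if testing is None:
        testing = bool(hass_data.get("siegenia_testing"))
    return not testing
```

A test fixture then sets `hass.data["siegenia_testing"] = True` (or passes `testing=True`) rather than relying on an environment variable the test runner happens to export.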

Security Best Practices ✅

Good:

  • Credentials are passed through secure config entry (not logged)
  • WebSocket protocol properly validated in schema
  • No hardcoded secrets
  • Issue registry used for error reporting to users

Could be improved:

  • No explicit rate limiting on failed authentication attempts
  • Rediscovery scans could be used for network reconnaissance (document clearly)

Breaking Changes 🔥

None identified. Changes are additive with sensible defaults.

Recommendations Summary

Must Fix (Before Merge)

  1. Fix race condition in service registration
  2. Add proper exception handling (avoid bare except Exception)
  3. Add validation to _parse_duration service function
  4. Fix resource cleanup in probe task cancellation

Should Fix (Near Future)

  1. Make discovery concurrency configurable
  2. Add comprehensive tests for auto-discovery
  3. Add inline documentation for complex algorithms
  4. Convert private attribute injection to proper API
  5. Add rate limiting for authentication retries

Nice to Have

  1. Extract magic numbers to named constants
  2. Add metrics/telemetry for discovery performance
  3. Add user notification when extended discovery triggers
  4. Consider adding a "test connection" button in config flow

Overall Assessment

Rating: 7.5/10

This is a well-implemented feature with good separation of concerns and solid fundamentals. The main concerns are around exception handling, a minor race condition, and test coverage for the new discovery features. With the high-priority issues addressed, this would be an excellent addition to the integration.

The auto-discovery feature is particularly well-designed with exponential backoff and sensible limits. The manual reconnection service is a great UX improvement for users with changing network configurations.


Great work on this PR! The features are valuable and the implementation shows good software engineering practices. Please address the security and race condition concerns before merging.

@PrzemyslawKlys PrzemyslawKlys deleted the AutoDetect branch February 4, 2026 21:13