feat: ✨ Add auto-discovery and manual reconnect features for Siegenia… #1

PrzemyslawKlys merged 13 commits into master
Conversation
… integration

* Enabled auto-discovery of IP changes by default, allowing the integration to automatically update the host if the controller gets a new IP on the same /24 subnet.
* Introduced a manual reconnect service to update host/credentials even when the device is offline, enhancing user control over the connection settings.
* Updated various components to utilize the new `serial` identifier for device management, ensuring stability across IP changes.
* Improved error handling during connection setup and device updates, allowing for better resilience and user feedback.
* Updated documentation and service definitions to reflect new features and usage instructions.
* Introduced `siegenia.cleanup_devices` service to merge duplicate devices and remove empty legacy ones.
* Enhanced issue management by implementing `_raise_issue` and `_clear_issue` methods for better connectivity error handling.
* Updated documentation and strings for clarity on connection issues.
* Bumped version to 1.1.5.
…ents

* Added a screenshot of the Siegenia device page to enhance documentation.
* Removed obsolete `requirements_test.txt` file as it is no longer needed.
* Introduced `requirements_test.txt` to specify testing dependencies.
* Ensures compatibility with pytest and its plugins for effective testing.
💡 Codex Review
Here are some automated review suggestions for this pull request.
```python
serial = entry.data.get(CONF_SERIAL) or entry.unique_id
# Collect all devices belonging to this domain
devices = [d for d in dev_reg.devices.values() if any(idt[0] == DOMAIN for idt in d.identifiers)]
if not devices:
    return
```
Scope device migration to the current entry
The new _async_migrate_devices routine collects every device whose identifier starts with the Siegenia domain and then merges them into a single primary device. Because it isn’t scoped to the current config entry or serial (see the domain-wide device list built here), adding or reloading one entry when multiple controllers are configured will collapse devices from other entries and move their entities onto the wrong device before deleting the originals. Filter the devices by this entry’s identifiers/serial before merging to avoid cross-device corruption.
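To illustrate the suggested fix, here is a minimal, self-contained sketch of entry-scoped filtering; `Device` and `devices_for_entry` are plain stand-ins for Home Assistant's registry objects, not the integration's real API:

```python
from dataclasses import dataclass, field

DOMAIN = "siegenia"

@dataclass
class Device:
    # Simplified stand-in for a device registry entry.
    id: str
    identifiers: set = field(default_factory=set)
    config_entries: set = field(default_factory=set)

def devices_for_entry(devices, entry_id, serial):
    """Keep only devices that belong to this config entry and carry its serial."""
    return [
        d
        for d in devices
        if entry_id in d.config_entries
        and any(dom == DOMAIN and ident == serial for dom, ident in d.identifiers)
    ]

# Two controllers configured: migrating entry A must not touch entry B's device.
a = Device("dev-a", {(DOMAIN, "SER-A")}, {"entry-a"})
b = Device("dev-b", {(DOMAIN, "SER-B")}, {"entry-b"})
scoped = devices_for_entry([a, b], "entry-a", "SER-A")
```

With this filter in place, a reload of one entry can only ever merge devices that share that entry's serial.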
Pull Request Review: Auto-discovery and Manual Reconnect Features

Summary

This PR introduces significant improvements to the Siegenia integration by adding auto-discovery of IP changes and manual reconnection capabilities. Overall, this is a well-structured enhancement that addresses a real user pain point (devices changing IPs). However, there are several areas that need attention before merging.

🔴 Critical Issues

1. Security: Network Scanning Without Rate Limiting (coordinator.py:196-257)

The …

Recommendation: …
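The recommendation above is truncated in this capture; one common shape for it is to cap concurrency with a semaphore and pace probe starts. A self-contained sketch with a faked probe (`probe`, `CONCURRENCY`, and `DELAY` are illustrative names, not the integration's):

```python
import asyncio

CONCURRENCY = 4   # illustrative cap on simultaneous probes
DELAY = 0.05      # illustrative pause before each probe starts

async def probe(host: str) -> bool:
    # Stand-in for a real connection attempt against one candidate IP.
    await asyncio.sleep(0)
    return host.endswith(".42")

async def rate_limited_scan(hosts):
    sem = asyncio.Semaphore(CONCURRENCY)

    async def guarded(host):
        async with sem:
            await asyncio.sleep(DELAY)  # pace the scan instead of bursting
            return host if await probe(host) else None

    tasks = [asyncio.create_task(guarded(h)) for h in hosts]
    for fut in asyncio.as_completed(tasks):
        hit = await fut
        if hit:
            return hit  # asyncio.run() cancels the remaining tasks on exit
    return None

found = asyncio.run(rate_limited_scan([f"192.168.1.{i}" for i in range(1, 60)]))
```

The semaphore bounds the burst size and the per-probe delay keeps the scan from looking like a port sweep to IDS tooling.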
2. Broad Exception Handling (Multiple locations)

Extensive use of …

Recommendation: …
3. Race Condition in Issue Management (coordinator.py:171-194)

The …

Recommendation: Use …
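One way to address such a race, sketched with a plain flag standing in for HA's issue registry (this is an illustration, not the integration's actual code):

```python
import asyncio

class IssueManager:
    """Serialize raise/clear so a clear can never interleave with a raise."""

    def __init__(self):
        self._lock = asyncio.Lock()
        self.active = False

    async def raise_issue(self):
        async with self._lock:
            # ir.async_create_issue(...) would go here in the real coordinator.
            self.active = True

    async def clear_issue(self):
        async with self._lock:
            # ir.async_delete_issue(...) would go here.
            self.active = False

async def main():
    mgr = IssueManager()
    # Many concurrent raises followed by a clear; the lock keeps each atomic.
    await asyncio.gather(*(mgr.raise_issue() for _ in range(10)))
    await mgr.clear_issue()
    return mgr.active

still_active = asyncio.run(main())
```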
* Introduced `MIGRATION_DEVICES_V2` constant for device migration tracking.
* Updated `_async_migrate_devices` to merge duplicate devices scoped to a specific entry.
* Improved `cleanup_devices` service to allow optional `entity_id` for targeted cleanup.
* Added new translations and descriptions for cleanup functionality in `strings.json` and `en.json`.
* Bumped version to `1.1.7` in `manifest.json`.
Pull Request Review: Auto-discovery and Manual Reconnect Features

Summary

This PR adds significant functionality for handling IP address changes of Siegenia window controllers, including auto-discovery and manual reconnection capabilities. Overall, this is a well-structured enhancement with thoughtful error handling. Below are my findings across code quality, potential issues, performance, security, and testing.

✅ Strengths

Code Quality
…

Features
…
* Introduced `CONF_EXTENDED_DISCOVERY` to allow scanning of common home subnets.
* Updated documentation to reflect the optional nature of auto-discover IP changes.
* Enhanced service descriptions and user interface strings for clarity.
Pull Request Review

Overview

This PR adds significant new functionality including auto-discovery of IP changes, manual reconnection services, device cleanup, and improved error handling. The changes span 641 additions across 19 files.

✅ Strengths

Code Quality
…

Architecture
…

Documentation
…

🔴 Critical Issues

1. Security: Network Scanning Risk (coordinator.py:219-282)

The auto-discovery feature scans up to 192 IP addresses with concurrent probes. This could:
…

Recommendation: …
2. Credential Exposure in Logs (coordinator.py:284-314)

```python
async def _probe_host(self, host: str) -> str | None:
    ...
    await client.login(self.username, self.password)
```

Credentials are used for authentication attempts across multiple hosts. If logging is verbose, credentials could leak.

Recommendation: …
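A minimal illustration of the redaction idea, using a hypothetical helper rather than anything from the integration:

```python
def redact(message: str, secrets) -> str:
    """Replace any secret substring with a fixed marker before logging."""
    for secret in secrets:
        if secret:
            message = message.replace(secret, "***REDACTED***")
    return message

# Run the formatted message through redact() before handing it to the logger.
msg = redact("login failed for admin with password hunter2", ("hunter2",))
```

Passing every log line that might contain the password through such a filter keeps verbose/debug logs safe to share in bug reports.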
3. Bare Exception Handlers (Throughout)

Multiple instances of …

Recommendation:

```python
except Exception as exc:  # noqa: BLE001
    self.logger.warning("Device migration failed: %s", exc, exc_info=True)
```

4. Race Condition in Device Migration (init.py:95-105)

While there's a lock, the migration logic runs during setup and could interfere with entity registration:

```python
async with hass.data[_lock_key]:
    if not entry.data.get(MIGRATION_DEVICES_V2):
        await _async_migrate_devices(hass, entry)
```

Recommendation: …
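The pattern this points toward, persisting the migration flag only after the migration succeeds so a failed run is retried on the next setup, can be sketched without any HA dependencies (names here are illustrative):

```python
def run_migration_once(entry_data: dict, migrate) -> dict:
    """Run the device migration at most once; persist the flag only on success."""
    if entry_data.get("migration_devices_v2"):
        return entry_data  # already migrated, nothing to do
    migrate()  # may raise -> flag stays unset, so setup retries next time
    return {**entry_data, "migration_devices_v2": True}

def failing_migration():
    raise RuntimeError("registry unavailable")

state = {"host": "192.168.1.10"}
try:
    state = run_migration_once(state, failing_migration)
except RuntimeError:
    pass  # the flag must NOT have been persisted

retried = run_migration_once(state, lambda: None)  # succeeds, flag is set
```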
* Introduced an `extended_discovery` parameter in the `SiegeniaDataUpdateCoordinator` constructor.
* This allows for more flexible configuration of the discovery process.
Pull Request Review: Auto-Discovery and Manual Reconnect Features

Overview

This PR introduces significant functionality for handling IP address changes in Siegenia window controllers, along with device cleanup and manual reconnect capabilities. The implementation is generally solid but has several areas that need attention.

🔴 Critical Issues

1. Security: Network Scanning Without Rate Limiting

Location: …

The …

Recommendation: …

2. Bare Exception Handlers Throughout

Locations: Multiple (e.g., …)

The codebase extensively uses …

```python
# Example from coordinator.py:146
except Exception as exc:  # noqa: BLE001
    self.logger.debug("Unexpected ensure_connected failure: %s", exc)
    raise UpdateFailed(exc) from exc
```

Recommendation: …
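The narrowing the reviewers keep asking for can be sketched generically: catch only the expected transport errors and wrap them in a domain error, while letting genuine bugs propagate with a full traceback (all names below are illustrative, not the integration's):

```python
import asyncio

class DeviceUnavailable(Exception):
    """Domain-level error the coordinator can react to (retry, raise issue)."""

async def fetch(fail_with=None):
    # Stand-in for client.get_device_params(); raises to simulate failures.
    if fail_with is not None:
        raise fail_with
    return {"status": "ok"}

async def update(fail_with=None):
    try:
        return await fetch(fail_with)
    except (OSError, asyncio.TimeoutError) as err:
        # Expected transport failures: wrap into a domain error.
        raise DeviceUnavailable(str(err)) from err
    # Anything else (a genuine bug, e.g. ValueError) propagates unchanged.

ok = asyncio.run(update())

def outcome(exc):
    try:
        asyncio.run(update(exc))
        return "ok"
    except DeviceUnavailable:
        return "wrapped"
    except Exception:
        return "propagated"

network = outcome(OSError("unreachable"))
bug = outcome(ValueError("logic error"))
```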
3. Race Condition in Issue Management

Location: …

While …

Recommendation: …
…tity identification

* Added `device_identifier` method to `SiegeniaDataUpdateCoordinator` for consistent entity identification.
* Updated various components to utilize the new method for retrieving device identifiers.
* Enhanced error handling in service setup to raise `ServiceValidationError` for better debugging.
* Adjusted configuration flow to ensure `CONF_EXTENDED_DISCOVERY` is set correctly based on `CONF_AUTO_DISCOVER`.
Pull Request Review: Auto-Discovery and Manual Reconnect Features

Thank you for this substantial feature addition! I've reviewed the code and have feedback across several categories.

✅ Strengths
…

🔴 Critical Issues

1. Security: Network Scanning Without Clear User Consent (coordinator.py:224-287)

The auto-discovery feature scans up to 192 IP addresses across multiple subnets. While opt-in, the implications aren't clearly communicated:
…

2. Potential Device Hijacking (coordinator.py:289-319)

The …

```python
serial = ((info or {}).get("data") or {}).get("serialnr")
if not serial or (self.serial and serial != self.serial):
    return None
```

Issue: If …

Recommendation: Make serial verification mandatory before accepting a rediscovered host.

3. Broad Exception Handling (Multiple locations)

Excessive use of bare …

Issue: This can mask bugs and make debugging difficult.

Recommendation: …
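The mandatory serial check asked for in item 2 above reduces to a small predicate; this is an illustrative sketch, not the integration's code:

```python
def accept_rediscovered_host(known_serial, probed_serial) -> bool:
    """Accept a probed host only when both serials are known and match."""
    if not known_serial or not probed_serial:
        return False  # refuse to switch when identity can't be verified
    return probed_serial == known_serial

matrix = [
    accept_rediscovered_host("S1", "S1"),   # match -> switch hosts
    accept_rediscovered_host("S1", "S2"),   # mismatch -> refuse
    accept_rediscovered_host(None, "S2"),   # own serial unknown -> refuse
    accept_rediscovered_host("S1", None),   # probe returned no serial -> refuse
]
```

Making the "unknown serial" cases refuse (rather than accept) is what prevents the cross-device hijack scenario.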
Updates pushed to address review items:

- rediscovery now refuses to probe if serial is unknown (prevents cross-device hijack)
- migration flag is set only after successful migration
- set_connection description in strings now warns about password history

Tests: python3 -m pytest -q
Pull Request Review: Auto-discovery and Reconnect Features

Summary

This PR adds substantial functionality for handling IP address changes in Siegenia window controllers, including auto-discovery, manual reconnection services, and device migration. The overall implementation is well-structured, but there are several areas that need attention.

Code Quality & Best Practices

Strengths ✅
…

Issues to Address
…
Pull request overview
This PR enhances the Siegenia integration with IP auto-discovery and manual reconnection capabilities, and improves device identity stability across IP changes and migrations.
Changes:
- Introduces coordinator-level auto-discovery and host rediscovery logic (with HA repairs issues) and wires it into config flow and options.
- Adds new services (`siegenia.set_connection`, `siegenia.cleanup_devices`) plus corresponding translations, strings, and README documentation for manual reconnection and device cleanup.
- Normalizes entity unique IDs and device identifiers using a stable serial/identifier across covers, sensors, buttons, numbers, updates, and binary_sensors, and adds a device-migration path.
Reviewed changes
Copilot reviewed 18 out of 19 changed files in this pull request and generated 13 comments.
Show a summary per file
| File | Description |
|---|---|
| tests/conftest.py | Updates mock_client to accept ws_protocol and extra kwargs, keeping tests compatible with the extended SiegeniaClient usage. |
| custom_components/siegenia/update.py | Uses a stable serial/identifier (via coordinator or entry) for the firmware update entity and devices, aligning with new device identity handling. |
| custom_components/siegenia/translations/en.json | Adds issue translation for connection failures, new options labels for auto-discovery, and service descriptions for connection updates and cleanup. |
| custom_components/siegenia/strings.json | Mirrors English strings for issues, options, and services for the frontend’s internal strings system. |
| custom_components/siegenia/services.yaml | Defines new set_connection and cleanup_devices services, including selectors and fields for connection parameters and cleanup scoping. |
| custom_components/siegenia/sensor.py | Switches sensors to use coordinator/entry-based serial and device_identifier for consistent unique IDs and device identifiers. |
| custom_components/siegenia/select.py | Aligns select entities with the stable serial/identifier scheme and shared device_identifier. |
| custom_components/siegenia/number.py | Adds stable unique ID/serial for the stopover number entity and introduces device_info so it participates in device registry identity. |
| custom_components/siegenia/manifest.json | Bumps integration version from 1.0.0 to 1.1.7. |
| custom_components/siegenia/cover.py | Uses coordinator serial and entry unique_id for cover unique IDs, and device_identifier for device registry identifiers. |
| custom_components/siegenia/coordinator.py | Adds auto-discovery, rediscovery, stable serial storage, HA issue creation/clearing, push callback setup in __init__, and connection retry logic with backoff and host switching. |
| custom_components/siegenia/const.py | Introduces config keys and defaults for auto/extended discovery, issue IDs, and rediscovery tuning constants. |
| custom_components/siegenia/config_flow.py | Extends config and options flows with auto/extended discovery flags and persists serial; also keeps extended discovery disabled when auto-discover is off. |
| custom_components/siegenia/button.py | Normalizes button entity serial/identifier and device_info to use coordinator device_identifier. |
| custom_components/siegenia/binary_sensor.py | Normalizes binary sensors’ serial and device_info.identifiers via coordinator device_identifier. |
| custom_components/siegenia/init_services.py | Adds set_connection and cleanup_devices services and wires them into HA, including config entry updates and registry-based device merging. |
| custom_components/siegenia/init.py | Passes new flags to the coordinator, softens initial connection failures, and adds a one-time device migration helper to merge duplicate devices. |
| custom_components/siegenia/services.yaml | (Same file as above listing) Documents and structures the new services for HA’s service UI. |
| custom_components/siegenia/strings.json | (Same file as above listing) Provides localized names/descriptions for the new services and options in HA UI. |
| README.md | Documents auto-discover IP changes, manual reconnect via options/service, and duplicate device cleanup workflow. |
```python
def device_info(self):
    info = (self.coordinator.device_info or {}).get("data", {})
    ident = getattr(self.coordinator, "device_identifier", lambda: None)() or self._serial
    return {
        "identifiers": {(DOMAIN, ident)},
        "manufacturer": "Siegenia",
        "name": info.get("devicename") or "Siegenia Device",
        "model": info.get("type"),
    }
```
SiegeniaStopoverNumber.device_info uses the raw type field as the model string, whereas other entities (e.g. covers, sensors, binary_sensors, update entities) consistently call resolve_model(info) to expose a friendly model name. Because all of these entities share the same device identifiers, this mismatch can cause the less friendly numeric model to overwrite the nicer resolved model in the device registry. For consistency with the rest of the integration and to avoid clobbering the model, consider using resolve_model(info) here as well.
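For reference, a resolve_model-style lookup might look like the following sketch; the type-to-name mapping here is an assumed example, and the integration's real table may well differ:

```python
# Illustrative sketch only: the real resolve_model() ships with the
# integration, and this device-type table is hypothetical example data.
MODEL_NAMES = {1: "MHS Family", 6: "AEROPAC"}  # assumed mapping

def resolve_model(info: dict):
    raw = info.get("type")
    if raw is None:
        return None
    # Fall back to the raw numeric type when no friendly name is known.
    return MODEL_NAMES.get(raw, str(raw))

friendly = resolve_model({"type": 6})
fallback = resolve_model({"type": 99})
```

Because every entity writes into the same device-registry record, using one such resolver everywhere keeps the last writer from clobbering the friendly model with a raw number.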
```python
hass.services.async_register(DOMAIN, "set_mode", _handle_set_mode)
hass.services.async_register(DOMAIN, "set_connection", _handle_set_connection)

async def _cleanup_devices(call: ServiceCall) -> None:
    """Merge duplicate devices for a specific entry and remove empty leftovers.

    Accepts optional entity_id to scope; otherwise uses the first Siegenia entry.
    """
    dev_reg = dr.async_get(hass)
    ent_reg = er.async_get(hass)

    target_entry_id: str | None = None
    entity_id = call.data.get("entity_id")
    if entity_id:
        ent = ent_reg.async_get(entity_id)
        if ent and ent.config_entry_id:
            target_entry_id = ent.config_entry_id
    if not target_entry_id:
        entries = hass.config_entries.async_entries(DOMAIN)
        if not entries:
            raise ServiceValidationError("No Siegenia entries found for cleanup")
        target_entry_id = entries[0].entry_id

    devices = dev_reg.async_entries_for_config_entry(target_entry_id) if hasattr(dev_reg, "async_entries_for_config_entry") else [d for d in dev_reg.devices.values() if target_entry_id in d.config_entries]
    if not devices:
        raise ServiceValidationError("No devices found for this Siegenia entry")

    primary = max(devices, key=lambda d: len(d.identifiers))

    for dev in devices:
        if dev.id == primary.id:
            continue
        ents = ent_reg.async_entries_for_device(dev.id) if hasattr(ent_reg, "async_entries_for_device") else [e for e in ent_reg.entities.values() if e.device_id == dev.id]
        for ent in ents:
            ent_reg.async_update_entity(ent.entity_id, device_id=primary.id)
        try:
            dev_reg.async_remove_device(dev.id)
        except Exception:
            pass

hass.services.async_register(DOMAIN, "cleanup_devices", _cleanup_devices)
```
The new siegenia.set_connection and siegenia.cleanup_devices services introduce non-trivial behavior (updating config entry connection details and merging/removing devices) but currently lack any direct tests in tests/ (existing service tests only cover set_mode, reboot_device, reset_device, and renew_cert in test_cover_and_services.py). Given that the rest of this module’s services are exercised by tests, it would be good to add coverage for these new services to validate typical and error cases (e.g. invalid entity_id, merging devices, and preserving serial/identifiers).
```diff
 async def _async_update_data(self) -> dict[str, Any]:
-    try:
-        if not self.client.connected:
-            await self.client.connect()
-            await self.client.login(self.username, self.password)
-            await self.client.start_heartbeat(self.heartbeat_interval)
-        params = await self.client.get_device_params()
-        self._adjust_interval(params)
-        # Check warnings on polled data too
-        self._handle_warnings(params)
-        # Track last stable states per sash for UX when MOVING without a recent command
-        try:
-            states = ((params or {}).get("data") or {}).get("states") or {}
-            for k, v in states.items():
-                if v and v != "MOVING":
-                    self._last_stable_state_by_sash[int(k)] = v
-        except Exception:
-            pass
-        return params
-    except AuthenticationError as err:
-        # Trigger reauth flow in HA
-        raise ConfigEntryAuthFailed from err
-    except Exception as err:  # noqa: BLE001
-        raise UpdateFailed(err) from err
+    attempts = 0
+    while attempts < 2:
+        try:
+            await self._ensure_connected()
+            params = await self.client.get_device_params()
+            self._adjust_interval(params)
+            # Check warnings on polled data too
+            self._handle_warnings(params)
+            await self._clear_issue()
+            # Track last stable states per sash for UX when MOVING without a recent command
+            try:
+                states = ((params or {}).get("data") or {}).get("states") or {}
+                for k, v in states.items():
+                    if v and v != "MOVING":
+                        self._last_stable_state_by_sash[int(k)] = v
+            except Exception:
+                pass
+            return params
+        except AuthenticationError as err:
+            raise ConfigEntryAuthFailed from err
+        except Exception as err:  # noqa: BLE001
+            recovered = await self._handle_connection_error(err)
+            if recovered:
+                attempts += 1
+                continue
+            await self._raise_issue()
+            raise UpdateFailed(err) from err
+    # Should not reach here
```
The new auto-discovery and rediscovery logic in the coordinator (e.g. _handle_connection_error, _rediscover_host, _probe_host, and _switch_host) significantly affects how the integration behaves when the device IP changes, but there are no tests in tests/ that cover these paths. Since other coordinator behaviors (push vs. poll, warning events, etc.) are already tested, it would be valuable to add tests that exercise at least: enabling/disabling auto_discover, backoff behavior, successful vs. failed rediscovery, and updating the config entry’s host/serial fields when a new IP is found.
```python
done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
found = None
for d in done:
    try:
        found = d.result()
    except Exception as exc:  # noqa: BLE001
        self.logger.debug("Probe task failed: %s", exc)
    if found:
        break
# Cancel remaining tasks
for t in pending:
    t.cancel()
try:
    await asyncio.gather(*pending, return_exceptions=True)
except Exception:
    pass
```
In _rediscover_host, using asyncio.wait(..., return_when=asyncio.FIRST_COMPLETED) and then cancelling all remaining tasks means you only ever probe the first IP that finishes (often the first candidate), rather than scanning through the candidate list until one of them matches the device. This makes rediscovery much less effective than intended. Consider either awaiting all _probe_host tasks and returning the first non-None result, or iterating with asyncio.as_completed so that you can stop once a positive match is found without aborting other probes prematurely.
Suggested change:

```python
found: str | None = None
try:
    for coro in asyncio.as_completed(tasks):
        try:
            result = await coro
        except Exception as exc:  # noqa: BLE001
            self.logger.debug("Probe task failed: %s", exc)
            continue
        if result:
            found = result
            break
finally:
    # Cancel any remaining tasks and wait for them to finish
    for t in tasks:
        if not t.done():
            t.cancel()
    try:
        await asyncio.gather(*tasks, return_exceptions=True)
    except Exception:
        pass
```
```python
entity = hass.data["entity_components"]["cover"].get_entity(entity_id)  # type: ignore[index]
if entity is None:
    raise ServiceValidationError(f"Entity {entity_id} not found for siegenia.set_connection")
coordinator = getattr(entity, "coordinator", None)
entry = getattr(coordinator, "entry", None) if coordinator else None
if entry is None:
    raise ServiceValidationError("Coordinator missing on entity for siegenia.set_connection")
```
The siegenia.set_connection service handler currently resolves the entity only from the cover entity component, but the service schema and translations describe entity_id as “Any entity from the device.” If a user passes a non-cover entity (e.g. a sensor from the same device), this will raise ServiceValidationError even though the service UI allows selecting it. To align behavior with the service description, either restrict the selector/schema to domain: cover or update the handler to resolve the config entry via the entity registry/device registry instead of assuming a cover entity.
Suggested change:

```python
# Resolve the config entry from the entity registry (any domain), falling back
# to the device registry if needed. This allows using any entity from the device,
# not just the cover entity.
ent_reg = er.async_get(hass)
ent_entry = ent_reg.async_get(entity_id)
if ent_entry is None:
    raise ServiceValidationError(
        f"Entity {entity_id} not found for siegenia.set_connection"
    )
entry = None
if ent_entry.config_entry_id:
    entry = hass.config_entries.async_get_entry(ent_entry.config_entry_id)
# If the entity itself is not directly tied to a config entry, try via its device.
if entry is None and ent_entry.device_id:
    dev_reg = dr.async_get(hass)
    dev_entry = dev_reg.async_get(ent_entry.device_id)
    if dev_entry is not None:
        for config_entry_id in dev_entry.config_entries:
            candidate = hass.config_entries.async_get_entry(config_entry_id)
            if candidate is not None and candidate.domain == DOMAIN:
                entry = candidate
                break
if entry is None:
    raise ServiceValidationError(
        f"Config entry not found for entity {entity_id} in siegenia.set_connection"
    )
```
```python
except Exception:
    pass
```
'except' clause does nothing but pass and there is no explanatory comment.
Suggested change:

```python
except Exception as exc:  # noqa: BLE001
    # Best-effort cleanup: failing to delete the issue is non-fatal.
    self.logger.debug("Failed to delete issue '%s' for host %s: %s", ISSUE_UNREACHABLE, self.host, exc)
```
```python
    t.cancel()
try:
    await asyncio.gather(*pending, return_exceptions=True)
except Exception:
    pass
```
'except' clause does nothing but pass and there is no explanatory comment.
```python
except Exception:
    pass
```
'except' clause does nothing but pass and there is no explanatory comment.
Suggested change:

```python
except Exception as exc:  # noqa: BLE001
    self.logger.debug("Error disconnecting Siegenia client during host switch: %s", exc)
```
```python
except Exception:
    pass
```
'except' clause does nothing but pass and there is no explanatory comment.
Suggested change:

```python
except Exception as exc:  # noqa: BLE001
    self.logger.debug(
        "Failed to update last stable states from params: %s",
        exc,
    )
```
```python
except Exception:
    pass
```
'except' clause does nothing but pass and there is no explanatory comment.
Suggested change:

```python
except Exception as err:
    logging.getLogger(__name__).debug(
        "Failed to update serial from push payload: %s", err
    )
```
Fix device automation mappings and add regression tests
Pull Request Review: Auto-Discovery and Manual Reconnect Features

Summary

This PR adds significant enhancements for handling IP changes in Siegenia devices, including auto-discovery, manual reconnection services, and device cleanup utilities. Overall, the implementation is solid with good test coverage and thoughtful features.

Positive Highlights ✅

Code Quality
…

Features
…

Issues & Concerns 🔴

1. Security: Broad Exception Catching (High Priority)

Multiple instances of … (coordinator.py:154-156, 318-325):

```python
except Exception as exc:  # noqa: BLE001
    self.logger.debug("Unexpected ensure_connected failure: %s", exc)
    raise UpdateFailed(exc) from exc
```

Recommendation: Catch specific exceptions. At minimum, distinguish between network errors (ClientConnectorError, TimeoutError) and authentication errors.

coordinator.py:322-325 - Silent disconnect failures during probing could leak connections:

```python
finally:
    try:
        await asyncio.wait_for(client.disconnect(), timeout=2.0)
    except Exception as exc:  # noqa: BLE001
        self.logger.debug("Probe cleanup failed for %s: %s", host, exc)
```

Recommendation: Log at warning level if cleanup fails to track potential resource leaks.

2. Network Scanning Security Concerns (High Priority)

coordinator.py:228-291 - The rediscovery function scans up to 192 IP addresses:
…

Recommendations:
…

3. Race Condition in Service Registration (init.py:116-120)

```python
marker = f"{DOMAIN}_services_registered"
if not hass.data.get(marker):
    await async_setup_services(hass)
    hass.data[marker] = True
```

Issue: Not atomic - multiple concurrent setups could register services twice.

Recommendation:

```python
if marker not in hass.data:
    async with hass.data.setdefault(f"{marker}_lock", asyncio.Lock()):
        if marker not in hass.data:
            await async_setup_services(hass)
            hass.data[marker] = True
```

4. Potential Resource Leak (coordinator.py:274-290)

Task cancellation after …

Recommendation: Add timeout to the gather to ensure cleanup:

```python
for t in pending:
    t.cancel()
await asyncio.wait_for(
    asyncio.gather(*pending, return_exceptions=True),
    timeout=5.0,
)
```

5. Unsafe Type Annotations (init.py:89-90)

```python
coordinator._motion_interval = timedelta(seconds=motion_s)  # type: ignore[attr-defined]
coordinator._idle_interval = timedelta(seconds=idle_s)  # type: ignore[attr-defined]
```

Issue: Setting private attributes from outside the class breaks encapsulation and can cause maintenance issues.

Recommendation: Add proper setter methods or constructor parameters to the coordinator class.

Performance Considerations ⚡

1. Efficient Subnet Scanning (coordinator.py:252-258)

Good: Sorts hosts by distance from last known IP, reducing average discovery time.

2. Push-Based Updates (coordinator.py:407-432)

Excellent: Dynamically adjusts polling interval based on push events, reducing unnecessary polls.

3. Concurrent Probing (coordinator.py:268-273)

Good: Uses semaphore (concurrency=8) to balance speed vs network load.

Suggestion: Make REDISCOVER_CONCURRENCY configurable in advanced options for users with different network capabilities.

Potential Bugs 🐛

1. Config Flow Extended Discovery Logic (config_flow.py:105-106, 263-264)

```python
if not data.get(CONF_AUTO_DISCOVER, False):
    data[CONF_EXTENDED_DISCOVERY] = False
```

Issue: Silently forces extended_discovery to False when auto_discover is disabled. This is correct logic but may confuse users if they explicitly set extended_discovery=True.

Recommendation: Add a validation warning in the UI when this happens.

2. Missing Validation (init_services.py:161-169)

```python
def _parse_duration(text: str) -> tuple[int, int]:
    text = str(text).strip()
    if ":" in text:
        hh, mm = text.split(":", 1)
        return int(hh), int(mm)
    mins = int(text)
    return mins // 60, mins % 60
```

Issue: No bounds checking or error handling. Negative values or very large values could cause issues.

Recommendation:

```python
def _parse_duration(text: str) -> tuple[int, int]:
    text = str(text).strip()
    if ":" in text:
        hh, mm = text.split(":", 1)
        h, m = int(hh), int(mm)
    else:
        mins = int(text)
        h, m = mins // 60, mins % 60
    if h < 0 or m < 0 or m >= 60:
        raise ValueError(f"Invalid duration: {text}")
    return h, m
```

3. Inconsistent State Handling (coordinator.py:388-393)

Only tracks stable states from polled data, not from all update paths. This could cause …

Test Coverage 📝

Excellent additions:
…

Missing tests:
…

Recommendation: Add integration tests for the new auto-discovery feature, especially edge cases like authentication failures during rediscovery.

Documentation 📚

Good:
…

Needs Improvement:
…

Best Practices Violations
…
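The distance-first candidate ordering praised in the performance notes can be sketched in a few lines; this is an illustration under the assumption of a /24 scan, not the integration's exact code:

```python
def candidate_hosts(last_ip: str):
    """Order a /24 scan so addresses nearest the last known IP are probed first."""
    prefix, last_octet = last_ip.rsplit(".", 1)
    last = int(last_octet)
    # Skip the last known address itself; sort remaining octets by |distance|.
    octets = sorted(
        (o for o in range(1, 255) if o != last),
        key=lambda o: abs(o - last),
    )
    return [f"{prefix}.{o}" for o in octets]

hosts = candidate_hosts("192.168.1.50")
```

DHCP servers tend to hand out addresses near the previous lease, so probing neighbors first usually finds the moved device within a handful of attempts.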