
[STABILITY] Potential Worker Starvation due to Missing Request Timeouts #3495

@chauhan-varun

Description


What happened

Across the codebase, most notably in many analyzers and connectors, the requests library is frequently used without an explicit timeout parameter.

Affected Components (Examples):

  • api_app/analyzers_manager/observable_analyzers/stratosphere.py (Line 66)
  • api_app/analyzers_manager/observable_analyzers/abuseipdb.py (Line 29)
  • Many other analyzers and connectors using requests.get() or requests.post().

Because IntelOwl uses a fixed-size pool of Celery workers, a malicious or slow external service can keep a request open indefinitely (a "slow-tail" attack). If an attacker targets multiple analyzers at once, they can occupy every available worker, effectively causing a Denial of Service (DoS) where no legitimate scans can be processed.
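The root cause is that requests never applies a timeout on its own: the timeout parameter of the underlying Session.request defaults to None, which means "wait forever". This can be checked directly (assuming the requests package is installed):

```python
import inspect

import requests

# Session.request's timeout parameter defaults to None, i.e. no limit:
# any call that omits timeout= can block the calling worker indefinitely.
default = inspect.signature(requests.Session.request).parameters["timeout"].default
print(default)  # None
```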

Environment

  1. OS: Linux
  2. IntelOwl version: Current develop branch

What did you expect to happen

All outgoing HTTP requests should have a reasonable, explicit timeout (e.g., requests.get(url, timeout=30)).

How to reproduce your issue

  1. Point an analyzer (like the Stratosphere or any web-based analyzer) to a server designed to never close the connection (e.g., slowloris or a simple infinite loop server).
  2. Observe that the Celery worker for that task stays in a "Started" state forever and never returns to the pool.
  3. Repeat for all worker slots to halt the entire system.
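The repro above can be sketched locally with a throwaway server that accepts TCP connections but never sends a response; an explicit timeout is what lets the calling thread escape. The server and port below are local test fixtures for illustration, not IntelOwl code:

```python
import socket
import threading

import requests

# Slowloris-style server: accept connections but never reply, so a
# client without a timeout would hang on the read phase forever.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]
held = []  # keep accepted connections referenced so they stay open

def accept_forever():
    while True:
        conn, _ = server.accept()
        held.append(conn)

threading.Thread(target=accept_forever, daemon=True).start()

# With timeout=1 the read phase is bounded and the worker is released;
# the same call without timeout= would block this thread indefinitely.
result = None
try:
    requests.get(f"http://127.0.0.1:{port}/", timeout=1)
except requests.exceptions.Timeout:
    result = "timed out"
print(result)
```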

Error messages and logs

No error messages are generated because the worker simply hangs. This is visible in Celery monitoring tools like Flower as tasks that never progress.

Suggested Fix

  1. Define a global default timeout for all HTTP requests within IntelOwl's utility modules.
  2. Iterate through all analyzers and connectors to ensure they use this default timeout or an explicit one.
  3. Consider using a wrapper for requests or switching to a library that enforces timeouts.
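Steps 1 and 3 could be combined in a small Session subclass that injects a project-wide default timeout whenever the caller omits one. The class name and the 30-second value below are assumptions for illustration, not existing IntelOwl code:

```python
import requests

DEFAULT_TIMEOUT = 30  # hypothetical project-wide default, in seconds

class TimeoutSession(requests.Session):
    """Session that guarantees every request carries a timeout.

    Callers can still pass their own timeout=; only requests that
    omit it fall back to DEFAULT_TIMEOUT.
    """

    def request(self, method, url, **kwargs):
        kwargs.setdefault("timeout", DEFAULT_TIMEOUT)
        return super().request(method, url, **kwargs)
```

Analyzers and connectors would then go through a shared TimeoutSession instance instead of calling requests.get()/requests.post() directly, so forgetting the parameter can no longer pin a worker.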

Metadata

Labels: bug (Something isn't working), stale
