Resolved
Attack Paths and regular scans, along with their associated tasks such as compliance overviews and scan summaries, are running as expected.
We apologize for the inconvenience.
Monitoring
Attack Paths may fail intermittently. This behavior is expected to persist through the weekend while we monitor the impact of recent changes. We have increased database capacity and adjusted memory configuration based on observed load and usage patterns.
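For context, the sketch below shows one way to confirm memory-related settings on a Neo4j instance after a configuration change, using the official Python driver. It is illustrative only and assumes Neo4j 5.x; the connection URI and credentials are placeholder assumptions, not production values.

```python
# Illustrative sketch only (assumes Neo4j 5.x and the official Python driver);
# the URI and credentials below are hypothetical placeholders.
from neo4j import GraphDatabase

URI = "neo4j://attack-paths-db.example.internal:7687"
AUTH = ("neo4j", "example-password")

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    # Raises an exception if the instance is rejecting connections.
    driver.verify_connectivity()

    # List heap and page cache settings to confirm the adjusted values took effect.
    records, _, _ = driver.execute_query(
        "SHOW SETTINGS YIELD name, value "
        "WHERE name STARTS WITH 'server.memory' "
        "RETURN name, value"
    )
    for record in records:
        print(f"{record['name']} = {record['value']}")
```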
Regular scans and the rest of the API are working as expected.
We are continuing to monitor closely. We will provide another update if the situation changes.
Monitoring
Regular scans and their related tasks are now running as expected, without delays. Attack Paths remain operational but may still fail intermittently, as their underlying data backend is under heavy resource pressure.
We’re continuing the investigation to identify the root cause. In the meantime, we’ve deployed another fix to improve the performance of the findings ingestion queries.
We are continuing to monitor closely. We will provide another update within the next 2 hours or if the situation changes.
Monitoring
The system has stabilized over the past few hours and scan processing has resumed. Scans and their related tasks, such as compliance overviews and scan summaries, are currently running with some delay.
We have identified ongoing elevated resource consumption in the Neo4j cluster that supports Attack Paths visualization. We’re investigating the root cause and preparing additional mitigations in case connectivity issues recur.
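As a point of reference, elevated consumption like this can be traced to specific queries by listing active transactions. The sketch below is illustrative only and assumes Neo4j 5.x with the official Python driver; the connection details are placeholders.

```python
# Illustrative sketch only (assumes Neo4j 5.x and the official Python driver);
# connection details are hypothetical placeholders.
from neo4j import GraphDatabase

URI = "neo4j://attack-paths-db.example.internal:7687"
AUTH = ("neo4j", "example-password")

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    # List in-flight transactions and their current queries to spot
    # long-running or resource-heavy work.
    records, _, _ = driver.execute_query(
        "SHOW TRANSACTIONS YIELD transactionId, currentQuery, elapsedTime, status "
        "RETURN transactionId, currentQuery, elapsedTime, status"
    )
    for r in records:
        print(r["transactionId"], r["status"], r["elapsedTime"], r["currentQuery"])
```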
We are continuing to monitor closely. We will provide another update within the next 2 hours or if the situation changes.
Identified
We’ve deployed a fix and are seeing early signs of recovery. Services are stabilizing, and we are monitoring the situation closely.
We will provide an update as soon as we have confirmation that full functionality has been restored.
Investigating
We have identified two contributing factors:
Attack Paths database connectivity: The Neo4j database is rejecting connections, causing all scans to fail during initialization. Memory usage on the instance grew gradually throughout the day until it reached 100%, at which point the instance became unresponsive. Replacement instances we have attempted to provision have also failed to start. We are actively investigating the root cause of the memory growth.
API database performance: Attack Paths scans are retrieving findings from the primary read/write database instance instead of the read-only replica. This is causing elevated CPU usage on the Prowler API database cluster, which in turn is delaying dependent tasks (compliance overviews, scan summaries) and growing the processing queue (see the illustrative sketch below).
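For illustration, the sketch below shows one common way to route read-only queries to a replica in a Django-style ORM setup. The database aliases, hostnames, and router path are assumptions made for the example, not the actual Prowler API configuration.

```python
# Illustrative sketch only: route ORM reads to a read-only replica in a
# Django-style setup. Aliases, hostnames, and the router path are hypothetical.

# settings.py: declare the primary database and a read-only replica.
DATABASES = {
    "default": {  # primary read/write instance
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "api-db-primary.example.internal",
        "NAME": "prowler",
    },
    "replica": {  # read-only replica for heavy read workloads
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "api-db-replica.example.internal",
        "NAME": "prowler",
    },
}
DATABASE_ROUTERS = ["routers.ReadReplicaRouter"]

# routers.py: send reads to the replica and writes to the primary.
class ReadReplicaRouter:
    def db_for_read(self, model, **hints):
        return "replica"

    def db_for_write(self, model, **hints):
        return "default"

    def allow_relation(self, obj1, obj2, **hints):
        return True
```

With routing like this in place (or an explicit `.using("replica")` on the relevant querysets), findings retrieval would stop competing with write traffic for CPU on the primary instance.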
Our engineering team is actively working on both issues. We will provide an update as soon as we have more information.
Investigating
We are currently experiencing an issue affecting all scan types across the platform. Scans are failing to start, which means new security findings are not being generated during this time.
Our team is working to identify the cause. We'll provide details as soon as possible.