Resolved
We have confirmed that the released Data Extraction fix continued to mitigate the issue through peak traffic, and that Data Extraction failure rates remain below the pre-incident baseline.
Monitoring
We have implemented additional safeguards, and will continue to monitor through peak traffic to ensure complete resolution.
Identified
We have observed a recurrence of Data Collection failures, and are working on additional measures to mitigate them.
Resolved
Following our recent patch, the incidence of unexpected Data Collection values and Evaluation failures has decreased to negligible levels, and we have confirmed restored functionality with affected users.
Moving forward, we will continue to improve our system's fallbacks and redundancy.
Identified
We have observed increased latency affecting the Gemini models and are actively investigating. In parallel, we are improving the formatting of the recently implemented Data Collection backups.
Resolved
We have fully released a fix that implements increased redundancy in our Data Collection and Evaluation mechanisms via multiple backup LLMs, and confirmed complete mitigation.
Monitoring
We have observed data extraction and evaluation failures recur, and are following through to ensure complete mitigation.
Resolved
The issue has been fully resolved, and safeguards have been implemented to prevent recurrence.
Monitoring
We have observed errors return to baseline. Currently, call summaries and data extraction are fully functional.
We are now making improvements to data extraction and call summaries to ensure they are not impacted by similar availability issues in the future.
Identified
We have observed a significant reduction in errors, and continue to work on a solution to fully mitigate the core issue.
Identified
Starting at ~14:00 UTC today, some conversations have been affected by a disruption in our data extraction and call summary mechanisms, and a subset of these may have seen increased conversation latency. We have identified the cause as reduced availability of the Gemini 2.0 and Gemini 2.5 Flash models, and are currently working to resolve the issue.