Increased Conversation Latency
Resolved · Degraded performance

Updates

Resolved

LLM error rates have normalized, and Gemini-2.5-Flash conversation latency is back within expected ranges. We'll continue to monitor closely. Improvements to accommodate reduced cloud provider availability have been implemented and will go live shortly.

Thu, Jan 29, 2026, 12:30 AM

Monitoring

We have observed a recurrence of Gemini-2.5-Flash failures and are continuing to monitor the issue.

Wed, Jan 28, 2026, 03:49 PM (8 hours earlier)

Resolved

LLM failures have returned to baseline, and conversation latency with Gemini-2.5-Flash is back to expected levels. To mitigate future incidents, we plan to improve our fallback methods to better handle reduced cloud provider availability.

Tue, Jan 27, 2026, 10:18 PM (17 hours earlier)

Monitoring

We have observed a significant decrease in error rates and are continuing to monitor the availability of the resources that serve Gemini-2.5-Flash.

Tue, Jan 27, 2026, 09:10 PM (1 hour earlier)

Identified

We have identified that the issue is isolated to the Gemini-2.5-Flash model. We are working with our cloud provider to resolve it.

Tue, Jan 27, 2026, 06:30 PM (2 hours earlier)

Investigating

Some conversations are currently experiencing increased latency due to elevated LLM generation failures. We are investigating the root cause and working on mitigation.

Tue, Jan 27, 2026, 05:44 PM (46 minutes earlier)