Hello everyone,
We are experiencing a serious issue with our Yeti-Switch installation and would appreciate any help or suggestions.
After the disk of our PostgreSQL database server filled up, we took the following actions:
- Stopped services
- Freed up disk space
- Restarted services, including yeti-cdr-billing

However, since then, the cdr_billing worker crashes continuously with the following error:
```
Jul 31 14:10:48 BCL4WEB01 yeti-cdr-billing[691160]: [14:10:48.087981 ] [ INFO]: Worker for cdr_billing started
Jul 31 14:11:26 BCL4WEB01 yeti-cdr-billing[691160]: [14:11:26.047016 ] [ INFO]: => batch(22561240): events 211181
Jul 31 14:11:58 BCL4WEB01 yeti-cdr-billing[691160]: [14:11:58.518607 ] [ERROR]: <ActiveRecord::StatementInvalid> PG::InternalError: ERROR: invalid memory alloc request size 1073741824
```
It looks like a single event or batch is triggering a very large memory allocation (1073741824 bytes, i.e. 1 GiB), possibly due to corrupted or oversized data. We tried increasing PostgreSQL's work_mem (which was very low), but the error persists.
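For context, the work_mem increase was applied roughly as follows (the value shown is illustrative, not the exact setting we used):

```sql
-- Illustrative value only; pick something appropriate for the server's RAM.
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();
```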
Does anyone know:
- How to inspect or skip problematic events in the cdr_billing queue?
- How to safely reset or clean up the PGQ queue?
- Whether this has happened to anyone else after a disk-full scenario?
We're using Yeti-Switch version 1.13, installed following the official documentation.
Thanks in advance for any help or guidance.
Best regards,
David