PG::InternalError: invalid memory alloc request size 1073741824 after database disk full – pgq events stuck

Hello everyone,

We are experiencing a serious issue with our Yeti-Switch installation and would appreciate any help or suggestions.

After the disk of our PostgreSQL database server filled up, we took the following actions:

  • Stopped services
  • Freed up disk space
  • Restarted services, including yeti-cdr-billing

However, since then, the cdr_billing worker crashes continuously with the following error:

Jul 31 14:10:48 BCL4WEB01 yeti-cdr-billing[691160]: [14:10:48.087981 ] [ INFO]: Worker for cdr_billing started
Jul 31 14:11:26 BCL4WEB01 yeti-cdr-billing[691160]: [14:11:26.047016 ] [ INFO]: => batch(22561240): events 211181
Jul 31 14:11:58 BCL4WEB01 yeti-cdr-billing[691160]: [14:11:58.518607 ] [ERROR]: <ActiveRecord::StatementInvalid> PG::InternalError: ERROR:  invalid memory alloc request size 1073741824

It looks like a single event or batch is triggering a very large memory allocation, possibly due to corrupted or oversized data. The requested size, 1073741824 bytes, is exactly 1 GiB, which is PostgreSQL's hard per-allocation limit, so this seems to be one enormous allocation rather than general memory pressure. We tried increasing PostgreSQL work_mem (which was very low), but the error persists.

Does anyone know:

  • How to inspect or skip problematic events in the cdr_billing queue?
  • How to safely reset or clean up the PGQ queue?
  • If this has happened to anyone else after a disk full scenario?

We’re using Yeti-Switch version 1.13, installed following the official documentation.

Thanks in advance for any help or guidance.

Best regards,
David


This happened because your pgqd (the PGQ ticker daemon) was not running. Without ticks, events keep accumulating into a single batch — note the 211181 events in batch 22561240 in your log — which the billing worker then tries to load in one go.
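If anyone else hits this, you can usually confirm the situation with the standard PGQ info functions before restarting pgqd (this is a generic sketch; the actual queue and consumer names depend on your Yeti install):

```sql
-- Ticker health: a large ticker_lag means pgqd is not ticking
select queue_name, ticker_lag, ev_per_sec
from pgq.get_queue_info();

-- Consumer backlog: pending_events shows how much has piled up
select queue_name, consumer_name, lag, pending_events
from pgq.get_consumer_info();
```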

> How to inspect or skip problematic events in the cdr_billing queue?

select pgq.finish_batch(22561240);
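For the inspection part of the question: PGQ lets you look at a batch's events before finishing it, so you can check whether the data is actually corrupted or just oversized (batch id taken from the log above):

```sql
-- Peek at the events in the stuck batch before deciding to skip it
select ev_id, ev_time, ev_type, ev_data
from pgq.get_batch_events(22561240);
```

Be aware that `pgq.finish_batch()` marks the batch as processed without consuming its events, so any CDRs in that batch are skipped rather than billed.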

…the issue was resolved, and the billing worker resumed processing normally.

Much appreciated! 🙌