Error when starting SEMS

Hi everybody. When I start SEMS, it gives me this error:

[9848/9848] [yeti:cdr/TrustedHeaders.cpp:41] ERROR: pqxx_exception: ERROR: function
load_trusted_headers(unknown) does not exist
LINE 1: SELECT * FROM load_trusted_headers($1)
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.

[9848/9848] [yeti:cdr/TrustedHeaders.cpp:54] ERROR: can't load trusted headers config
[9848/9848] [yeti:yeti.cpp:309] ERROR: TrustedHeaders configure failed
[9848/9848] [yeti:SBC.cpp:118] ERROR: yeti configuration error

Your databases are not properly initialized.

How can I initialize them?

root@yeti:~# service sems start
Job for sems.service failed because the control process exited with error code.
See "systemctl status sems.service" and "journalctl -xe" for details.
root@yeti:~# "systemctl status sems.service

^C
root@yeti:~# systemctl status sems.service
● sems.service - SEMS for YETI project
Loaded: loaded (/lib/systemd/system/sems.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2020-06-29 15:02:42 MSK; 15s ago
Docs: https://yeti-switch.org/docs/
Process: 64293 ExecStart=/usr/sbin/sems -P /var/run/sems.pid -u root -g root -f /etc/sems/sems.conf (code=exited, status=1/FAILURE)

Jun 29 15:02:42 yeti sems[64293]: configuration file: /etc/sems/sems.conf
Jun 29 15:02:42 yeti sems[64293]: plug-in path: /usr/lib/sems/plug-in
Jun 29 15:02:42 yeti sems[64293]: daemon mode: yes
Jun 29 15:02:42 yeti sems[64293]: daemon UID: root
Jun 29 15:02:42 yeti sems[64293]: daemon GID: root
Jun 29 15:02:42 yeti sems[64293]: -----BEGIN CFG DUMP-----
Jun 29 15:02:42 yeti sems[64293]: -----END CFG DUMP-----
Jun 29 15:02:42 yeti systemd[1]: sems.service: Control process exited, code=exited, status=1/FAILURE
Jun 29 15:02:42 yeti systemd[1]: sems.service: Failed with result 'exit-code'.
Jun 29 15:02:42 yeti systemd[1]: Failed to start SEMS for YETI project.

https://yeti-switch.org/docs/en/installation-1.10/web.html#databases-data-initialization

I initialized the databases after their installation. Or must I do it every time?

They should be initialized once during installation. It looks like you did something wrong.

Your SEMS connects to a routing database that does not contain the right structure.

I initialized the databases again but SEMS doesn't run. How can I solve it?

Maybe I have to delete the databases and create them again?

Maybe your SEMS connects to the wrong databases.

How can I check that?
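One way to check (a sketch, assuming the connection settings from the routing.master_pool section of system.cfg quoted later in this thread: host 127.0.0.1, port 5432, database yeti, user yeti): connect with psql using exactly those values and look for the function named in the error message.

```shell
# Connect with the same credentials SEMS uses and check whether the
# function from the error message exists in that database.
psql -h 127.0.0.1 -p 5432 -U yeti -d yeti \
  -c "SELECT n.nspname, p.proname
      FROM pg_proc p
      JOIN pg_namespace n ON n.oid = p.pronamespace
      WHERE p.proname = 'load_trusted_headers';"
```

If no rows come back, this database never received the routing schema, so either the initialization ran against a different database or it failed partway.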

I installed Yeti on Debian 9 and I have the same problem with SEMS:
[27029/27029] [core/AmThread.cpp:151] DEBUG: Thread session-cleaner 139907351279360 (139907351279360)calling on_stop, give it a chance to clean up
[27029/27029] [core/AmSessionContainer.cpp:172] DEBUG: brodcasting ServerShutdown system event to 0 sessions…
[27029/27052] [apps/sctp_bus/SctpBus.cpp:238] DEBUG: SctpBus stopped
[27029/27052] [core/AmThread.cpp:99] INFO: Thread sctp-bus 139906684458752 is ending
[27029/27043] [core/AmAudioFileRecorder.cpp:89] DEBUG: 0 unprocessed events on stop
[27029/27043] [core/AmAudioFileRecorder.cpp:97] DEBUG: 0 mono recorders on stop
[27029/27043] [core/AmAudioFileRecorder.cpp:101] DEBUG: 0 stereo recorders on stop
[27029/27043] [core/AmAudioFileRecorder.cpp:105] DEBUG: Audio recorder stopped
[27029/27043] [core/AmThread.cpp:99] INFO: Thread recorder 139907339699968 is ending
[27029/27044] [core/PcapFileRecorder.cpp:75] DEBUG: 0 unprocessed events on stop
[27029/27044] [core/PcapFileRecorder.cpp:83] DEBUG: pcap recorder stopped
[27029/27044] [core/AmThread.cpp:99] INFO: Thread pcap recorder 139907338647296 is ending
[27029/27029] [core/AmSessionContainer.cpp:186] DEBUG: waiting for active event queues to stop…
[27029/27051] [apps/registrar_client/SIPRegistrarClient.cpp:326] DEBUG: Session received system Event
[27029/27051] [apps/registrar_client/SIPRegistrarClient.cpp:312] DEBUG: shutdown SIP registrar client: deregistering
[27029/27051] [core/AmThread.cpp:99] INFO: Thread sip-reg-client 139906685511424 is ending
[27029/27054] [core/AmThread.cpp:92] INFO: Thread 139906682353408 is starting
[27029/27054] [apps/jsonrpc/RpcServerLoop.cpp:386] DEBUG: adding 0 more server threads
[27029/27054] [apps/jsonrpc/RpcServerThread.cpp:202] DEBUG: adding 0 RPC server threads
[27029/27054] [apps/jsonrpc/RpcServerLoop.cpp:391] INFO: running server loop; listening on 127.0.0.1:7080
[27029/27054] [apps/jsonrpc/RpcServerLoop.cpp:437] INFO: running event loop
[27029/27050] [core/AmThread.cpp:92] INFO: Thread 139906688751360 is starting
[27029/27053] [core/AmThread.cpp:92] INFO: Thread 139906683406080 is starting

Show your system.cfg and database.yml.

Hi, here is system.cfg:

signalling {
    globals {
        yeti {
            pop_id = 4
            msg_logger_dir = /var/spool/sems/dump
            audio_recorder_dir = /var/spool/sems/record
            audio_recorder_compress = true
            log_dir = /tmp
            routing {
                schema = switch8
                function = route_release
                init = init
                            master_pool {
                                    host = 127.0.0.1
                                    port = 5432
                                    name = yeti
                                    user = yeti
                                    pass = Asdf1234

                                    size = 4
                                    check_interval = 10
                                    max_exceptions = 0
                                    statement_timeout=3000
                            }

                            failover_to_slave = false
                            slave_pool {
                                    host = 127.0.0.1
                                    port = 5432
                                    name = yeti
                                    user = yeti
                                    pass = Asdf1234

                                    size = 4
                                    check_interval = 10
                                    max_exceptions = 0
                                    statement_timeout=3000
                            }

                            cache {
                                    enabled = false
                                    check_interval = 60
                                    buckets = 100000
                            }

                            use_radius = false
                    }

                    cdr {
                            dir = /var/spool/sems/cdrs
                            completed_dir = /var/spool/sems/cdrs/completed

                            pool_size = 2
                            batch_size = 10
                            batch_timeout = 10000
                            check_interval = 2000

                            schema = switch
                            function = writecdr

                            master {
                                    host = 127.0.0.1
                                    port = 5433
                                    name = cdr
                                    user = cdr
                                    pass = Asdf1234
                            }

                            failover_to_slave = false
                            slave {
                                    host = 127.0.0.1
                                    port = 5433
                                    name = cdr
                                    user = cdr
                                    pass = Asdf1234
                            }

                            failover_requeue = true
                            failover_to_file = false
                            serialize_dynamic_fields = false
                    }

                    resources {
                            reject_on_error = false
                            write {
                                    //socket = /var/run/redis/redis.sock
                                    host = 127.0.0.1
                                    port = 6379
                                    size = 2
                                    timeout = 500
                            }
                            read {
                                    //socket = /var/run/redis/redis.sock
                                    host = 127.0.0.1
                                    port = 6379
                                    size = 2
                                    timeout = 1000
                            }
                    }

                    registrations {
                            check_interval = 5000
                    }

                    registrar {
                            enabled = false
                            redis {
                                    host = 127.0.0.1
                                    port = 6379
                            }
                    }

                    rpc {
                            calls_show_limit = 1000
                    }

                    statistics {
                            active-calls {
                                    period = 5
                                    clickhouse {
                                            table = active_calls
                                            queue = snapshots
                                            buffering = false
                                            allowed_fields = {
                                                    resources,
                                                    audio_record_enabled,
                                                    auth_orig_ip,
                                                    auth_orig_port
                                            }
                                    }
                            }
                    }

            }
    }
    node 0 { }

}

lnp {
    globals {
        daemon {
            listen = {
                "tcp://127.0.0.1:3333",
                "tcp://127.0.0.1:3332"
            }
            log_level = 2
        }
        db {
            host = 127.0.0.1
            port = 5432
            name = yeti
            user = yeti
            pass = Asdf1234
            schema = switch8
            conn_timeout = 0
            check_interval = 5000
        }
        sip {
            contact_user = yeti-lnp-resolver
            from_uri = sip:yeti-lnp-resolver@localhost
            from_name = yeti-lnp-resolver
        }
    }
    node 8 { }
}

database.yml (/opt/yeti-web/config/database.yml):

production:
  adapter: postgresql
  encoding: unicode
  database: yeti
  pool: 5
  username: yeti
  password: Asdf1234
  host: 127.0.0.1
  schema_search_path: 'gui, public, switch, billing, class4, runtime_stats, sys logs, data_import'
  port: 5432
  min_messages: notice

secondbase:
  production:
    adapter: postgresql
    encoding: unicode
    database: cdr
    pool: 5
    username: cdr
    password: Asdf1234
    host: 127.0.0.1
    schema_search_path: 'cdr, reports, billing'
    port: 5432
    min_messages: notice

I fixed system.cfg with your link, and the management server is working now, thanks for that. But SEMS doesn't start.

Error:

Jul 02 17:34:08 yeti-srv sems[4405]: -----END CFG DUMP-----
Jul 02 17:34:08 yeti-srv yeti-management[465]: [466] info: server/src/SctpServer.cpp:247: associated with 127.0.0.1:51865/40 (7)
Jul 02 17:34:08 yeti-srv yeti-management[465]: [466] info: server/src/mgmt_server.cpp:211: process request for 'signalling' node 8
Jul 02 17:34:08 yeti-srv yeti-management[465]: [466] error: server/src/SctpServer.cpp:468: internal_exception: 404 unknown node

sems.conf:

general {
    daemon = yes
    stderr = no
    syslog_loglevel = 2
    syslog_facility = LOCAL0
    node_id = 8
    shutdown_mode {
        code = 508
        reason = "Yeti node in shutdown mode"
        allow_uac = true
    }
    //pcap_upload_queue = pcap
    media_processor_threads = 2
    rtp_receiver_threads = 2
    session_processor_threads = 10
    sip_udp_server_threads = 2
    sip_tcp_server_threads = 2
    dead_rtp_time=30
}


signaling-interfaces {
    interface input {
        default-media-interface = input
        ip4 {
            sip-udp {
                address = 192.168.0.249
                port = 5061
                use-raw-sockets = off
            }
            sip-tcp {
                address = 192.168.0.249
                port = 5061
                connect-timeout = 2000
                static-client-port = on
                idle-timeout = 900000
                use-raw-sockets = off
            }
        }
    }
}

media-interfaces {
    interface input {
        ip4 {
            rtp {
                address = 192.168.0.249
                low-port = 16383
                high-port = 32767
                dscp = 46
                use-raw-sockets = off
            }
        }
    }
}

modules {
    module "di_log"{}
    module "mp3"{}
    module "opus"{}
    module "wav"{}
    module "gsm"{}
    module "ilbc"{}
    module "adpcm"{}
    module "l16"{}
    module "g722"{}
    module "registrar_client" {}
    module "sctp_bus"{}
    module "http_client"{}
    module "session_timer"{}
    module "jsonrpc"{
listen{
address = 127.0.0.1
port = 7080
}
server_threads=1
   }
module-global "uac_auth" { }
module "yeti" {
management {
address = 127.0.0.1
port = 4444
timeout = 60000
}
core_options_handling = yes
}
}

routing {
    application = yeti
}
Jul 02 17:34:08 yeti-srv yeti-management[465]: [466] info: server/src/mgmt_server.cpp:211: process request for 'signalling' node 8
Jul 02 17:34:08 yeti-srv yeti-management[465]: [466] error: server/src/SctpServer.cpp:468: internal_exception: 404 unknown node

There is no node 8 section in your system.cfg, see https://yeti-switch.org/docs/en/installation-1.10/management.html
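The sems.conf above sets node_id = 8, while the signalling section of system.cfg only declares node 0, which matches the "404 unknown node" error. A sketch of the fix (exact placement per the linked management docs): declare a matching node section next to the existing one.

```
signalling {
    globals {
        ...
    }
    node 0 { }
    node 8 { }
}
```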

Or maybe the configuration was not applied; yeti-management should be restarted.
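A sketch of the restart and re-check, using the service names that appear earlier in this thread:

```shell
# Apply the changed system.cfg, then retry SEMS and watch the
# management log for the "404 unknown node" error.
systemctl restart yeti-management
systemctl restart sems
journalctl -u yeti-management -n 20 --no-pager
```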