external gateway doesn't start
Aldevinas Katkus
4-20-18
Our hosting provider updated the Linux kernel, and the external gateway does not start anymore.
Current OS:
CentOS Linux 7.3.1611
Kernel Linux 2.6.32-042stab128.2 on x86_64

We check every 10 minutes whether the gateway service is running, and we start it if it is not.
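For reference, a minimal sketch of such a periodic check (an illustration of the approach, not our exact script). It assumes systemd and the unit name hansassgw.service that appears in the log further down, and it would be scheduled from cron roughly every 10 minutes:

#!/usr/bin/env python3
# Watchdog sketch: start hansassgw.service if it is not active.
# Assumes systemd and the unit name hansassgw.service (see the systemd lines in the log below).
import subprocess

UNIT = "hansassgw.service"

def is_active(unit: str) -> bool:
    # "systemctl is-active --quiet" exits with 0 only when the unit is active.
    return subprocess.run(["systemctl", "is-active", "--quiet", unit]).returncode == 0

if not is_active(UNIT):
    print(f"{UNIT} is not running, starting it")
    subprocess.run(["systemctl", "start", UNIT], check=False)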

Why does it try to contact the Cloud Controller?
Has anyone faced something like this?

We are getting this in the log:
Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:07 --------------------- backtrace end --------------------------

Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:07 service call: CloudControllerFindServer to: CLOUDCONTROLLER_FINDSERVER failed with: communications error
Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:07 DoFindCloudServer(D7A99639-8011060F-66CBB529-765798C3-E50ACF6F) communications error
Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:07 DoFindCloudServer(D7A99639-8011060F-66CBB529-765798C3-E50ACF6F) attempt 23
Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:16 TIMINGS: blocking slow web-call (2) to https://cloudcontroller.hansaworld.net:444/cloudcall took 1.1 seconds
Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:16 backtrace called due to: slow webcall
Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:16 -------------------- backtrace start -------------------------
Apr 20 08:43:16 485996 StandardERPServer-Slave: 2018-04-20 08:43:16 --------------------- backtrace end --------------------------
Aldevinas Katkus
4-20-18
version 8.4 2017-12-09 (build 84191013)
Aldevinas Katkus
4-23-18
I have restored the external gateway database from the backup and it looks like it works.
Aldevinas Katkus
5-14-18
This weekend it happened again, this time after the server crashed. I had to recreate the HDB both on the client (Windows) and on the external gateway (Linux, CentOS 7):

Does the 0.0.0.0:30000 mean it tries to connect to the server at IP 0.0.0.0? (See the note after the log below.)

May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:02 DoFindCloudServer(D7A99639-8011060F-66CBB529-765798C3-E50ACF6F) got 0.0.0.0:30000, status = Try Again (being launched)
May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:02 DoFindCloudServer(D7A99639-8011060F-66CBB529-765798C3-E50ACF6F) attempt 60
May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:06 TIMINGS: blocking slow web-call (2) to https://cloudcontroller.hansaworld.net:444/cloudcall took 2.4 seconds
May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:06 backtrace called due to: slow webcall
May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:06 -------------------- backtrace start -------------------------
May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:06 --------------------- backtrace end --------------------------
May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:06 DoFindCloudServer(D7A99639-8011060F-66CBB529-765798C3-E50ACF6F) got 0.0.0.0:30000, status = Try Again (being launched)
May 14 09:00:12 485996 StandardERPServer-Slave: 2018-05-14 09:00:06 Serveris:0.0.0.0 Portas:30000
May 14 09:00:12 485996 StandardERPServer-Slave: Nepavyko prisijungti prie serverio. Bandykite jungtis vėliau.
[The last two lines are in Lithuanian: "Server: 0.0.0.0 Port: 30000" and "Failed to connect to the server. Try connecting later."]

May 14 09:00:12 485996 systemd: Unit hansassgw.service entered failed state.
May 14 09:00:12 485996 systemd: hansassgw.service failed.
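A note on the 0.0.0.0 question above (an editorial aside, not from HansaWorld documentation): 0.0.0.0 is the "unspecified" IPv4 address, so the CloudControllerFindServer reply presumably returns it as a placeholder while the server is still "being launched", rather than as a real address to connect to. A small, hypothetical helper illustrating the distinction:

import ipaddress

def is_placeholder(host: str, port: int) -> bool:
    # Treat the unspecified address (0.0.0.0) or port 0 as "no real server address assigned yet".
    return ipaddress.ip_address(host).is_unspecified or port == 0

print(is_placeholder("0.0.0.0", 30000))  # True -> wait and retry instead of connecting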