I'm running Greenplum on a single machine hosting both the master and the segments. I've enabled auto_explain on both the master and segment hosts. Greenplum crashes whenever I run a query whose plan involves a Redistribute Motion. Details below:
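For reference, auto_explain was enabled the standard way for a PostgreSQL-based system, via postgresql.conf on the master and each segment instance. The snippet below is illustrative; the threshold values shown are examples, not necessarily the exact ones from my cluster:

```
# postgresql.conf on the master and each segment (example values)
shared_preload_libraries = 'auto_explain'
auto_explain.log_min_duration = 0   # log the plan of every statement
auto_explain.log_analyze = on       # include actual run-time statistics
```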
Greenplum state:
[gpadmin@sg-madstand-88 ~]$ gpstate
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-Starting gpstate with args:
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.0.1 build commit:053a66ae19cd7301ec8c8910ed85ec2c20ad60cc'
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.0.1 build commit:053a66ae19cd7301ec8c8910ed85ec2c20ad60cc) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Oct 11 2019 18:47:35'
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-Obtaining Segment details from master...
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-Gathering data from segments...
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-Greenplum instance status summary
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-----------------------------------------------------
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Master instance = Active
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Master standby = No master standby configured
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total segment instance count from metadata = 2
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-----------------------------------------------------
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Primary Segment Status
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-----------------------------------------------------
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total primary segments = 2
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total primary segment valid (at master) = 2
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total primary segment failures (at master) = 0
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number of postmaster.pid files found = 2
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 2
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number of /tmp lock files found = 2
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Total number postmaster processes found = 2
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-----------------------------------------------------
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Mirror Segment Status
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-----------------------------------------------------
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:- Mirrors not configured on this array
20200226:03:44:59:016864 gpstate:sg-madstand-88:gpadmin-[INFO]:-----------------------------------------------------
Table definition:
madan=# \dS+ mad;
Table "public.mad"
Column | Type | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
id | integer | | plain | |
Distributed by: (id)
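For completeness, a table matching the definition above can be recreated with something like the following; the row count is hypothetical since the report does not show the original data, and any data volume that yields a Redistribute Motion in the aggregate plan should do:

```sql
-- Hypothetical reproduction; the original data is not shown in the ticket.
CREATE TABLE mad (id integer) DISTRIBUTED BY (id);
INSERT INTO mad SELECT generate_series(1, 1000);
```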
Query that triggers the crash:
[gpadmin@sg-madstand-88 ~]$ psql -d madan
psql (9.4.24)
Type "help" for help.
madan=# select gp_segment_id, count(*) from mad group by 1;
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg0 slice2 172.31.43.226:7000 pid=14069)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
WARNING: bogus varno: 65001 (seg1 slice2 172.31.43.226:7001 pid=14070)
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>
I'm attaching the master and segment logs for this crash to the ticket.