Discussion:
[ADMIN] (new thread) could not rename temporary statistics file "pg_stat_tmp/pgstat.tmp" to "pg_stat_tmp/pgstat.stat": No such file or directory
Craig Ringer
2012-06-08 05:56:38 UTC
Hi.
I have PostgreSQL 9.1.3, and last night it crashed.
<2012-06-06 00:59:07 MDT 814 4fceffbb.32e >LOG: autovacuum: found
orphan temp table "(null)"."tmpmuestadistica" in database "dbRX"
<2012-06-06 01:05:26 MDT 1854 4fc7d1eb.73e >LOG: could not rename
temporary statistics file "pg_stat_tmp/pgstat.tmp" to
"pg_stat_tmp/pgstat.stat": No such file or directory
<2012-06-06 01:05:28 MDT 1383 4fcf0136.567 >ERROR: tuple
concurrently updated
<2012-06-06 01:05:28 MDT 1383 4fcf0136.567 >CONTEXT: automatic
vacuum of table "global.pg_catalog.pg_attrdef"
<2012-06-06 01:06:09 MDT 1851 4fc7d1eb.73b >ERROR: xlog flush
request 4/E29EE490 is not satisfied --- flushed only to 3/13527A10
<2012-06-06 01:06:09 MDT 1851 4fc7d1eb.73b >CONTEXT: writing block
0 of relation base/311360/12244_vm
<2012-06-06 01:06:10 MDT 1851 4fc7d1eb.73b >ERROR: xlog flush
request 4/E29EE490 is not satisfied --- flushed only to 3/13527A10
<2012-06-06 01:06:10 MDT 1851 4fc7d1eb.73b >CONTEXT: writing block
0 of relation base/311360/12244_vm
<2012-06-06 01:06:10 MDT 1851 4fc7d1eb.73b >WARNING: could not
write block 0 of base/311360/12244_vm
<2012-06-06 01:06:10 MDT 1851 4fc7d1eb.73b >DETAIL: Multiple
failures --- write error might be permanent.
Last night it was terminated by signal 6.
<2012-06-07 01:36:44 MDT 2509 4fd05a0c.9cd >LOG: startup process
(PID 2525) was terminated by signal 6: Aborted
<2012-06-07 01:36:44 MDT 2509 4fd05a0c.9cd >LOG: aborting startup
due to startup process failure
<2012-06-07 01:37:37 MDT 2680 4fd05a41.a78 >LOG: database system
shutdown was interrupted; last known up at 2012-06-07 01:29:40 MDT
<2012-06-07 01:37:37 MDT 2680 4fd05a41.a78 >LOG: could not open
file "pg_xlog/000000010000000300000013" (log file 3, segment 19): No
such file or directory
<2012-06-07 01:37:37 MDT 2680 4fd05a41.a78 >LOG: invalid primary
checkpoint record
And the only option was pg_resetxlog.
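For context, a forced reset on 9.1 looks roughly like the following; this is only a sketch, and /var/lib/pgsql/data stands in for the real data directory:

   # run as the postgres user against the stopped cluster; -f forces the
   # reset even when pg_control cannot be read
   pg_resetxlog -f /var/lib/pgsql/data

pg_resetxlog discards WAL rather than replaying it, so inconsistencies such as the "missing chunk" errors below are a known risk afterwards.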
<2012-06-07 09:24:22 MDT 1306 4fd0c7a6.51a >ERROR: missing chunk
number 0 for toast value 393330 in pg_toast_2619
<2012-06-07 09:24:31 MDT 1306 4fd0c7a6.51a >ERROR: missing chunk
number 0 for toast value 393332 in pg_toast_2619
I lost some databases.
I re-initialized the cluster with initdb and then restored the databases I had been able to back up (for the others I restored an old backup).
There was no space or permissions problem, and no filesystem or disk error.
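A rebuild along those lines is roughly as follows (paths, dump file names and the use of pg_restore here are assumptions for illustration, not the exact commands that were run):

   # re-create the cluster and bring it up
   initdb -D /var/lib/pgsql/data
   pg_ctl -D /var/lib/pgsql/data -l logfile start
   # restore whichever dumps were usable
   createdb dbRX
   pg_restore -d dbRX dbRX.dump    # or: psql -d dbRX -f dbRX.sql for plain-text dumps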
Can you help me figure out what happened?
Did you take a copy of the PostgreSQL data directory and error logs
before you tried to fix the problem, as per the advice here:

http://wiki.postgresql.org/wiki/Corruption

If you did, it might be possible to tell what happened. If you didn't
then you've probably destroyed the evidence needed to determine what
went wrong (and maybe recover some lost data).
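
A minimal sketch of such a pre-recovery copy, assuming the server is already stopped and /var/lib/pgsql/data is the data directory (both the path and the log location are examples):

   # snapshot the whole data directory and the server logs before touching anything
   tar czf /backup/pgdata-$(date +%Y%m%d).tar.gz /var/lib/pgsql/data
   cp -a /var/lib/pgsql/data/pg_log /backup/pg_log-$(date +%Y%m%d)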

--
Craig Ringer
Fabricio
2012-06-14 07:33:45 UTC
Did you take a copy of the PostgreSQL data directory and error logs before you tried to fix the problem?
Sorry, I didn't.

Maybe it was a filesystem error.

This was the OS log (ext4 filesystem):
Jun 7 01:32:55 SERVIDOR kernel: IRQ 71/cciss0: IRQF_DISABLED is not guaranteed on shared IRQs
Jun 7 01:32:55 SERVIDOR kernel: cciss/c0d0: p2 size 1127850720 exceeds device capacity, limited to end of disk
Jun 7 01:32:55 SERVIDOR kernel: JBD: barrier-based sync failed on cciss!c0d0p2-8 - disabling barriers
Jun 7 01:32:55 SERVIDOR kernel: JBD: barrier-based sync failed on cciss!c0d0p2-8 - disabling b
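
Those JBD lines indicate the kernel disabled write barriers on that cciss volume, which weakens the ordering guarantees fsync relies on and can be a corruption vector. A quick, non-authoritative way to check the current state (the device name comes from the log above, everything else is an example):

   # see whether the filesystem holding the data directory is mounted with barriers
   grep cciss /proc/mounts
   # contrib's pg_test_fsync (shipped with 9.1) times the available sync methods
   pg_test_fsync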