Question: Repairing a broken RAID array (Hetzner root server, Germany)


I don't know why, but my Hetzner root server loses its RAID array about once a quarter, and the situation is different every time. This time I need more help. Maybe they want me to switch to a managed server; it's about 40% more expensive :).

EDIT: Desired result:

md3 : active raid1 sda4[2] sdb4[1]
      1822442815 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[2] sdb3[1]
      1073740664 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[2] sdb2[1]
      524276 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[1]
      33553336 blocks super 1.2 [2/2] [UU]

How can I fix the following RAID array?

cat /proc/mdstat

Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md124 : active (auto-read-only) raid1 sda1[2]
      33553336 blocks super 1.2 [2/1] [U_]

md125 : active (auto-read-only) raid1 sda2[2]
      524276 blocks super 1.2 [2/1] [U_]

md126 : active (auto-read-only) raid1 sda3[2]
      1073740664 blocks super 1.2 [2/1] [U_]

md127 : active (auto-read-only) raid1 sda4[2]
      1822442815 blocks super 1.2 [2/1] [U_]

md3 : active (auto-read-only) raid1 sdb4[1]
      1822442815 blocks super 1.2 [2/1] [_U]

md2 : active raid1 sdb3[1]
      1073740664 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb2[1]
      524276 blocks super 1.2 [2/1] [_U]

md0 : active (auto-read-only) raid1 sdb1[1]
      33553336 blocks super 1.2 [2/1] [_U]

unused devices: <none>

Here are the details (`mdadm --examine` of each partition):

 /dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 79ca4cbd:5d44fcad:01e8ed8e:0bd7009a
           Name : rescue:0
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 67106816 (32.00 GiB 34.36 GB)
     Array Size : 67106672 (32.00 GiB 34.36 GB)
  Used Dev Size : 67106672 (32.00 GiB 34.36 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : facee938:5e51285d:e49d35a7:4e3ae212

    Update Time : Sun Jan 17 02:23:41 2016
       Checksum : cf49c9d3 - correct
         Events : 504


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4cdff7b7:2ec9bae4:8c9cbf02:67bfe971
           Name : rescue:1
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1048552 (512.07 MiB 536.86 MB)
     Array Size : 1048552 (512.07 MiB 536.86 MB)
    Data Offset : 24 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 181560d1:abc6d39b:3bd45252:6c5bff30

    Update Time : Sat Jan 23 06:48:30 2016
       Checksum : e5f248df - correct
         Events : 2064


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sda3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 009a5d9b:7a0f238e:3ede62a0:0d2ee0ba
           Name : rescue:2
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2147481600 (1024.00 GiB 1099.51 GB)
     Array Size : 2147481328 (1024.00 GiB 1099.51 GB)
  Used Dev Size : 2147481328 (1024.00 GiB 1099.51 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 106e6b66:9365ada0:b0ee61bf:15cf9585

    Update Time : Sat Jan 23 11:20:33 2016
       Checksum : b62dfda7 - correct
         Events : 6901428


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sda4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : aa151e0d:2430bfba:a79d7030:d56a7872
           Name : rescue:3
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3644885903 (1738.02 GiB 1866.18 GB)
     Array Size : 3644885630 (1738.02 GiB 1866.18 GB)
  Used Dev Size : 3644885630 (1738.02 GiB 1866.18 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 1a883ec6:768c6884:f8465824:69bddd2e

    Update Time : Sat Jan 23 06:48:30 2016
       Checksum : a114be68 - correct
         Events : 2062


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 79ca4cbd:5d44fcad:01e8ed8e:0bd7009a
           Name : rescue:0
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 67106816 (32.00 GiB 34.36 GB)
     Array Size : 67106672 (32.00 GiB 34.36 GB)
  Used Dev Size : 67106672 (32.00 GiB 34.36 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 2df660db:6eaab24e:be6a2b7e:6295cc6f

    Update Time : Sat Jan 23 11:20:53 2016
       Checksum : 9734d8ec - correct
         Events : 506


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 4cdff7b7:2ec9bae4:8c9cbf02:67bfe971
           Name : rescue:1
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1048552 (512.07 MiB 536.86 MB)
     Array Size : 1048552 (512.07 MiB 536.86 MB)
    Data Offset : 24 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 2594f11b:7e7289b6:8bff6aff:10bf1b05

    Update Time : Mon Jan 25 06:46:16 2016
       Checksum : cc71a538 - correct
         Events : 2078


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)
/dev/sdb3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 009a5d9b:7a0f238e:3ede62a0:0d2ee0ba
           Name : rescue:2
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2147481600 (1024.00 GiB 1099.51 GB)
     Array Size : 2147481328 (1024.00 GiB 1099.51 GB)
  Used Dev Size : 2147481328 (1024.00 GiB 1099.51 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : b838fbbb:1dafe023:afce822f:45c7ba0d

    Update Time : Tue Jan 26 09:27:55 2016
       Checksum : 873de764 - correct
         Events : 7041530


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)
/dev/sdb4:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : aa151e0d:2430bfba:a79d7030:d56a7872
           Name : rescue:3
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3644885903 (1738.02 GiB 1866.18 GB)
     Array Size : 3644885630 (1738.02 GiB 1866.18 GB)
  Used Dev Size : 3644885630 (1738.02 GiB 1866.18 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 8e56f14c:a1478ce7:59c6ba88:09d18a60

    Update Time : Mon Jan 25 06:46:11 2016
       Checksum : 20fe7d89 - correct
         Events : 2076


   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)

And more details (`mdadm --detail` of each array):

/dev/md0:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
     Array Size : 33553336 (32.00 GiB 34.36 GB)
  Used Dev Size : 33553336 (32.00 GiB 34.36 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jan 26 09:38:15 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:0
           UUID : 79ca4cbd:5d44fcad:01e8ed8e:0bd7009a
         Events : 508

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
/dev/md1:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
     Array Size : 524276 (512.07 MiB 536.86 MB)
  Used Dev Size : 524276 (512.07 MiB 536.86 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Jan 25 06:46:16 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:1
           UUID : 4cdff7b7:2ec9bae4:8c9cbf02:67bfe971
         Events : 2078

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       18        1      active sync   /dev/sdb2
/dev/md2:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
     Array Size : 1073740664 (1024.00 GiB 1099.51 GB)
  Used Dev Size : 1073740664 (1024.00 GiB 1099.51 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Tue Jan 26 09:42:42 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:2
           UUID : 009a5d9b:7a0f238e:3ede62a0:0d2ee0ba
         Events : 7042054

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       19        1      active sync   /dev/sdb3
/dev/md3:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
     Array Size : 1822442815 (1738.02 GiB 1866.18 GB)
  Used Dev Size : 1822442815 (1738.02 GiB 1866.18 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Mon Jan 25 06:46:11 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:3
           UUID : aa151e0d:2430bfba:a79d7030:d56a7872
         Events : 2076

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       20        1      active sync   /dev/sdb4
/dev/md124:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
     Array Size : 33553336 (32.00 GiB 34.36 GB)
  Used Dev Size : 33553336 (32.00 GiB 34.36 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun Jan 17 02:23:41 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:0
           UUID : 79ca4cbd:5d44fcad:01e8ed8e:0bd7009a
         Events : 504

    Number   Major   Minor   RaidDevice State
       2       8        1        0      active sync   /dev/sda1
       1       0        0        1      removed
/dev/md125:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:55 2012
     Raid Level : raid1
     Array Size : 524276 (512.07 MiB 536.86 MB)
  Used Dev Size : 524276 (512.07 MiB 536.86 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Jan 23 06:48:30 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:1
           UUID : 4cdff7b7:2ec9bae4:8c9cbf02:67bfe971
         Events : 2064

    Number   Major   Minor   RaidDevice State
       2       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed
/dev/md126:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
     Array Size : 1073740664 (1024.00 GiB 1099.51 GB)
  Used Dev Size : 1073740664 (1024.00 GiB 1099.51 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Jan 23 11:20:33 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:2
           UUID : 009a5d9b:7a0f238e:3ede62a0:0d2ee0ba
         Events : 6901428

    Number   Major   Minor   RaidDevice State
       2       8        3        0      active sync   /dev/sda3
       1       0        0        1      removed
/dev/md127:
        Version : 1.2
  Creation Time : Mon Aug 20 11:23:56 2012
     Raid Level : raid1
     Array Size : 1822442815 (1738.02 GiB 1866.18 GB)
  Used Dev Size : 1822442815 (1738.02 GiB 1866.18 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat Jan 23 06:48:30 2016
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : rescue:3
           UUID : aa151e0d:2430bfba:a79d7030:d56a7872
         Events : 2062

    Number   Major   Minor   RaidDevice State
       2       8        4        0      active sync   /dev/sda4
       1       0        0        1      removed



What is the problem? Which RAID array can't you mount? Are you getting an error? It looks like you're missing a disk; is that expected? - Claris
Thanks for your comment (the first one). I have edited the question and added the desired result (probably the correct one).
see: serverfault.com/questions/445315/... - Claris
Thanks, but I've already seen that; it's a different setup (I think) and the solution isn't explained clearly enough (figure out which of the two devices has the most recent copy -> how? Mount them read-only -> how? Then kill the other RAID -> how? And add the device to the correct one -> how?)
It looks like you have two drives (sda and sdb), each with 4 partitions (1, 2, 3, 4). I think the idea is to mirror those partitions. Instead, you have 4 RAID arrays attached to the sdb partitions (md0, md1, md2 and md3) with their mirror removed, and another 4 RAID arrays attached to the sda partitions (md124, md125, md126 and md127). To help you, we need to understand: 1. What is the role/mount point of each partition (guess: root, boot, swap, home)? And 2. How did this situation come about? For example, can we assume that md0 and md124 hold the same data? - agtoever


Answers:


The member-1 RAID partitions on /dev/sdb are the most recent, as shown here...

a1
Array UUID : 79ca4cbd:5d44fcad:01e8ed8e:0bd7009a
Events : 504
Update Time : Sun Jan 17 02:23:41 2016
Device Role : Active device 0

b1
Array UUID : 79ca4cbd:5d44fcad:01e8ed8e:0bd7009a
Events : 506
Update Time : Sat Jan 23 11:20:53 2016
Device Role : Active device 1

a2
Array UUID : 4cdff7b7:2ec9bae4:8c9cbf02:67bfe971
Events : 2064
Update Time : Sat Jan 23 06:48:30 2016
Device Role : Active device 0

b2
Array UUID : 4cdff7b7:2ec9bae4:8c9cbf02:67bfe971
Events : 2078
Update Time : Mon Jan 25 06:46:16 2016
Device Role : Active device 1

a3
Array UUID : 009a5d9b:7a0f238e:3ede62a0:0d2ee0ba
Events : 6901428
Update Time : Sat Jan 23 11:20:33 2016
Device Role : Active device 0

b3
Array UUID : 009a5d9b:7a0f238e:3ede62a0:0d2ee0ba
Events : 7041530
Update Time : Tue Jan 26 09:27:55 2016
Device Role : Active device 1

a4
Array UUID : aa151e0d:2430bfba:a79d7030:d56a7872
Events : 2062
Update Time : Sat Jan 23 06:48:30 2016
Device Role : Active device 0

b4
Array UUID : aa151e0d:2430bfba:a79d7030:d56a7872
Events : 2076
Update Time : Mon Jan 25 06:46:11 2016
Device Role : Active device 1
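The pairwise comparison above can be collected in one pass. A sketch, assuming the partition layout from the question (run as root): for each mirror pair, the member with the higher `Events` counter holds the newest data.

```shell
# Print event counter, last update time and role for each mirror pair.
for n in 1 2 3 4; do
  echo "== sda$n vs sdb$n =="
  mdadm --examine /dev/sda$n /dev/sdb$n \
    | grep -E 'Events|Update Time|Device Role'
done
```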

So, to get the desired result, first stop the spurious md12X arrays...

mdadm --stop /dev/md124
mdadm --stop /dev/md125
mdadm --stop /dev/md126
mdadm --stop /dev/md127

Then simply re-add each removed RAID member partition to its original RAID 1:

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3
mdadm --add /dev/md3 /dev/sda4
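After the re-add, mdadm rebuilds each mirror in the background; once the resync finishes, every array should again report `[2/2] [UU]` as in the desired output. A quick way to watch it (a sketch):

```shell
# Resync progress (speed, ETA) is reported in /proc/mdstat
watch -n 5 cat /proc/mdstat

# Count healthy mirrors; expect 4 once all arrays have finished rebuilding
grep -c '\[UU\]' /proc/mdstat
```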




I think what happened is that at some point mdadm could not see sda, so it dropped all the member-0 RAID partitions. Then, when mdadm saw sda appear again, it created the md12X arrays from the out-of-sync sda partitions. So the question is: why does mdadm sometimes not see sda? It could be a power problem, a cable problem, or a drive problem. Check the SMART status on the drives. - S.Haran
I'll start with that, thank you very much.
There was a failure: md2 : active raid1 sda3[2](F) sdb3[1] 1073740664 blocks super 1.2 [2/1] [_U] — how could I fix that? Thanks again.
I suspect you have a problem with /dev/sda. The fix will be similar to last time, but to be sure, please show the output of... cat /proc/mdstat - S.Haran
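Following up on the SMART suggestion above, a quick health check might look like this. A sketch, assuming smartctl (from the smartmontools package) is installed; the attribute names shown are the common ones and can vary by drive model.

```shell
# Overall health verdict for the suspect drive
smartctl -H /dev/sda

# Attributes that most often predict imminent disk failure:
# any nonzero raw value here is a bad sign
smartctl -A /dev/sda \
  | grep -E 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
```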