[CEPH] RuntimeError: command returned non-zero exit status: 1
Author : Administrator   Date : 2015-05-07 (Thu) 02:53   Views : 6066
$ ceph-deploy osd activate osd5:sda1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.22): /usr/bin/ceph-deploy osd activate osd5:sda1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks osd5:/dev/sda1:
[osd5][DEBUG ] connection detected need for sudo
[osd5][DEBUG ] connected to host: osd5 
[osd5][DEBUG ] detect platform information from remote host
[osd5][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host osd5 disk /dev/sda1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[osd5][INFO  ] Running command: sudo ceph-disk -v activate --mark-init upstart --mount /dev/sda1
[osd5][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sda1
[osd5][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[osd5][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[osd5][WARNIN] DEBUG:ceph-disk:Mounting /dev/sda1 on /var/lib/ceph/tmp/mnt.d3qCAx with options noatime,inode64
[osd5][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sda1 /var/lib/ceph/tmp/mnt.d3qCAx
[osd5][WARNIN] DEBUG:ceph-disk:Cluster uuid is 38e6bdde-e19e-4801-8dc0-0e7a47734611
[osd5][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[osd5][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[osd5][WARNIN] DEBUG:ceph-disk:OSD uuid is c6d98258-99cf-4774-b052-74cd93d21026
[osd5][WARNIN] DEBUG:ceph-disk:OSD id is 4
[osd5][WARNIN] DEBUG:ceph-disk:Marking with init system upstart
[osd5][WARNIN] DEBUG:ceph-disk:Authorizing OSD key...
[osd5][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.4 -i /var/lib/ceph/tmp/mnt.d3qCAx/keyring osd allow * mon allow profile osd
[osd5][WARNIN] Error EINVAL: entity osd.4 exists but key does not match
[osd5][WARNIN] ERROR:ceph-disk:Failed to activate
[osd5][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.d3qCAx
[osd5][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.d3qCAx
[osd5][WARNIN] Traceback (most recent call last):
[osd5][WARNIN]   File "/usr/sbin/ceph-disk", line 2769, in <module>
[osd5][WARNIN]     main()
[osd5][WARNIN]   File "/usr/sbin/ceph-disk", line 2747, in main
[osd5][WARNIN]     args.func(args)
[osd5][WARNIN]   File "/usr/sbin/ceph-disk", line 1981, in main_activate
[osd5][WARNIN]     init=args.mark_init,
[osd5][WARNIN]   File "/usr/sbin/ceph-disk", line 1755, in mount_activate
[osd5][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[osd5][WARNIN]   File "/usr/sbin/ceph-disk", line 1956, in activate
[osd5][WARNIN]     keyring=keyring,
[osd5][WARNIN]   File "/usr/sbin/ceph-disk", line 1574, in auth_key
[osd5][WARNIN]     'mon', 'allow profile osd',
[osd5][WARNIN]   File "/usr/sbin/ceph-disk", line 316, in command_check_call
[osd5][WARNIN]     return subprocess.check_call(arguments)
[osd5][WARNIN]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
[osd5][WARNIN]     raise CalledProcessError(retcode, cmd)
[osd5][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', 'auth', 'add', 'osd.4', '-i', '/var/lib/ceph/tmp/mnt.d3qCAx/keyring', 'osd', 'allow *', 'mon', 'allow profile osd']' returned non-zero exit status 22
[osd5][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init upstart --mount /dev/sda1
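
The step that actually fails is the auth add for osd.4: "Error EINVAL: entity osd.4 exists but key does not match" means the monitors still hold an auth entry for osd.4 from an earlier deployment, so the keyring on the freshly mounted disk cannot be registered. If you want to confirm this before removing anything, the stale entry can be inspected first (a standard command, run from a node with admin credentials; not part of the original post):

$ ceph auth get osd.4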

Solution)
This happens when an OSD was already created once before under the same ID, leaving stale auth and OSD entries behind. Remove the old OSD entries as shown below, then retry the activation (see the sketch further below).
$ ceph osd tree
# id weight type name up/down reweight
-1 65.44 root default
-2 16.36 host osd0
0 16.36 osd.0 up 1
-3 16.36 host osd1
1 16.36 osd.1 up 1
-4 16.36 host osd2
2 16.36 osd.2 up 1
-5 16.36 host osd3
3 16.36 osd.3 up 1
4 0 osd.4 down 0
5 0 osd.5 down 0

$ ceph auth del osd.4
updated
$ ceph auth del osd.5
updated
$ ceph osd rm 4
removed osd.4
$ ceph osd rm 5
removed osd.5
$ ceph osd tree
# id weight type name up/down reweight
-1 65.44 root default
-2 16.36 host osd0
0 16.36 osd.0 up 1
-3 16.36 host osd1
1 16.36 osd.1 up 1
-4 16.36 host osd2
2 16.36 osd.2 up 1
-5 16.36 host osd3
3 16.36 osd.3 up 1
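
With the stale entries gone, the activation that failed above can simply be retried. A minimal sketch, reusing the same host and disk as the failing run (assuming the disk is still prepared from before):

$ ceph-deploy osd activate osd5:sda1
$ ceph osd tree          # the reactivated OSD should reappear and come up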

$ ceph -s
    cluster 38e6bdde-e19e-4801-8dc0-0e7a47734611
     health HEALTH_WARN 151 pgs backfill; 10 pgs backfilling; 4 pgs degraded; 2 pgs recovery_wait; 4 pgs stuck degraded; 163 pgs stuck unclean; recovery 25709/6990027 objects degraded (0.368%); 2622433/6990027 objects misplaced (37.517%)
     monmap e1: 3 mons at {mon0=115.68.200.60:6789/0,mon1=115.68.200.61:6789/0,mon2=115.68.200.62:6789/0}, election epoch 22, quorum 0,1,2 mon0,mon1,mon2
     osdmap e820: 6 osds: 5 up, 5 in
      pgmap v306122: 256 pgs, 2 pools, 7373 GB data, 1843 kobjects
            22316 GB used, 61467 GB / 83784 GB avail
            25709/6990027 objects degraded (0.368%); 2622433/6990027 objects misplaced (37.517%)
                   1 active+recovery_wait+degraded+remapped
                  93 active+clean
                 150 active+remapped+wait_backfill
                   1 active+degraded+remapped+backfilling
                   1 active+recovery_wait+degraded
                   1 active+degraded+remapped+wait_backfill
                   9 active+remapped+backfilling
recovery io 8411 kB/s, 2 objects/s
  client io 33645 kB/s wr, 72 op/s
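
The remapped/backfilling PG states above are the cluster rebalancing after the OSD changes and will clear on their own once recovery finishes. If you want to follow the progress live, the usual watch command works (not part of the original post):

$ ceph -w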

