hadoop - Expanding root partition on AWS EC2


I created a public VPC and added a bunch of nodes to it to use as a Spark cluster. Unfortunately, all of them have a partition setup that looks like the following:

    ec2-user@sparkslave1:~$ lsblk
    /dev/xvda    100G
    /dev/xvda1   5.7G  /

I set up Cloudera Manager on top of these machines, and all of the nodes have only about 1G left for HDFS. How do I extend the partition so that it takes up all of the 100G?

I tried creating /dev/xvda2 and making a volume group, adding all of the /dev/xvda* partitions to it, but /dev/xvda1 can't be added because it's mounted. I cannot boot from a live CD in this case, since it's on AWS. I also tried resize2fs, but it says the root partition already takes up all of the available blocks, so it cannot be resized. How do I solve this problem, and how do I avoid it in the future?
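Roughly, what I tried looked like this (a reconstruction from memory, not my exact history; the partition boundaries are guesses based on the lsblk output above):

```shell
# Attempt 1: carve a second partition out of the unused space on /dev/xvda
# (the exact start offset depends on where /dev/xvda1 ends).
sudo parted /dev/xvda mkpart primary ext4 6GB 100%

# Attempt 2: build an LVM volume group from the xvda partitions.
sudo pvcreate /dev/xvda2
sudo vgcreate vg_root /dev/xvda2
# Adding /dev/xvda1 fails here -- it is mounted as / and in use.

# Attempt 3: grow the root filesystem directly.
sudo resize2fs /dev/xvda1
# -> "Nothing to do!" -- resize2fs can only fill the existing partition,
#    and /dev/xvda1 itself is still only 5.7G.
```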

Thanks!

I don't think you can resize a running root volume. Here's how you'd go about increasing the root size:

  1. Create a snapshot of the current root volume
  2. Create a new volume from the snapshot with the size you want (100G?)
  3. Stop the instance
  4. Detach the old, small volume
  5. Attach the new, bigger volume
  6. Start the instance
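With the AWS CLI, the steps above can be sketched like this. All IDs here are placeholders, and the device name (`/dev/xvda` vs. `/dev/sda1`) and availability zone depend on your setup:

```shell
# Placeholder IDs -- substitute your own.
INSTANCE_ID=i-0123456789abcdef0
OLD_VOLUME_ID=vol-0aaaaaaaaaaaaaaaa
AZ=us-east-1a

# 1. Snapshot the current root volume.
SNAP_ID=$(aws ec2 create-snapshot --volume-id "$OLD_VOLUME_ID" \
          --query SnapshotId --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$SNAP_ID"

# 2. Create a new, bigger (100G) volume from the snapshot.
NEW_VOLUME_ID=$(aws ec2 create-volume --snapshot-id "$SNAP_ID" \
                --size 100 --availability-zone "$AZ" \
                --query VolumeId --output text)
aws ec2 wait volume-available --volume-ids "$NEW_VOLUME_ID"

# 3-5. Stop the instance and swap the volumes.
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
aws ec2 detach-volume --volume-id "$OLD_VOLUME_ID"
aws ec2 attach-volume --volume-id "$NEW_VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/xvda

# 6. Start the instance again.
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```

Note that after booting, the filesystem may still report the old size: the volume is 100G but the partition and ext4 filesystem inside it are unchanged, so you'd typically still need to grow the partition (e.g. with growpart from cloud-utils) and then run resize2fs on it.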
