I created a public VPC and added a bunch of nodes to use as a Spark cluster. Unfortunately, all of them have a partition setup that looks like the following:
[ec2-user@sparkslave1 ~]$ lsblk
/dev/xvda      100G
└─/dev/xvda1   5.7G   /

I set up Cloudera Manager on top of these machines, and each node has only about 1 GB left for HDFS. How do I extend the partition so it takes up all of the 100 GB?
I tried creating /dev/xvda2, created a volume group, and added all of the /dev/xvda* devices, but /dev/xvda1 can't be added because it's mounted. I cannot boot a live CD in this case, since it's on AWS. I also tried resize2fs, but it says the root partition already takes up all of the available blocks and cannot be resized. How do I solve this problem, and how do I avoid it in the future?
Thanks!
I don't think you can resize a running root volume. Here's how you'd go about increasing the root size:
- Create a snapshot of the current root volume
- Create a new volume from the snapshot, at the size you want (100 GB?)
- Stop the instance
- Detach the old, small volume
- Attach the new, bigger volume
- Start the instance
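Assuming the AWS CLI is installed and configured, the steps above can be sketched roughly like this (the instance ID, volume ID, and availability zone below are placeholders, not values from the question):

```shell
# Placeholders: substitute your actual instance ID, root volume ID, and AZ.
INSTANCE_ID=i-0123456789abcdef0
OLD_VOLUME_ID=vol-0123456789abcdef0
AZ=us-east-1a

# 1. Snapshot the current root volume.
SNAPSHOT_ID=$(aws ec2 create-snapshot --volume-id "$OLD_VOLUME_ID" \
    --description "root backup before resize" \
    --query SnapshotId --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$SNAPSHOT_ID"

# 2. Create a new, larger (100 GB) volume from that snapshot,
#    in the same availability zone as the instance.
NEW_VOLUME_ID=$(aws ec2 create-volume --snapshot-id "$SNAPSHOT_ID" \
    --size 100 --availability-zone "$AZ" \
    --query VolumeId --output text)
aws ec2 wait volume-available --volume-ids "$NEW_VOLUME_ID"

# 3. Stop the instance.
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# 4.-5. Swap the root volume; the device name must match the
#       old root attachment (here /dev/xvda, as in the lsblk output).
aws ec2 detach-volume --volume-id "$OLD_VOLUME_ID"
aws ec2 wait volume-available --volume-ids "$OLD_VOLUME_ID"
aws ec2 attach-volume --volume-id "$NEW_VOLUME_ID" \
    --instance-id "$INSTANCE_ID" --device /dev/xvda

# 6. Start the instance again.
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```

Note that the partition table comes along with the snapshot, so after booting the partition is still 5.7 GB; on the instance you'd then grow the partition and filesystem into the new space, typically with something like `sudo growpart /dev/xvda 1` followed by `sudo resize2fs /dev/xvda1`.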