DX NetOps Manager

Tech Tip: How do I migrate off LVM from a DR running Vertica 6.0.2? (requirement for DR 2.3.4) 

Sep 29, 2014 06:06 AM

As documented in the CA Infrastructure Management Data Aggregator 2.3.4 Readme.

 

The following procedures describe how to transition a Data Repository that is running Vertica 6.0.2 with LVM (Logical Volume Manager) managing the data and catalog directories to Vertica 6.0.2 on non-LVM partitions. The Vertica database backs the Data Repository, and Vertica has never supported running its database on LVM volumes. However, starting with Vertica 7.0.1-2 (the version that Data Aggregator Release 2.3.4 requires), the Vertica installer enforces this requirement and does not allow Vertica to run on LVM.
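
Before you begin, it can help to confirm which mounts actually sit on LVM. A check along these lines should work on most Linux distributions (exact output varies by version):

df -h /data /catalog              # note the backing device for each mount; /dev/mapper/... indicates LVM
lsblk -o NAME,TYPE,MOUNTPOINT     # TYPE shows "lvm" for logical volumes
lvdisplay                         # lists any LVM logical volumes defined on the host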

 

The steps to migrate database directories that reside on LVM partitions to non-LVM partitions are described for both single-node and clustered Data Repository deployments. Data Aggregator Release 2.3.4 cannot be installed while the Data Repository uses volumes that LVM manages.

 

Data Repository - Single Node

Important! Back up Data Repository before proceeding. Make sure that no scheduled backups will run during this time.

Important! You must have a local or networked partition with adequate free space to store the database contents temporarily while you convert the LVM partition.

 

Assumptions:

  • The data directory is /data.
  • The catalog directory is /catalog.
  • LVM manages the data and catalog directories.
  • The database administrative user is dradmin.
  • The database is named drdata.

 

To proceed with the migration, do the following steps:

 

1. Stop each Data Collector instance:

a. ssh dc_hostname -l root

b. /etc/init.d/dcmd stop

c. /etc/init.d/dcmd status

 

2. Stop Data Aggregator:

a. ssh da_hostname -l root

b. /etc/init.d/dadaemon stop

c. /etc/init.d/dadaemon status

 

3. As dradmin, stop the database:

a. ssh dr_hostname -l dradmin

b. Stop the database using /opt/vertica/bin/adminTools
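
If you prefer a single command over the adminTools menus, the stop_db tool is the non-interactive equivalent (a sketch; dbpassword is a placeholder, and you can omit -p if the database has no password):

/opt/vertica/bin/adminTools -t stop_db -d drdata -p dbpassword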

Important! Do the following steps as the root user, unless otherwise specified.

 

4. Make a temp directory, /tmp_data, to store the data directory contents temporarily. Make sure that the directory is located on a partition that has enough space to accommodate a full copy of the /data/drdata folder. This is a temporary storage location. The data will be moved from this location later.

a. mkdir /tmp_data

b. Mount the temporary partition at /tmp_data, where data_partition is the device that backs the temporary storage:

mount data_partition /tmp_data

c. Make a note of the size of the /data directory for future reference in step 7:

du -ch /data | grep -i total

d. Determine the amount of free disk space on the destination partition:

df -h /tmp_data

e. Verify that there is enough free disk space on the destination partition (the partition for /tmp_data) to accommodate a full copy of the /data directory.
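
The checks in steps 4.c through 4.e can also be scripted. A minimal sketch, comparing sizes in kilobytes:

NEEDED=$(du -sk /data | awk '{print $1}')        # space used by /data, in KB
AVAIL=$(df -Pk /tmp_data | awk 'NR==2 {print $4}')   # space available on the /tmp_data partition, in KB
if [ "$AVAIL" -gt "$NEEDED" ]; then echo "enough space"; else echo "NOT enough space"; fi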

 

5. Change the permissions of the /tmp_data folder:

chown dradmin:verticadba /tmp_data

Here, dradmin is the database administrator user.

 

6. Move the database data directory into the temporary directory:

mv /data/drdata /tmp_data

 

7. Ensure that the directory size matches the size reported in step 4.c:

du -ch /tmp_data | grep -i total

 

8. Make a temp directory, /tmp_catalog, to store the catalog directory. Make sure that the directory is located on a partition that has enough space to accommodate a full copy of the /catalog/drdata folder. This is a temporary storage location. The data will be moved from this location later.

a. mkdir /tmp_catalog

b. Mount the temporary partition at /tmp_catalog, where data_partition is the device that backs the temporary storage:

mount data_partition /tmp_catalog

c. Make a note of the size of the /catalog directory for future reference in step 11:

du -ch /catalog | grep -i total

d. Determine the amount of free disk space on the destination partition:

df -h /tmp_catalog

e. Verify that there is enough free disk space on the destination partition (the partition for /tmp_catalog) to accommodate a full copy of the /catalog directory.

 

9. Change the permissions of the /tmp_catalog folder:

chown dradmin:verticadba /tmp_catalog

Here, dradmin is the database administrator user.

 

10. Move the catalog directory into the temporary directory:

mv /catalog/drdata /tmp_catalog

 

11. Ensure that the directory size matches the size reported in step 8.c:

du -ch /tmp_catalog | grep -i total

 

12. Make a note of the LVM mount points by recording the output of mount:

mount
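
To narrow the output to the two entries of interest, a filter such as the following helps; LVM-backed mounts typically show a /dev/mapper/<volumegroup>-<logicalvolume> device:

mount | egrep ' /(data|catalog) '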

 

13. Unmount /data and /catalog:

umount /data

umount /catalog

Note: If you get a "device is busy" error, ensure that no terminal sessions or applications are accessing these directories.

 

14. Re-establish non-LVM storage for /data and /catalog. There are three approaches:

 

EXISTING NON-LVM FILE SYSTEM: After you unmount /data and /catalog, the now-empty mount-point directories reside on the same partition as the root of the Linux file system. If this partition meets the sizing requirements and is where you want to keep your data going forward, remove the lines for the /data and /catalog directories from /etc/fstab:

vim /etc/fstab

 

OR


CONVERT EXISTING LVM FILE SYSTEM TO NON-LVM: If you want to convert your LVM partitions to non-LVM partitions, complete the following steps, using the information from step 12 (a worked example with placeholder names appears after the three approaches):

a. lvremove /dev/<lvmvolumegroup>/<lvmlogicalvolume>

b. vgremove <lvmvolumegroup>

c. pvremove /dev/<sdaX>

d. mkfs.ext3 /dev/sdaX

e. Add entries in /etc/fstab such as the following:

/dev/sdaX      /catalog                  ext3        defaults 0 0

/dev/sdaY      /data                       ext3        defaults 0 0

 

OR


NEW NON-LVM FILE SYSTEM: If you want to use a new non-LVM partition, format the file system and add the mount points for /data and /catalog:

a. mkfs.ext3 /dev/sdaX

b. Add entries to /etc/fstab such as the following:

/dev/sdaX         /catalog               ext3       defaults 0 0

/dev/sdaY         /data                    ext3       defaults 0 0
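
As a worked example of the CONVERT EXISTING LVM FILE SYSTEM approach above, with placeholder names (volume group vg_dr, logical volume lv_data, physical device /dev/sdb1; substitute the names recorded in step 12):

lvremove /dev/vg_dr/lv_data       # remove the logical volume
vgremove vg_dr                    # remove the volume group
pvremove /dev/sdb1                # remove the LVM label from the physical device
mkfs.ext3 /dev/sdb1               # create a plain ext3 file system
echo "/dev/sdb1   /data   ext3   defaults 0 0" >> /etc/fstab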

 

15. Remount all filesystems:

mount -a

 

16. Move the data from the temporary directories back into the /data and /catalog directories that Vertica expects:

a. mv /tmp_data/drdata /data

b. mv /tmp_catalog/drdata /catalog

 

17. Ensure that the size of the /data directory matches the size reported by step 4.c.:

du -ch /data | grep -i total

 

18. Ensure that the size of the /catalog directory matches the size reported by step 8.c.:

du -ch /catalog | grep -i total

 

19. Restart the database:

a. su - dradmin

b. Start the database using /opt/vertica/bin/adminTools

Note: Starting the database can take several minutes.
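
As with stopping the database, the start_db tool offers a non-interactive start (same password assumption as before):

/opt/vertica/bin/adminTools -t start_db -d drdata -p dbpassword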

 

20. Verify that the database is running:

a. su - dradmin

b. /opt/vertica/bin/adminTools

c. Select "View Database Cluster State" and verify that the database state is "UP".

 

21. Restart Data Aggregator:

a. ssh da_hostname -l root

b. /etc/init.d/dadaemon start

c. /etc/init.d/dadaemon status

 

22. Start each Data Collector instance:

a. ssh dc_hostname -l root

b. /etc/init.d/dcmd start

c. /etc/init.d/dcmd status

 


Data Repository - Cluster

Important! Back up Data Repository before proceeding. Make sure that no scheduled backups will run during this time.

 

Assumptions:

  • The data directory is /data.
  • The catalog directory is /catalog.
  • LVM manages the data and catalog directories.
  • The database administrative user is dradmin.
  • The database is named drdata.

 

To proceed with the migration, do the following steps:

1. Stop each Data Collector instance:

a. ssh dc_hostname -l root

b. /etc/init.d/dcmd stop

c. /etc/init.d/dcmd status

 

2. Stop Data Aggregator:

a. ssh da_hostname -l root

b. /etc/init.d/dadaemon stop

c. /etc/init.d/dadaemon status

 

Steps to Migrate a Node In a Cluster


Important! Do the following steps as the root user, unless otherwise specified.


Do the following steps for each node in the cluster, completing all of steps 1 through 15 on one node before you move on to the next node.


Important! Use adminTools to verify that the database is running.

 

1. Make note of the IP address for the current node:

ifconfig
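
On systems where ifconfig is not available, ip addr provides the same information:

ip -4 addr show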

 

2. As the dradmin user, access adminTools:

a. su - dradmin

b. /opt/vertica/bin/adminTools

 

3. Stop Vertica on the host:

a. Navigate to "Advanced Tools Menu". Press enter.

b. Navigate to "Stop Vertica on Host". Press enter.

c. Select the appropriate host IP address as found in step 1 in the section, "Steps to Migrate a Node In a Cluster". Press Enter.

d. Navigate to "Main Menu". Press enter.

e. Navigate to "Exit". Press enter.

 

4. Switch back to the root user:

exit

 

5. Verify that the following command outputs "root":

whoami

 

6. Remove the files from the /data directory:

rm -rf /data/drdata

 

7. Remove the files from the /catalog directory:

rm -rf /catalog/drdata

 

8. Record the output of the following commands (you will use this information in step 11):

a. mount

b. cat /etc/fstab

 

9. Unmount the /data LVM directory:

umount /data

 

10. Unmount the /catalog LVM directory:

umount /catalog

 

 

11. Re-establish non-LVM storage for /data and /catalog. There are three approaches:

 

EXISTING NON-LVM FILE SYSTEM: After you unmount /data and /catalog, the now-empty mount-point directories reside on the same partition as the root of the Linux file system. If this partition meets the sizing requirements and is where you want to keep your data going forward, remove the lines for the /data and /catalog directories from /etc/fstab:

vim /etc/fstab

 

OR

 

CONVERT EXISTING LVM FILE SYSTEM TO NON-LVM: If you want to convert your LVM partitions to non-LVM partitions, complete the following steps, using the information from step 8 (a worked example with placeholder names appears in the single-node procedure):

a. lvremove /dev/<lvmvolumegroup>/<lvmlogicalvolume>

b. vgremove <lvmvolumegroup>

c. pvremove /dev/<sdaX>

d. mkfs.ext3 /dev/sdaX

e. Add entries in /etc/fstab such as the following:

/dev/sdaX      /catalog                  ext3        defaults 0 0

/dev/sdaY      /data                       ext3        defaults 0 0

 

OR


NEW NON-LVM FILE SYSTEM: If you want to use a new non-LVM partition, format the file system and add the mount points for /data and /catalog:

a. mkfs.ext3 /dev/sdaX

b. Add entries to /etc/fstab such as the following:

/dev/sdaX         /catalog               ext3       defaults 0 0

/dev/sdaY         /data                    ext3       defaults 0 0

 

12. Remount all file systems:

mount -a

 

13. Create the drdata folder with correct permissions within /data and /catalog:

a. mkdir -p /data/drdata

b. mkdir -p /catalog/drdata

c. chown -R dradmin:verticadba /data

d. chown -R dradmin:verticadba /catalog
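
A quick check that ownership is correct before restarting the node (expected owner dradmin, group verticadba):

ls -ld /data/drdata /catalog/drdata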

 

14. Restart Vertica on the host:

a. su - dradmin

b. /opt/vertica/bin/adminTools

c. Use the down arrow key to navigate to "Restart Vertica on host". Press enter.
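
If you prefer the command line, the restart_node tool should achieve the same result (a sketch; node_ip is a placeholder for the IP address from step 1, and you can omit -p if the database has no password):

/opt/vertica/bin/adminTools -t restart_node -s node_ip -d drdata -p dbpassword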

 

15. Continue to monitor adminTools. The status for the current node will remain as "Recovering" while the data is rebuilt. Do not continue until the database is back "UP". It can take a considerable amount of time for the database to transition to the "UP" state.

a. Select "View Database Cluster State". Press enter.

b. Press enter to escape to the Main Menu.

 

After the database is back up, repeat steps 1-15, "Steps to Migrate a Node In a Cluster", for the next node. Continue through these steps until all Data Repository nodes are migrated off LVM.

 

After you complete the steps in the section "Steps to Migrate a Node In a Cluster" for all Data Repository nodes, do the following steps:

1. Log in to any Data Repository node:

su - dradmin

/opt/vertica/bin/vsql -U dradmin -w drpass

 

2. Run the following vsql commands to re-establish custom application settings:

a. SELECT set_config_parameter('MaxClientSessions',1024);

b. SELECT set_config_parameter('StandardConformingStrings','0');
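
To confirm that the settings took effect, a query against the CONFIGURATION_PARAMETERS system table should show the new values (a sketch; column names can vary slightly between Vertica versions):

SELECT parameter_name, current_value FROM configuration_parameters WHERE parameter_name IN ('MaxClientSessions', 'StandardConformingStrings');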

 

3. Start Data Aggregator:

a. ssh da_hostname -l root

b. /etc/init.d/dadaemon start

c. /etc/init.d/dadaemon status

 

4. Start all Data Collector instances:

a. ssh dc_hostname -l root

b. /etc/init.d/dcmd start

c. /etc/init.d/dcmd status


Comments

Jun 23, 2015 03:19 AM

1. Vertica has told us that only the partition(s) where the data and catalog directories exist must not be on LVM. The Vertica tools can be on an LVM partition.

2. The reason we give instructions for both data and catalog is that some customers may put them on separate disks/LVM partitions. We wanted to make sure customers covered both areas. If they are on the same partition, you can do it all as one task.

Oct 01, 2014 09:22 PM

If we can't use LVM, what does HP recommend? 
