Layer7 API Management

 View Only
  • 1.  MySQL 8 transaction-based replication with Global Transaction Identifiers

    Posted Sep 22, 2021 11:18 AM
    Has anyone tried using the new MySQL 8 replication between Gateway clusters?
    It sounds more reliable than the binary log position-based replication we have been using.


  • 2.  RE: MySQL 8 transaction-based replication with Global Transaction Identifiers

    Posted Sep 24, 2021 10:29 AM
    My apologies - this is in the documentation at https://techdocs.broadcom.com/us/en/ca-enterprise-software/layer7-api-management/api-gateway/congw-10-1/reference/clone-a-replication-slave-from-a-live-master-using-gtid-and-mysqldump.html


  • 3.  RE: MySQL 8 transaction-based replication with Global Transaction Identifiers

    Posted Sep 30, 2021 01:36 PM

    We are in the process of cutting over to Gateway 10 using MySQL 8 and GTID replication.

    The replication seems much more stable and resilient to replica (slave) outages, and recovers without any intervention when restarted.

    I left the secondary MySQL process down over the weekend and restarted it on Monday without issues. I also killed MySQL without warning to try to mess up replication. It has been pretty solid.

    Since replication is really outside of Gateway operation, I don't anticipate issues with the Gateway cluster itself.

    One thing you will need to learn and practice if you go to GTID replication is how to drop your database on the replica and reimport it, as the CA/Broadcom scripts are not going to work (create_slave.sh etc. will not work).

    If you're interested, I can share some notes on what to set in MySQL to drop the database on the secondary DB server and import from the primary (or vice versa).

    I am in the process of rewriting the replica-related bash script functionality in Ansible, so instead of sending updated createsql8_slave.sh scripts to all gateways we can just run an Ansible playbook to fix replication.
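    For reference, the GTID prerequisites boil down to a few my.cnf settings on both nodes; a minimal sketch (values are illustrative, not taken from the Gateway's shipped my.cnf):

```
[mysqld]
server_id                = 1            # must be unique on each server
log_bin                  = mysql-bin    # binary logging is still required
gtid_mode                = ON
enforce_gtid_consistency = ON
log_slave_updates        = ON           # named log_replica_updates from MySQL 8.0.26
```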




  • 4.  RE: MySQL 8 transaction-based replication with Global Transaction Identifiers

    Posted Oct 01, 2021 06:21 PM

    Question for CA/Broadcom:

    I now see (since I posted the above reply) in the documentation for Gateway 10.1, there is reference to the create_slave.sh script.

    Has this script been updated for GW10 and GTID replication? 




  • 5.  RE: MySQL 8 transaction-based replication with Global Transaction Identifiers

    Posted Oct 01, 2021 06:23 PM


    Very interested.

    It seems to be more reliable and it supports multiple replication targets, so we can have a local database for availability, and a remote one for DR.
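    Fanning out to both a local availability replica and a remote DR replica just means pointing each replica at the same primary with GTID auto-positioning; a minimal sketch (host name is a placeholder):

```sql
-- Run on each replica (local HA node and remote DR node alike):
CHANGE MASTER TO MASTER_HOST='primary.example.com', MASTER_AUTO_POSITION = 1;
START REPLICA;
```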

     

    I agree, establishing replication state at startup and after a disconnection is always a problem with disaster recovery, but no different in Layer7 than in any other DR scenario.

     

    Regards,

     

    Philip Mead

    Email: Philip_Mead@federal.dell.com

    Phone: +1-703-439-9841

     






  • 6.  RE: MySQL 8 transaction-based replication with Global Transaction Identifiers

    Posted Oct 07, 2021 02:21 PM
    Edited by Mark Stewart Oct 07, 2021 02:28 PM


  • 7.  RE: MySQL 8 transaction-based replication with Global Transaction Identifiers

    Posted Oct 07, 2021 02:27 PM

    Trying this post again! Not sure why the one above got merged.

    This is how I drop the database on the secondary, take a mysqldump on the primary, import the dump on the secondary, and restore replication. There may be better ways, but this has worked for me every time.

    I hope the new create_slave.sh script will do this for us. I still plan to duplicate the below steps in Ansible. 

    I have also downloaded the Gateway 10.1 files but haven't had a chance to look for (or at) the .sh script.

    These are just my high-level reminder build notes, so if you don't understand something, let me know and I will elaborate (or if I forgot or broke something :D).

    They should be pretty easy to follow if you have been looking at GTID.  I did clarify some of it.


    This assumes cloning from the primary (e.g. testprimaryserver) to the secondary (e.g. testsecondaryserver).

    As root on the RHEL command line on testprimaryserver (the good database):

    mysql -e 'stop replica;'

    mysqldump --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob --port=3306 --user=repluser --password > mysqlbackup_dump.sql

    scp mysqlbackup_dump.sql to the testsecondaryserver Gateway.

      

    On the Gateway (testsecondaryserver), to restore the DB:
    After dumping from the primary node to mysqlbackup_dump.sql and copying the file to the secondary node, make a backup of the secondary database first if needed!

    ssh to the testsecondaryserver gateway, cd to the directory with the mysqlbackup_dump.sql file you copied above, and log into the mysql shell.


    in mysql shell: 

    mysql> stop replica;

    mysql> show databases;       ##(to get DB name for DROP DATABASE statement below)

    mysql>  SET sql_log_bin = 0;

    mysql>  DROP DATABASE IF EXISTS testssgDB;        ##(DB name found above)

    mysql>  SET sql_log_bin = 1;

    mysql>  source mysqlbackup_dump.sql

    mysql>  CHANGE MASTER TO MASTER_HOST="testprimaryserver", MASTER_AUTO_POSITION = 1;     ## (Adjust hostname to your primary)

     

    On BOTH the primary and the secondary, while replication is stopped, in the mysql shell:

    mysql>  reset master;

    mysql>  reset replica;

    mysql>  start replica;

    mysql> show replica status\G       ##(no trailing ";" -- \G already terminates the statement)

     

    Should now have successful replication again.

    Once it's all good and working, you can delete the dump on both the secondary and the primary.

    On the RHEL CLI:

    rm -f /tmp/mysqlbackup_dump.sql
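    The steps above can be wrapped in one script; a minimal sketch, where the host name, DB name, user, and dump path are placeholders for your environment (this is not the Broadcom create_slave.sh). It dry-runs by default, printing each command instead of executing it, so you can review before setting EXECUTE=1:

```shell
#!/usr/bin/env bash
# Rebuild a GTID replica from the primary, following the steps in this thread.
# All names below are placeholders -- adjust for your environment.
# Dry run by default: set EXECUTE=1 to actually run the commands.
set -euo pipefail

PRIMARY_HOST="testprimaryserver"
DB_NAME="testssgDB"
DUMP_FILE="/tmp/mysqlbackup_dump.sql"

run() {
  if [ "${EXECUTE:-0}" = "1" ]; then
    "$@"
  else
    echo "WOULD RUN: $*"
  fi
}

# 1. On the primary: take a GTID-aware dump of all databases.
run mysqldump --all-databases --flush-privileges --single-transaction \
    --flush-logs --triggers --routines --events --hex-blob \
    --port=3306 --user=repluser --password --result-file="$DUMP_FILE"

# (scp the dump file to the secondary between steps 1 and 2)

# 2. On the secondary: drop the old DB without binlogging it, then re-import.
run mysql -e "STOP REPLICA;"
run mysql -e "SET sql_log_bin = 0; DROP DATABASE IF EXISTS $DB_NAME; SET sql_log_bin = 1;"
run mysql -e "SOURCE $DUMP_FILE;"

# 3. Point the secondary at the primary and restart replication.
run mysql -e "CHANGE MASTER TO MASTER_HOST='$PRIMARY_HOST', MASTER_AUTO_POSITION = 1;"
run mysql -e "RESET MASTER; RESET REPLICA; START REPLICA;"
```

    Steps 1 and 2/3 run on different machines, so in practice you would run the script twice (or split it) rather than on one host as written; the dry-run mode is just to sanity-check the commands.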