25.6.9 Importing Data Into MySQL Cluster
It is common when setting up a new instance of NDB Cluster to need to import data from an existing NDB Cluster, a standalone instance of MySQL, or some other source. This data is most often available in one or more of the following formats:
- An SQL dump file such as produced by mysqldump or mysqlpump. This can be imported using the mysql client, as shown later in this section.
- A CSV file produced by mysqldump or another export program. Such files can be imported into NDB using LOAD DATA INFILE in the mysql client, or with the ndb_import utility provided with the NDB Cluster distribution; a brief sketch of both methods follows this list. For more information about the latter, see Section 25.5.13, “ndb_import — Import CSV Data Into NDB”.
- A native NDB backup produced using START BACKUP in the NDB management client. To import a native backup, you must use the ndb_restore program that comes as part of NDB Cluster; a sketch of its invocation also follows this list. See Section 25.5.23, “ndb_restore — Restore an NDB Cluster Backup”, for more about using this program.
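The following is a rough sketch of the CSV and native backup methods. The database name mydb, table name t1, file paths, and management server host mgmd_host are all hypothetical, and the field and line terminator options must match the actual format of your files; note also that the server's secure_file_priv setting may restrict where LOAD DATA INFILE can read files from.

LOAD DATA INFILE '/var/lib/mysql-files/t1.csv'
  INTO TABLE mydb.t1
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n';

$> ndb_import mydb /path/to/t1.csv \
       --ndb-connectstring=mgmd_host \
       --fields-terminated-by="," --fields-optionally-enclosed-by='"'

$> ndb_restore --ndb-connectstring=mgmd_host --nodeid=1 --backupid=1 \
       --restore-meta --restore-data --backup-path=/path/to/BACKUP/BACKUP-1

When using ndb_restore, the table metadata (--restore-meta) is normally restored only once, while the data (--restore-data) is restored once for each data node that took part in the original backup.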
When importing data from an SQL file, it is often not necessary to enforce transactions or foreign keys, and temporarily disabling these features can speed up the import process greatly. This can be done using the mysql client, either from a client session, or by invoking it on the command line. Within a mysql client session, you can perform the import using the following SQL statements:
SET ndb_use_transactions=0;
SET foreign_key_checks=0;
source path/to/dumpfile;
SET ndb_use_transactions=1;
SET foreign_key_checks=1;
When performing the import in this fashion, you must enable ndb_use_transactions and foreign_key_checks again following execution of the mysql client's source command. Otherwise, later statements in the same session may also be executed without enforcing transactions or foreign key constraints, which could lead to data inconsistency.
From the system shell, you can import the SQL file while disabling enforcement of transactions and foreign keys by using the mysql client with the --init-command option, like this:
$> mysql --init-command='SET ndb_use_transactions=0; SET foreign_key_checks=0' < path/to/dumpfile
It is also possible to load the data into an InnoDB table, and convert it to use the NDB storage engine afterwards using ALTER TABLE ... ENGINE=NDB. Keep in mind that, if there are many tables, this requires a separate such operation for each of them; in addition, if foreign keys are used, you must order the ALTER TABLE statements carefully, because foreign keys do not work between tables using different MySQL storage engines.
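As a minimal sketch of such a conversion for two tables linked by a foreign key, the statements might look like the following. The table and constraint names are hypothetical, and the approach assumed here is to drop the constraint before converting and re-create it afterwards, so that the foreign key never spans two different storage engines:

ALTER TABLE mydb.child DROP FOREIGN KEY fk_child_parent;  -- hypothetical constraint name
ALTER TABLE mydb.parent ENGINE=NDB;
ALTER TABLE mydb.child ENGINE=NDB;
ALTER TABLE mydb.child
  ADD CONSTRAINT fk_child_parent FOREIGN KEY (parent_id) REFERENCES mydb.parent (id);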
You should be aware that the methods described previously in this section are not optimized for very large data sets or large transactions. Should an application really need big transactions or many concurrent transactions as part of normal operation, you may wish to increase the value of theMaxNoOfConcurrentOperations data node configuration parameter, which reserves more memory to allow a data node to take over a transaction if its transaction coordinator stops unexpectedly.
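This parameter is set in the [ndbd default] section of the cluster's config.ini file. The value shown below is purely illustrative and should be sized according to the largest transactions you expect; changing it requires a rolling restart of the data nodes before the new value takes effect.

[ndbd default]
# Illustrative value only; roughly one operation record is needed
# for each row read, updated, or deleted in a transaction on this node.
MaxNoOfConcurrentOperations=262144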
You may also wish to do this when performing bulk DELETE or UPDATE operations on NDB Cluster tables. If possible, try to have applications perform these operations in chunks, for example, by adding LIMIT to such statements.
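As a simple sketch of the chunked approach (the table and column names are hypothetical), an application might repeat a statement such as the following until the server reports that zero rows were affected:

DELETE FROM mydb.t1
  WHERE created_at < '2024-01-01'
  LIMIT 10000;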
If a data import operation does not complete successfully, for whatever reason, you should be prepared to perform any necessary cleanup, including possibly one or more DROP TABLE statements, DROP DATABASE statements, or both. Failing to do so may leave the database in an inconsistent state.
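For example, cleanup after a failed import might consist of statements along the following lines, with the database and table names again being hypothetical:

DROP TABLE IF EXISTS mydb.t1;
DROP DATABASE IF EXISTS mydb;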