Data migration between databases is a routine task for database administrators, and bulk data import is the method of choice when handling large amounts of data. It involves loading data from a file into a SQL Server table using the BCP utility, the SQL BULK INSERT statement, the command line, or SQL Server export/import tools such as dbForge Data Pump.

In this article, we’ll look at how to use dbForge Data Pump and the PowerShell command line to bulk transfer data from MS Access tables to SQL Server.

When moving large datasets, manual data entry can take a long time. Bulk data import is preferable because it transfers data between sources quickly and minimizes the possibility of data entry errors, since no record has to be typed in by hand.

You can also automate and schedule bulk data import to maintain regular data synchronization and keep data current. In addition, data import lets you transfer data between tables in a variety of formats.

As you can see, bulk data import has a lot of advantages. Let's begin with a quick look at how it can be implemented with the SQL BULK INSERT statement and the BCP utility.

SQL BULK INSERT vs BCP utility

The BULK INSERT statement inserts large amounts of data from an external file into a SQL Server table or view. It is well suited for situations in which data loading speed and efficiency are crucial.

The syntax of the statement is as follows:

BULK INSERT table_name
FROM 'path_to_file'
WITH (options);

where:

  • table_name is the name of the table into which the data will be inserted. The name should be specified in the following format – database_name.schema_name.table_name.
  • path_to_file is the full path to the file with the data you want to import.
  • options stands for the optional parameters that configure the bulk insert operation.
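
For illustration, here is a minimal example that loads a comma-separated file into a table; the table and file names are placeholders, and the FORMAT = 'CSV' option assumes SQL Server 2017 or later:

BULK INSERT MyDatabase.dbo.MyTable
FROM 'D:\datafile.csv'
WITH (
    FORMAT = 'CSV',          -- parse the file as CSV (SQL Server 2017 and later)
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = ',',   -- fields are separated by commas
    ROWTERMINATOR = '\n'     -- each row ends with a newline
);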

Another popular method for bulk data import is the bcp command-line utility. BCP stands for Bulk Copy Program. It lets users quickly import and export data between files and SQL Server tables and views.

BCP is designed for high-performance data transfer. Because it handles large datasets efficiently, it is frequently used for data migration, data warehousing, and ETL (Extract, Transform, Load) processes. BCP lets you specify a variety of options, including the data format, field and row terminators, and authentication details, to tailor the import or export to your needs.

The following is an example of a BCP command that imports data from a CSV file into a SQL Server table:

bcp MyDatabase.dbo.MyTable in datafile.csv -c -t',' -T

where:

  • MyDatabase.dbo.MyTable is the target table into which the data is imported. In this example, data is loaded into the MyTable table in the dbo schema of the MyDatabase database.
  • in specifies an import operation, that is, copying data from an external file into the specified table.
  • datafile.csv is the external CSV file from which you want to import data.
  • -c specifies the character data format, which is used for importing CSV files and other text-based data sources.
  • -t ',' sets the field terminator used in the CSV file. In this case, it is a comma (,), which is common for CSV files. This parameter tells BCP how to recognize the boundaries between fields in the data.
  • -T tells BCP to connect to SQL Server with a trusted connection (Windows authentication).
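
For comparison, the same utility can also export data: replacing in with out copies the table to a file. A minimal sketch with a placeholder server name:

bcp MyDatabase.dbo.MyTable out exportfile.csv -c -t',' -T -S localhost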

The BULK INSERT and BCP methods both handle bulk data operations, but they do so in different ways. Let's examine their similarities and differences.

Similarities:

  • Efficient transfer of large volumes of data into SQL Server tables.
  • Optimized for high-performance data loading.
  • Support for data import from external sources into SQL Server tables.

Differences:

  • BCP is a command-line tool that can both import data into and export data from SQL Server.
    BULK INSERT is a Transact-SQL statement executed by SQL Server, and it is limited to data import.
  • BCP suits database administrators and developers who are comfortable with command-line tools; it may require additional setup and configuration.
    BULK INSERT is more convenient for SQL Server users and database developers who are used to writing SQL queries.
  • BCP can raise security concerns because login credentials may have to be passed as command-line arguments.
    BULK INSERT can be more secure, especially when it uses the SQL Server login and the security features built into the SQL Server database.

Introduction to Data Pump

dbForge Data Pump for SQL Server is your go-to option when you need to export and import data between external files and SQL Server databases easily and quickly. It is a powerful and flexible data migration add-in for SQL Server Management Studio that transfers data between different database systems and populates databases with data from external sources, regardless of their formats or structures. The tool supports the export and import of data in popular formats such as Excel, CSV, XML, JSON, Text, and MS Access. Additionally, you can handle recurring scenarios with a template file that stores the settings of an export/import operation.

Let's use dbForge Data Pump to import data from MS Access tables into new SQL Server tables.

Retrieving all tables from an MS Access database

First, prepare the Microsoft Access file with the data that will be imported into SQL Server tables. To get a list of all tables in an MS Access database, run the following SELECT statement:

SELECT name FROM MSysObjects WHERE type = 4

where:

  • name specifies the column to retrieve from the MSysObjects table. The name column stores the names of tables, queries, and other database objects.
  • MSysObjects is a Microsoft Access system table that stores metadata about the objects in the database.
  • type = 4 is the filtering condition. In MSysObjects, the type column identifies the kind of object: the value 1 denotes local tables, 4 linked ODBC tables, and 6 linked Access tables. The query therefore returns only the rows whose type column equals 4, so adjust the value to the kind of tables your database contains.
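
Note that a single type value returns only one kind of table. If you want to list all user tables regardless of kind, a common variant (shown here as a sketch) filters on several type values and excludes system objects:

SELECT name FROM MSysObjects
WHERE type IN (1, 4, 6) AND Left(name, 4) <> 'MSys'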

Once finished, save the file with the .accdb extension.

After that, proceed to importing the data with dbForge Data Pump.

Let's make sure the tables that will receive the imported data are empty. To do this, open SSMS and run SELECT statements against them.
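
For example, a quick check against one of the target tables might look as follows (the Athletes table name is hypothetical; Olympic_Games_Dev is the database used later in the script):

SELECT TOP (10) * FROM Olympic_Games_Dev.dbo.Athletes;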

As you can see, the tables are currently empty, so we can move on to filling them with data.

Creating a template with import settings

To make things simple, we'll use the Data Import wizard in dbForge Data Pump to create a template with predefined import settings. Templates save time and effort because you don't have to configure the import settings manually for each operation. Instead of starting from scratch, you reuse a template with the desired settings, which streamlines the import procedure and reduces the possibility of human error.

Before you start, download and install the Microsoft Access Database Engine, which enables data transfer between Microsoft Access files and applications outside Microsoft Office, such as Microsoft SQL Server. Otherwise, the Data Import wizard will display an error.

Once all the necessary steps have been taken, right-click the required database in SSMS Object Explorer and choose Data Pump > Import Data to launch the Data Import wizard.

In the wizard, select the .mdb format and the source file from which the data will be imported, and click Next.

On the destination page, choose a target connection, a database, and a table to import the data to, as well as a source table. Then click Next.

If the Data Import wizard is launched from Object Explorer, it opens with the connection parameters of the chosen table predefined.

Optionally, you can go through the remaining pages of the wizard to fine-tune the details and settings.

To create a template, click Save Template at the bottom of the wizard, enter the name and location of your template, and then click Save. For proper operation, it is advised to keep the template and all related files on a local drive (D: in our example).

Done! The template is ready to use. Now we can bulk-import data with a PowerShell script.

Automating bulk data import

A PowerShell script is a series of commands that PowerShell executes to carry out various tasks.

Now let's write a PowerShell script that performs the bulk data import step by step.

1. Open the Windows PowerShell Integrated Scripting Environment (ISE) console.

2. Click New Script on the toolbar and enter the following script:

# Define the path to the Microsoft Access database
$dbPath = "D:\og.accdb"

# Path to the dbForge Data Pump command-line tool
$diffToolLocation = "C:\Program Files\Devart\dbForge SQL Tools Professional\dbForge Data Pump for SQL Server\datapump.com"

# Define the provider and the data source
$connectionString = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=`"$dbPath`";"

# Create a new OleDb connection
$connection = New-Object System.Data.OleDb.OleDbConnection($connectionString)

# Open the connection
$connection.Open()

# Get the schema information for the data source
$tables = $connection.GetSchema('Tables')

# Filter out system tables to get only user tables
$userTables = $tables | Where-Object { $_.TABLE_TYPE -eq 'TABLE' -and $_.TABLE_NAME -notlike 'MSys*' }

# Process each table
$userTables | ForEach-Object {
    $tableName = $_.TABLE_NAME
    Write-Output "Processing table: $tableName"
    
    #Command-line string for importing data into your SQL Server database
    $process = Start-Process -FilePath $diffToolLocation -ArgumentList "/dataimport /templatefile:`"D:\imptemplate.dit`" /connection:`"Data Source=<server_name>;Integrated Security=True;User ID=<username>`" /inputfile:`"D:\og.accdb`" /inputtable:$tableName /table:`"Olympic_Games_Dev.dbo.$tableName`" /errormode:abort" #-PassThru -Wait -windowstyle hidden         
    
    # If you need to process the tables one by one to reduce server load, uncomment the following line.
    #Start-Sleep -Seconds 10
}

# Close the connection
$connection.Close()

3. Replace the following arguments with your actual data:

  • $dbPath: The path to the source MS Access file from which the data should be imported.
  • $diffToolLocation: The path to the dbForge Data Pump installation folder.
  • $connectionString: The OLE DB provider connection string used to interact with Microsoft Access databases in the .accdb file format.
  • /templatefile: The path to the .dit template file you created with the Data Import wizard.
  • /connection: The connection parameters of the target database into which you want to import data.
  • /inputfile: The name of the source file from which the data should be imported.
  • /inputtable: The name of the source table to import data from; in the script, it is supplied by the $tableName variable.
  • /table: The full name of the target SQL Server table, including the database and schema, for example, Olympic_Games_Dev.dbo.$tableName.
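
For reference, after substitution, the command that Start-Process assembles for a single table looks roughly like this (the MYSERVER name and the Athletes table are placeholders):

datapump.com /dataimport /templatefile:"D:\imptemplate.dit" /connection:"Data Source=MYSERVER;Integrated Security=True" /inputfile:"D:\og.accdb" /inputtable:Athletes /table:"Olympic_Games_Dev.dbo.Athletes" /errormode:abort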

Because we declared the $tableName variable, all tables in the database are processed without you having to enter their names manually. This streamlines the procedure and reduces the likelihood of errors.

After the script has been written, run it by clicking Run Script on the toolbar.
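
As mentioned earlier, bulk data import can also be scheduled. Below is a minimal sketch that registers a daily Windows scheduled task for the script, assuming it has been saved as D:\bulk-import.ps1 (a hypothetical path):

# A minimal sketch, assuming the script above is saved as D:\bulk-import.ps1
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-NoProfile -ExecutionPolicy Bypass -File "D:\bulk-import.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "BulkAccessImport" -Action $action -Trigger $trigger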

Done! Now return to SSMS and run the SELECT statements to verify that the data has been transferred from MS Access to the SQL Server database tables.

The result shows that the tables have been filled with data.

Conclusion

Bulk data import is a crucial part of database administrators' day-to-day responsibilities. In this article, we reviewed ways to transfer data between databases, specifically how to import Microsoft Access data into SQL Server. We also showed how to import data in bulk using dbForge Data Pump and a PowerShell ISE script: all it takes is creating a template, updating the script's arguments with your actual data, and running the script.
