HiveBrain v1.2.0

Bulk Data Loading and Transaction Log

Submitted by: @import:stackexchange-dba
Tags: bulk, data, loading, transaction, log

Problem

I'm currently working on a project that bulk imports data from flat CSV files: about 18 different files, each loaded into a specific table through a stored procedure.

I followed the steps advised in the Data Loading Performance guide.

The database is in the BULK_LOGGED recovery model to minimize logging. When executing the stored procedure below on a file containing 600,000 rows, I get an error:


Msg 9002, Level 17, State 4, Procedure SP_Import__DeclarationClearanceHistory_FromCSV, Line 34

The transaction log for database is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

(For testing purposes I do a full backup before starting the import.)

Looking at the log_reuse_wait_desc I see the following:


log_reuse_wait_desc: CHECKPOINT. All other imports complete successfully.

Any input on solving this would be welcome.

```
CREATE PROCEDURE [dbo].[SP_Import_DeclarationClearanceHistory_FromCSV]
    @FilePath nvarchar(255)  -- length assumed; the original declaration lost its length specifier
AS
BEGIN
    -- Enable minimal logging for bulk operations (trace flag 610).
    DBCC TRACEON(610)

    -- Create a temporary table for importing the data from the CSV file.
    CREATE TABLE #DeclarationClearanceHistory
    (
        [ItemID] [int] IDENTITY(1, 1) NOT NULL ,
        [CMSDeclarationID] [bigint] NOT NULL ,
        [StatusCode] nvarchar(50) NOT NULL ,   -- nvarchar lengths assumed; stripped in the source
        [SubStatus] nvarchar(50) NULL ,
        [DepartmentCode] nvarchar(50) NULL ,
        [StartDate] [datetime] NULL ,
        [EndDate] [datetime] NULL ,
        PRIMARY KEY CLUSTERED ( [ItemID] ASC )
            WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                   IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                   ALLOW_PAGE_LOCKS = ON ) ON [PRIMARY]
    )
    ON [PRIMARY]

    -- Insert all rows from the CSV into the temporary table using BULK INSERT.
    -- This statement is truncated in the source; the field terminator and code
    -- page are reconstructed (',' and 'ACP' assumed).
    EXEC ('BULK INSERT #DeclarationClearanceHistory
           FROM ''' + @FilePath + '''
           WITH ( FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'',
                  FIRSTROW = 2, KEEPIDENTITY, CODEPAGE = ''ACP'' )')

    -- ... (remainder of the procedure truncated in the source)
END
```

Solution

When you see CHECKPOINT as the log_reuse_wait_desc for a database, it is because no checkpoint has occurred since the last time the log was truncated, so the inactive portion of the log cannot yet be reused.

You can alleviate this issue by manually kicking off a CHECKPOINT command.
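The diagnosis and fix above can be sketched in T-SQL. The database name `YourImportDatabase` is a placeholder for the database doing the bulk import:

```sql
-- Inspect why log space cannot be reused for the affected database.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourImportDatabase';

-- If the wait reason is CHECKPOINT, issue one manually in that database
-- so the inactive portion of the log becomes reusable.
USE YourImportDatabase;
CHECKPOINT;
```

In the BULK_LOGGED and SIMPLE recovery models an automatic checkpoint normally frees the log, but a long-running bulk load can fill the log before one fires; running CHECKPOINT between imports (or before the largest file) avoids that.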

Supporting references:

Factors That Can Delay Log Truncation

Checkpoints and the Active Portion of the Log

Context

StackExchange Database Administrators Q#30734, answer score: 4
