Importing Data (dbimport)

dbimport can import both binary and CSV-formatted data files into the Lawson database. If the import encounters errors, the errors for each table are written to CSV files and listed in log files, which are then collected into a zip file named dbimportErr*.zip (where * is a numeric string that gives the file a unique name). To help avoid some obvious import issues, you can first run dbimport with the -V (validate only) option.

Note: 

When using dbimport to populate data into a Boolean type field, any value other than zero or blank will be imported and will set the Boolean field to true.

To import data

At a Landmark command prompt, type

dbimport -I inputDirectory | inputFile [OPTIONS] dataArea [name]

- or -

dbimport -z zipFile [OPTIONS] dataArea [file]

If you use -C, you can also use the following options:

[-nNF] [-UW] [-D datePattern] [-E encoding] [-m delimiter] [-s separator] [-S decSep] [-T timePattern] [--userfields] [--translations] [--workfile] [--workfilename] [--errorfile errorFileName | --noerrorfile]

If you use -D datePattern, you can also use the following option:

[-L localeName]
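
For example, a CSV import in which dates are in month/day/year format might look something like this (exportDir, dataArea1, and Employee are placeholder names used for illustration):

dbimport -CI exportDir -D MM/dd/yyyy dataArea1 Employee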

Consider the following program options:

-I inputDirectory

(uppercase i)

Import the data from the specified directory or file. If importing from a file, the file must be named after the business class, for example Employee (for a binary data file) or Employee.csv (for a CSV file).

Note: 

It is also possible to export to stdout and then import from stdin using a single hyphen to represent the file. For example, the following would do a dbexport out of dataArea1 and pipe the output into a dbimport to dataArea2. The data is never written to disk in this case.

dbexport -Co - dataArea1 businessClass | dbimport -CI - dataArea2 businessClass

-z zipFile.zip

Import the data from the specified compressed (.zip) file. You can refer to the entire zip file or you can refer to files within a zip file, for example:

someDir/someZipFile.zip/Employee.csv
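
For example, the following command (with a placeholder data area name) would import only the Employee CSV data from inside the zip file:

dbimport -C -z someDir/someZipFile.zip/Employee.csv dataArea1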

-b or --brief Brief mode; prints a summary for each file.
-C or --csv

Read the input from a comma-separated value (CSV) format file.

If you do not use this option, a binary input file will be read.

-e or --haltonerror

Stop processing if a database error occurs.

--firstrow=N Start the import from the specified row.
--helpers Allow helper threads for multiple threads loading a single table. Using this feature can potentially improve the performance of importing large tables. If you use dbimport with multiple threads and this option, when the number of tables remaining to be processed is fewer than the running threads, dbimport will use the now available threads to help in the import of the remaining tables.
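
For example, a run that uses four threads and allows helper threads might look something like this (exportDir and dataArea1 are placeholder names):

dbimport -I exportDir --threads=4 --helpers dataArea1
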
-M nn The threshold in megabytes used to determine when to write large binary objects to a temporary file on disk during the import. The default is 20MB. If the import fails or is likely to fail because of memory issues from large binary objects, lower this threshold. If your system has sufficient resources, you can consider increasing the threshold in order to speed up the import. Specify -1 to disable this option.
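
For example, to lower the threshold to 5 MB for an import that fails because of large binary objects (placeholder names):

dbimport -I exportDir -M 5 dataArea1
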
--ignoredups Ignore duplicate records.
-o or --overwrite Overwrite duplicate records.
-q or --quiet Quiet mode; display only errors; do not print informational messages.
-u or --update Update duplicate records.
-V Validate only. This option enables you to see potential errors. The list of errors will not be exhaustive and may include items that might only be a problem on a specific database. Also, the list does not attempt to discover all the errors that might be encountered when the actual import is attempted. For example, the validate option does not check for duplicates.
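
For example, to validate a CSV import without loading any data (placeholder names):

dbimport -CI exportDir -V dataArea1 Employee
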
-w Do not import into views.
-x transactionSize

Specify the transaction size, in number of records to be held in the buffer before committing or ending transactions.

This setting overrides any value specified for the INSERTBUFSIZE parameter in the db.cfg file.
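
For example, to commit in batches of 1000 records (placeholder names):

dbimport -I exportDir -x 1000 dataArea1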

--noconfigclear Do not fire the configuration clear event. Using this option means the configuration cache will be out of sync with the imported data. This is useful when the system is offline while these utilities are being used; in that case, the system will be restarted anyway, and the cache for configuration data is automatically rebuilt upon access, so processing the event is not needed.
--showtimings Show timings for each file.
--threads=n Run dbimport with the specified number of threads. If you use multiple threads, you may see a performance improvement. However, performance depends on other factors as well. Note that a very large table will be processed in a single thread.
dataArea The data area to import data into.
name A specific business class, module, database space, or business class pattern to execute the command against. Wildcard characters can also be used in order to specify multiple business classes.
-v errorlevel or --errorlevel=errorlevel

Specify errorlevel, tracelevel, and verbose in any order, where:

  • errorlevel = {n}none | {f}atal | {e}rror | {w}arning (warning is the default)

  • tracelevel = 0 through 6, where 0 is no messages and 6 is everything

  • verbose = whether to display the exception stack trace
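
For example, to report only errors and suppress warnings (placeholder names):

dbimport -CI exportDir --errorlevel=error dataArea1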

Options valid with -C
-D dateformat

Specify the date format used for dates in the input file. You need to use this option if the dates are not already in the format yyyyMMdd. When specifying the format, use uppercase M's to indicate the month, and lowercase d's and y's to indicate the day and year, respectively. For example, MM/dd/yyyy is correct but mm/dd/yyyy is not.

For a full explanation of supported formats, see the Java API documentation for the java.text.SimpleDateFormat class.
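
For example, if the dates in the input file look like 31/12/2025 (placeholder names):

dbimport -CI exportDir -D dd/MM/yyyy dataArea1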

-E encoding Specify the encoding of the CSV file. The default is UTF-8, since dbexport creates files with UTF-8 encoding. If your CSV file has a different encoding, use this option. Valid values are ISO-8859-1, windows-1252, and UTF-8.
-F or --force Force data import, even when the headers do not match.
-m delimiter Specify the field delimiter character.
-n or --noheaders Specify that the input file has no field headers. See the descriptions for the -U and -W options.
-N or --normalize Normalize records; enforce strict data type casting.
-O or --iso ISO date and time patterns. This overrides the -D and -T options.
-s separator

Specify the field separator character.

The default separator is a comma (,).

-S decimalSeparator Specify the decimal separator.
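
For example, for a CSV file that separates fields with semicolons and uses a comma as the decimal separator (placeholder names):

dbimport -CI exportDir -s ";" -S "," dataArea1
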
-T timeformat

Specify the time format used for times in the input file. You need to use this option if the times are not already in the format HHmmss. When specifying the format, use uppercase H's to indicate the hour, and lowercase m's and s's to indicate the minute and second, respectively.

For a full explanation of supported formats, see the Java API documentation for the java.text.SimpleDateFormat class.
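
For example, if the times in the input file look like 13:45:30 (placeholder names):

dbimport -CI exportDir -T HH:mm:ss dataArea1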

-U or --unixdata Use UNIX line separators. You do not need to use this parameter if you include headers in the input file. If the input file does not include headers (-n or --noheaders option), be sure to use the correct parameter (either -U or -W, depending on your platform).
-W or --windowsdata Use Windows line separators. You do not need to use this parameter if you include headers in the input file. If the input file does not include headers (-n or --noheaders option), be sure to use the correct parameter (either -U or -W, depending on your platform).
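
For example, a headerless CSV file created on Windows might be imported like this (placeholder names):

dbimport -CI exportDir -n -W dataArea1 Employee
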
--classicnames Use classic names for files and columns.
--userfields

Include user fields in the import from CSV. This can apply in two scenarios:

  • One, you want to import data from a CSV file into a delivered business class, and that business class and the CSV file to import include user-defined fields.

  • Two, you want to import data from a CSV file into a user-defined business class.

In the CSV files, be sure to include a header row with comma-separated field names as well as rows with comma-separated values for the fields.
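
For example, an import that includes user-defined fields might look something like this (placeholder names):

dbimport -CI exportDir --userfields dataArea1 Employee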

--translations Import data (field value) translations. To import data translations, the import file must contain columns in the format fieldName{locale}, for example name{es_ES}. If you create a file with dbexport using its --translations option and the data contains translations, the exported file will include additional columns for them. You can also manually add the necessary columns and data translations to the import file.
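
For example, an import file carrying a Spanish translation of the name field might have a header row like the following (the other field names are hypothetical):

company,employee,name,name{es_ES}
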
--workfile Import workfiles.
--workfilename The name of a workfile to import. The default is the input filename.
--errorfile errorFileName | --noerrorfile

If you do not specify --errorfile errorFileName or --noerrorfile, then rows with errors are written to CSV files along with comment lines describing the errors, all of which are collected into a zip file with the name dbimportErr*.zip (where * is a numeric string to give the file a unique name).

If you specify --errorfile errorFileName, then rows with errors are written to CSV files that are collected into a zip file with the name you provide. You cannot specify the name of an existing file.

If you specify --noerrorfile, then no error files are created.
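
For example, to collect any error rows into a zip file you name yourself (placeholder names):

dbimport -CI exportDir --errorfile employeeImportErr.zip dataArea1 Employee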

Options valid with -D
-L localeName

Specify a locale in the format language[_COUNTRY[_variant]]

For example, en_US

Valid only with the -D option.
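
For example, if the dates in the input file contain Spanish month abbreviations, a locale is needed for -D to parse them (placeholder names; the exact month abbreviations depend on the Java locale data):

dbimport -CI exportDir -D dd-MMM-yyyy -L es_ES dataArea1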