Imports are configurable objects that define how your source data (i.e. data from another system
or file) should be converted and imported into data sets in XLReporting. You can give an import
a name, define the source data, the target data set, the column mapping, any (optional) lookups,
conversions, and data filters, and set the user permissions. Once defined, you can run the
import as often as you want to import new (or update existing) data.
Let's look again at our diagram (which we explained in the introduction):
Imports are crucial to getting well-structured and validated data into XLReporting. They are usually the next thing you set up after you have created your data sets.
Imports provide very powerful functions to convert, validate, and filter the data. This enables you to define data sets that are optimal for reporting purposes, without being constrained to the structure or format of your existing source data.
When you sign up, your account already contains demo data with reports, imports, and data sets for common financial reporting. So you don't have to start from scratch: you'll have working examples which you can use, amend, or expand over time. You can create, change, and delete imports at any point in time.
Some common examples of imports are:
You can create an import in two ways:
Either way, the rest of the process is the same: you enter a name for your new import, set user permissions for it, and define its column mapping. The column mapping of an import is shown in Excel-style with a preview of the converted data:
You can define the settings for the import via these fields:
Click on Source Data to select your source data. You can choose from the following data sources:
After you have selected your source data, you will see a data preview:
If you selected an Excel workbook, you will be able to select which sheet within that workbook you want to import. For Excel workbooks or text files, you can also indicate at which row the data starts (enabling you to ignore empty or title rows in files) and whether your file contains column headers. It is preferable to use files that contain header names, because column names are easier to work with when importing data. If your file does not have header names, the columns are referred to by letters (Excel-style, e.g. A-Z) or by column numbers (1-99).
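As an aside, mapping an Excel-style column letter to a column number follows a simple base-26 scheme. A minimal Python sketch, purely for illustration (this is not XLReporting code):

```python
def column_letter_to_number(letters):
    """Convert an Excel-style column letter (A, B, ..., Z, AA, AB, ...)
    to its 1-based column number."""
    n = 0
    for ch in letters.upper():
        # Treat the letters as digits in base 26, where A=1 and Z=26
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

column_letter_to_number("A")   # -> 1
column_letter_to_number("AB")  # -> 28
```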
Once you have reviewed the data, click on Apply and you can start to work on the column mapping.
When importing from Excel workbooks, you may often find that information is laid out in columns, for example period-by-period amounts. XLReporting enables you to transpose that column data into rows, so you can store it optimally in data sets. Simply select the column names in your source file that you want to transpose:
This will transpose the selected column data, and you can then use the two special TRANSPOSE columns in the column mapping to your data set.
By default, the transpose operation will skip empty values and zeros. If you want to transpose all columns onto all rows even if they are empty, you can do so by selecting the Also include empty option.
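Conceptually, the transpose operation turns each selected column of a source row into its own output row, carrying the column name and its value (analogous to the two special TRANSPOSE columns). A minimal Python sketch of that logic, with hypothetical field names:

```python
def transpose_columns(row, columns, include_empty=False):
    """Turn selected period columns of one source row into multiple rows.

    Each output row gets two extra fields, here called "name" and "value",
    analogous to the special TRANSPOSE columns in the column mapping.
    """
    # Fields that are not being transposed are repeated on every output row
    base = {k: v for k, v in row.items() if k not in columns}
    out = []
    for col in columns:
        value = row.get(col)
        # By default, skip empty values and zeros (mirroring the default
        # behaviour described above); include_empty keeps them all
        if not include_empty and not value:
            continue
        out.append({**base, "name": col, "value": value})
    return out

source_row = {"Account": "40000", "Jan": 100, "Feb": 0, "Mar": 250}
rows = transpose_columns(source_row, ["Jan", "Feb", "Mar"])
# Feb is skipped because its value is zero
```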
Once you have selected your target data set and your source data, XLReporting will try to automatically match the columns in your source data to those in your target data set. You can always edit this where required.
The column mapping is shown in Excel-style with a preview of its data (a sample of the first 100 rows):
If you want to change the order of the columns, just drag them across to the left or right. You can edit a column mapping by clicking on its Mapping header:
You can select the source or other content for each column in your target data set, by choosing from the following:
Once you have selected the source, you can optionally convert or recalculate the source data. You can enter a simple function or an expression with multiple functions and operators. An expression can contain the following elements:
Here is a practical example with a screenshot:
In many cases you might want the user to select some value when they start an import. For example, the company they're about to import, or the period. You can achieve that with the SELECT() function.
The below example uses a SELECT() to present the user with a dropdown list of company codes (using the values from another data set). The selected company code is then imported in the target data set:
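Conceptually, such a SELECT() builds its dropdown from the distinct values in another data set. A minimal Python sketch of that idea (the function name and data shape are illustrative, not XLReporting's API):

```python
def select_options(dataset, column):
    """Collect the distinct values of one column, in order of first
    appearance, to present as dropdown options (like SELECT() does)."""
    seen, options = set(), []
    for row in dataset:
        value = row[column]
        if value not in seen:
            seen.add(value)
            options.append(value)
    return options

companies = [{"Company": "100"}, {"Company": "200"}, {"Company": "100"}]
select_options(companies, "Company")  # -> ["100", "200"]
```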
Another common scenario is to look up some value in another data set, based on data in your import file or some other logic. Let's assume an import file contains local Chart of Account numbers that need to be converted to a central Chart of Accounts.
The below example uses a LOOKUP() to look up the central account code based on the combined company code and local account code in your import file. The looked-up account code is then stored in the target data set:
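The underlying logic of such a LOOKUP() is a keyed mapping from the combined company code and local account code to a central account code. A Python sketch, with a hypothetical mapping table:

```python
# Hypothetical mapping table: (company, local account) -> central account.
# In XLReporting this data would live in another data set.
central_accounts = {
    ("100", "4000"): "40000",
    ("100", "4100"): "41000",
    ("200", "4000"): "40500",
}

def lookup_central_account(company, local_account, default=None):
    """Look up the central account for a combined company + local
    account key, returning a default when no match is found."""
    return central_accounts.get((company, local_account), default)

lookup_central_account("100", "4100")  # -> "41000"
```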
Another common scenario in imports is the need to "ungroup" grouped data, in other words repeat data from the row above where applicable. That will ensure every row has all the relevant data so it can be stored in a data set. The option Repeat previous values in the When empty field will do just that.
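The Repeat previous values behaviour is essentially a forward fill: each empty cell takes the most recent non-empty value above it. A minimal Python sketch:

```python
def repeat_previous(values):
    """Fill empty cells with the most recent non-empty value above them,
    so every row carries complete data (ungrouping grouped data)."""
    filled, last = [], None
    for value in values:
        if value not in (None, ""):
            last = value
        filled.append(last)
    return filled

repeat_previous(["Sales", "", "", "Cost"])
# -> ["Sales", "Sales", "Sales", "Cost"]
```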
When you define an import, you can decide how newly imported data relates to existing content of the data set: do you want to simply add to the existing content, or do you want to completely overwrite all existing content, or do you want to update existing data? Or perhaps you want to selectively replace or delete existing content based on certain criteria?
If you choose to replace or delete data, you need to select one or more columns that will be used to decide which data to replace or delete. The import will use the filter values on these columns (e.g. SELECT, SELECTED, or fixed values) if these are defined, or else the real-time values in the source data, to replace or delete existing data.
A practical example is importing financial data for a given financial period: most users want to be able to import subsequent (updated) versions, but without duplicating anything. To achieve that, you should select Replace in and Replace existing data for the relevant column:
You can select multiple columns, for example company and period. In that case, any existing data for the same company and period will be deleted, before the new data is imported.
The option Delete from is similar in that it deletes matching existing content in the data set, but without importing any new data.
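The replace and delete behaviours can be pictured as key-based set operations: existing rows whose key-column values also occur in the incoming data are removed first, and (for a replace) the incoming rows are then appended. A Python sketch with hypothetical row and column names:

```python
def replace_by_keys(existing, incoming, keys):
    """Replace-style import: delete existing rows whose key values occur
    in the incoming data, then append the incoming rows."""
    incoming_keys = {tuple(row[k] for k in keys) for row in incoming}
    kept = [row for row in existing
            if tuple(row[k] for k in keys) not in incoming_keys]
    return kept + incoming

def delete_by_keys(existing, incoming, keys):
    """Delete-style import: remove matching existing rows without
    importing any new data."""
    incoming_keys = {tuple(row[k] for k in keys) for row in incoming}
    return [row for row in existing
            if tuple(row[k] for k in keys) not in incoming_keys]

existing = [
    {"Company": "100", "Period": "2024-01", "Amount": 10},
    {"Company": "100", "Period": "2024-02", "Amount": 20},
]
incoming = [{"Company": "100", "Period": "2024-01", "Amount": 15}]

replaced = replace_by_keys(existing, incoming, ["Company", "Period"])
deleted = delete_by_keys(existing, incoming, ["Company", "Period"])
```

With company and period as the key columns, re-importing January for company 100 overwrites only that slice of the data set, leaving February untouched.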
Once you have selected the source column, and optionally converted or recalculated its value, you can filter the data. Filtering means that you can exclude rows in your source data from the actual data import.
You can specify a static value, a simple function, or a complex expression and you can use all common operators. You can choose from various filter functions.
The below example filters the source data on the Account column: only rows with accounts between 30000 and 39999 will be imported into the data set:
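Such a filter amounts to a per-row predicate: rows for which it returns false are excluded from the import. A minimal Python sketch of the account-range filter above:

```python
def keep_row(row):
    """Keep only rows whose account falls in the 30000-39999 range."""
    account = int(row["Account"])
    return 30000 <= account <= 39999

rows = [{"Account": "25000"}, {"Account": "31000"}, {"Account": "39999"}]
filtered = [row for row in rows if keep_row(row)]
# -> [{"Account": "31000"}, {"Account": "39999"}]
```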
You can also use SELECT() to present the user with a dropdown list of values.
When defining an import, you can use the Save and Actions buttons in the top-right of the screen:
These buttons enable you to do the following:
Click on Actions - Define import batch to combine multiple imports into a batch which you can then run as one single action. This is useful if you want to process related or dependent data before, during, or after a given import.
You can insert or delete imports at any time, and re-order them, using the dropdown menu in the last column.
By default, imports are started in the order that you define them, and processed in parallel (i.e. near-simultaneously). This generally gives the fastest performance, but if certain imports require other imports to be completed first, you can set "Wait in line". When this option is set for an import, it will only be started and processed once all its predecessor imports have completed.
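One way to picture this scheduling: imports run in parallel in "waves", and an import marked Wait in line starts a new wave that only begins once all earlier imports have finished. A simplified Python sketch (the data shape is hypothetical, not XLReporting's internal model):

```python
def schedule_batch(imports):
    """Group a batch of imports into waves: imports within a wave run in
    parallel; an import flagged wait_in_line starts a new wave, which
    only begins after all earlier imports have completed."""
    waves = []
    for imp in imports:
        if imp.get("wait_in_line") or not waves:
            waves.append([imp["name"]])
        else:
            waves[-1].append(imp["name"])
    return waves

waves = schedule_batch([
    {"name": "accounts"},
    {"name": "companies"},
    {"name": "transactions", "wait_in_line": True},
])
# -> [["accounts", "companies"], ["transactions"]]
```

Here the transactions import waits for both master-data imports, because it depends on the accounts and companies they load.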
Click on Actions - Review this import to review a number of aspects of this import and its target data set: