Uploading CSV/XLS data
DEPRECATED
This instruction is deprecated. For the up-to-date file upload process, use the Request Upload.
Overview
This step-by-step instruction guides you through the data upload process. The uploaded data can be used with Use Case Enablers.
Scenario
In this instruction you will learn how to:
- Create a Data Storage,
- Upload the desired data,
- Read the uploaded data,
- Delete an unused data storage.
Step 1. Create a storage
attention
If you already created a Data Storage, save its `id` and go directly to Step 2.

To upload the desired files, a storage is needed! Before the upload:
- Use the Create Storage endpoint to send the below request
- Select Create a storage example
- Replace the `dataStorageExample` and the `dataSourceExample` with new names
Payload:

```json
{
  "name": "dataStorageExample",
  "dataMatchingDefinitionId": "string",
  "dataSources": [
    {
      "dataSourceName": "dataSourceExample",
      "dataMapperDefinitionId": "61a5d4dc773d1012a3dfa5aa"
    }
  ],
  "featuresOn": [
    "LOOKUP",
    "DATA_MIRROR"
  ],
  "sharedWithOrganization": false,
  "labels": [
    "string"
  ]
}
```
- Check the above example in the Try It Console now.
Important information:
- Your storage is created if the `status` `CREATED_SUCCESSFULLY` is displayed in the response body.
- The `id` refers to the unique identifier of a newly created storage (for example `id`: `3a10645d8be23f53a20b30bfa936e63d`).
- The storage is empty by default.
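The create-and-check flow above can be sketched in Python. This is a minimal sketch, not taken from the CDQ documentation: the endpoint URL and the `X-API-KEY` header are assumptions, and only the payload mirrors the example above.

```python
import json
import urllib.request

# Hypothetical endpoint; check the CDQ API reference for the real path.
CREATE_STORAGE_URL = "https://api.cdq.com/data-exchange/rest/storages"

def build_create_storage_payload(storage_name, source_name):
    """Build the request body shown in the Payload example above."""
    return {
        "name": storage_name,
        "dataMatchingDefinitionId": "string",
        "dataSources": [
            {
                "dataSourceName": source_name,
                "dataMapperDefinitionId": "61a5d4dc773d1012a3dfa5aa",
            }
        ],
        "featuresOn": ["LOOKUP", "DATA_MIRROR"],
        "sharedWithOrganization": False,
        "labels": ["string"],
    }

def storage_id_if_created(response_body):
    """Return the new storage id only if creation succeeded."""
    data = json.loads(response_body)
    if data.get("status") == "CREATED_SUCCESSFULLY":
        return data["id"]
    return None

payload = build_create_storage_payload("dataStorageExample", "dataSourceExample")
request = urllib.request.Request(
    CREATE_STORAGE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-API-KEY": "<your key>"},
)
# urllib.request.urlopen(request) would send it; the call is omitted here so
# the sketch stays runnable without credentials.
```

The success check mirrors the `CREATED_SUCCESSFULLY` status described above.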
Congratulations!
Your storage is created. Save its `id`; you will use it in the next step.

Step 2. Upload data to the storage
Select the file
To start an import job for a CSV or XLSX file:
- Use the Start Import Job endpoint to send the below request
- Paste the `id` of the existing storage
- Point to the file path
- Check the above example in the Try It Console now.
In the response you will find three important parameters:
- The `id` with the value representing `YOUR JOB ID`, the unique number of every upload job.
- The `progress` of `0`, representing the progress of the job.
- The `status` of `SCHEDULED`, meaning the job is in the job queue.
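A minimal sketch of reading those three parameters from the response body. The field names follow the example above; everything else is illustrative and the actual HTTP call is not shown:

```python
import json

def parse_import_job(response_body):
    """Extract the job id, progress, and status from a Start Import Job
    response; progress arrives as a string and is converted to an int."""
    data = json.loads(response_body)
    return data["id"], int(data["progress"]), data["status"]

job_id, progress, status = parse_import_job(
    '{"id": "YOUR JOB ID", "progress": "0", "status": "SCHEDULED"}'
)
# A SCHEDULED job is queued; keep job_id for polling in the next step.
```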
attention
Save the `YOUR JOB ID` for polling the job status.

Poll for the completion status
To check the import job status:
- Use the Poll Import Job endpoint to send the below request
- Use the `YOUR JOB ID` from the previous step and replace the `jobId` in the request
- Check the above example in the Try It Console now.
- If the job is still running, the status with the progress is displayed:

```json
{
  "id" : "YOUR JOB ID",
  "progress" : "71",
  "status" : "RUNNING"
}
```
info
The import job runtime depends on the size of the imported file. Poll the status regularly to find out when the import is done.
- When the job is done, the `FINISHED` status is displayed:

```json
{
  "id" : "YOUR JOB ID",
  "progress" : "100",
  "status" : "FINISHED"
}
```
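The polling described above can be sketched as a loop. This is a self-contained sketch: `fetch_status` stands in for a Poll Import Job call and is injected as a function, so no real endpoint is assumed.

```python
import json
import time

def wait_for_import(fetch_status, poll_interval_s=2.0, max_polls=100):
    """Poll until the import job reports FINISHED. fetch_status() must return
    the response body of one Poll Import Job call for the given jobId."""
    for _ in range(max_polls):
        job = json.loads(fetch_status())
        if job["status"] == "FINISHED":
            return job
        time.sleep(poll_interval_s)
    raise TimeoutError("import job did not finish in time")

# Example with a fake transport that finishes on the second poll:
responses = iter([
    '{"id": "YOUR JOB ID", "progress": "71", "status": "RUNNING"}',
    '{"id": "YOUR JOB ID", "progress": "100", "status": "FINISHED"}',
])
done = wait_for_import(lambda: next(responses), poll_interval_s=0.0)
```

In real use, `fetch_status` would perform the authenticated HTTP call to the Poll Import Job endpoint.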
success
The data is now fully imported to your storage and can be processed by a wide range of CDQ Solutions.
Step 3. Read data
To read the uploaded data:
- Use the Read Business Partners endpoint to send the below request
- Use the `id` of the existing storage
- Check the above example in the Try It Console now.
The uploaded Business Partners are returned in the `values` object:

```
{
  "values" : [
    {
      businessPartnerData1
    },
    {
      businessPartnerData2
    }
  ]
}
```
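Extracting the records from such a response can be sketched as follows. The `values` field matches the example above; the sample records are simplified placeholders, not real Business Partner data.

```python
import json

def business_partners(response_body):
    """Return the list of Business Partner records from a Read Business
    Partners response; each entry is one uploaded record."""
    return json.loads(response_body)["values"]

records = business_partners('{"values": [{"name": "A"}, {"name": "B"}]}')
# Each dict in records stands in for one full Business Partner record.
```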
attention
`businessPartnerData1` and `businessPartnerData2` represent the sets of parameters for a particular Business Partner. They are used for simplification.

Step 4. Delete storage
GOOD PRACTICES
Delete the storage when you no longer need it.
- Use the Delete Storage endpoint to send the below request
- Use the `id` of the existing storage that is no longer in use and replace the `storageId` in the request
- Check the above example in the Try It Console now.
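The delete step can be sketched by building the request without sending it. The endpoint URL template and the `X-API-KEY` header are assumptions for illustration; check the CDQ API reference for the real path.

```python
import urllib.request

# Hypothetical endpoint template with the storageId path parameter.
DELETE_STORAGE_URL = "https://api.cdq.com/data-exchange/rest/storages/{storageId}"

def build_delete_request(storage_id, api_key):
    """Build (but do not send) an HTTP DELETE for the given storage id."""
    url = DELETE_STORAGE_URL.format(storageId=storage_id)
    return urllib.request.Request(
        url, method="DELETE", headers={"X-API-KEY": api_key}
    )

req = build_delete_request("3a10645d8be23f53a20b30bfa936e63d", "<your key>")
# urllib.request.urlopen(req) would send it; omitted so the sketch runs offline.
```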
Your opinion matters!
We are constantly working on providing an outstanding user experience with our products. Please share your opinion about this tutorial!
Mail our developer-portal team: developer-portal@cdq.com