# Uploading CSV/XLS data

**Deprecated**

Please note that this instruction is deprecated. To follow the up-to-date file upload process, use the [Request Upload](/documentation/instructions/url-upload/) instruction.

**Overview**

This step-by-step instruction guides you through the data upload process. Uploaded data can be used with Use Case Enablers.

**Scenario**

In this instruction you will learn how to:

1. Create a Data Storage,
2. Upload the desired data,
3. Read the uploaded data,
4. Delete an unused Data Storage.

## Step 1: Create a storage

Skip this step if a **Data Storage** was created before; you can reuse it for multiple files. Get the `id` of your **Data Storage** and go directly to [Step 2](#step-2-upload-data-to-the-storage).

A storage is required before any files can be uploaded. To create one:

1. Use the **Create Storage** endpoint to send the request below.
2. Select the **Create a storage** example.
3. Replace `dataStorageExample` and `dataSourceExample` with new names.
4. Check the example in the **Try It Console**.

Important information:

* Your storage is created if `"status": "CREATED_SUCCESSFULLY"` is displayed in the response body.
* The `id` is the unique identifier of the newly created storage (for example, `"id": "3a10645d8be23f53a20b30bfa936e63d"`).
* The storage is empty by default.

**Congratulations**

The Data Storage was successfully created. Use the storage `id` in the next step.

## Step 2: Upload data to the storage

### Select the file

To start the import job for a CSV or XLSX file:

1. Use the **Start Import Job** endpoint to send the request below.
2. Paste the existing storage `id`.
3. Point to the file path.
4. Check the example in the **Try It Console**.

The response contains three important parameters:

* The **id** with the value `YOUR JOB ID` - the unique number of every upload job.
* The **progress**: `0` - the progress of the job.
* The **status**: `SCHEDULED` - the job is in the job queue.

Note down the received `YOUR JOB ID` for polling the job status.

### Poll for the completion status

To check the import job status:

1. Use the **Poll Import Job** endpoint to send the request below.
2. Replace `jobId` with the `YOUR JOB ID` received in the previous step.
3. Check the example in the **Try It Console**.

* If the job is still running, its status and progress are displayed:

```json
{
  "id" : "YOUR JOB ID",
  "progress" : "71",
  "status" : "RUNNING"
}
```

The import job runtime depends on the size of the imported file. Poll the status regularly to find out when the import is done.

* When the job is done, the `FINISHED` status is displayed:

```json
{
  "id" : "YOUR JOB ID",
  "progress" : "100",
  "status" : "FINISHED"
}
```

The data is now fully imported into your storage and can be processed by a wide range of CDQ Solutions. (Steps 1 and 2 can also be scripted; see the sketch after Step 3.)

## Step 3: Read data

To read the uploaded data:

1. Use the **Read Business Partners** endpoint to send the request below.
2. Use the existing storage `id`.
3. Check the example in the **Try It Console**.

In the response, the Business Partner data is displayed in the `values` array:

```json
{
  "values" : [
    { businessPartnerData1 },
    { businessPartnerData2 }
  ]
}
```

Here `businessPartnerData1` and `businessPartnerData2` stand in for the full sets of parameters of the respective Business Partners; they are used for simplification.
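If you prefer scripting to the **Try It Console**, the sketch below chains Steps 1 and 2: it creates a storage, uploads a file, and polls the import job until it finishes. It is a minimal sketch, not an official client: the base URL, the endpoint paths, the `X-API-KEY` header, and the payload fields are assumptions made for illustration here; take the exact paths and schemas from the endpoint pages referenced in the steps above.

```python
import time

import requests  # third-party HTTP client: pip install requests

# All endpoint paths, the auth header, and the payload fields below are
# illustrative assumptions -- take the real ones from the endpoint reference.
BASE_URL = "https://api.cdq.com/data-exchange/rest"  # hypothetical base URL
HEADERS = {"X-API-KEY": "YOUR API KEY"}              # assumed auth header

# Step 1: create a storage (skip if you already have a storage id).
resp = requests.post(
    f"{BASE_URL}/storages",
    headers=HEADERS,
    json={"name": "dataStorageExample", "dataSource": "dataSourceExample"},
)
resp.raise_for_status()
storage = resp.json()
assert storage["status"] == "CREATED_SUCCESSFULLY"
storage_id = storage["id"]  # e.g. 3a10645d8be23f53a20b30bfa936e63d

# Step 2: start the import job by uploading the CSV/XLSX file.
with open("business-partners.csv", "rb") as f:
    resp = requests.post(
        f"{BASE_URL}/storages/{storage_id}/import-jobs",  # hypothetical path
        headers=HEADERS,
        files={"file": f},
    )
resp.raise_for_status()
job_id = resp.json()["id"]  # the job starts as SCHEDULED with progress 0

# Poll until the job reports FINISHED; runtime grows with file size.
while True:
    resp = requests.get(f"{BASE_URL}/import-jobs/{job_id}", headers=HEADERS)
    resp.raise_for_status()
    job = resp.json()
    print(f"progress: {job['progress']}%, status: {job['status']}")
    if job["status"] == "FINISHED":
        break
    time.sleep(5)  # wait between polls instead of hammering the API
```

The five-second pause between polls is an arbitrary choice; for large files a longer interval is usually enough.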
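Reading and cleanup can be scripted the same way. The sketch below reads the imported Business Partners (Step 3) and then deletes the storage once it is no longer in use, which Step 4 below walks through in the console. The paths and the `values` handling are again assumptions for illustration, and the storage `id` is the example value from Step 1.

```python
import requests  # pip install requests

BASE_URL = "https://api.cdq.com/data-exchange/rest"  # hypothetical, as above
HEADERS = {"X-API-KEY": "YOUR API KEY"}              # assumed auth header
storage_id = "3a10645d8be23f53a20b30bfa936e63d"      # the id from Step 1

# Step 3: read the uploaded Business Partners from the storage.
resp = requests.get(
    f"{BASE_URL}/storages/{storage_id}/businesspartners",  # hypothetical path
    headers=HEADERS,
)
resp.raise_for_status()
for partner in resp.json()["values"]:
    # Each entry holds the full parameter set of one Business Partner.
    print(partner)

# Step 4 (detailed below): delete the storage once it is no longer in use.
resp = requests.delete(f"{BASE_URL}/storages/{storage_id}", headers=HEADERS)
resp.raise_for_status()
```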
## Step 4: Delete storage

To clean up a storage that is no longer needed:

1. Use the **Delete Storage** endpoint to send the request below.
2. Replace `storageId` in the request with the `id` of the storage that is no longer in use.
3. Check the example in the **Try It Console**.

## Your opinion matters!

We are constantly working on providing an outstanding user experience with our products. Please share your opinion about this tutorial!