
Pipelines🔗

A pipeline is an easy way to create or edit all the components required to use an endpoint and, if needed, an async job.

Usage🔗

Creation🔗

Datasources🔗

Form fields🔗
  • Datasources: Required. One or more datasources can be selected in the select box.

Endpoint🔗

Configuration of the endpoint. Select between a Direct and a Cache endpoint; a Direct endpoint only needs a slurper, while a Cache endpoint also needs an async job and a collection.

Form fields🔗
  • Endpoint name: Required. Pre-filled with the datasource name (see below for more information). This value is used to pre-fill the other name components.
  • Endpoint name formatted: Required. Pre-filled with the datasource name (see below for more information). This will be the endpoint URL and endpoint name.

The Endpoint name and Endpoint name formatted fields are pre-filled when you select a datasource. The pattern is as follows:

  • For one datasource My new datasource, the endpoint name is My new datasource and the endpoint name formatted is my-new-datasource.
  • For multiple datasources My new datasource 1, My new datasource 2 and My new datasource 3, the endpoint name is My new datasource 1 and 2 more and the endpoint name formatted is my-new-datasource-1-and-2-more.

Both fields can still be edited by the user.
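The formatted name follows a standard slug pattern derived from the endpoint name. The product's exact implementation is not documented here, but a minimal sketch of that kind of slugification could look like this (`format_endpoint_name` is a hypothetical helper, not part of the product):

```python
import re

def format_endpoint_name(name: str) -> str:
    """Hypothetical helper: lowercase the name and collapse runs of
    non-alphanumeric characters into single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower())
    return slug.strip("-")
```

For example, `format_endpoint_name("My new datasource 1 and 2 more")` yields `my-new-datasource-1-and-2-more`, matching the pattern described above.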

Slurper🔗

Form fields🔗
  • Name: Read-only. Pre-filled with the endpoint name. The pattern is the same as for the endpoint name field.
  • Type: Only Python available.
  • Source code: The default source code is:

    def slurp(ctx, data_map):
        data = data_map.get("MY DATASOURCE NAME")
        return data
    
    When a datasource is selected, its name is substituted into the code:
    def slurp(ctx, data_map):
        data = data_map.get("My new datasource")
        return data
    
    If multiple datasources are selected, the code is adapted accordingly.

    The data from each datasource (data_0, data_1, data_2) is combined and made available in the result list.

    def slurp(ctx, data_map):
        data_0 = data_map.get("My new datasource 1")
        data_1 = data_map.get("My new datasource 2")
        data_2 = data_map.get("My new datasource 3")
        result = [
            *data_0,
            *data_1,
            *data_2
        ]
        return result
    
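Since the Source code field is editable, the slurp function can also reshape the data before returning it. A hypothetical variant that tags each record with the datasource it came from (assuming each datasource yields a list of dicts):

```python
def slurp(ctx, data_map):
    # Hypothetical example: merge datasources and record their origin.
    result = []
    for name in ("My new datasource 1", "My new datasource 2"):
        for record in data_map.get(name, []):
            # Copy the record and tag it with its datasource name.
            result.append({**record, "source": name})
    return result
```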

Async job🔗

This form is only visible if Cache endpoint is selected in endpoint form.

Form fields🔗
  • Name: Read-only. Pre-filled with the endpoint name. The pattern is the same as for the endpoint name field.
  • Recurrence: Defines the recurrence of the async job. See the Async job page for more details.
  • Pause: Check to pause the job.

Collection🔗

Form fields🔗
  • Name: Read-only. Pre-filled with the endpoint name. The pattern is the same as for the endpoint name field.
  • Type: The available types are: Normal collection, Upserted collection, Accumulative collection, Versioned collection, Upserted and versioned collection, and Accumulative and versioned collection.

Editing🔗

On each endpoint page you can edit the pipeline.

Limitations🔗

  • In editing mode, the endpoint type and the collection type are not editable.
  • If the endpoint is of type Cache, the Async job form is displayed; otherwise (type Direct) it is hidden.
  • If the datasources change, the Endpoint name and Endpoint name formatted fields are synchronized, as in creation mode. The changes are propagated to the slurper name and async job name fields.
  • Warning: The source code is not updated automatically, to avoid overwriting code already written by the user.