GridScript

Scripting

In GridScript Pipelines, scripting allows you to write and execute custom code inside Code Stages. Each stage can process data from previous stages, perform transformations, or create visual outputs such as tables, charts, or logs.

Why Use Scripting in Pipelines?

  • Automate multi-step data transformations and visualizations.
  • Link multiple datasets and processing stages through a shared context.
  • Prototype analytical workflows that can be reused and extended.
  • Combine code execution with visual tools like charts and tables.
  • Export results or intermediate data for reporting and reuse.

Code Stages and Context

Each Code Stage runs in its own isolated environment and communicates with the pipeline’s shared context object. This context stores all imported datasets and variables created during execution.

For example, if an Import Stage loads a dataset under the field name sales, the next Code Stage can access it directly:

// Access imported data from previous stages
const data = context.sales;

// Transform it (row 0 is the header, so start at index 1)
for (let i = 1; i < data.length; i++) {
  data[i][1] = Number(data[i][1]) * 1.2; // Apply a 20% markup
}

// Write the modified data back to the shared context
context.sales = data;

JavaScript Scripting

JavaScript is the primary language for scripting inside pipelines. Each Code Stage executes asynchronously and supports helper functions for producing visual outputs:

  • table(data) – Display a dataset as an interactive grid.
  • chart(options) – Render a chart using AG Charts.
  • log(message, type) – Print informational or error messages to the stage log.

Example: Creating a Chart from Pipeline Data

const data = context.sales;

chart({
  title: { text: "Sales by Region" },
  data: data.slice(1).map(row => ({
    region: row[0],
    sales: Number(row[1]),
  })),
  series: [{
    type: "bar",
    xKey: "region",
    yKey: "sales",
  }],
  axes: [
    { type: "category", position: "bottom", title: { text: "Region" } },
    { type: "number", position: "left", title: { text: "Sales ($)" } },
  ],
});

You can chain multiple Code Stages to perform sequential transformations — for example, cleaning data in one stage, then visualizing it in the next.
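As a stand-alone illustration of that pattern, the sketch below simulates two chained stages operating on a plain context object. The dataset and stage functions are hypothetical; in a real pipeline each stage lives in its own Code Stage and receives context automatically.

```javascript
// Hypothetical data; in a pipeline this would come from an Import Stage.
const context = {
  sales: [
    ["Region", "Sales"],
    ["North", "100"],
    ["South", "oops"], // malformed value for the cleaning stage to drop
    ["East", "250"],
  ],
};

// Stage 1 cleans the data: keep the header plus rows with numeric sales values.
function cleanStage(ctx) {
  const [header, ...rows] = ctx.sales;
  ctx.sales = [header, ...rows.filter(r => !Number.isNaN(Number(r[1])))];
}

// Stage 2 transforms the cleaned rows by applying a 20% markup.
function markupStage(ctx) {
  for (let i = 1; i < ctx.sales.length; i++) {
    ctx.sales[i][1] = Number(ctx.sales[i][1]) * 1.2;
  }
}

cleanStage(context);
markupStage(context);
console.log(context.sales);
```

Because both stages read and write the same context, the order in which you arrange them in the pipeline determines the result.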

With JavaScript Code Stages you also get access to the TensorFlow.js library, allowing you to create, train, and execute AI/ML models.

Python Scripting

Pipelines also support Python scripting, allowing you to manipulate data using a familiar and expressive syntax. The pipeline automatically synchronizes your changes back to the shared context object.

Example: Transforming Data in Python

data = context["sales"]

for row in data[1:]:
    row[1] = float(row[1]) * 1.5

context["sales"] = data

Using GridScript Python Helpers

To use helper functions like table(), chart(), or log() in Python, you must first import the gridscript library inside your Code Stage:

from gridscript import table, chart, log

data = context["sales"]

# Display a preview of the dataset
table(data)

# Create a simple chart
chart({
    "title": { "text": "Sales Overview" },
    "data": [
        {"region": row[0], "sales": float(row[1])}
        for row in data[1:]
    ],
    "series": [{
        "type": "column",
        "xKey": "region",
        "yKey": "sales",
    }]
})

# Log a message
log("Sales chart generated successfully")

These helpers work the same way as their JavaScript counterparts, allowing you to visualize, inspect, or log pipeline data directly from Python.

With Python Code Stages you get access to the NumPy, pandas, and scikit-learn libraries, allowing you to transform and manipulate data as well as create, train, and execute AI/ML models.

Working with Views

Code stages can generate multiple views within the same execution: tables for data, charts for visualizations, and logs for textual feedback. Each appears directly below the stage for clarity and can be zoomed or exported.

  • Tables can be exported as .csv, .xlsx, or .json.
  • Charts can be downloaded as .png images or exported as JSON.
  • Logs show messages and errors emitted during script execution.

Execution Flow

Pipelines execute from top to bottom. Each stage receives the current context, modifies it, and passes it along. You can run a single stage or the entire pipeline using the Run All command in the Toolbar.
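The same top-to-bottom model can be sketched in plain JavaScript. Here runAll and the stage functions are illustrative stand-ins, not part of the GridScript API:

```javascript
// Run each stage in order, threading the shared context through all of them.
function runAll(stages, context) {
  for (const stage of stages) {
    stage(context); // each stage reads and mutates the shared context
  }
  return context;
}

const result = runAll(
  [
    ctx => { ctx.values = [1, 2, 3]; },                            // acts like an Import Stage
    ctx => { ctx.values = ctx.values.map(v => v * 2); },           // transformation stage
    ctx => { ctx.total = ctx.values.reduce((a, b) => a + b, 0); }, // summary stage
  ],
  {}
);
console.log(result.total); // 12
```

In this model, running a single stage is simply calling one function with the current context, while Run All applies every stage in sequence.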

Saving and Reusing Pipelines

When you save your workspace, all pipeline stages and their scripts are stored automatically. You can also export your entire pipeline as a .gspp file and re-import it later to continue your work or share it with others.

Next Steps

Now that you understand scripting in pipelines, explore the Pipelines page to learn about multi-stage workflows, or visit Project Scripting to see how scripting works within individual datasets.