In GridScript Pipelines, scripting allows you to write and execute custom code inside Code Stages. Each stage can process data from previous stages, perform transformations, or create visual outputs such as tables, charts, or logs.
Each Code Stage runs in its own isolated environment and communicates with the pipeline’s shared context object. This context stores all imported datasets and variables created during execution.
For example, if an Import Stage loads a dataset under the field name sales, the next Code Stage can access it directly:
// Access imported data from previous stages
const data = context.sales;
// Transform it (row 0 is the header row, so start at index 1)
for (let i = 1; i < data.length; i++) {
  data[i][1] = Number(data[i][1]) * 1.2; // Apply a 20% markup
}

// Write the result back to the shared context
context.sales = data;

JavaScript is the primary language for scripting inside pipelines. Each Code Stage executes asynchronously and supports helper functions for producing visual outputs:
table(data) – Display a dataset as an interactive grid.
chart(options) – Render a chart using AgCharts.
log(message, type) – Print informational or error messages to the stage log.

For example, the following stage renders a bar chart from the imported sales data:

const data = context.sales;
chart({
title: { text: "Sales by Region" },
data: data.slice(1).map(row => ({
region: row[0],
sales: Number(row[1]),
})),
series: [{
type: "bar",
xKey: "region",
yKey: "sales",
}],
axes: [
{ type: "category", position: "bottom", title: { text: "Region" } },
{ type: "number", position: "left", title: { text: "Sales ($)" } },
],
});

You can chain multiple Code Stages to perform sequential transformations, for example cleaning data in one stage and then visualizing it in the next, as sketched below.
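The following is a minimal sketch of such a chain, assuming the same sales dataset as above; the filtering rule is purely illustrative:

// Stage 1: drop rows with missing or non-numeric sales values
const rows = context.sales;
const header = rows[0];
const cleaned = rows.slice(1).filter(row => row[1] !== "" && !isNaN(Number(row[1])));

table([header, ...cleaned]); // Preview the cleaned dataset
log(`Removed ${rows.length - 1 - cleaned.length} invalid rows`);

// Pass the cleaned data along via the shared context
context.sales = [header, ...cleaned];

A second Code Stage can then read context.sales and render the chart shown above.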
With JavaScript Code Stages you also get access to the TensorFlow.js library, allowing you to create, train, and execute AI/ML models.
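As a minimal sketch, assuming the library is exposed in the Code Stage as tf and that the stage supports top-level await, a small regression model might look like this:

// Assumption: TensorFlow.js is available in the stage as `tf`
const xs = tf.tensor2d([[1], [2], [3], [4]]); // Inputs
const ys = tf.tensor2d([[2], [4], [6], [8]]); // Targets (y = 2x)

const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

await model.fit(xs, ys, { epochs: 100 }); // Code Stages run asynchronously, so await is available

const prediction = model.predict(tf.tensor2d([[5]]));
log(`Prediction for x = 5: ${(await prediction.data())[0].toFixed(2)}`);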
Pipelines also support Python scripting, allowing you to manipulate data using a familiar and expressive syntax. The pipeline automatically synchronizes your changes back to the shared context object. For example, the following stage applies a 50% markup to the imported sales data:
data = context["sales"]
for row in data[1:]:  # Skip the header row
    row[1] = float(row[1]) * 1.5  # Apply a 50% markup
context["sales"] = dataTo use helper functions like table(), chart(), or log() in Python, you must first import the gridscript library inside your Code Stage:
from gridscript import table, chart, log
data = context["sales"]
# Display a preview of the dataset
table(data)
# Create a simple chart
chart({
"title": { "text": "Sales Overview" },
"data": [
{"region": row[0], "sales": float(row[1])}
for row in data[1:]
],
"series": [{
"type": "column",
"xKey": "region",
"yKey": "sales",
}]
})
# Log a message
log("Sales chart generated successfully")These helpers work the same way as their JavaScript counterparts, allowing you to visualize, inspect, or log pipeline data directly from Python.
With Python Code Stages you get access to the numpy, pandas, and scikit-learn libraries, allowing you to transform and manipulate data as well as create, train, and execute AI/ML models.
Code stages can generate multiple views within the same execution: tables for data, charts for visualizations, and logs for textual feedback. Each appears directly below the stage for clarity and can be zoomed or exported.
Tables can be exported as .CSV, .XLSX, or .JSON files; charts can be saved as .PNG images or exported as JSON.

Pipelines execute from top to bottom. Each stage receives the current context, modifies it, and passes it along. You can run a single stage or the entire pipeline using the Run All command in the Toolbar.
When you save your workspace, all pipeline stages and their scripts are stored automatically. You can also export your entire pipeline as a .gspp file and re-import it later to continue your work or share it with others.
Now that you understand scripting in pipelines, explore the Pipelines page to learn about multi-stage workflows, or visit Project Scripting to see how scripting works within individual datasets.