Tables are managed through the workflow actions in the `core.table.*` namespace.
Use them when your workflows need durable, queryable records such as asset inventories, user allowlists, enrichment results, or investigation evidence.
What tables are good for
- Persisting enrichment data across workflow runs
- Looking up records by a known field such as hostname, email, or indicator
- Searching and exporting structured data for analysts
- Attaching case context to reusable datasets
Common workflow pattern
Most table workflows follow the same lifecycle:
- Create the table once with a schema that fits your data.
- Insert or upsert rows as new events arrive.
- Look up, search, or export rows later from another workflow step.
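To make the lifecycle concrete, here is a minimal sketch of those three steps as workflow actions. The step shape (`ref`, `action`, `args`) and the argument names are assumptions inferred from the input descriptions later on this page, not verified signatures.

```yaml
# Lifecycle sketch: create once, upsert as events arrive, look up later.
# Table, column, and argument names are illustrative.
- ref: create_inventory
  action: core.table.create_table
  args:
    name: asset_inventory
    columns:
      - {name: hostname, type: TEXT, nullable: false}
      - {name: risk_score, type: INTEGER, default: 0}

- ref: upsert_asset
  action: core.table.insert_row
  args:
    table: asset_inventory
    row_data: {hostname: "web-01", risk_score: 42}
    upsert: true   # update the existing row instead of failing

- ref: find_asset
  action: core.table.lookup
  args:
    table: asset_inventory
    column: hostname
    value: web-01
```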
Column schema
`core.table.create_table` takes `columns` as a JSON array of column objects.
This is the same schema you use for tables that you later link to cases.
- `name`: Required string. Use letters, numbers, and underscores, and start with a letter or underscore.
- `type`: Required uppercase string. Use `TEXT`, `INTEGER`, `NUMERIC`, `BOOLEAN`, `DATE`, `TIMESTAMPTZ`, `JSONB`, `SELECT`, or `MULTI_SELECT`.
- `nullable`: Optional boolean. Defaults to `true`.
- `default`: Optional value. It must match the column type.
- `options`: Optional array of strings. Required for `SELECT` and `MULTI_SELECT`, and invalid for other types.
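As a sketch, a `columns` value that exercises each of these fields might look like the following, expressed here in YAML; the equivalent JSON array has the same fields. The column names are illustrative.

```yaml
# Hypothetical columns value for core.table.create_table.
- name: hostname        # required; starts with a letter
  type: TEXT            # required uppercase type
  nullable: false       # optional; defaults to true
- name: risk_score
  type: INTEGER
  default: 0            # optional; must match the column type
- name: status
  type: SELECT
  options: ["new", "triaged", "closed"]  # required for SELECT and MULTI_SELECT
- name: raw_event
  type: JSONB
```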
The `type` values match the custom tables picker. Case custom fields use the same storage type family: `TEXT`, `INTEGER`, `NUMERIC`, `BOOLEAN`, `DATE`, `TIMESTAMPTZ`, `JSONB`, `SELECT`, and `MULTI_SELECT`.
In the case field picker, raw `JSONB` is currently surfaced through the URL kind, and Long text is layered on top of `TEXT`.
Notes
- Use `lookup` or `is_in` when you already know the column and value you need.
- Use `search_rows` when you need broader text search or paginated results.
- Use `download` when you want to export rows as JSON, NDJSON, CSV, or Markdown.
FAQ
How do I insert more than 1000 rows into a table?
`core.table.insert_rows` is best for batch inserts, but you should split large imports into smaller batches first.
Create batches upstream, then run one `insert_rows` action per batch. `var.batch` should be a list of rows that stays within your chosen batch size. This keeps imports predictable and makes retry behavior easier to reason about.
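For instance, a batched import might be sketched like this. The `for_each` loop expression and the upstream `make_batches` step are assumptions for illustration, not built-ins; argument names follow the `insert_rows` inputs documented below.

```yaml
# Hypothetical batched import: make_batches is assumed to return a
# list of row lists, each within your chosen batch size.
- ref: insert_batch
  action: core.table.insert_rows
  for_each: ${{ for var.batch in ACTIONS.make_batches.result }}
  args:
    table: asset_inventory
    rows_data: ${{ var.batch }}  # one batch of rows per iteration
    upsert: true
```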
What column types should I use for tables and case-linked fields?
- Use the documented uppercase type names: `TEXT`, `INTEGER`, `NUMERIC`, `BOOLEAN`, `DATE`, `TIMESTAMPTZ`, `JSONB`, `SELECT`, and `MULTI_SELECT`.
- Use `SELECT` and `MULTI_SELECT` only when you also provide `options`.
- Use `TEXT` or `JSONB` for flexible payloads.
- Case-linked custom fields follow the same storage type family as tables.
core.table.create_table
Create a new lookup table with optional columns.
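A minimal sketch of a create step (argument names are assumptions inferred from the inputs below):

```yaml
- ref: create_allowlist
  action: core.table.create_table
  args:
    name: user_allowlist
    columns:
      - {name: email, type: TEXT, nullable: false}
```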
Inputs
- The name of the table to create.
- List of column definitions. Each item is a JSON object with required `name` and uppercase `type`, plus optional `nullable`, `default`, and `options` fields. Use `options` only with `SELECT` or `MULTI_SELECT`. Default: `null`.
- If true, raise an error if the table already exists. Default: `true`.
Examples
Create and inspect a table

core.table.list_tables
Get a list of all available tables in the workspace.
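A sketch of a listing step; per the inputs below, no arguments are needed:

```yaml
- ref: list_all_tables
  action: core.table.list_tables
```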
Inputs
This action does not take input fields.
Examples
Create and inspect a table

core.table.get_table_metadata
Get a table’s metadata by name. This includes the columns and whether they are indexed.
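A minimal sketch; the `name` argument is an assumption based on the input description:

```yaml
- ref: inspect_allowlist
  action: core.table.get_table_metadata
  args:
    name: user_allowlist  # the table whose columns and indexes you want
```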
Inputs
- The name of the table to get.
Examples
Create and inspect a table

core.table.lookup
Get a single row from a table corresponding to the given column and value.
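A minimal sketch, with the same caveat that argument names are inferred from the input descriptions:

```yaml
- ref: find_user
  action: core.table.lookup
  args:
    table: user_allowlist
    column: email
    value: ${{ TRIGGER.email }}  # illustrative: value taken from the trigger payload
```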
Inputs
- The column to look up the value in.
- The table to look up the value in.
- The value to look up.
Examples
Look up rows

core.table.is_in
Check if a value exists in a table column.
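A minimal sketch (same assumptions as above); the result indicates membership, which is useful for branching:

```yaml
- ref: check_allowlisted
  action: core.table.is_in
  args:
    table: user_allowlist
    column: email
    value: ${{ TRIGGER.email }}
```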
Inputs
- The column to check in.
- The table to check.
- The value to check for.
Examples
Look up rows

core.table.lookup_many
Get multiple rows from a table corresponding to the given column and values.
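A minimal sketch; the `values` argument name and list shape are assumptions based on this action's description:

```yaml
- ref: find_indicators
  action: core.table.lookup_many
  args:
    table: enrichment_results
    column: indicator
    values: ["1.2.3.4", "evil.example.com"]  # assumed to accept a list
    limit: 50                                # optional; defaults to 100
```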
Inputs
- The column to look up the values in.
- The table to look up the values in.
- The values to look up.
- The maximum number of rows to return. Default: `100`.
Examples
Look up rows

core.table.search_rows
Search for rows in a table with optional filtering.
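A minimal sketch using the free-text filter; the argument names are assumptions based on the inputs below:

```yaml
- ref: search_assets
  action: core.table.search_rows
  args:
    table: asset_inventory
    search_term: "web-"  # assumed name for the text filter
    limit: 25
```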
Inputs
- The table to search in.
- Cursor for pagination. Default: `null`.
- Filter rows created before this time. Default: `null`.
- The maximum number of rows to return. Default: `100`.
- If true, return cursor pagination metadata along with items. Default: `false`.
- Reverse pagination direction. Default: `false`.
- Text to search for across all text and JSONB columns. Default: `null`.
- Filter rows created after this time. Default: `null`.
- Filter rows updated after this time. Default: `null`.
- Filter rows updated before this time. Default: `null`.
Examples
Search table rows

core.table.insert_row
Insert a row into a table.
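A minimal sketch (argument names assumed, as above):

```yaml
- ref: add_asset
  action: core.table.insert_row
  args:
    table: asset_inventory
    row_data: {hostname: "web-02", risk_score: 10}
    upsert: true  # update on primary-key conflict instead of failing
```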
Inputs
- The data to insert into the row.
- The table to insert the row into.
- If true, update the row if it already exists (based on primary key). Default: `false`.
Examples
Insert, update, and delete rows

core.table.insert_rows
Insert multiple rows into a table.
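A minimal sketch for a small batch (argument names assumed); for imports beyond your batch size, see the FAQ above:

```yaml
- ref: add_assets
  action: core.table.insert_rows
  args:
    table: asset_inventory
    rows_data:
      - {hostname: "web-03", risk_score: 5}
      - {hostname: "web-04", risk_score: 7}
```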
Inputs
- The list of data to insert into the table.
- The table to insert the rows into.
- If true, update the rows if they already exist (based on primary key). Default: `false`.
Examples
Insert, update, and delete rows

core.table.update_row
Update a row in a table.
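A minimal sketch; the row ID here is illustratively taken from an earlier lookup step:

```yaml
- ref: raise_risk
  action: core.table.update_row
  args:
    table: asset_inventory
    row_id: ${{ ACTIONS.find_asset.result.id }}  # illustrative reference
    row_data: {risk_score: 90}
```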
Inputs
- The new data for the row.
- The ID of the row to update.
- The table to update the row in.
Examples
Insert, update, and delete rows

core.table.delete_row
Delete a row from a table.
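A minimal sketch (same assumptions as the update example):

```yaml
- ref: remove_asset
  action: core.table.delete_row
  args:
    table: asset_inventory
    row_id: ${{ ACTIONS.find_asset.result.id }}
```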
Inputs
- The ID of the row to delete.
- The table to delete the row from.
Examples
Insert, update, and delete rows

core.table.download
Download a table’s data by name as a list of dicts, JSON string, NDJSON string, CSV, or Markdown.
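A minimal sketch of a CSV export; the argument names and format spellings are assumptions based on the inputs below:

```yaml
- ref: export_assets
  action: core.table.download
  args:
    name: asset_inventory
    format: csv   # assumed spelling; JSON, NDJSON, CSV, and Markdown are supported
    limit: 1000
```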
Inputs
- The name of the table to download.
- The format to download the table data in. Default: `null`.
- The maximum number of rows to download. Default: `1000`.