Consuming structured logs
From within an IPC
Structured logs can be fetched from within an IPC using the RPCs provided by the DataLogger service, which are accessible through the structured logging clients in the SDK.
This guide covers the various RPCs available for fetching logs using the C++ StructuredLoggingClient.
INTR_ASSIGN_OR_RETURN(
    StructuredLoggingClient client,
    StructuredLoggingClient::Create(
        "logger.app-intrinsic-base.svc.cluster.local:8080",
        absl::Now() + absl::Seconds(5)));
List available log sources
Before fetching logs, you may want to see which event_source streams are available. You can do this with the ListLogSources RPC.
INTR_ASSIGN_OR_RETURN(std::vector<std::string> event_sources, client.ListLogSources());
Fetch the most recent log
If you only need the most recent log item for an event_source, GetMostRecentItem is a more direct and efficient RPC than fetching a time range.
INTR_ASSIGN_OR_RETURN(intrinsic_proto::data_logger::LogItem most_recent_item,
                      client.GetMostRecentItem("event_source_name"));
Fetch a time range of logs
The GetLogItems RPC is the most common way to retrieve a time range of log items. It offers several options for filtering and shaping the data you receive.
Basic fetch
The simplest way to fetch logs is by providing an event_source. This will retrieve all logs for that source within the default time window (the last 5 minutes).
INTR_ASSIGN_OR_RETURN(StructuredLoggingClient::GetResult get_result,
                      client.GetLogItems("event_source_name"));
for (const intrinsic_proto::data_logger::LogItem& item : get_result.log_items) {
  // Process the log items.
}
Pagination
When fetching a large number of logs, the GetLogItems RPC uses pagination to split up the response.
The GetResult contains a next_page_token field. If this token has a value, it means there are more logs to retrieve. You can pass this token in another request to get the next page of logs.
// Request the first page of logs.
INTR_ASSIGN_OR_RETURN(StructuredLoggingClient::GetResult get_result,
                      client.GetLogItems("event_source_name", /*page_size=*/100));
for (const intrinsic_proto::data_logger::LogItem& item : get_result.log_items) {
  // Process the log items.
}

// Keep fetching pages using the page-token overload until there are no more.
std::optional<std::string> page_token = get_result.next_page_token;
while (page_token.has_value()) {
  INTR_ASSIGN_OR_RETURN(
      StructuredLoggingClient::GetResult next_result,
      client.GetLogItems(/*page_size=*/100, *page_token));
  for (const intrinsic_proto::data_logger::LogItem& item : next_result.log_items) {
    // Process the log items.
  }
  page_token = next_result.next_page_token;
}
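The token-driven loop above is a general pattern for any paginated API. The following standalone sketch shows the same shape in isolation; `Page`, `GetPage`, and `FetchAll` are hypothetical stand-ins for the SDK types, not part of the StructuredLoggingClient API.

```cpp
#include <algorithm>
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

// Hypothetical stand-in for StructuredLoggingClient::GetResult: a batch of
// items plus an optional token pointing at the next page.
struct Page {
  std::vector<int> items;
  std::optional<std::string> next_page_token;  // absent => no more pages
};

// A fake paginated source over `data`, returning `page_size` items per call.
// Here the token simply encodes the index where the next page starts.
Page GetPage(const std::vector<int>& data, int page_size,
             const std::optional<std::string>& token) {
  std::size_t start = token.has_value() ? std::stoul(*token) : 0;
  std::size_t end =
      std::min(data.size(), start + static_cast<std::size_t>(page_size));
  Page page;
  page.items.assign(data.begin() + start, data.begin() + end);
  if (end < data.size()) page.next_page_token = std::to_string(end);
  return page;
}

// Same loop shape as the GetLogItems example: request a page, process it,
// and keep following next_page_token until it is absent.
std::vector<int> FetchAll(const std::vector<int>& data, int page_size) {
  std::vector<int> all;
  std::optional<std::string> token;
  do {
    Page page = GetPage(data, page_size, token);
    all.insert(all.end(), page.items.begin(), page.items.end());
    token = page.next_page_token;
  } while (token.has_value());
  return all;
}
```

Because processing happens page by page, this pattern keeps memory bounded even when the total number of log items is large.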
Time-based filtering
You can filter logs by time range, using start_time and end_time to specify a time window for your query.
INTR_ASSIGN_OR_RETURN(
    StructuredLoggingClient::GetResult get_result,
    client.GetLogItems("event_source_name",
                       absl::Now() - absl::Hours(1),
                       absl::Now()));
Label-based filtering
In addition to time-based filtering, you can filter by (string key, string value) labels using the filter_labels map argument of the GetLogItems RPC. This is useful for filtering on dimensions other than time, such as an operation ID.
Only LogItems whose labels map contains all of the key-value pairs in filter_labels (either exactly or as a superset) will be returned.
For this filtering to work, the logs must have originally been logged with labels.
absl::flat_hash_map<std::string, std::string> filter_labels;
filter_labels["key_0"] = "val_0";
filter_labels["key_1"] = "val_1";
INTR_ASSIGN_OR_RETURN(
    StructuredLoggingClient::GetResult get_result,
    client.GetLogItems("event_source_name",
                       /*page_size=*/100,
                       /*start_time=*/absl::Now() - absl::Hours(24),
                       /*end_time=*/absl::Now(),
                       filter_labels));
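The superset-matching rule can be illustrated with a small standalone function. This is illustrative only, not the server implementation, and `MatchesFilter` is a hypothetical name:

```cpp
#include <map>
#include <string>

// An item matches if its labels contain every (key, value) pair in the
// filter; extra labels on the item are allowed (superset matching).
bool MatchesFilter(const std::map<std::string, std::string>& item_labels,
                   const std::map<std::string, std::string>& filter_labels) {
  for (const auto& [key, value] : filter_labels) {
    auto it = item_labels.find(key);
    if (it == item_labels.end() || it->second != value) return false;
  }
  return true;
}
```

Note the asymmetry: an item logged with labels {key_0: val_0, key_1: val_1} matches a filter of {key_0: val_0}, but an item logged with only {key_0: val_0} does not match a filter that also requires key_1.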
Downsampling
For high-frequency logs, you can use DownsamplerOptions in addition to filtering to reduce the number of logs returned.
intrinsic_proto::data_logger::DownsamplerOptions downsampler_options;
// You can do count-based sampling.
downsampler_options.set_sampling_count(10);
// Or time-based sampling.
downsampler_options.mutable_sampling_interval_time()->set_seconds(1);
INTR_ASSIGN_OR_RETURN(
    StructuredLoggingClient::GetResult get_result,
    client.GetLogItems("event_source_name",
                       /*page_size=*/100,
                       /*start_time=*/absl::Now() - absl::Hours(24),
                       /*end_time=*/absl::Now(),
                       /*filter_labels=*/{},
                       downsampler_options));
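To make the two downsampling modes concrete, here is a standalone sketch of plausible semantics: count-based sampling keeps one item out of every N, and interval-based sampling keeps an item only once enough time has elapsed since the last kept one. The function names are hypothetical and the exact server-side behavior may differ:

```cpp
#include <cstddef>
#include <vector>

// Count-based: with n == 10, keep items 0, 10, 20, ... (1 in every 10).
std::vector<int> DownsampleByCount(const std::vector<int>& items, int n) {
  std::vector<int> kept;
  for (std::size_t i = 0; i < items.size(); i += n) kept.push_back(items[i]);
  return kept;
}

// Interval-based: given timestamps in seconds, keep an item only if at
// least `interval_s` seconds have passed since the last kept item.
std::vector<double> DownsampleByInterval(const std::vector<double>& stamps,
                                         double interval_s) {
  std::vector<double> kept;
  for (double t : stamps) {
    if (kept.empty() || t - kept.back() >= interval_s) kept.push_back(t);
  }
  return kept;
}
```

Count-based sampling gives a predictable reduction factor regardless of log rate, while interval-based sampling caps the output rate even when the log frequency varies.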
From the cloud APIs
Currently, there isn't a structured logging client library for the data available on the cloud APIs. You can still use inctl to download data. Below is an example for downloading perception data.
inctl logs cp --historic --context workcell_name --org $INTRINSIC_ORGANIZATION --historic_start_timestamp start_time --historic_end_timestamp end_time event_source local_dir
Here's what each part of the command means:
- workcell_name: The workcell that you want to get the perception data for
- start_time: Start time of the window for which you want to query the perception data
- end_time: End time of the window for which you want to query the perception data
- event_source: Either "perception.frames.raw" or "perception.frames.annotated" for the image frames before and after pose estimation, respectively
- local_dir: The local directory to download the images to
So, putting it all together, as an example, this command will:
- Download all the pose estimation results (perception.frames.annotated)
- For the workcell node-xxx
- Between 2025-01-15T12:30:24-08:00 and 2025-01-15T23:00:24-08:00
- To your local directory ~/blobs_test
inctl logs cp --context node-xxx --historic --org $INTRINSIC_ORGANIZATION --historic_start_timestamp 2025-01-15T12:30:24-08:00 --historic_end_timestamp 2025-01-15T23:00:24-08:00 "perception.frames.annotated" ~/blobs_test