
Create an HMI Service

This guide will walk you through the creation of an HMI as a service. An HMI enables control of a deployed solution using a fully custom interface.

This guide builds upon the foundation of other service-related guides. Please ensure that you have run through/understood the prerequisite guides.

  1. Create your first service
  2. Handle HTTP requests

This guide is meant to illustrate the concepts around HMI services and how they would be implemented. Code for a more complete HMI example can be found on GitHub.

What makes an HMI service

warning

Installing and accessing an HMI is only supported with on-prem devices available on your local network. The remainder of this guide will assume that the on-prem device is available at workcell.lan.

An HMI service is a special kind of service. It provides an interface with controls and information related to a deployed solution. The HMI service has two jobs.

  • It serves a frontend that can be accessed in a browser.
  • It provides an HTTP REST API that lets the frontend talk with Intrinsic platform services (e.g. the executive).

The frontend makes calls to the API, then the HMI service handles gRPC communication with the relevant Intrinsic platform services.

Development setup

This guide requires a project with the Intrinsic SDK. Follow the guide on how to set up the development environment if you haven't already.

Bazel workspace

You will need a Bazel workspace to create an HMI service. The workspace can be created at the root of the project using inctl. You can skip this step if you already have a MODULE.bazel file in your project.

inctl bazel init --sdk_repository https://github.com/intrinsic-ai/sdk.git --sdk_version latest

Python or Go

This guide is written with code examples in both Python and Go.

tip

You can use any programming language to implement an HMI service.

Follow the basic Python setup guide for services and make sure you have the correct setup in MODULE.bazel.

Package an Intrinsic service

Every service runs a binary that provides the service's functionality. This binary is the entrypoint of the service container image and usually serves a certain kind of traffic (e.g. HTTP) at a specific port. The container image running the binary is finally packaged as a service with a manifest to create a deployable unit.

Service binary

Begin by creating a new directory called hmi in your development container, at the root of the project. This directory will contain all the code for the HMI service.

In the hmi directory, create a new file called server.py. Put the following code into it:

"""This script works as the binary for the HMI server."""
#!/usr/bin/env python3

def main():
print("Hello world!")

if __name__ == '__main__':
main()

Since every service is built using Bazel, you must set up the correct rules for building the binary in a BUILD file. In the hmi directory, create a file called BUILD alongside server.py.

To make Bazel create a binary for the server, add the following py_binary rule to the BUILD file:

py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
)

At this point, your project file tree should look similar to the following:

├── bazel
│   └── content_mirror
│       └── permissive.cfg
├── .bazelignore
├── .bazelrc
├── .bazelversion
├── .devcontainer
│   └── devcontainer.json
├── hmi
│   ├── BUILD
│   └── server.py
├── MODULE.bazel
└── .vscode
    └── settings.json

The binary file can now be run with Bazel to check that it builds and executes correctly. Open a terminal in VSCode and navigate to /workspaces/hmi. Then run:

bazel run //hmi:server

note

The initial build may take a while (5+ minutes). Subsequent builds will be much faster.

This should print Hello world!.

Create a container image

All services run in a container that executes the associated binary as its entrypoint. This means that creating a service necessitates creating a container image. This must be done with Bazel as well.

Container images are created in multiple steps when using Bazel.

  1. Every container image is made up of layers, where each layer is simply a set of changes to the file system in the container. The container image for this guide only needs a single layer for the server binary. The layer is created (as a tar archive) using a pkg_tar rule.
  2. The layers (or in this case layer) are provided to an oci_image rule. This rule creates the container image from a specified base image and the provided layers. It also specifies the entrypoint, i.e. the binary to execute when the container runs.
  3. The image must be wrapped in a tarball using an oci_load rule. Tarballs can be loaded directly by container runtimes, such as the container runtime on any on-prem device.

The default Bazel workspace has everything we need except for rules_pkg. Add this directive to MODULE.bazel to get access to rules_pkg.

bazel_dep(name = "rules_pkg", version = "1.0.1")

Now add the required load statements in the BUILD file of the hmi directory:

load("@ai_intrinsic_sdks//bazel:python_oci_image.bzl", "python_oci_image")
load("@rules_oci//oci:defs.bzl", "oci_load")
load("@rules_pkg//:pkg.bzl", "pkg_tar")

In order to build the container image, put these rules at the end of the BUILD file.

pkg_tar(
    name = "server_layer",
    srcs = [":server"],
    extension = "tar.gz",
)

python_oci_image(
    name = "hmi_image",
    binary = "server",
    base = "@distroless_python",
    entrypoint = ["python3", "-u", "/hmi/server"],
    data_path = "/frontend/",
    extra_tars = [":server_layer"],
)

oci_load(
    name = "hmi_tarball",
    image = ":hmi_image",
    repo_tags = ["hmi:latest"],
)

Ensure that your image build setup is valid by building it using Bazel.

bazel build //hmi:hmi_tarball

Create a service manifest

Each service requires a service manifest. The service manifest contains two key pieces of information about the service: metadata and the service definition.

tip

Refer to the service introduction for more information about the service manifest.

Service metadata is general information about the service, such as its unique ID, the vendor, documentation and a display name. You can put whatever is appropriate for you.

The service definition specifies how the service behaves. It references the image the service should run and for an HMI also configures HTTP routing.

Create a manifest next to your BUILD file called manifest.textproto.

note

You must specify the .textproto extension for the manifest file.

# proto-file: https://github.com/intrinsic-ai/sdk/blob/main/intrinsic/assets/services/proto/service_manifest.proto
# proto-message: intrinsic_proto.services.ServiceManifest

metadata {
  id {
    package: "my.company"
    name: "hmi"
  }
  vendor {
    display_name: "My Company"
  }
  documentation {
    description: "A simple HMI for My Company."
  }
  display_name: "My Company HMI"
}
service_def {
  http_config: {}
  real_spec {
    image {
      archive_filename: "hmi_image.tar"
    }
  }
  sim_spec {
    image {
      archive_filename: "hmi_image.tar"
    }
  }
}

Make sure to include an empty value for http_config in the service definition. This enables the HMI to receive HTTP traffic and serve a frontend.

Create the deployable service

The last step required to package the service is to feed both the tarball from the oci_load rule and the manifest into a special intrinsic_service build rule. This will package the image so that it can be installed. First load the intrinsic_service rule from the correct repository by adding the correct load statement at the top of the BUILD file.

load("@ai_intrinsic_sdks//intrinsic/assets/services/build_defs:services.bzl", "intrinsic_service")

Add the intrinsic_service build rule to the BUILD file.


filegroup(
    name = "hmi_image.tar",
    srcs = [":hmi_tarball"],
    output_group = "tarball",
)

intrinsic_service(
    name = "hmi_service",
    images = [":hmi_image.tar"],
    manifest = "manifest.textproto",
)

You can now build your service using Bazel. This will create a bundle archive that can be installed in a solution.

bazel build //hmi:hmi_service

Read runtime context

The HMI service in this example serves a frontend over HTTP. This is the interface that someone (e.g. an operator) interacts with to control the deployment of the solution. The server binary must serve HTTP traffic at a specific port to provide this interface and the associated functionality.

tip

Learn more about handling HTTP traffic in your service in the full guide on handling HTTP requests.

An HMI service can serve HTTP traffic to users through a specific URL exposed on the cluster. The routing to enable this is set up by Intrinsic automatically. In order to serve HTTP traffic on the pre-defined route, the service must run an HTTP server at a specified port. This port is determined dynamically when the service starts up and cannot be encoded statically. Instead, every service can read the HTTP port it should be serving on from the runtime context.

The runtime context contains information that can be relevant to services at runtime. It is provided by Intrinsic infrastructure to every service through a file. The file is placed in a defined, consistent location inside the service container. It contains an encoded RuntimeContext proto. Service authors can read and decode this proto file to get access to the relevant information in their service.

info

The runtime context file is always placed in /etc/intrinsic/runtime_config.pb.

Update your server.py file to read the runtime context.

"""This script works as the binary for the HMI server."""
#!/usr/bin/env python3

import logging
import sys
from intrinsic.resources.proto import runtime_context_pb2

def get_runtime_context():
with open('/etc/intrinsic/runtime_config.pb', 'rb') as fin:
return runtime_context_pb2.RuntimeContext.FromString(fin.read())

def main():
context = get_runtime_context()

if __name__ == '__main__':
logging.basicConfig(stream=sys.stderr, level=logging.INFO)
main()

Now that server.py imports runtime_context_pb2, the py_binary rule must declare it as a dependency. Add the dependency to the py_binary rule. Your rule should now look like this:

important

All imports in code must be backed by an entry in the deps attribute of the associated BUILD rule.

py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
    deps = [
        "@ai_intrinsic_sdks//intrinsic/resources/proto:runtime_context_py_pb2",
    ],
)

The py_binary rule should now build successfully.

warning

Running the binary locally will produce errors because the runtime context file does not exist inside the development container. The file will exist when the service is deployed to an on-prem device.
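If you want to exercise the port-reading logic locally anyway, one option is to fall back to a hard-coded port when the runtime context file is absent. This is only a sketch for local iteration, not part of the Intrinsic SDK; get_http_port and DEFAULT_HTTP_PORT are hypothetical names:

```python
import os

# Assumption: 8080 is an arbitrary fallback for local runs only. On a
# deployed device the port always comes from the runtime context.
DEFAULT_HTTP_PORT = 8080


def get_http_port(path='/etc/intrinsic/runtime_config.pb'):
    """Returns the HTTP port, using a fallback when the file is missing."""
    if not os.path.exists(path):
        # Local development: the runtime context file only exists on-device.
        return DEFAULT_HTTP_PORT
    # Imported lazily so local runs do not require the generated protos.
    from intrinsic.resources.proto import runtime_context_pb2
    with open(path, 'rb') as fin:
        context = runtime_context_pb2.RuntimeContext.FromString(fin.read())
    return context.http_port
```

On a deployed on-prem device the file always exists, so the fallback never triggers there.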

Create a frontend

Let's start by creating a page for the frontend. A frontend is usually some HTML, CSS and JavaScript. The entry file is an index.html.

tip

You can use any JS framework or other method to create your frontend. You will need to set up BUILD rules for the framework with Bazel so that you can provide static files (HTML, CSS, JS) to the binary for serving. Some frameworks document this (e.g. Angular) and some do not.

Begin by creating a new directory under the hmi directory. Call this frontend. Now create an index.html file in this directory (hmi/frontend/index.html). As a first step, simply put a dummy HTML template into the index.html file.

<!DOCTYPE html>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>HMI</title>

<h1>This is the HMI frontend.</h1>

Bundle frontend files

Next, the index.html from the frontend folder needs to be bundled with the server binary so that it can serve it. In Bazel this is done through runfiles. Runfiles are specified for rules using the data field. The files to specify as runfiles should be wrapped using a filegroup rule.

Create the filegroup rule before the existing py_binary rule in the BUILD file. Then add a data attribute to the existing py_binary rule that references the filegroup.

filegroup(
    name = "frontend_files",
    srcs = glob(["frontend/**"]),
)

py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
    data = [":frontend_files"],
    deps = [
        # deps omitted...
    ],
)

Whenever the binary is now built and run with Bazel, the runfiles will be placed in a special location alongside the compiled binary and can be referenced from it. The pkg_tar rule for the server layer will not include runfiles automatically. You must specify include_runfiles on the existing pkg_tar rule in the BUILD file to enable this.

pkg_tar(
    name = "server_layer",
    srcs = [":server"],
    include_runfiles = True,
    extension = "tar.gz",
)

Serve frontend files

With the HTTP port from the runtime context you can set up the HTTP server to serve traffic.

The http.server library provides the HTTPServer class. Our service will use HTTPServer to listen for HTTP requests and serve the frontend. Add the following imports to server.py to create the HTTP server.

from http.server import HTTPServer, SimpleHTTPRequestHandler

try:
    from rules_python.python.runfiles import runfiles
except ImportError:
    # https://github.com/bazelbuild/rules_python/issues/1679
    from python.runfiles import runfiles

Now, in the main function of server.py, retrieve the http_port from the runtime_context, and create an HTTP server that listens on that port.

def main():
    context = get_runtime_context()
    http_port = context.http_port
    logging.info(f"HTTP port provided by runtime context: {http_port}")

    logging.info("Creating HTTP server.")
    http_server = HTTPServer(
        server_address=("", http_port),
        RequestHandlerClass=MyHandler,
    )
    logging.info("Starting HTTP server.")
    http_server.serve_forever()

The MyHandler class also needs to be defined. It dictates how the HTTP server handles requests. In order to get the index.html into the HMI you must now serve it from the root path (/) of the HTTP server in the server binary. The files will be placed in a special runfiles directory by Bazel. Use the runfiles library to find the directory that Bazel put the files in. Remember to change the Rlocation path to match the name of your Bazel module. This module name is defined in your MODULE.bazel file.

class MyHandler(SimpleHTTPRequestHandler):
    """Handler for the HMI server."""

    def __init__(
        self,
        *args,
        **kwargs,
    ):
        # Uses the runfiles library to determine where Bazel put the static files.
        r = runfiles.Create()
        logging.info("Created runfiles object.")
        self.bazel_runfiles_dir = r.Rlocation(path="<package_name>/hmi/frontend")
        logging.info(f"Runfiles directory: {self.bazel_runfiles_dir}")
        super().__init__(*args, directory=self.bazel_runfiles_dir, **kwargs)

    def do_GET(self):
        if self.path == "/":
            # Serve the HTML entry file.
            self.path = '/index.html'
            with open(self.bazel_runfiles_dir + self.path, "r") as f:
                file_content = f.read()
            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.end_headers()
            self.wfile.write(bytes(file_content, encoding="utf-8"))
        else:
            # Serve other static files as usual.
            super().do_GET()

warning

http_server.serve_forever() blocks further program execution while the HTTP server is listening. It must be the very last thing called in the main function. Any code placed after http_server.serve_forever() will not be executed.
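If you ever do need the program to keep working after the server starts, one common pattern is to run serve_forever() on a background thread. This is only a sketch, not something the guide's HMI requires; serve_in_background is a hypothetical helper:

```python
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler


def serve_in_background(port):
    """Runs serve_forever() on a daemon thread so the caller can continue.

    Sketch only: the guide's HMI does not need this, since serving the
    frontend is the last thing main() does.
    """
    server = HTTPServer(("", port), SimpleHTTPRequestHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server
```

Passing port 0 lets the OS pick a free port, which is handy for local experiments; in the deployed service you must use the port from the runtime context.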

Communicate with an Intrinsic platform service

The HMI provides control over a solution through communication with Intrinsic platform services. Different services can provide different functionality for the HMI. The HMI service talks to Intrinsic platform services through their gRPC API. The API definition for each service can be found in the service's .proto file in the SDK.

The Executive service

tip

Consider reading the full documentation for the executive service.

The executive service provides the ability to run processes in the form of behavior trees. It also contains methods for stopping, pausing, resuming and stepping through these processes and provides information (such as errors) about each execution.

This guide utilizes the executive service to allow the HMI to display some very basic information to illustrate its use. The example code available from GitHub expands on this by showing how to start and stop processes as well as how to view execution status.

Establish a connection

Communication with any gRPC service requires a client for it. The client is automatically generated from the service definition and can be used directly in any supported language, including Python and Go.

Clients for gRPC services are created using a connection. This connection is a network channel to a certain address and port where the relevant service should be listening. Connections also specify the required credentials for the service. An HMI service must connect to any Intrinsic platform services using the cluster-internal address of the cluster ingress. Don't worry if these terms are not very meaningful to you; all you need to know is that the services you can connect to will be available at a specific address.

info

Cluster services are available from HMI services at istio-ingressgateway.app-ingress.svc.cluster.local:80.

note

Connecting to Intrinsic platform services from an HMI service does not require credentials (insecure credentials) because the connection is internal to the on-prem device.

Import the grpc library along with the generated executive service stubs

import grpc

from intrinsic.executive.proto import executive_service_pb2_grpc

Add the cluster ingress address, then create a function that returns the executive stub. A stub is used to call gRPC service methods.

GRPC_INGRESS_ADDRESS = "istio-ingressgateway.app-ingress.svc.cluster.local:80"

def create_executive_stub(connect_timeout: float):
    channel = grpc.insecure_channel(GRPC_INGRESS_ADDRESS)
    grpc.channel_ready_future(channel).result(timeout=connect_timeout)
    return executive_service_pb2_grpc.ExecutiveServiceStub(channel)

The client provides methods for all the operations defined inside the service. You can find all available methods in the service definition proto file.

Provide a REST API

The HMI frontend communicates with the cluster through an HTTP API. The HTTP API connects the browser frontend with Intrinsic platform services since the frontend cannot communicate with gRPC services (which all Intrinsic platform services are) directly.

Providing an API is as simple as choosing a path and writing a lightweight handler function that performs some logic and returns a response. You will be adding a handler in the section on communication with an Intrinsic platform service involving the executive service.

Each HTTP handler serves as a bridge between the frontend (which can call the HTTP handler) and the Intrinsic platform services that are able to control the deployed solution.

There are multiple ways to implement handlers, and you may have as many handlers as you like.
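For instance, one lightweight pattern (an illustration only, not the SDK's prescribed approach; ApiHandler and the /api/ping path are hypothetical) is a dispatch table that maps paths to handler methods:

```python
from http.server import BaseHTTPRequestHandler


class ApiHandler(BaseHTTPRequestHandler):
    """Sketch of path-based dispatch; paths and handler names are illustrative."""

    def do_GET(self):
        # Map each API path to the method that handles it.
        routes = {
            "/api/ping": self._handle_ping,
        }
        handler = routes.get(self.path)
        if handler is not None:
            handler()
        else:
            self.send_error(404)

    def _handle_ping(self):
        # A trivial handler returning plain text.
        body = b"pong"
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.send_header("Content-length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Adding a new endpoint then only requires one entry in routes and one small method.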

With the HTTP server up and running you can now serve a frontend to people that open the HTTP route for the HMI service in their browser. The frontend is served by the same HTTP server as the API described in the section above.

Use service methods

Once you have a stub for the service you're trying to communicate with, you can begin using the service methods for that service. This is done by simply calling one of the methods available on the stub object. For the HMI service, service methods should usually be called inside HTTP handlers. This means that the service is called only when the HMI frontend (i.e. the user) makes a specific request.

Every call to a gRPC service using Python requires a stub and a request message. The stub serves as a client-side representation of a gRPC service. The request message is usually specific to each operation and contains all the information the service needs to process the request.

To provide an example, consider the ListOperations RPC on the ExecutiveService. It requires a context and a google.longrunning.ListOperationsRequest. The request message is a proto that can be created using the language-specific implementation generated from its definition.

When handling an HTTP request, you can write plain text or something more structured like JSON depending on your needs. Below is a JSON example.

Import the json_format library to be able to convert protos to JSON.

from google.longrunning.operations_pb2 import ListOperationsRequest  # type: ignore
from google.protobuf import json_format

Add the following case to the do_GET method of the MyHandler class. This adds the first endpoint to the REST API. When an HTTP GET request is made on the path ../api/executive/operations, the HMI service will send a ListOperationsRequest to the executive using the executive service stub. The executive responds with a proto, which is converted to JSON. This JSON is then sent as a response to the HTTP GET request.

        elif self.path == '/api/executive/operations':
            # Lists all active operations in the executive.
            executive = create_executive_stub(60)
            response_proto = executive.ListOperations(request=ListOperationsRequest())
            for operation in response_proto.operations:
                operation.ClearField('metadata')
            response_json = json_format.MessageToJson(response_proto)
            logging.info('Operations in the executive: %s', response_json)
            self.send_response(200)
            self.send_header('Content-type', 'application/json')
            self.end_headers()
            self.wfile.write(response_json.encode())

Add the relevant dependencies to the existing py_binary rule in your BUILD file to satisfy the build requirements.

py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
    data = [":frontend_files"],
    deps = [
        # Add to the existing deps:
        "@com_google_googleapis//google/longrunning:operations_py_proto",
        "@ai_intrinsic_sdks//intrinsic/executive/proto:executive_service_py_pb2_grpc",
    ],
)

warning

Running the binary locally will produce errors because the connection to the service using the specified address is only possible inside the on-prem device cluster.

You may use any of the methods provided on the client and you can freely combine multiple clients to perform actions.

Call from the frontend

There is now an HTTP handler at /api/executive/operations that will make a call to the executive service when invoked. The frontend can call this HTTP handler and parse/print the response.

The frontend can call the HTTP handlers at a relative path because they are registered in the same HTTP server under subpaths. The example frontend below uses the HTTP API to retrieve the ID of the first operation returned from the executive service.

You can add the required javascript to your html file as shown below, or you can create a script.js file in the frontend folder, and import it into the html file.

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>HMI</title>
</head>
<body>
  <h1>This is the HMI frontend.</h1>

  <div>
    <button id="load-operation-id">Load operation ID</button>
  </div>

  <p>Latest operation ID: <strong id="operation-id">(press button to load)</strong></p>

  <script>
    const loadOperationIdBtn = document.getElementById("load-operation-id");
    const operationIdEl = document.getElementById("operation-id");

    loadOperationIdBtn.addEventListener("click", async () => {
      operationIdEl.textContent = await fetchLatestOperationId();
    });

    async function fetchLatestOperationId() {
      try {
        const res = await fetch("api/executive/operations");
        const s = await res.json();
        if (Array.isArray(s.operations) && s.operations.length > 0) {
          return s.operations[0].name;
        } else {
          return "No operation ID found";
        }
      } catch (e) {
        console.error("Failed to get operations:", e);
        return "(error, see console for details)";
      }
    }
  </script>
</body>
</html>

Any HTTP handler in the HMI can be called this way. The response needs to be parsed appropriately based on what kind of data the handler returns.

Installation

You can now install the HMI service to a solution. Installing will make the service available to add from the Services panel in Flowstate.

Begin by building the HMI. Do this by running a bazel build for the intrinsic_service in the BUILD file.

bazel build //hmi:hmi_service

The output of the bazel build will show where the bundle has been written. This should be something like bazel-bin/hmi/hmi_service.bundle.tar. You can now use inctl to install the service. Replace ORGANIZATION_NAME with your organization name and then run the command.

inctl asset install bazel-bin/hmi/hmi_service.bundle.tar \
--org=ORGANIZATION_NAME \
--address="workcell.lan:17080"

The service image will be uploaded directly to the on-prem device. Once this is complete, you will be presented with a message like this:

Finished installing the service: ...

Now open the solution in Flowstate and follow these steps:

  1. Find the Services tab on the right side.
  2. Select Add service. The HMI service you just installed should be shown in the list with the display name from metadata in the service manifest.
  3. Select the HMI service and click Add.
  4. You will be prompted for a service name. This can be any unique identifier you like. Use the name hmi.
  5. Select Apply to add the HMI to the solution. This should be very quick.

The HMI service will start up and should now be available. Follow the steps in the next section to view and try it.

Access the HMI

The HMI is now installed, added to your solution and can be accessed in any web browser.

The HMI service is available on the on-prem device at /ext/services/{name}/, where {name} is the name chosen during service deployment in Flowstate. If your on-prem device is available at workcell.lan:17080 and you chose hmi as the name when adding the service, the HMI can be accessed in any web browser at workcell.lan:17080/ext/services/hmi/.

warning

Ensure that you add a trailing slash (/) to the end of the HMI URL in the browser. Otherwise network requests may fail.

Try pressing the Load operation ID button in the HMI. If you open Flowstate and run any process, you can press the button again and should see the ID changing.

success

Congratulations, your HMI is working and communicating with Flowstate!

Next steps

The HMI you just built is very basic. Intrinsic provides code for a more advanced example HMI on GitHub.

The HMI example on GitHub offers much more functionality:

  • list all saved processes of a running solution
  • start a process
  • stop execution
  • view execution status (including errors)
  • query and modify states of service instances in a solution

It also offers some guidance on local testing of HMIs for faster iteration.