Let’s use the Hume AI Platform to measure the facial expression Dr. Keltner is forming in the image here.
To do so, we’ll need to find our API access key.
Step 1: Finding Your API Access Key
This is a key unique to your account that authenticates you on our platform. To retrieve your API key, visit beta.hume.ai, click on your profile icon in the upper right corner, and choose Settings. Your key is listed as part of your Profile. For a more detailed tutorial on accessing the API, check out our help page.
Next, we need to decide how we want to feed our data into the platform.
Step 2: Deciding Between Batch and Streaming APIs
There are two APIs we can choose from: a batch API and a streaming API.
The batch API can process a single media file or multiple files in parallel. It measures all of the expressions found in each file and notifies you when the results are ready, usually within a few minutes.
The streaming API can be connected to a live webcam or microphone input and returns measures of expressive behavior in real time.
For a saved image, the batch API is the best fit. We’ll work with the batch API in this tutorial and spend more time with the streaming API in future tutorials.
So, now that we’ve got an API key and we’ve decided to go with the batch API, we can tell the Hume AI Platform where our data is and how we want to analyze it.
Step 3: The API Call
We’ll be using curl to call the API. We’ll first need to specify the URL of the data we want to analyze and the model(s) we want to use (to explore the models we have available, see our Products page).
We’ll also need a link to our data that is accessible to our API (our example will use a publicly available URL), along with the API key we retrieved in Step 1.
Once we have those ready, we can package up the information into a format our APIs can understand:
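As a rough sketch, the request might look like the following. The endpoint path, header name, payload shape, and image URL here are all assumptions for illustration; consult the API documentation for the exact values.

```shell
# Your API key from Step 1 (placeholder -- substitute your own).
HUME_API_KEY="your-api-key-here"

# Publicly accessible URL of the image to analyze (placeholder).
IMAGE_URL="https://example.com/image.jpg"

# Package the request as JSON: which file to analyze and which
# model to run (here, the face model for facial expression).
REQUEST_BODY=$(cat <<EOF
{
  "urls": ["${IMAGE_URL}"],
  "models": { "face": {} }
}
EOF
)

# Submit the job to the (assumed) batch endpoint. Uncomment to run:
# curl -X POST "https://api.hume.ai/v0/batch/jobs" \
#   -H "X-Hume-Api-Key: ${HUME_API_KEY}" \
#   -H "Content-Type: application/json" \
#   -d "${REQUEST_BODY}"
```

The response to a successful submission would identify the job, which you can then poll or wait on until the results are ready.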