Metrics#

Once an appropriate descriptor has been created, metric records can be submitted to the corresponding endpoints using the descriptor’s ID or short name in the path.

Submit metric#

Metric records can be submitted using a POST operation on the specific metric endpoint, with a JSON body that contains one or more records. It is important that each of these record objects contains a timestamp and exactly the same keys and values as defined in the descriptor, otherwise the record is considered invalid. Invalid records can be ignored by setting the ignoreOnFailed property at the top level of the JSON; an example of this is shown after the basic submission below.

It should be noted that metrics require a customer owner, and if none is given it defaults to the user’s current customer. The exception is global metrics, for which the customer field should be left blank for submission, search and aggregation alike.

curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID or short name} -d '{
  "records": [
    {
      "timestamp": 10000,
      "keys": {
        "ipAddress": "10.10.0.1",
        "vlan": "101"
      },
      "values": {
        "packages": 5,
        "frames": 10
      }
    }
  ]
}'
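
If some records in a batch might not match the descriptor, invalid records can be skipped instead of failing the whole request by using the ignoreOnFailed property mentioned above. A minimal sketch, assuming the property takes a boolean value; here the second record is missing the vlan key and the frames value and would otherwise make the request fail:

# Note: ignoreOnFailed is assumed to take a boolean value; check the Swagger documentation for details.
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID or short name} -d '{
  "ignoreOnFailed": true,
  "records": [
    {
      "timestamp": 10000,
      "keys": {
        "ipAddress": "10.10.0.1",
        "vlan": "101"
      },
      "values": {
        "packages": 5,
        "frames": 10
      }
    },
    {
      "timestamp": 20000,
      "keys": {
        "ipAddress": "10.10.0.2"
      },
      "values": {
        "packages": 3
      }
    }
  ]
}'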

Tip

| You can check the Swagger API documentation for a more detailed description of the endpoint and the values that can be used in the JSON object.

Search metrics#

The submitted metrics can be searched for specific time frames using a POST operation on the search endpoint for the specific metric descriptor. A JSON object encapsulates the search request, which only requires the start and end timestamps for the time frame to search. The search can be refined by expanding the JSON with specific key values to search for, searching across multiple customers’ records, or adding additional criteria for the data to be included or excluded from the result.

The default values for the timestamps are 0 for the start timestamp and ‘now’ for the end timestamp.

Additional properties can be added to the JSON to sort the result by grouping different keys together, set sub-criteria, specify customers, or control whether the metrics should be aggregated across multiple customers.

Like most Mnemonic search APIs, the request can take limit and offset values, but the implementation is constrained by the Elasticsearch search window, which allows a maximum of ten thousand results and no pagination beyond this. As such the maximum limit is 10000, and the combined total of limit and offset cannot exceed 10000 either. A limit of 0 is considered ‘unlimited’, but if the total number of hits for the search criteria exceeds 10000 an error is thrown, asking you to narrow your search.
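
A minimal sketch of a paged search follows, assuming the request fields are simply named limit and offset; check the Swagger documentation for the exact field names:

# Note: the limit/offset field names below are assumed, not confirmed by this document.
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID or short name}/search -d '{
    "startTimestamp": 0,
    "endTimestamp": 1000000,
    "limit": 100,
    "offset": 200
}'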

The search supports sorting either by keys in the descriptor or by values. If the name of the field you want to sort by is not a valid field for the descriptor, an error will be thrown. If the descriptor contains a key and a value field with the same name, the results will be sorted by the value field.
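
As a sketch, sorting the results by the packages value could look like the request below; the sortBy field name and its list format are assumptions, so check the Swagger documentation for the actual property:

# Note: the sortBy field name and format below are assumed, not confirmed by this document.
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID or short name}/search -d '{
    "startTimestamp": 0,
    "endTimestamp": 1000000,
    "sortBy": ["packages"]
}'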

Searching for partial key values, wildcards, scrolling and streaming are not supported yet.

Specifying keys#
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID or short name}/search -d '{
    "startTimestamp": 0,
    "endTimestamp": 1000000,
    "keys": {
        "ipAddress": "10.10.0.1"
    }
}'
Searching the records of multiple customers#
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID or short name}/search -d '{
    "startTimestamp": 0,
    "endTimestamp": 1000000,
    "customer": [
        "mnemonic", "notmnemonic"
    ]
}'
Adding sub-criteria to the search#
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID or short name}/search -d '{
    "startTimestamp": 0,
    "endTimestamp": 1000000,
    "subCriteria": [
    {
      "customer": [
        "mnemonic"
      ],
      "keys": {
        "ipAddress": "10.10.0.1"
      },
      "required": true
    }
  ]
}'

Tip

| You can check the Swagger API documentation for a more detailed description of the endpoint and the values that can be used in the JSON object.

Aggregate metrics#

An aggregation search of metric data can be done by using a POST operation on the aggregation endpoint for the specific metric descriptor, with a JSON object encapsulating the aggregation request. An aggregation is much like a search: it requires the start and end timestamps, and you can add sub-criteria or search across multiple customers. The difference is that it requires one or more value properties to be aggregated, each with an optional aggregation function; if the function is omitted, the default function given when creating the descriptor is used. Additional options include specifying the time resolution for the aggregation points (days, hours, seconds or milliseconds), grouping the results by keys, and choosing whether the aggregation should be done across multiple customers.

The default values for the timestamps are 0 for the start timestamp and ‘now’ for the end timestamp.

Aggregation using partial key values or wildcards is not supported yet.

Specifying the aggregation function and grouping the result#
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID/shortName}/aggregate -d '{
    "values": [{
        "name": "packages",
        "aggregationFunction": "min"              
    }],
    "groupBy": [ { 
        "key": "vlan",
        "limit": 25
    }],
    "startTimestamp": 0,
    "endTimestamp": 100000
}'
Setting the time resolution#
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID/shortName}/aggregate -d '{
    "values": [{
        "name": "packages"        
    }],
    "resolution": 2,
    "resolutionUnit": "day",
    "startTimestamp": 0,
    "endTimestamp": 100000
}'

When using groupBy there are two main use cases the service focuses on. Both are two-dimensional, and each is handled a bit differently to ensure accuracy.

The first is a grouping of a specific key over time, with a specified resolution. For this the service will, in the background, make an additional request to the ES cluster to find the X most relevant values of the specified key and then use those in the histogram aggregation. X in this case is the limit given in the request.

Additional keys in the groupBy field will not have the accuracy guarantee of the first key.

Grouping a key over time with a specified resolution#
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID/shortName}/aggregate -d '{
    "values": [{
        "name": "packages",
        "aggregationFunction": "min"              
    }],
    "groupBy": [ {
        "key": "vlan",
        "limit": 2 
    }],
    "startTimestamp": 0,
    "endTimestamp": 100000,
    "resolution": 2, 
    "resolutionUnit": "day",
}'

The second use case is an aggregation without a histogram; instead, the service makes the additional request for each key in the groupBy list to ensure the accuracy of the data.

Grouping by multiple keys without a histogram#
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID/shortName}/aggregate -d '{
    "values": [{
        "name": "packages",
        "aggregationFunction": "min"              
    }],
    "groupBy": [ {
        "key": "vlan",
        "limit": 2 
    },
    {
        "key": "interface",
        "limit": 2
    }],
    "startTimestamp": 0,
    "endTimestamp": 100000,
    "resolution": 0
}'

Note

When using weighted average (‘wavg’) as the aggregation function, it is required to add the field ‘valueAsWeight’ to each of the ‘values’ objects that use ‘wavg’.
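
A minimal sketch of a weighted-average aggregation follows, assuming valueAsWeight names the value field to use as the weight (here the frames value from the submission example); check the Swagger documentation for the exact format:

# Note: the value given for valueAsWeight is an assumption; only the field name is stated in this document.
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/metrics/v1/metric/{descriptor ID/shortName}/aggregate -d '{
    "values": [{
        "name": "packages",
        "aggregationFunction": "wavg",
        "valueAsWeight": "frames"
    }],
    "startTimestamp": 0,
    "endTimestamp": 100000
}'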

Tip

| You can check the Swagger API documentation for a more detailed description of the endpoint and the values that can be used in the JSON object.