Tip

For more technical documentation of the endpoints, API models, and how to use them, visit the Swagger API documentation.


Introduction

The sample service is a database used for storing and retrieving sample files that may or may not be malicious. Along with the sample file itself, the service also stores metadata, analysis results, evidence, links, and submissions.
Have a look at the table below for an explanation of some key concepts related to the sample service.


Concept

Description

Sample

A file that may or may not be malicious. A sample can be uploaded and downloaded via the API if the user has the appropriate access. 

See the samples section for a description of the available API endpoints and how to use them.


Classification

A classification contains basic information about the sample, such as file type, size, mime type, and platform. A sample is automatically classified after it is uploaded. The classification information is exposed when fetching sample metadata.

Analysis

The data produced from an analysis of a sample. An analysis can contain information such as specific characteristics observed when executing the sample, for example network traffic or loaded libraries.

See the analysis section for a description of the available API endpoints and how to use them.

Evidence

An entity added to an analysis. An evidence entry can contain information such as mime type and other data resulting from the analysis.

See the evidence section for a description of the available API endpoints and how to use them.

Link

An entity used to indicate a relationship between samples. If, for example, a sample downloads or loads another sample, the two samples have a relationship of that type.

See the link section below for a description of the available API endpoints and how to use them.

Submission

A recorded event of a sample being spotted in the wild. This is typically submitted by a customer that has spotted a sample on their network in some way. A submission contains information such as the file name, the mime type of the sample, and the timestamp when it was discovered.

See the submission section for a description of the available API endpoints and how to use them.

Challenge

A challenge the user must solve in order to prove that they possess a given sample. The challenge solution is used when adding a sample submission.

See the challenge section for a description of the available API endpoints and how to use them.

Job

A job represents the analysis of a sample and is used to track its progress. On its own it holds no analysis results or verdicts, but contains information such as when an analysis job was started and completed, and whether it failed or not. A job also contains one or more job tasks.

See the job section for a description of the available API endpoints and how to use them.

Job task

A job task is always part of a job. A job task represents the progress and outcome of a single analysis step of a sample, typically performed by an analysis worker. It contains information such as when the worker started and completed, whether it failed or not, and a reference to the analysis result (if it completed successfully).

See the job task section for a description of the available API endpoints and how to use them.

Verdict

A verdict contains an assessment about the sample such as whether or not it is malicious. The verdict is created automatically based on analysis results as part of an analysis job. A verdict can also be added manually by using the add verdict API endpoint.

See the verdict section for a description of the available API endpoints and how to use them.


Authentication

Before any of the sample API endpoints can be used, the user needs to obtain an API key. See the general integration guide for details on how to obtain and use such a key.
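
All examples on this page pass the key in the Argus-API-Key HTTP header. A minimal sketch against the sample metadata endpoint (the sample hash is a placeholder for an existing sample ID):

Code Block
languagebash
# The API key is sent in the Argus-API-Key header on every request.
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/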

There are four role groups related to Sample that may be assigned to an API key, each with its own intended use case. It is recommended to use one of these role groups instead of assigning permission functions individually.

Name

Description

SAMPLEDB-VIEWER

Read-only access to samples. The user is allowed to download samples, view metadata, analysis results, evidence attached to analysis, submissions, and links

SAMPLEDB-UPLOADER

Grants permission to add samples and submissions to the Sample service

SAMPLEDB-USER

Access to view and upload samples to the Sample service. The key will inherit all permissions granted by the SAMPLEDB-VIEWER and SAMPLEDB-UPLOADER roles. In addition, access is granted to request analysis jobs to be aborted.

SAMPLEDB-ANALYZER

Grants permission to add analysis results for samples. Only granted to superusers. The key will inherit all permissions granted by the SAMPLEDB-VIEWER and SAMPLEDB-USER roles. In addition, access is granted to add analysis results, evidence attached to analysis results, links, and verdicts

Sample

Uploading a sample

The role SAMPLEDB-UPLOADER or higher is required to be able to upload samples.

To upload a sample file, the sample endpoint can be used with a POST operation, along with the raw file.
Upon success, the HTTP response will contain either a 201 or a 200 status code, depending on whether or not the sample already exists. The response body will be JSON formatted and contain, among other things, the ID of the sample, which is the calculated sha256 hash of the file. This ID is used in all requests relating to a sample, such as downloading the sample, adding an analysis, or adding a submission.

Code Block
languagebash
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/octet-stream" --data-binary @./sample-file.exe https://api.mnemonic.no/sampledb/v2/sample
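
Since the sample ID is the sha256 hash of the file, it can also be computed locally and compared against the ID returned in the response. A small sketch, assuming sha256sum is available (on macOS, shasum -a 256 can be used instead):

Code Block
languagebash
# Compute the sample ID (sha256 hash) locally; it should match the ID returned by the upload.
sha256sum ./sample-file.exe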

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Downloading a sample

There are two ways of downloading a sample. Using the safe method will download a sample that has been zipped and password protected with the word "infected". The unsafe method will download a raw sample without any form of protection.

Safe

The role SAMPLEDB-VIEWER or higher is required to be able to download a safe, zipped sample. In addition, the customer needs to have access to at least one submission for the same sample.
The example below will download a sample and store it in a file named safe-sample.zip.

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" -o safe-sample.zip https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/safe
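
The downloaded archive can then be extracted with the password "infected", for example using the unzip utility (assuming it is installed):

Code Block
languagebash
# Extract the password-protected archive produced by the safe download.
unzip -P infected safe-sample.zip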

For more detailed information, you can check out the swagger documentation.

Unsafe

The role SAMPLEDB-VIEWER or higher is required to be able to download the raw sample. In addition, the customer needs to have access to at least one submission for the same sample.
The example below will download a sample and store it in a file named raw-sample.

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" -o raw-sample https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/raw

For more detailed information, you can check out the swagger documentation.

Fetching sample metadata

The role SAMPLEDB-VIEWER or higher is required to be able to fetch sample metadata. In addition, the customer needs to have access to at least one submission for the same sample.

To fetch metadata about a sample, the sample endpoint can be used with a GET operation along with the sample ID. The response body will be JSON formatted and contain information such as size and timestamps.

Code Block
languagebash
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Searching for samples

The role SAMPLEDB-VIEWER or higher is required to be able to search for samples.

Simple search

To perform a simple search the sample endpoint can be used with a GET operation along with the keywords query parameter. The response body will be JSON formatted and contain a list of sample metadata objects.
The example below will search for samples updated within the last month that also contain at least one submission, analysis, link, classification, or sample ID matching either of the keywords "evil" or "apex".

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" "https://api.mnemonic.no/sampledb/v2/sample?indexStartTimestamp=now%20-%201%20months&indexEndTimestamp=now&keywords=evil&keywords=apex"
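
Note that the URL is quoted so that the shell does not interpret the & characters. Alternatively, curl can build and encode the query string itself with -G and --data-urlencode; the sketch below issues the same search without manual URL encoding:

Code Block
languagebash
# Equivalent search; -G appends the --data-urlencode values to the URL as a query string.
curl -G -H "Argus-API-Key: my/api/key" \
  --data-urlencode "indexStartTimestamp=now - 1 month" \
  --data-urlencode "indexEndTimestamp=now" \
  --data-urlencode "keywords=evil" \
  --data-urlencode "keywords=apex" \
  https://api.mnemonic.no/sampledb/v2/sample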

For more detailed information on what the response model looks like, you can check out the swagger documentation.


The table below contains a list of valid query parameters and how they work.

Query parameter

Default value

Description

limit

25

Used to limit the number of search results. Can be anywhere in the range of 0 to 10,000.

indexStartTimestamp

now - 1 month

Limit the search to samples updated after this timestamp. Supports milliseconds since epoch, ISO8601 time string, and a relative time format. For more information, see the Time formats section of the general integration guide.

indexEndTimestamp

now

Limit the search to samples updated before this timestamp. Supports milliseconds since epoch, ISO8601 time string, and a relative time format. For more information, see the Time formats section of the general integration guide.

keywords


A list of strings used to match against submissions, analyses, links, the classification, and IDs of samples. This parameter can be specified multiple times. If more than one keywords parameter is given, samples containing one or more of the keywords will match, i.e. "sample contains keyword1 OR keyword2 OR keyword3". If no keywords parameter is specified, all samples falling within the indexStartTimestamp and indexEndTimestamp will match the search.


Advanced search

To perform an advanced search, the sample search endpoint can be used with a POST operation along with a JSON body making up the search criteria. The response body will be JSON formatted and contain a list of sample metadata objects.

The example below will search for samples updated within the last month that also contain at least one submission, analysis, link, classification, or sample ID matching either of the keywords "evil" or "apex".

Code Block
curl -X POST -H 'Content-Type: application/json' -H "Argus-API-Key: my/api/key" -d '{"indexStartTimestamp":"now - 1 month","indexEndTimestamp":"now","keywords":["evil","apex"],"keywordFieldStrategy":["all"],"keywordMatchStrategy":"any"}' https://devapi.mnemonic.no/sampledb/v2/sample/search


For more detailed information on all the different search criteria and possible values check out the swagger documentation.

Search criteria

Field name

Default

Description

Keyword search

keywords


A list of strings used to match against submissions, analyses, links, the classification, and IDs of samples.

keywordFieldStrategy

all

Defines what fields to match keywords against.

keywordMatchStrategy

any

Whether all or any of the keyword fields must match the keywords

Time search

startTimestamp


The timestamps indicated by the timeFieldStrategy must be after this timestamp. Supports milliseconds since epoch, ISO8601 time string, and a relative time format. For more information, see the Time formats section of the general integration guide.

endTimestamp


The timestamps indicated by the timeFieldStrategy must be before this timestamp. Supports milliseconds since epoch, ISO8601 time string, and a relative time format. For more information, see the Time formats section of the general integration guide.

timeFieldStrategy

all

Defines what time fields to match startTimestamp and endTimestamp against.

timeMatchStrategy

any

Whether all or any of the time fields must fall within the time range.

User search

user


A list of user IDs or short names. Matching samples must have resources, indicated by userFieldStrategy, created by any of these users.

userFieldStrategy

all

Defines what fields to match the list of users against.

userMatchStrategy

any

Whether all or any of the user fields must match the users.


sha256


A list of sha256 sample IDs. The search will be restricted to these samples only if defined.


classification


A list of classification criteria. The matching samples must contain a classification matching any of the defined classification criteria.


submission


A list of submission criteria. The matching samples must contain at least one submission matching any of the defined submission criteria.


analysis


A list of analysis criteria. The matching samples must contain at least one analysis matching any of the defined analysis criteria.


link


A list of link criteria. The matching samples must contain at least one link matching any of the defined link criteria.


tlp


A list of TLP strings. The matching samples must contain at least one submission, analysis, or link matching any of the defined TLP values.

Sub criteria

subCriteria (recursive)


A list of sub-criteria. The matching samples must contain at least one matching sub-criteria.

The sub-criteria model contains all of the fields described above, in addition to itself (making it recursive).


It also contains these parameters:

  • exclude: whether or not to exclude samples matching this sub-criteria

  • required: whether or not to require samples to match this sub-criteria


limit

25

Used to limit the number of search results. Can be anywhere in the range of 0 to 10,000.


indexStartTimestamp

now - 1 month

Limit the search to samples updated after this timestamp. Supports milliseconds since epoch, ISO8601 time string, and a relative time format. For more information, see the Time formats section of the general integration guide.


indexEndTimestamp

now

Limit the search to samples updated before this timestamp. Supports milliseconds since epoch, ISO8601 time string, and a relative time format. For more information, see the Time formats section of the general integration guide.


If multiple different search criteria are defined in the request, the matching samples must match all the criteria. Fields defined in the object of the JSON request are ANDed together, while objects in arrays are ORed together if both exclude and required are false (default). See the table above for an explanation of the exclude  and required  parameters.

For example, if a user issues a search containing keywords, time, user, classification, and TLP parameters, the search results returned will be samples satisfying all of these criteria. The search criteria below can thus be translated into the following SQL-like query.

Code Block
{
  "keywords": ["evil", "apex"],
  "keywordFieldStrategy": ["submission"],
  "keywordMatchStrategy": "any",
  "user": [123, 234, 345],
  "userFieldStrategy": ["submittedByUser"],
  "classification": [{
    "version": ["0.9", "1.0"],
    "superType": ["executable"]
  }, {
    "version": ["1.1"],
    "type": ["elf"]
  }],
  "tlp": ["white", "green"] 
}
Code Block
languagesql
SELECT * FROM samples WHERE
  (submission.* CONTAINS "evil" OR "apex") AND
  (submission.submittedByUser CONTAINS 123 OR 234 OR 345) AND
  (
    ((classification.version CONTAINS "0.9" OR "1.0") AND classification.superType IS "executable") OR
    ((classification.version IS "1.1") AND classification.type IS "elf")
  ) AND
  ((submission.tlp CONTAINS "white" OR "green") OR (analysis.tlp CONTAINS "white" OR "green") OR (link.tlp CONTAINS "white" OR "green"));


Analysis

An analysis holds the result of analyzing a sample, typically originating from a submission. An analysis thus belongs to a sample and must be added to, or fetched from, a specific sample.

Adding an analysis

The role SAMPLEDB-ANALYZER is required to be able to add an analysis. In addition, read access to the sample is required. Below is an example of how to add an analysis.

Code Block
languagebash
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis -d '{
  "customer": "mnemonic",
  "userAgent": {
    "name": "user agent",
    "version": "1.0"
  },
  "tlp": "white",
  "acl": [],
  "analysisResult": {
    "meta": "data"
  }
}'

The field analysisResult may contain arbitrary data, as long as it is valid JSON. In other words, it can be a string, an array, an object, and so on.

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetching analyses

The role SAMPLEDB-VIEWER or higher is required to be able to fetch or list analyses.

List

To list all analyses of a sample, the analysis endpoint can be used with a GET operation along with the sample ID. The response body will be JSON formatted, and contain all information in the analysis.

Code Block
languagebash
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis/

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetch

To fetch a specific analysis of a sample, the analysis endpoint can be used with a GET operation along with the sample ID and the analysis ID. The response body will be JSON formatted, and contain all information in the analysis.

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis/<analysis id>

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Evidence

Evidence is an entity that belongs to an analysis. It can contain information such as mime type and other data resulting from the analysis.

Adding evidence

The role SAMPLEDB-ANALYZER is required to be able to add an evidence entity.

To add an evidence entity, the evidence endpoint can be used with a POST operation along with the sample ID, and the analysis ID. The request and response body will be JSON formatted and contain the submitted evidence.

Code Block
languagebash
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis/<analysis ID>/evidence -d '{
  "evidence": [65, 65, 65, 65],
  "mimeType": "mime type of the evidence",
  "name": "evidence name"
}'

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetching evidence metadata

The role SAMPLEDB-VIEWER or higher is required to be able to fetch or list evidence metadata.

List

To list evidence entries of an analysis, the evidence endpoint can be used with a GET operation along with the sample ID, and analysis ID. The response body will be JSON formatted and contain metadata about the evidence.

Code Block
languagebash
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis/<analysis ID>/evidence/

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetch

To fetch a specific evidence entry of an analysis, the evidence endpoint can be used with a GET operation along with the sample ID, analysis ID, and the evidence ID. The response body will be JSON formatted and contain metadata about the evidence.

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis/<analysis ID>/evidence/<evidence ID>

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetching evidence data

The role SAMPLEDB-VIEWER or higher is required to be able to download the raw data in an evidence entry.

To download the raw data of an evidence entity the evidence download endpoint can be used with a GET operation along with the sample ID, analysis ID, and evidence ID. The response body will be the raw bytes of the evidence, with the Content-Type  HTTP header set accordingly. 

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis/<analysis ID>/evidence/<evidence ID>/download
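
Since the response body is the raw evidence bytes, it is usually convenient to write it directly to a file with curl's -o option (the output file name below is just an example):

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" -o evidence.bin https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/analysis/<analysis ID>/evidence/<evidence ID>/download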

For more detailed information you can check out the swagger documentation.

Link

A link is an entity used to indicate a relationship between two samples. If, for example, a sample downloads or loads another sample they have a relationship of that given type.

The role SAMPLEDB-ANALYZER is required to be able to add a link.

To add a link to a sample, the link endpoint can be used with a POST operation along with the sample ID. The request and response body is JSON formatted and contains information about the link. The example request below adds a link of type downloads between the sample given in the URL path and the sample given in the reference field of the JSON request.

Code Block
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://devapi.mnemonic.no/sampledb/v2/sample/<sample sha256 ID>/link -d '{
  "customer": "customer",
  "userAgent": {
    "name": "user agent",
    "version": "version"
  },
  "tlp": "white",
  "acl": [],
  "type": "downloads",
  "reference": "<sample sha256 ID 2>"
}'

For more detailed information on what the request and response model looks like, you can check out the swagger documentation.

The role SAMPLEDB-VIEWER or higher is required to be able to fetch or list links.

List

To list links of a sample, the link endpoint can be used with a GET operation along with the sample ID. The response body will be JSON formatted and contain data about the link.

Code Block
languagebash
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 ID>/link/

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetch

To fetch a specific link, the link endpoint can be used with a GET operation along with the sample ID, and the link ID. The response body will be JSON formatted and contain data about the link.

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 ID>/link/<link ID>

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Submission

A submission is a record of a sample file being spotted in a customer network in some way. A submission contains information such as the file name, the mime type of the sample, and the timestamp when it was discovered. For a customer to get access to a sample file and its metadata, the customer needs to have at least one submission for the given sample.

Adding a submission

The role SAMPLEDB-USER or higher is required to be able to add a submission.

Additionally, the user needs to prove that they actually possess the sample file in question. This is done in accordance with the challenge protocol described in the section below: first a challenge must be generated, then solved. The solution to the challenge is the challenge token and must be present in the request when adding the submission.

To add a submission, the submission endpoint can be used with a POST operation along with the sample ID. The request and response body will be JSON formatted and contain all the submission information.

Code Block
languagebash
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/submission -d '{
  "fileName": "sample file name",
  "timestamp": 1608557674,
  "mimeType": "sample file mime type",
  "metaData": {
    "metadata-key1": "metadata-value1"
  },
  "tlp": "green",
  "acl": [],
  "userAgent": {
    "name": "user agent",
    "version": "1.0"
  },
  "challengeToken": {
    "id": "challenge id",
    "sha256": "challenge token"
  }
}'

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetching submissions

The role SAMPLEDB-VIEWER or higher is required to be able to fetch or list submissions.

List

To list submissions of a sample, the submission endpoint can be used with a GET operation along with the sample ID. The response body will be in a JSON format, and contain all information in the submission. Note that only submissions that the customer has access to will be listed.

Code Block
languagebash
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/submission/

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetch

To fetch a specific submission of a sample, the submission endpoint can be used with a GET operation along with the sample ID and the submission ID. The response body will be JSON formatted and contain all information in the submission.

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/submission/<submission ID>

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Challenges

A challenge can be generated and used by the user to prove to the sample service that they possess a particular sample file. The answer to this challenge is the challenge token and must be included in the request when adding a sample submission. A challenge can only be used once; to add multiple submissions, a challenge must be created and solved for each submission.

Generating a challenge

To generate a challenge, the challenge endpoint must be used with a POST operation along with the sample ID. The response body will be JSON formatted and contain the challenge itself.

Code Block
curl -X POST -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/challenge

For more detailed information on what the response model looks like, you can check out the swagger documentation.

This request can return three different responses, depending on whether the sample file exists and its file size. For how to handle each scenario, see the section solving a challenge below.

Solving a challenge

Generating a challenge might return three different kinds of responses in the form of HTTP status code 404, 422, or 200.

HTTP status code 404

This means that the sample file does not exist in the system. In this case, you need to upload the sample file using the sample-upload endpoint. See the sample section for how to do this. When the upload is finished, the JSON response will contain an object challenge with an id and sha256. This is then the solution to a challenge and can be used in the request when adding a submission.

HTTP status code 422

This means that the sample file is so small that a challenge cannot be generated. Instead, a full upload of the sample file must be done. When the upload is finished, the JSON response will contain an object challenge with an id and sha256. This is then the solution to a challenge and can be used in the request when adding a submission.

HTTP status code 200

The response will in this case contain a JSON object with three fields inside the object data: id, offset, and length.

Field name

Description

id 

The ID of the challenge itself. This must be submitted along with the solution to the challenge

offset 

Represents an offset (in bytes) into the sample file

length 

Represents the number of bytes starting from offset in the sample file

The solution to the challenge will be the sha256 hash of length bytes, starting from the given offset into the sample file. Below is an example of how to solve the challenge for an arbitrary sample file in bash.

Code Block
languagebash
$ curl -X POST -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/36621fe568d2bea6c98fb388d1e36a696bbc1885ad7a72e83b86c279e14e4177/challenge
{
  "responseCode": 200,
  "limit": 0,
  "offset": 0,
  "count": 0,
  "metaData": {},
  "messages": [],
  "data": {
    "id": "4e8341a0-ba36-4eaa-8c0a-ce70ea92879b",
    "offset": 1239,
    "length": 1024
  },
  "size": 0
}
$ dd if=sample-file bs=1 skip=1239 count=1024 | shasum -a 256
1024+0 records in
1024+0 records out
1024 bytes transferred in 0.004667 secs (219411 bytes/sec)
7d97adb3783d3e136eba8542a1bb388eb70a291fdbde9bd63bc06f66a3ef7460 -

The solution is 7d97adb3783d3e136eba8542a1bb388eb70a291fdbde9bd63bc06f66a3ef7460, and must be submitted along with the challenge id 4e8341a0-ba36-4eaa-8c0a-ce70ea92879b when adding a submission.
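
Putting the pieces together, below is a rough sketch of the full flow for the HTTP 200 case: generate a challenge, solve it locally, and use the solution when adding a submission. It assumes the sample file already exists in the service, that jq is available for parsing the JSON responses, and that the submission field values (file name, mime type, TLP, user agent, timestamp format) are placeholders based on the examples above:

Code Block
languagebash
#!/usr/bin/env bash
# Sketch only: generate a challenge, solve it locally, and add a submission (HTTP 200 case).
set -euo pipefail

API="https://api.mnemonic.no/sampledb/v2"
KEY="my/api/key"                 # your Argus API key
SAMPLE_FILE="./sample-file.exe"  # local copy of the sample

# The sample ID is the sha256 hash of the file.
SAMPLE_ID="$(shasum -a 256 "$SAMPLE_FILE" | cut -d' ' -f1)"

# 1. Generate a challenge for the sample.
CHALLENGE="$(curl -s -X POST -H "Argus-API-Key: $KEY" "$API/sample/$SAMPLE_ID/challenge")"
CHALLENGE_ID="$(echo "$CHALLENGE" | jq -r '.data.id')"
OFFSET="$(echo "$CHALLENGE" | jq -r '.data.offset')"
LENGTH="$(echo "$CHALLENGE" | jq -r '.data.length')"

# 2. Solve the challenge: sha256 of LENGTH bytes starting at OFFSET in the sample file.
SOLUTION="$(dd if="$SAMPLE_FILE" bs=1 skip="$OFFSET" count="$LENGTH" 2>/dev/null | shasum -a 256 | cut -d' ' -f1)"

# 3. Add the submission, using the challenge ID and solution as the challenge token.
curl -s -X POST -H "Argus-API-Key: $KEY" -H "Content-Type: application/json" \
  "$API/sample/$SAMPLE_ID/submission" -d @- <<EOF
{
  "fileName": "$(basename "$SAMPLE_FILE")",
  "timestamp": $(date +%s),
  "mimeType": "application/octet-stream",
  "tlp": "green",
  "acl": [],
  "userAgent": {"name": "example-script", "version": "1.0"},
  "challengeToken": {"id": "$CHALLENGE_ID", "sha256": "$SOLUTION"}
}
EOF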

Jobs

A job represents the analysis of a sample and is used to track its progress. On its own it holds no analysis results or verdicts, but contains information such as when an analysis job was started and completed, and whether it failed or not. A job also contains one or more job tasks. The jobs API is intended to be used by the external system managing the analysis itself.

Job creation

A job cannot be created manually via any API endpoint; it is created automatically by the Sample service based on incoming submissions.

Whenever a new job is created, it is posted to the Kafka topic Sample.Analysis.Job.OSL/SVG. Messages on this topic are JSON formatted and contain the job ID, the submission it was created from, and the sample classification.

Code Block
{
	"jobID": "12345678-1234-1234-1234-123456789abc",
	"submission": {
		"id": "abcdef01-abcd-abcd-abcd-abcdef012345",
		"customer": "customerid",
		"sample": {
          "id": "8f84837cf04c1694512256ef78fa67c051b92326e31af2598efad37ccd2e01d9"
        },
		"createdByUser": {
          "id": 0,
          "shortName": "sn",
          "name": "name",
          "customer": null,
          "domain": null
        },
		"createdTimestamp": 123,
		"name": "fielname.exe",
		"timestamp": 124,
		"mimeType": "application/octet-stream",
		"metaData": {
          "key": "value"
        },
		"tlp": "white",
		"acl": [],
		"userAgent": {
          "name": "python-requests",
          "version": "3"
        }
	},
	"classification": {
		"sample": "8f84837cf04c1694512256ef78fa67c051b92326e31af2598efad37ccd2e01d9",
		"classificationID": "12345678-1234-1234-1234-123456789012",
		"createdTimestamp": 123,
		"version": "1",
		"type": "PE",
		"superType": "executable",
		"arch": "x86",
		"platform": "win",
		"meta": {}
	}
}

Updating jobs

The role SAMPLEDB-ANALYZER is required to be able to update jobs.

As the analysis job progresses the status should be updated. This can be done using the job endpoint with a PUT operation. The response body will be in JSON format, and contain the updated job.

Code Block
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/sampledb/v2/job/<job id> -d '{
	"state": "executing"
}'

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Aborting jobs

The role SAMPLEDB-USER or higher is required to be able to abort jobs.

Since the analysis jobs themselves are executed externally, the Sample service can't abort analysis jobs directly. It can, however, request that a job be aborted. This can be done using the jobs endpoint with the DELETE operation. The response body will be in JSON format and contain the updated job status.

Code Block
curl -X DELETE -H "Argus-API-Key: my/api/key" https://devapi.mnemonic.no/sampledb/v2/job/<job id>

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetching jobs

The role SAMPLEDB-VIEWER or higher is required to be able to fetch or list jobs.

List

To list analysis jobs, the jobs endpoint can be used with a GET operation. The response body will be in JSON format, and contain a list of objects containing information about the jobs. By default, only active jobs (enqueued or executing) are returned.

Code Block
curl -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/job

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetch

To fetch a single instance of an analysis job, the fetch job endpoint can be used with a GET operation. The response body will be in JSON format, and contain all information about the job.

Code Block
curl -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/job/<job id>

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Searching for jobs

To perform an advanced search, the job search endpoint can be used with a POST operation along with a JSON body making up the search criteria. The response body will be JSON formatted and contain a list of job objects.

The example below will search for jobs that were aborted or timed out within the last week.

Code Block
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://devapi.mnemonic.no/sampledb/v2/job/search -d '{
	"state": [
		"aborted",
		"timeout"
	],
	"startTimestamp": "now - 1 week",
	"endTimestamp": "now",
	"timeFieldStrategy": [
		"enqueuedTimestamp",
		"startTimestamp",
		"endTimestamp"
	],
	"timeMatchStrategy": "any"
}'

For more detailed information on what the response model looks like, you can check out the swagger documentation.

The table below contains a list of valid query parameters and how they work.

Query parameter

Default value

Description

limit

25

Limit the number of search results. Can be anywhere in the range of 0 to 10,000.

offset

0

Skip the first 'offset' number of objects

sortBy

enqueuedTimestamp

Sort the result by the value of certain fields. This parameter may be a list of parameter values. Refer to the swagger documentation for a list of possible values

state

enqueued, executing

Return jobs in the defined states only

sampleSha256


Return jobs for certain samples only

customer


Return jobs for certain customers only

startTimestamp


The start timestamp to search for jobs within a time range

endTimestamp


The end timestamp to search for jobs within a time range

timeFieldStrategy


Which timestamp fields to use when searching within a time range

timeMatchStrategy


Whether all or any of the time fields must fall within the specified time range

Job tasks

A job task is always part of a job. A job task represents the progress and outcome of a single analysis step of a sample, typically performed by an analysis worker. It contains information such as when the worker started and completed, whether it failed or not, and a reference to the analysis result (if it completed successfully).

Adding job tasks

The role SAMPLEDB-ANALYZER is required to be able to add an analysis job task.

To add a task to a job, the job task endpoint can be used with a POST operation along with the job ID. The request and response body is JSON formatted and contains information about the task. The example request below adds a task to a job with an initial state and the name of the analysis worker.

Code Block
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/sampledb/v2/job/<job id>/task -d '{
	"analyzerName": "analysis-worker-name",
	"state": "enqueued"
}'

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Updating job tasks

The role SAMPLEDB-ANALYZER is required to be able to update job tasks.

As the task progresses, the state and result should be updated. This can be done using the job task endpoint with the PUT operation. The response body will be in JSON format, and contain the updated job task.

Code Block
curl -X PUT -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://devapi.mnemonic.no/sampledb/v2/job/<job id>/task/<task id> -d '{
	"state": "success",
	"analysisID": "7512c89c-2308-4d4c-86af-94462b6ac3ad"
}'

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetching job tasks

The role SAMPLEDB-VIEWER or higher is required to be able to fetch or list job tasks.

List

To list job tasks, the job task endpoint can be used with a GET operation. The response body will be in JSON format, and contain a list of objects containing information about the tasks. 

Code Block
curl -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/job/<job id>/tasks

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Verdicts

A verdict contains an assessment about the sample such as whether or not it is malicious. The verdict is created automatically based on analysis results as part of an analysis job. A verdict can also be added manually by using the add verdict API endpoint.

Adding verdicts

The role SAMPLEDB-ANALYZER is required to be able to add verdicts.

To manually add a verdict the verdict endpoint can be used with the POST operation. When doing this, the verdict will be marked with the flag 'manual' to indicate that it was manually added.

The response body will be in JSON format and contain the full verdict.

Code Block
curl -X POST -H "Argus-API-Key: my/api/key" -H "Content-Type: application/json" https://api.mnemonic.no/sampledb/v2/sample/<sample id>/verdict -d '{
	"comment": "test",
	"status": "benign",
	"analysisID": ["be99793e-b994-4b2c-baf8-1b518001223b"]
}'

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetching verdicts

The role SAMPLEDB-VIEWER or higher is required to be able to fetch or list verdicts.

List

To list verdicts, the verdict endpoint can be used with a GET operation. The response body will be in JSON format and contain a list of objects containing information about the verdicts.

Code Block
curl -H "Argus-API-Key: my/api/key" https://devapi.mnemonic.no/sampledb/v2/sample/<sample id>/verdict

For more detailed information on what the response model looks like, you can check out the swagger documentation.

Fetch

To fetch the latest (current) verdict, the fetch sample metadata endpoint must be used. This endpoint is described in detail in the Sample section.
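
In practice this means issuing the same sample metadata request shown earlier; the current verdict is expected to be included in the returned sample metadata:

Code Block
curl -X GET -H "Argus-API-Key: my/api/key" https://api.mnemonic.no/sampledb/v2/sample/<sample sha256 hash>/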
