The Artifact Evaluation process is a service provided by the community to help authors of accepted papers provide more substantial supplements to their papers so that future researchers can more effectively build on and compare with previous work.

Call for Reviewers

Please use the Self-Nomination Form to nominate yourself or a colleague for the ISSTA 2023 Artifact Evaluation Program Committee.

Call for Artifacts

Research artifacts are digital objects that were either created by the authors of a research article to be used as part of their study or generated by their experiments.

The artifact evaluation (AE) aims to foster reproducibility and reusability. Reproducibility refers to researchers or practitioners being able to validate the paper’s results using the provided artifact. Reusability means that researchers can extend the artifact or use it in a different context or for a different use case. Overall, the artifact evaluation process allows our field to progress by incentivizing and supporting authors to make their artifacts openly available and improve their quality. Furthermore, a formal artifact evaluation documents the outstanding nature of the published research through recognizable and recognized badges stamped directly on the published papers. Therefore, it is common to offer the authors of accepted papers at high-quality conferences, such as ISSTA, an artifact evaluation service before publication.

More details can be found in the ACM guidelines on Artifact Review and Badging.

Submission and Preparation Overview

The following instructions give an overview of how to prepare an artifact for submission. Please also read the instructions and explanations in the subsequent sections before moving on with the submission.

  1. Prepare your artifact evaluation package. The package should contain at least:
  • The artifact (more details below)
  • A README file (with a .txt, .md, or .html extension) including two sections:
    • “Getting Started” describes how to set up the artifact and validate its general functionality (e.g., based on a small example) in less than 30 minutes.
    • “Detailed Instructions” describes how to validate the paper’s claims and results in detail.
  • The accepted version of the paper, to ease the artifact evaluation process (e.g., by double-checking the results)
  2. Upload the artifact to Zenodo or a similar service (e.g., figshare) to acquire a DOI.
  3. Submit the DOI and additional information about the artifact (e.g., the paper abstract) using HotCRP.
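As a rough sketch, assembling such a package before the Zenodo upload might look as follows (all file and directory names below are hypothetical placeholders, not prescribed by the AE process):

```shell
# Hypothetical package layout; replace the placeholder files with your
# real artifact, README, and accepted paper.
mkdir -p artifact/tool artifact/data
printf '# Getting Started\n\n# Detailed Instructions\n' > artifact/README.md
: > artifact/paper-accepted.pdf        # the accepted version of the paper
tar -czf artifact.tar.gz artifact      # archive to upload to Zenodo
```

Zenodo accepts plain archives such as .zip or .tar.gz; the exact layout is up to you, as long as the README and paper are easy to find.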

The Artifact Evaluation Process

The following provides a detailed explanation of the scope of artifacts, the goals of the evaluation process, and the submission instructions.

Scope of Artifacts

Artifacts can be of various types, including but not limited to the following:

  • Tools, which are standalone systems.
  • Data repositories storing, for example, logging data, system traces, or raw survey data.
  • Frameworks or libraries, which are reusable components.
  • Machine-readable proofs (see the guide on Proof Artifacts by Marianna Rapoport).

Artifacts can also be a combination of the above. For instance, an artifact evaluation package might contain a runnable tool together with sample and test input data to showcase it. Please contact the AE Chairs if you are in doubt about whether your artifact can be submitted to the AE process.

Evaluation Objectives and Badging

The evaluation of the artifacts targets three different objectives:

  • Availability: The artifact should be available and accessible to everyone interested in inspecting or using it. As detailed below, an artifact has to be uploaded to Zenodo to obtain this badge.

  • Functionality: The main claims of the paper should be backed up by the artifact.

  • Reusability: Other researchers or practitioners should be able to inspect, understand, and extend the artifact.

Each of these objectives is handled as part of the AE process, and each successful outcome is awarded an ACM badge.


Artifact Availability

Your artifact should be made available via Zenodo, a publicly funded platform that aims to support open science. The artifact needs to be self-contained and versioned.

During upload, you will be required to select a license and provide additional information, such as a description of the artifact. The open science platform (e.g., Zenodo) will generate a DOI. The DOI is necessary for the artifact evaluation submission.

Note that the artifact is immediately public and can no longer be modified or deleted. However, it is possible to upload an updated version of the artifact that receives a new DOI (e.g., to address reviewer comments during the kick-the-tires response phase).

Note on Zenodo: The default storage on Zenodo is currently limited to 50 GB per artifact but can be extended on request (see the Zenodo FAQ, section “Policies”). To ease the AE process, please keep the size of the artifact as small as possible.


Functionality Evaluation

To judge the functionality and reusability of an artifact, two to three reviewers will evaluate every submission. The process happens in two stages. First, reviewers will check the artifact’s basic functionality (using the information provided in the “Getting Started” section of the README file). Reviewers will communicate potential issues to the authors, who can fix or respond to them as part of the response phase. Second, reviewers will thoroughly evaluate the artifact and validate that it backs up the paper’s important claims.

As mentioned above, the README file has to account for these two phases and should be structured in two sections: “Getting Started” and “Detailed Instructions”.

The “Getting Started” section has to describe:

  • the artifact’s requirements;
  • the steps to check the basic functionality of the artifact.

For the requirements, please consider that the AE reviewers may use a different operating system and, likely, a different environment than yours. If you decide, for example, to submit only the source code of a tool, ensure that all the requirements are documented and widely available. If the artifact is packaged as a virtual machine or a container, the section should contain detailed instructions on how to run the image or container.

To help reviewers validate your artifact’s basic functionality, describe which basic commands of your artifact to execute, how much time these commands will likely take, and what output to expect. Please ensure that the total time to evaluate the “Getting Started” section does not exceed 30 minutes.
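For instance, a “Getting Started” section could pair each command with its expected runtime and output along these lines (the commands below are a generic stand-in, not a real artifact invocation):

```shell
# Stand-in smoke test: replace these commands with your artifact's own.
printf 'a\nb\nc\n' > sample-input.txt
# Expected: prints the number of input lines, 3 (runs in under a second).
wc -l < sample-input.txt
```

Stating the expected output and rough runtime for each step lets reviewers spot setup problems early, before moving on to the detailed evaluation.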

The “Detailed Instructions” section should present how to use the artifact to back up every claim and experiment described in the paper.

Meeting the above requirements is necessary to achieve the “Artifacts Evaluated – Functional” badge.


Reusability Evaluation

For the “Artifacts Evaluated – Reusable” badge, all requirements for the “Artifacts Evaluated – Functional” badge need to be met as a prerequisite. When submitting an artifact, authors must explain why their artifact should also receive an “Artifacts Evaluated – Reusable” badge.

Typically, a reusable artifact is expected to have one or multiple of the following characteristics:

  • The artifact is highly automated and easy to use.
  • The artifact is comprehensively documented. The documentation describes plausible scenarios on how it could be extended. Providing an example extension is a plus.
  • The artifact contains everything necessary for others to extend it. For example, a tool artifact should include its source code, any dependencies that are not commonly available, and working instructions for compiling it. Containers or virtual machines that bundle all requirements are preferred.
  • The README should contain or point to other documentation that is part of the artifact and describes use-case scenarios or details beyond the scope of the paper. Such documentation is not limited to text; for example, a video tutorial could demonstrate how the artifact can be used and evaluated more generally.

In general, the wide variety of artifacts makes it difficult to come up with an exact list of expectations for obtaining the “Artifacts Evaluated – Reusable” badge.

The above points should be seen as a guideline for authors and reviewers regarding what to provide and expect. In case of any doubt, feel free to contact the AE Chairs.

Distinguished Artifact Awards

Artifacts that go above and beyond the expectations of the Artifact Evaluation Committee will receive a Distinguished Artifact Award.


Frequently Asked Questions

  • Is the reviewing process double-blind? No, the reviewing process is single-blind. The reviewers will know the authors’ identities, whereas the reviewers’ identities are kept hidden from the authors. Authors can thus submit artifacts that reveal the authors’ identities.
  • How can we submit an artifact that contains private components (e.g., a commercial benchmark suite)? One option is to upload only the public part of the artifact to Zenodo and share a link to the private component that is visible only to the reviewers, by specifying the link separately (e.g., using the Bidding Instructions and Special Hardware Requirements HotCRP field). If this is not possible, another option is to give reviewers access to a machine that allows them to interact with the artifact’s private component. Both options must adhere to the single-blind reviewing process (i.e., they must not reveal the reviewers’ identities). Whether an “Availability” badge will be awarded for partially available artifacts will be determined based on the AEC’s evaluation.


If you have any questions or comments, please reach out to the AE Chairs.