

Suggested Practices for ECCV 2024 Authors

Reproducibility: Refer to the Reproducibility Checklist as a guide for making sure your paper is reproducible; reviewers should follow the same guide when evaluating papers. We highly encourage authors to voluntarily submit their code as part of the supplementary material or to link to a properly anonymized GitHub repository, especially if they plan to release the code upon acceptance. Reviewers may optionally check this code to verify that the paper’s results are reproducible and trustworthy, but they are not required to do so. For accepted papers, we expect (but do not require) that authors either submit the accompanying code or link to a public GitHub repository.
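As a concrete (and purely optional) illustration of one common reproducibility practice, the sketch below fixes the main sources of randomness and logs library versions. It is a minimal Python sketch assuming NumPy and PyTorch are used; the helper name set_seed is ours, not part of any checklist:

    import os
    import random

    import numpy as np
    import torch

    def set_seed(seed: int = 0) -> None:
        """Fix the major sources of randomness so runs can be repeated."""
        random.seed(seed)                  # Python's built-in RNG
        np.random.seed(seed)               # NumPy's global RNG
        torch.manual_seed(seed)            # PyTorch RNGs (CPU and all CUDA devices)
        os.environ["PYTHONHASHSEED"] = str(seed)
        # Trade some speed for determinism in cuDNN-backed operations.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    if __name__ == "__main__":
        set_seed(42)
        # Logging versions alongside results helps others rerun the experiments.
        print(f"torch {torch.__version__}, numpy {np.__version__}")

Note that full determinism may also require pinning data-loading order and library versions; the above covers only the most common seed sources.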

Release of code and data: In the spirit of reproducibility, we strongly encourage researchers to release the code and data associated with their papers. Both code and data (or representative samples or details thereof) can be submitted as part of the supplementary material for optional consideration by reviewers.

If a paper promises the release of code, the code is expected to be released by the camera-ready deadline. If a paper submission claims a dataset release as one of its contributions, the authors must make the dataset publicly available no later than the camera-ready deadline; specifically, such papers must provide a link to the released dataset together with the camera-ready paper. Note that this does NOT imply that all datasets used in ECCV submissions must be public. The use of private or otherwise restricted datasets for training or experimentation is acceptable, but such datasets cannot be claimed as contributions of the paper, as they do not become available to the scientific community.

Attribution of existing assets: Just as papers are expected to cite previous work that inspired a submission or on which it builds, we expect ECCV papers to cite assets, such as code or datasets, that have been used in the creation of the submitted manuscript. If there are multiple versions of an asset, specify the version you used. This attribution of assets can be made either in the main paper or in the supplementary material. We furthermore encourage authors to discuss the license and/or copyright terms of the assets used, and to include a URL where appropriate.

Personal data / human subjects: If a paper makes use of personal data and/or data from human subjects, including personally identifiable information or offensive content, we expect the collection and use of such data to have been conducted carefully in accordance with the ethics guidelines. In many countries and institutions, the collection and use of personally identifiable data or data from human subjects requires approval from an Institutional Review Board (IRB) or equivalent. If the use of such data was approved by an IRB, stating this is sufficient. If the use of such data has not (yet) been approved by an IRB, authors should describe any pending approval process and how the data was obtained, and should discuss whether and how consent was obtained (or why it could not be). This discussion can be included either in the main paper or in the supplementary material.

New datasets typically require IRB review in the US or the appropriate local ethics approval in most other countries; it is the dataset creators’ responsibility to obtain it. If the authors use an existing, published dataset, we encourage, but do not require, them to check how the data was collected and whether consent was obtained. Our goal is to raise awareness of possible issues that might be ingrained in our community, and we therefore encourage dataset creators to provide this information to the public.

Refer to the ethics guidelines for more detail.

Discussion of limitations: Considering the limitations of an approach is an important part of good academic scholarship. Such a discussion should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, asymptotic approximations that hold only locally). Authors should reflect on how these assumptions might be violated in practice and what the implications would be.

Authors should also reflect on the scope of their claims, e.g., if the approach was only tested on a specific type of imagery or with only a few experimental runs. In general, empirical results often depend on implicit assumptions, which should be articulated. The discussion should reflect on the factors that influence the performance of the approach; for example, a recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting.
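On the point of a few runs: one simple way to make the scope of an empirical claim explicit is to report variability across repeated runs rather than a single number. A minimal Python sketch, using placeholder accuracies purely for illustration:

    import statistics

    # Hypothetical accuracies from repeated runs with different seeds
    # (placeholder values, not real results).
    run_accuracies = [0.812, 0.804, 0.821, 0.809, 0.815]

    mean = statistics.mean(run_accuracies)
    std = statistics.stdev(run_accuracies)  # sample standard deviation

    # Reporting mean +/- std is more informative than reporting the best run.
    print(f"accuracy: {mean:.3f} +/- {std:.3f} over {len(run_accuracies)} runs")

Reporting the spread alongside the mean lets readers judge whether an observed improvement exceeds run-to-run noise.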

We understand that authors might fear that complete honesty about limitations could be used by reviewers as grounds for rejection. It is worth keeping in mind that a worse outcome might be reviewers discovering limitations that the paper does not acknowledge. In general, we advise authors to use their best judgment and to recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community.