Guidance to Reviewers on Contribution Types
The goal of the contribution types and accompanying guidelines is to support fair, nuanced, and context-aware reviewing at ECCV. Computer vision research encompasses a wide range of work—from theoretical foundations to practical systems, from new algorithms to community datasets, and from early-stage ideas to mature deployments. A single set of uniform review criteria is therefore neither appropriate nor desirable.
To reflect this diversity, authors will select a primary contribution type that best describes the main focus of their paper. Reviewers are asked to evaluate each submission primarily through the lens of this declared contribution type, using the corresponding guidelines as a framework for assessment rather than a rigid checklist.
These instructions are meant to help reviewers:
- Align their expectations with the nature of the work,
- Recognize different forms of scientific value, and
- Avoid penalizing papers for not optimizing along dimensions that are not central to their claimed contribution.
At the same time, reviewers should exercise their own judgment, consider the paper holistically, and assess whether the work makes a meaningful and credible contribution to the ECCV community.
Important Note on Contribution Types
These criteria are intended as guidance rather than strict requirements. Not every criterion must apply to every paper within a given contribution type, and some papers may reasonably sit at the boundary between categories or combine elements of multiple types. Reviewers should exercise judgment and focus on whether the paper makes a compelling contribution consistent with its primary claimed type, rather than checking every item on a list.
For example, a paper tagged as Theory/Foundational may include strong experimental evaluation but lack formal proofs if the nature of the theory does not require them; conversely, an algorithmic paper may contain insightful analysis even if its main contribution is methodological. In such cases, reviewers should assess the work holistically and avoid penalizing papers simply because they do not satisfy every listed criterion or exhibit characteristics of other contribution types.
The goal of these categories is to enable fair, appropriate, and context-aware reviewing, not to impose rigid boundaries on how research must be presented.
Familiarity with Author Instructions on Contribution Types
Reviewers are encouraged to familiarize themselves with the Instructions for Authors on Selecting a Contribution Type, which explain how authors are expected to choose their primary contribution category. Understanding these guidelines will help reviewers better interpret the authors’ selection and evaluate whether the authors’ choice of contribution type is reasonable in the context of the paper.
The author instructions are available here.
Reviewers should consider these instructions as context when assessing whether the declared contribution type aligns with the substance of the submission and when deciding how to frame their evaluation.
On Disagreement with the Selected Contribution Type
Reviewers are asked to primarily evaluate each submission through the lens of the contribution type selected by the authors. However, if a reviewer believes that the chosen contribution type is inappropriate or misleading, they may evaluate the paper according to what they consider to be the correct category only if this decision is made explicit and carefully justified in the review.
In such cases, reviewers must clearly explain:
- why they disagree with the authors’ selected contribution type,
- which contribution type they believe is more appropriate, and
- how this alternative framing influenced their evaluation.
This justification should be stated explicitly in the “Justification of Rating” section of the review, so that authors, area chairs, and the program committee can properly understand and interpret the assessment.
Reviewers should avoid silently applying criteria different from those associated with the declared contribution type; any departure from the authors’ selection must be transparent, well-reasoned, and clearly documented.
1. Algorithms / General — Reviewer Guidelines
This contribution type indicates that a paper’s primary contribution is an algorithmic or methodological advance. Please evaluate the work using criteria appropriate for this contribution type, recognizing that the goal is to advance methods rather than theory, datasets, or systems.
This category also serves as a default contribution type. Some papers may lie on the boundary between multiple categories (e.g., combining methodological, theoretical, or applied elements), and in such cases, authors may reasonably select Algorithms/General when no single other category clearly dominates.
Accordingly, this category should be understood as corresponding to the standard, general review style traditionally used at major computer vision conferences for papers without an explicit contribution type tag. Reviewers should therefore apply broadly appropriate scientific standards of novelty, rigor, and impact, rather than expecting the paper to fit narrowly into more specialized categories such as Theory/Foundational, Applied/Systems, Datasets/Benchmarks, or Concept & Feasibility.
When reviewing, consider the following:
(Note that this list of evaluation criteria is not exhaustive. Some criteria may not be applicable to all papers within a Contribution Type, and, depending on the context of the paper, certain criteria associated with other Contribution Types may also be relevant. Reviewers should therefore not restrict their assessment solely to the listed criteria. Instead, they are encouraged to evaluate submissions based on their own expertise, the specifics of the paper, and their professional judgment. These criteria are intended as guidelines to support and improve the quality and consistency of reviews, not as a rigid checklist.)
1. Novelty and Contribution
- Does the proposed method meaningfully differ from prior work?
- Is the core idea clearly articulated and motivated?
- Does the proposed method advance the state of the art in a substantive way (performance, efficiency, robustness, generality, simplicity, or interpretability)?
2. Technical Soundness
- Is the method technically sound and clearly described?
- Are assumptions reasonable and clearly stated?
- Are ablations and analyses sufficient to justify key design choices?
3. Empirical Evaluation
- Are experiments appropriate for the claims being made?
- Are comparisons to relevant baselines fair and comprehensive?
- Are the results convincing, reproducible, and statistically meaningful?
- Are limitations clearly acknowledged?
4. Generality and Impact
- Is the method likely to generalize beyond the specific tasks tested?
- Could this approach influence future research directions in computer vision?
Important Reminder
Do not penalize these submissions for:
- Lack of new datasets
- Absence of real-world deployment
- Limited theoretical analysis
Their acceptance should be based primarily on:
- The strength, novelty, and rigor of the proposed method.
2. Theory / Foundational — Reviewer Guidelines
This contribution type indicates that a paper’s primary contribution is theoretical or foundational. Please evaluate the work using criteria appropriate for this contribution type, recognizing that the goal is to deepen understanding rather than to produce novel algorithms, components, or systems, or to achieve the best empirical results.
When reviewing, consider the following:
(The note under Contribution Type 1 applies here as well: this list of evaluation criteria is not exhaustive, criteria from other Contribution Types may also be relevant, and reviewers should rely on their own expertise, the specifics of the paper, and their professional judgment rather than treating the list as a rigid checklist.)
1. Significance of the Theory
- Does the work address a meaningful conceptual or theoretical question?
- Does it clarify, formalize, or explain important aspects of computer vision or learning systems?
- Does it challenge or refine existing assumptions?
2. Rigor and Correctness
- Are definitions, assumptions, and statements precise?
- Are proofs or arguments logically sound and well-presented?
- Are results non-trivial and insightful rather than purely technical?
3. Relevance to Computer Vision
- Even if abstract, does the theory connect clearly to vision problems, models, or phenomena?
- Does it offer insights that could inform future methods or evaluations?
4. Empirical Support (if included)
- If experiments are provided, they should be seen as illustrative rather than exhaustive.
- Lack of large-scale experiments should not be a major penalty if the theoretical contribution is strong or does not require such experiments.
Important Reminder
Do not penalize these submissions for:
- Not achieving state-of-the-art performance
- Limited empirical scale
- Lack of a new algorithm
Their acceptance should be based primarily on:
- The depth, originality, and clarity of the theoretical contribution.
3. Applied / Systems — Reviewer Guidelines
This contribution type indicates that a paper’s primary contribution is an applied system, real-world deployment, or engineering-focused solution. Please evaluate the work using criteria appropriate for this contribution type, recognizing that impact may come from practicality rather than methodological novelty.
When reviewing, consider the following:
(The note under Contribution Type 1 applies here as well: this list of evaluation criteria is not exhaustive, criteria from other Contribution Types may also be relevant, and reviewers should rely on their own expertise, the specifics of the paper, and their professional judgment rather than treating the list as a rigid checklist.)
1. Problem Relevance
- Does the work address a real and meaningful application?
- Is the practical motivation clearly explained?
2. System Design and Engineering Quality
- Is the system well-designed and technically sound?
- Are constraints (efficiency, latency, cost, scalability, reliability) clearly addressed?
- Are design choices justified, e.g., through ablation studies?
3. Real-World Impact
- Is there evidence of real-world use, deployment, or realistic evaluation?
- Does the work solve a practical problem better than existing approaches?
4. Evaluation
- Are experiments or case studies convincing and appropriate?
- Do the datasets and benchmarks used model real-world scenarios?
- Is the selection of baselines sufficiently extensive?
5. Method vs. System
If a new method is included:
- It should be judged as part of a system, not as a standalone research advance.
- The system’s value must remain clear independently of any single algorithmic component.
Important Reminder
Do not penalize these submissions for:
- Limited methodological novelty
- Not advancing theory
Their acceptance should be based primarily on:
- Practical impact, feasibility, and system-level contribution to the community.
4. Datasets / Benchmarks — Reviewer Guidelines
This contribution type indicates that a paper’s primary contribution is a dataset, benchmark, or challenge. Please evaluate the work using criteria appropriate for this contribution type, recognizing that the goal may not be methodological novelty.
When reviewing, consider the following:
(The note under Contribution Type 1 applies here as well: this list of evaluation criteria is not exhaustive, criteria from other Contribution Types may also be relevant, and reviewers should rely on their own expertise, the specifics of the paper, and their professional judgment rather than treating the list as a rigid checklist.)
1. Significance and Relevance
- Does the dataset address an important and well-motivated problem?
- Is there a clear potential impact on the research community?
- Does it enable new research directions or stronger evaluation?
2. Data Quality and Design
- Is the data collected and annotated with sufficient rigor?
- Are annotation protocols clearly described and justified?
- Is the dataset comprehensive, diverse, and representative of the domain?
- Are ethical considerations addressed (e.g., consent, bias, privacy compliance)?
3. Accessibility and Documentation
All papers that select Datasets/Benchmarks as their Contribution Type are subject to the Dataset Release Policy. By choosing this category, authors confirm that the proposed dataset and/or benchmark will be publicly available by the time of camera-ready submission. Please verify whether any statements in the paper contradict this commitment.
Assess whether:
- The planned form of access is reasonable and not categorically restrictive (e.g., requiring completion of an access request form is acceptable; categorical denial of access is not).
- Licensing terms, hosting arrangements, and plans for long-term availability are clearly specified.
- Documentation is sufficient to enable reuse by the community (e.g., data formats, annotation protocols, metadata, and supporting scripts).
If the dataset and/or benchmark cannot be made publicly available due to legal, privacy, or ethical constraints, they should not be considered a contribution, and the paper should be evaluated accordingly. Authors must also clearly specify in the submission any parts of the dataset and/or benchmark that they do not plan to make publicly available. These parts should likewise not be considered a contribution.
In such cases, carefully consider whether the remaining elements of the work that the authors plan to make publicly available still justify selecting Datasets/Benchmarks as the primary contribution type (e.g., if the main contribution is a publicly available evaluation protocol, partial dataset, metadata, or tooling), or whether the true primary contribution lies elsewhere (e.g., in a new method or analysis), in which case the selected contribution type may be inappropriate.
If the authors indicate that the dataset or benchmark (or some of their parts) will remain private due to legal, privacy, or ethical constraints, please notify the Area Chair in the confidential comments, so that this issue can be taken into account during the decision process.
4. Benchmark Setup & Baselines
- Are evaluation protocols well-defined, standardized, and reproducible?
- Are baseline results sufficient and reasonable?
- If a leaderboard or challenge is part of the dataset/benchmark, is its structure fair and sound?
5. Method vs. Dataset
If a new method is included:
- It should be judged only as a baseline, not the main contribution.
- The dataset’s value must remain clear independently of the method.
Important Reminder
Do not penalize these submissions for:
- Limited methodological novelty
- Not achieving state-of-the-art performance on unrelated benchmarks
Their acceptance should be based primarily on:
- Whether the dataset/benchmark will provide clear value to the ECCV community.
5. Concept & Feasibility — Reviewer Guidelines
This contribution type indicates that a paper’s primary contribution is an original idea with feasibility-level validation rather than a fully developed, scaled, or optimized solution. This category is intended for high-risk/high-reward work that introduces new concepts, paradigms, or directions for computer vision research.
Importantly, this tag does not exempt the paper from methodological rigor. Rather, it signals that extensive scaling, large benchmarks, or exhaustive comparisons are not the primary contribution—the core contribution is the idea itself and its demonstrated feasibility.
When reviewing, consider the following:
(The note under Contribution Type 1 applies here as well: this list of evaluation criteria is not exhaustive, criteria from other Contribution Types may also be relevant, and reviewers should rely on their own expertise, the specifics of the paper, and their professional judgment rather than treating the list as a rigid checklist.)
1. Originality and Vision
- Is the central idea genuinely novel and intellectually stimulating?
- Does the paper propose a new way of thinking about a problem, model, or paradigm in vision?
- Would this idea, if further developed, meaningfully influence the field?
2. Correctness and Soundness
- Are the claims logically coherent and technically reasonable?
- Are assumptions clearly stated and justified?
- Does the proposed concept make sense within existing scientific understanding?
3. Claim–Evidence Alignment
- Do the experiments, proofs-of-concept, or analyses appropriately support the paper’s claims?
- Are the authors careful not to overstate what their preliminary results show?
- Is the feasibility evidence sufficient to make the core idea credible?
4. Quality of Feasibility Validation
- Is there at least some meaningful validation (e.g., toy experiments, controlled studies, simulations, small-scale prototypes, or case studies)?
- Are these demonstrations appropriate for the nature of the idea?
5. Clarity and Framing
- Is the idea clearly explained and well-motivated?
- Are limitations, risks, and open questions discussed transparently?
Important Reminder
Do not penalize these submissions for:
- Lack of large-scale experiments
- Not achieving SOTA results
- Not providing exhaustive ablations
- Limited deployment or benchmarking
However, reviewers should still expect:
- Clear reasoning,
- Careful experimental or conceptual validation, and
- Honest alignment between claims and evidence.
Their acceptance should be based primarily on:
- Novelty, correctness, feasibility, and potential long-term impact on the field.
Reviewer Preferences for Contribution Types
During registration, reviewers may select one or more Contribution Types of papers that they would prefer to review (with Algorithms/General as the default option).
These preferences are not binding and do not guarantee that you will only be assigned papers of those types. Reviewer expertise, topical fit, reviewer suggestions from Area Chairs, and workload balance will remain the primary factors in assigning papers to reviewers.
Your stated preferences will be visible to Area Chairs in OpenReview and will be used as an additional signal when they suggest or match reviewers to papers. The goal is simply to help Area Chairs identify potentially better matches—especially for more specialized Contribution Types—while preserving flexibility in the assignment process.
You are therefore encouraged to indicate preferences honestly, but you should expect that you may still be asked to review papers outside your selected types when your expertise is valuable.