Reviewing Guidelines

This document guides participants in the ISMAR 2026 reviewing process. It is directed toward those who perform full reviews of papers (i.e., the secondary review coordinator, or 2AC; international program committee (IPC) members; and external reviewers) and those who write meta-reviews (the primary review coordinator, or 1AC).

This guideline covers recommendations and best practices, so please read it carefully. To ensure the integrity of the review process, we ask that reviewers neither upload papers to any AI system (e.g., Claude AI, Wordtune, etc.) for analysis, support, or guidance during the review period, nor write their reviews using such systems.

ISMAR 2026 Paper Reviewing Guidelines

  1. Review Model for ISMAR 2026
  2. General Reviewing Principles
  3. Review Scores and Recommendations
  4. Writing a High-Quality Review
  5. Human-Subjects Research and Ethics
  6. Role of the Rebuttal
  7. Writing a Meta-Review (for 1ACs)
  8. Frequently Asked Questions (FAQs)

1. Review Model for ISMAR 2026

ISMAR 2026 follows a single-cycle review process with rebuttal, using a unified decision framework for IEEE TVCG journal papers and ISMAR conference papers.

Each submission will receive:

After reviews are released, authors will have an opportunity to submit a rebuttal. Final decisions will be made after considering:

Possible outcomes are:

Conditionally accepted papers must satisfy all required revisions before final acceptance.

Desk rejections may occur at any stage for submissions that clearly violate formatting, anonymity, scope, or ethical requirements.


2. General Reviewing Principles

When reviewing for ISMAR 2026, reviewers are expected to:

A paper may be an edge case or explore emerging directions. If it addresses a topic that is novel, timely, or underexplored—but relevant to AR, MR, or VR—it should be reviewed thoughtfully and fairly.

ISMAR welcomes work across the full spectrum of Extended Reality, embracing AR, MR, and VR, including applied systems, interaction, perception, design, theory, and evaluation—not only tracking or core systems work.


3. Review Scores and Recommendations

This section explains the ISMAR 6-point rating scale and describes when we think one should select a particular score. We are aware that the decision can be subjective in many cases and that choosing between two adjacent scores is often a judgment call. We hope this explanation removes some of the gray area in decision-making and helps align reviewers on borderline rating decisions. Reviewers will provide a numerical rating together with a written justification. Scores must always be supported by clear reasoning in the public comments.

Definitely accept (rating of 6): I would argue strongly for accepting this submission. Select this option if the paper is acceptable as-is (except for minor edits), with a strong contribution and merits for the ISMAR community.

Probably accept (rating of 5): I would argue for accepting this submission. Select this option if the paper has a valid contribution and merits for the ISMAR community. Some additional explanations or minor corrections are required.

Rather accept (rating of 4): The paper has weaknesses, but the contributions outweigh the weaknesses. Select this option if the research is relevant, the topic is of value for the ISMAR community, and the attitude towards this contribution is overall positive despite the identified weaknesses.

Rather reject (rating of 3): The paper has contributions, but the weaknesses outweigh the contributions. Select this option if the research is relevant, the topic is of value for the ISMAR community, but the attitude is overall negative because of the identified weaknesses.

Probably reject (rating of 2): I would argue for rejecting this submission. Select this option if the research is relevant and the topic is of value for the ISMAR community, but the research has several severe weaknesses.

Definitely reject (rating of 1): I would argue strongly for rejecting this submission. Select this option if the contribution is not understandable and the paper has no recognizable merits, or it is entirely unclear what information the ISMAR community gains from this submission.


4. Writing a High-Quality Review

The following guidelines outline the content and key points of a high-quality review for ISMAR 2026. All paper reviews should be written with these guidelines in mind. Please adhere to them and contact the program chairs or review coordinators with any questions.

Mind that your decisions affect ISMAR 2026’s public image. Therefore, the program chairs are very serious about ensuring the highest possible reviewing standards for ISMAR 2026. The coordinators and/or program chairs will ask you to improve your review if they think the reasons for your judgment are unclear.

We strongly recommend that you read the entire (short) article by Ken Hinckley, which we found to give a lot of constructive advice:

Hinckley, K. (2016). So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit.

Here are some of the major elements from Hinckley’s paper, in some cases modified by adding insights from further advice by Steve Mann and Mark Bernstein, for quick reference:

From ISMAR’s perspective, we would like to add that, in your role of maintaining the high quality of papers, we ask that you not categorically hunt for hidden flaws and assassinate papers wherever possible, but instead review papers for their merits. When reviewing, keep in mind that almost every paper we review “could have done x, or y, or z”. Don’t fall into this trap! We cannot reject papers because the authors did not conduct their research the way we would have, or even the way it is typically done. Instead, we must judge each paper on its own merit and on whether the body of work presented can stand on its own, as presented. Sure, every paper “could have done more”, but is the work that has been done of sufficient quality and impact?

Furthermore, we should not reject papers because “this experiment has been done before”. This fallacy of novelty ignores the long-standing tradition in science of replication. New work that performs similar research and finds consistent (or even conflicting) results is of value to the community and to science in general, and should be considered on its own merits.

When evaluating papers that run human-subjects studies, it is important that the participant sample is representative of the population for which the technology is being designed. For example, if the technology is being designed for a general population, the participant sample should include balanced gender representation and a wide age range. All papers with human-subjects studies must, at a minimum, report demographic information, including age, gender, and other relevant characteristics of social and diversity representation. If this is not the case, reviewers should not automatically reject the paper; instead, they should provide appropriate, constructive critique and advice regarding general claims drawn from non-representative sample populations.

And finally, to quote Ken Hinckley: “When in doubt, trust the literature to sort it out.”

5. Human-Subjects Research and Ethics

For papers involving human participants:

6. Role of the Rebuttal

The rebuttal allows authors to:

Reviewers should read rebuttals carefully and reconsider their initial assessments when appropriate. A rebuttal is not expected to fix everything, but it should inform the discussion and the final recommendation.

7. Writing a Meta-Review (for 1ACs)

The following guidelines outline the content of a good meta-review. This section is intended for primary review coordinators (1ACs) and explains the content that the chairs believe best supports a decision.

Mind that the authors will see the meta-review with the final decision. Be constructive and explain more rather than less, especially when the authors receive an unfavorable decision. Very often, the research or paper was not ready at the time of submission. Invite the authors to resubmit next year if feasible.

8. Frequently Asked Questions (FAQs)

Thank you for your support and work to ensure the highest-quality ISMAR reviews.

Do not hesitate to contact us for any further information: paper2026@ieeeismar.net

ISMAR 2026 Paper Chairs

Document History:

This document was updated and extended by the ISMAR 2026 Paper Chairs: Mariko Isogawa, Kangsoo Kim, Alejandro Martin-Gomez, Alexander Plopski, Missie Smith, and Florian Weidner. It was previously extended by the ISMAR 2025 Paper Chairs: Ulrich Eck, Gun Lee, Alexander Plopski, Missie Smith, Qi Sun, and Markus Tatzgern; by the ISMAR 2024 Program Chairs: Ulrich Eck, Misha Sra, Jeanine Stefanucci, Maki Sugimoto, Markus Tatzgern, and Ian Williams; and by the ISMAR 2023 Conference Paper Chairs: Jens Grubert, Andrew Cunningham, Evan Peng, Gerd Bruder, Anne-Hélène Olivier, and Ian Williams. These guidelines were updated for the ISMAR 2022 conference papers review process by Henry Duh, Jens Grubert, Jianmin Zheng, Ian Williams, and Adam Jones, and are based on the work of the ISMAR 2019 PC Chairs: Shimin Hu, Denis Kalkofen, Joseph L. Gabbard, Jonathan Ventura, Jens Grubert, and Stefanie Zollmann.