
Paper Reviewing Guidelines

Last updated: 2025-03-26 2:48AM GMT

This document guides those who participate in the ISMAR 2025 reviewing process. It is directed towards those who perform full reviews of papers (i.e., the secondary review coordinator, or 2AC; committee members; and reviewers) and those who write meta-reviews (the primary review coordinator, or 1AC).

This guideline covers recommendations and best practices, so please read the sections below carefully. To ensure the integrity of the review process, we ask that reviewers neither upload papers to any AI system (e.g., Claude AI, Wordtune, etc.) for analysis, support, or guidance during the review period, nor write their reviews using such systems.

Cycle 1: Initial Review

Review Scores:

This section explains the ISMAR 4-point ranking used in the initial round of reviews and describes when we think one should select a particular score. We are aware that the decision can be subjective in many cases and that choosing between two adjacent scores is often a judgment call. We hope that this explanation removes some of the gray area in decision-making and helps align reviewers on how to rate borderline papers.

  1. Highly Likely to Revise Successfully: This paper has only minor issues. Revisions will be manageable and likely to succeed.
  2. Moderately Likely to Revise Successfully: While the paper is promising, there are notable issues requiring substantial revision. With careful and thoughtful revisions, the paper has a good chance of improving.
  3. Uncertain: Major components of the research, such as theoretical grounding, methodology, or data interpretation, may need rethinking or substantial rewriting. Success is possible but uncertain.
  4. Unlikely to Revise Successfully: The paper has significant flaws making it unlikely that revisions will bring it to an acceptable standard. The likelihood of success after revisions is very low, and it may require a complete overhaul to address core issues.

Cycle 2: Re-Review for Selected Submissions

Based on the ratings in Cycle 1, the authors of a selected set of manuscripts will be invited to revise and resubmit their paper. All reviewers will be asked to provide a short review of the revised version and the rebuttal document. Reviewers then score the revised paper on a 6-point ranking and participate in a discussion to provide a final recommendation.

Review Scores: 6-point Ranking

This section explains the ISMAR 6-point ranking and describes when we think one should select a particular score. As with the 4-point ranking, the decision can be subjective in many cases, and choosing between two adjacent scores is often a judgment call. We hope that this explanation removes some of the gray area in decision-making and helps align reviewers on how to rate borderline papers.

Writing a Review

The following guidelines outline the content and key points of a high-quality review for ISMAR 2025. We believe that all paper reviews should be written with the same guidelines in mind. Please adhere to the guidelines and contact the program chairs or review coordinators for any questions.

Keep in mind that your decisions affect the public standing of ISMAR 2025. The program chairs are therefore very serious about ensuring the highest possible reviewing standards for ISMAR 2025. The coordinators and/or program chairs will ask you to improve your review if they think the reasons for your judgment are unclear.

We strongly recommend that you read the entire (short) article by Ken Hinckley, which we found to give a lot of constructive advice: [Hinckley, K. (2016). So You’re a Program Committee Member Now: On Excellence in Reviews and Meta-Reviews and Championing Submitted Work That Has Merit](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/10/Excellence-in-Reviews-MobileHCI-2015-Web-Site.pdf).

Here are some of the major elements from Hinckley’s paper, in some cases augmented with further advice from Steve Mann and Mark Bernstein, for quick reference:

From ISMAR’s perspective, we would like to add that, in your role of maintaining high-quality papers, we ask that you not hunt for hidden flaws in order to shoot down papers wherever possible, but instead review papers for their merits. When reviewing, keep in mind that almost every paper we review “could have done x, or y, or z”. Do not fall into this trap! We cannot reject papers because the authors did not perform their research the way we would have done it, or even the way it is typically done. Instead, we must judge each paper on its own merit and ask whether the body of work presented can stand on its own, as presented. Sure, every paper “could have done more”, but is the work that has been done of sufficient quality and impact?

Furthermore, we should not reject papers because “this experiment has been done before”. This fallacy of novelty ignores the long-standing tradition in science of replication. New work that performs similar research and finds consistent (or even conflicting) results is of value to the community and to science in general, and should be considered on its own merits.

When evaluating papers that run human-subjects studies, it is important that the participant sample is representative of the population for which the technology is being designed. For example, if the technology is being designed for a general population, then the participant sample should include balanced gender representation and a wide range of ages. All papers with human-subjects studies must report, at a minimum, demographic information including age, gender, and any other aspects of social and diversity representation relevant to the study. If this is not the case, reviewers should not automatically reject the paper, but should instead provide appropriate constructive critique and advice regarding general claims made from non-representative sample populations.

And finally, to quote Ken Hinckley: “When in doubt, trust the literature to sort it out.”

Writing a Meta-Review

The following guidelines outline the content of a good meta-review. This section is for review coordinators (1AC) and explains the content the chairs believe best supports a decision.

Keep in mind that the authors will see the meta-review along with the final decision. Be constructive and explain more rather than less, especially when the authors receive an unfavorable decision. Very often, the research or paper was simply not ready at the time of submission; where feasible, invite the authors to resubmit next year.

Frequently Asked Questions (FAQs)

Should I reject a paper that has already been posted on ArXiv or a similar preprint service?

No, you should not reject papers that have been published on ArXiv or a similar service, as the authors may have done so to obtain a timestamp for their work. However, if the ArXiv submission explicitly states that the work is under review at ISMAR, was created within 30 days of the ISMAR submission deadline, or if the authors listed this prepublication on their individual or institutional webpages or generated publicity for it through other forms of media, then it may constitute a violation of ISMAR policies. Please raise any related concerns in your review and/or contact the Program Chairs by email at program2025@ieeeismar.net. Please also read the full “Double-Blind Process and Anonymity Policy” in the Author Guidelines.

What if a submission appears to build upon or overlap with prior work?

In some situations, a submission may build upon prior work. As part of the Author Guidelines, authors were instructed to be proactive about clarifying such cases by uploading additional, anonymized documents to the submission system, which will be considered during the review process. If you suspect any issues related to this point, please first contact the primary coordinator of the paper. If no clarification has been uploaded by the authors, please carefully assess how far the publications overlap. Note that ISMAR does not consider a prior non-archival 2-page poster/demo extended abstract a reason for rejecting a paper submitted on the same topic.

How should I handle ethics approval and participant consent?

The authors will state in the submission form whether they followed the ethical guidelines imposed by their affiliation and report whether approval has been obtained or why no approval was necessary. This will be considered during the review process. As such, the lack of a clear statement on ethics review and participant consent in the paper should be raised, but should not be grounds for rejection past the desk-reject stage.

Thank you for your support and work to ensure the highest-quality ISMAR reviews.



Do not hesitate to contact us for any further information: program2025@ieeeismar.net

ISMAR 2025 Paper Chairs

Ulrich Eck, Gun Lee, Alexander Plopski, Missie Smith, Qi Sun, Markus Tatzgern

Document History

This document was updated and extended by the ISMAR 2025 Paper Chairs (Ulrich Eck, Gun Lee, Alexander Plopski, Missie Smith, Qi Sun, Markus Tatzgern), following extensions by the ISMAR 2024 Program Chairs (Ulrich Eck, Misha Sra, Jeanine Stefanucci, Maki Sugimoto, Markus Tatzgern, Ian Williams) and the ISMAR 2023 Conference Paper Chairs (Jens Grubert, Andrew Cunningham, Evan Peng, Gerd Bruder, Anne-Hélène Olivier, and Ian Williams). These guidelines were updated for the ISMAR 2022 conference papers review process by Henry Duh, Jens Grubert, Jianmin Zheng, Ian Williams, and Adam Jones, and are based on the work of the ISMAR 2019 PC Chairs: Shimin Hu, Denis Kalkofen, Joseph L. Gabbard, Jonathan Ventura, Jens Grubert, and Stefanie Zollmann.