Reviewer Guidelines

Inspired by the ICML and NeurIPS reviewer guidelines and tuned to the specifics of ECML PKDD 2023.

The review process has two goals. First, to identify papers that represent an important contribution to the fields of machine learning and knowledge discovery in databases for both participants and readers. Second, to provide authors with constructive feedback that they can use to improve their work. Your role as a reviewer is critical to both of these goals.

When reviewing a paper, always consider the impact that the work could have on the community in the long run — unconventional ideas, novel problems, and “cross-field” contributions are critical to the successful development of the field. So do not neglect the high-level picture in favor of technical correctness (which is, of course, also important). Keep in mind: novel and/or interdisciplinary work (e.g., work that is not an incremental extension of problems already studied, but perhaps formulates a new problem of interest) is often very easy to criticize because, for example, the assumptions it makes and the models it uses are not yet widely accepted by the community (precisely because of their novelty). However, such work can be of great long-term importance to the progress of the field. So try to be aware of this bias and avoid dismissive criticism.

ECML PKDD has a three-level review structure: the program committee (PC) members, the area chairs, and the program chairs. PC members provide detailed reviews for a small set of papers. Area chairs are in charge of monitoring and managing the review process for a set of papers, making recommendations for paper acceptance, and recommending high-quality papers for awards. The program chairs, among other things, take the final decisions about the papers and form the conference program.

In this document, we go through the ECML PKDD 2023 reviewer form (Part 1) and discuss what is expected from a good review in each of its sections. In Part 2, we give some examples of good reviews.

Part 1. The ECML PKDD review form

1. To which track has the paper been submitted?

Indicate to which of the two ECML PKDD tracks the paper was submitted.

ECML PKDD has two main tracks: the Research Track and the Applied Data Science Track. In the Research Track, we welcome research articles from all areas of machine learning, knowledge discovery, and data mining. We are looking for high-quality contributions in terms of novelty, technical excellence, potential impact, reproducibility, and clarity of presentation. Submissions should demonstrably advance the field (e.g., improve the state of the art or provide new theoretical insights). In the Applied Data Science Track, we seek articles that demonstrate unique applications of machine learning, data mining, and knowledge discovery to real-world problems and bridge the gap between practice and current theory. Papers should explicitly describe the real-world problem they address (including the particular characteristics of the data, such as data set size, noise levels, sampling rates, etc.), the methodology used, and the conclusions drawn from the use case.

Authors were asked to indicate the track to which they submitted their paper. This information is available to reviewers by clicking on the paper’s ID in the CMT system, which displays a page with additional information about the paper. Reviewers should take the appropriate perspective when reviewing the paper.

2. Relevance

Does the paper fit in the scope of the conference and in the chosen track as described above?

3. Summary of the paper

The review begins with a summary of the main ideas and contributions of the paper. Although this part of the review may not provide much new information to the authors, it is valuable to meta-reviewers and program chairs and can help authors determine if there are any misconceptions that affect the evaluation of the paper. Be brief, but specific.

Keep it polite and avoid writing: “This paper did not provide any new ideas.” Instead, try to find at least one contribution (e.g., something along the lines of “This paper proposes a model that primarily combines the models from [cite A] and [cite B]”) and then comment on its significance in the following sections.

4. Main strengths and weaknesses

Evaluate how strong this paper is by weighing its strengths against possible weaknesses. Your evaluation should reflect an absolute assessment of the contributions of each paper. You should not assume that you have received an unbiased sample of papers, nor should you adjust your answers to create an artificial balance between positive and negative recommendations in your batch of papers.

5. Detailed review with constructive comments to the authors

Write specific arguments explaining why you made a particular recommendation for the submission. Comment on the technical soundness, significance, originality, clarity, and reproducibility of the paper, by addressing the following aspects:

  • Technical soundness: Are the claims well supported by theoretical analysis or experimental results? Are the authors thorough and honest in discussing both the strengths and weaknesses of the proposed methods? Are related works properly cited?
  • Significance: Estimate the extent to which researchers or practitioners will build on or use the proposed ideas and results. Papers that explore new territory or point out new research directions are preferable to papers that advance the state of the art only incrementally.
  • Originality: Are the tasks or methods new? How does the work differ from previous contributions? Does it provide unique data, unique conclusions from existing data, or a unique theoretical or experimental approach?
  • Clarity: Is the paper clearly written? Is it well organized?
  • Reproducibility: Does the paper include sufficient information to reproduce the results? See Question 6 below for details on reproducibility.

Your arguments should be objective, specific, concise, and polite. Please avoid vague, subjective complaints. Remember that you are not evaluating your personal interest in the submission, but its scientific contribution to the field. For each argument, explain its significance. Be careful not to make unsubstantiated claims (e.g., if you claim that a certain aspect of the paper has been done before, please provide appropriate citations for that claim).

Continue with detailed comments for authors and minor comments that are not crucial to the acceptance of the paper but would improve the paper’s quality/understandability overall.

6. Reproducibility

Does the paper include sufficient information to reproduce the results? Reproducibility is defined as the ability to implement, as accurately as needed, the experimental and computational procedures, with the same data and tools, to obtain the same results as in the original work. Reproducibility of methods involves describing the procedures and data of the study in sufficient detail that the same procedures can be repeated exactly and lead to the same results and conclusions. Reproducibility is important not only because it ensures that the results are accurate, but also because it provides transparency and reassurance that we understand exactly what was done. In general, reproducibility reduces the risk of error and thus increases the reliability of experiments. It is now widely agreed that reproducibility is an essential part of any scientific process and that it needs to become a regular practice in our research.

Select the option that best answers the question: Does the paper include sufficient information to reproduce the results?

  • Excellent. All the required information is provided.
  • Good. Reasonably complete information is provided.
  • Fair. The information provided represents a fair effort towards reproducibility.
  • Poor. Some description is provided, but it is clearly insufficient for reproducibility.
  • No reproducibility information is included in the paper.

7. Overall rating

ECML PKDD is one of the flagship conferences in the fields of Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Based on the above-discussed relevance and contributions, and weighing the paper’s strengths and weaknesses, one of the following scores should be assigned:

  • Strong accept: Outstanding paper
  • Accept: Good paper
  • Weak accept: Borderline paper, tending to accept
  • Weak reject: Borderline paper, tending to reject
  • Reject: Clearly below the acceptance threshold
  • Strong reject: Wrong or known results

8. Reviewer expertise

Choose the option that best describes your expertise.

  • High: I have published on the topic
  • Medium: I have read key papers in this area, but have not published on it
  • Low: I have seen some talks and/or read a few papers on the topic

9. Confidence in the assigned score

Choose the option that best describes your confidence in your review.

  • Very high: I am absolutely certain
  • High: I have understood the main arguments and have made high-level checks of the proofs
  • Medium: I have understood the main points in the paper, but skipped proofs and technical details
  • Low: I have made an educated guess

10. Confidential comments to the other Reviewers, Area Chairs, and Program Chairs

Write the comments that you wish to be kept confidential from the authors. Such comments might include explicit comparisons of the submission to other submissions and criticisms that are more bluntly stated. If you accidentally find out the identities of the authors, please do not divulge the identities to anyone, but use this field to tell your AC that this has happened.

Part 2. Examples of a good review

Acknowledgments: The examples of strong reviews we use in this document are partially adapted from the ICML 2020 reviewer guidelines, which are based on the NeurIPS 2019 reviewer guidelines, which in turn utilize reviews written for some NeurIPS and ICLR papers. We thank the NeurIPS program committee and the reviewers who wrote such remarkable reviews.

1. Merits of the paper

The following are examples of contributions a paper might make. This list is not exhaustive, and a single paper may make multiple contributions.

“The paper provides a thorough experimental validation of the proposed algorithm, demonstrating much faster runtimes without loss in performance compared to strong baselines.”

“The paper proposes an algorithm for [insert] with sample complexity [or computational complexity] scaling linearly in the observed dimensions; in contrast, existing algorithms scale cubically.”

“The paper presents a method for robustly handling covariate shift in cases where [insert assumptions], and demonstrated the impact on [insert an application].”

“The paper provides a framework that unifies [insert field A] and [insert field B], two previously disparate research areas.”

“This paper demonstrates how the previously popular approach of [insert] has serious limitations when applied to [insert].”

“This paper formulates a novel problem [insert brief description] and clearly shows that it is of interest to the community because [insert brief explanation].”

2. Justification of your score and detailed comments for authors

When justifying your score and writing detailed comments to authors, consider the following (non-exclusive) list of criteria: significance, novelty, potential impact, technical quality, presentation/clarity, and reproducibility. Below are examples of good comments with respect to each of these criteria.

a) Significance

Try to answer the following questions: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach?

Example: “This article answers a very natural question: algorithm A is an extremely classical, and very simple algorithm, yet we do not fully understand its convergence rate. This paper provides a novel proof that is conceptually simple and elegant, and I found its presentation very clear.”

Example: “This paper seems to be a useful contribution to the literature on [topic], showing a modest improvement over the state of the art. However, the paper could be strengthened by demonstrating and analyzing the efficacy of the approach in [situation X].”

b) Novelty

Try to answer the following questions: Are the tasks or methods new? Is the work a novel combination of well-known techniques? Is it clear how this work differs from previous contributions? Is related work adequately cited? Does the work formulate a novel problem?

Example: “The main contribution of this paper is to offer a convergence proof for minimizing sum fi(x) + g(x), where fi(x) is smooth and g is nonsmooth, in an asynchronous setting. The problem is well-motivated; there is indeed no known work on this, to my knowledge. … There are two main theoretical results. Theorem 1 gives a convergence rate for the proposed algorithm, which is incrementally better than a previous result. Theorem 2 gives the rate for an asynchronous setting, which is more groundbreaking.”

Example: “The paper is missing a related work section and also does not cite several related works, particularly regarding [topic 1] (list of citations), [topic 2] (list of citations) and [topic 3] (list of citations). The proposed model is similar to that of (citation), though the [specific detail 1] makes the proposed method sufficiently original. The study of [specific detail 2] is also fairly novel.”

c) Potential Impact

Try to answer the following questions: Will the ideas in this paper have an impact on the community in the long run? Does the paper bridge previously disconnected fields? Does the paper push the community in a new, interesting/important direction?

Example: “In order to prove Theorem 2, the paper presents a novel proof technique based on [key idea]. This technique could potentially be used for a much broader class of problems such as [X], [Y] and [Z], and would lead to improved rates in settings where [assumption holds].”

Example: “The approach presented in this paper is well-evaluated in [domain], but potentially useful in many other settings. Because the approach is somewhat complex, it could have even more potential impact if the authors released an implementation.”

d) Technical Quality

Try to answer the following questions: Is the submission technically sound? Are claims well supported (e.g., by theoretical analysis or experimental results)? Is this a complete piece of work or work in progress? Are the authors careful about evaluating both the strengths and limitations of their work? Does the evaluation justify the main claims of the paper?

Example: “The technical content of the paper appears to be correct, albeit with some small mistakes that I believe are typos rather than technical flaws (see #4 below). …

4. The equation in line 125 appears to be wrong. Shouldn’t there be a line break before the last equal sign, and shouldn’t the last expression be equal to [equation]?”

Example: “The idea of having a bound for [X] is certainly good. While the paper did demonstrate that the bound does indeed contain [X] as expected, it is not entirely clear that this bound will be useful for model selection. This is not demonstrated in the experiments reported in the paper, despite being one of the main claims of the paper.”

e) Presentation/Clarity

Try to answer the following questions: For a reader with the appropriate background knowledge, is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improvement.)

Example: “The paper is generally well-written and structured clearly. The notation could be improved in a couple of places. In the inference model (equations between ll. 82-83), I would suggest adding a frame superscript to clarify that inference is occurring within each frame, e.g. [equation]. In addition, in Section 3 it was not immediately clear that a frame is defined to itself be a sub-sequence.”

Example: “While the paper is fairly readable, there is substantial room for improvement in its clarity. There were several variables that were used in equations before they were defined, such as [example 1] and [example 2]. Moreover, in the statement of Theorem 2, it was unclear whether the same assumptions were being made as in Theorem 1. Finally, I was sometimes confused because \ell appears to be overloaded and used to mean both [X] and [Y].”

f) Reproducibility

Try to answer the following question: Does the paper contain enough details to reproduce the results? If the submission has supplementary code and you managed to run it and reproduce the results [this is optional for reviewers], please do mention it in your review.

Example: “The paper describes all the algorithms in full detail and provides enough information for an expert reader to reproduce its results. However, it seems that Theorem 1 requires an additional assumption of d > 3, which is not specified.”

Example: “Neither the main text nor the supplementary code explains how the hyperparameters are selected for the synthetic experiment. The paper should explain the specific procedure used to alleviate reproducibility concerns.”