Friday 8 February 2019

Reviewers, please be kind!

Academia is a very competitive world, and that is something many people, including myself, do not understand when they start a doctorate. Competitiveness is fine in itself, since we learn to accept failure and to be persistent in getting our work published. But sometimes I feel disappointed when I receive reviewers' comments, not because they rejected my paper, but because it is clear from their comments that they did not even read the entire article! I suspect many people can relate to this experience. So I decided to write this post and ask reviewers to read articles in their entirety, and to be kind and encouraging while recommending ways of making the work publishable.

Personally, I am a new researcher and I have not written enough papers to fully understand the process and avoid small mistakes. But if reviewers do not help me improve, I will never be able to progress in my academic career.

Here are some comments that I found disturbing:
- This one is from a paper that was rejected before it was even reviewed: "a link to bird diversity is offered as motivation, the link to remote sensing is not pursued further in the manuscript". The article proposed a new methodology for detecting dead trees in full-waveform LiDAR data. If LiDAR is not remote sensing, then what is it? After merely rephrasing the abstract and submitting to another high-impact journal, the article was accepted with minor corrections, which suggests that the first editor had not read it at all.

- "The LiDAR-specific complications of mapping full-waveform data to scalar volumes are entirely handle by DASOS. It means means that the manuscript oversells its contribution" The reviewer missed the part that DASOS was implemented by the authors of the manuscript to make the research possible!

- "Evaluation of the method is completely insufficient," could have been phrased in a kinder way, especially when the reviewer request comparison with Canny Edge that it included in the article. On top of that the reviewer states "While Canny was considered, its awful results in table 3 indicates that it was implemented incorrectly or using a poor choice of settings." How is it possible for the standard python opencv function for the Canny Edge algorithm to be incorrectly implemented? The reviewer could have request instead an explanation for the bad results, which exists: the Canny Edge includes a smoothing step and the gradient differences are low. Therefore, Canny Edge fails to detect many edges!

- The same reviewer questioned the approach used to find k for k-means, arguing that it may not be reliable, but missed the fact that the article also implemented mean shift, which does not require the number of clusters (k) to be pre-defined (see the second sketch after this list).
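To illustrate the Canny point, here is a minimal sketch. It is not the code from the paper: the image, the contrast step, and the threshold values are all invented for illustration. It shows how an edge with a weak gradient vanishes or survives depending on the hysteresis thresholds.

```python
import numpy as np
import cv2

# Hypothetical low-contrast image: two flat regions whose grey values
# differ by only 10 levels, mimicking weak gradients (values invented).
img = np.full((100, 100), 100, dtype=np.uint8)
img[:, 50:] = 110

# The classical Canny pipeline starts with Gaussian smoothing, which
# weakens an already weak gradient even further.
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# Hysteresis thresholding then discards gradients below the thresholds,
# so the edge vanishes with typical settings but survives relaxed ones.
edges_typical = cv2.Canny(blurred, 100, 200)  # common textbook thresholds
edges_relaxed = cv2.Canny(blurred, 5, 15)     # thresholds below the weak gradient

print("edge pixels, typical thresholds:", np.count_nonzero(edges_typical))
print("edge pixels, relaxed thresholds:", np.count_nonzero(edges_relaxed))
```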
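And to illustrate the clustering point, a similarly hedged sketch using scikit-learn (the points and the bandwidth value are made up, not our data): k-means must be told k up front, while mean shift discovers the number of clusters from the data.

```python
import numpy as np
from sklearn.cluster import KMeans, MeanShift

# Hypothetical 2-D points in three blobs; stand-ins for real feature vectors.
rng = np.random.default_rng(0)
centres = [(0, 0), (4, 4), (0, 4)]
points = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in centres])

# k-means needs the number of clusters chosen up front...
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(points)

# ...whereas mean shift infers the number of clusters from the density
# of the data; only a kernel bandwidth is required.
meanshift = MeanShift(bandwidth=1.0).fit(points)

print("k-means clusters (pre-defined):", len(set(kmeans.labels_)))
print("mean-shift clusters (discovered):", len(set(meanshift.labels_)))
```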


Do not get me wrong: there are reviewers who provide constructive feedback that helps improve a paper. I just feel that more reviewers should act like that. Behind every article there is a researcher who spent months reading papers, conducting experiments, stressing over every unexpected result, missing social events while hoping the code would finally work, and staying up late to finish the writing. So be kind to them! I understand that I, like every young researcher, make mistakes. My articles often fall short in presentation, but if reviewers judge and harshly criticize my work without even reading the entire article, how will I ever improve?





