Connecting Theory and Practice in Optoelectronics

How to get your simulation paper accepted

Looking back at 2016, I realize that my yearly load of peer reviews has grown to almost 80 journal papers, mainly in the field of optoelectronic device simulation. The rising number of such submissions to top journals is certainly good news, but the paper quality is often insufficient. Unfortunately, I have to recommend rejection of most papers after a detailed assessment reveals essential mistakes. A fundamental mistake, in my view, is the unproven assumption that simulations represent the real world. Authors often don't seem to understand that computer simulations lead us into a virtual reality in which many unreal effects can happen, depending on their choice of mathematical models and material parameters.

Thus, I would like to list a few general recommendations that would make a simulation paper more acceptable, at least in my view:

  1. Outline purpose, methodology, and key conclusions of your simulation study in the introduction. Compare your approach to previous publications.
  2. Identify and discuss essential physical mechanisms and corresponding mathematical models. Modeling should be governed by insight into device physics and not by mathematical convenience. Possible side-effects can only be evaluated if included in the models.
  3. Find out which material parameters in your models have a significant impact on your results and justify your choice of parameter values. In some cases, a wide range of values can be found in the literature, so that an error analysis may be required to investigate the corresponding uncertainty of your results.
  4. Reproduce measured characteristics to validate your model, at least for a reference device. You may need to find experimental partners or suitable literature sources that provide a sufficient data set. This is often difficult, but it is your job as the author to prove to the general audience that your simulation is realistic. If your models or parameters are incorrect, your paper may lead the reader in the wrong direction.
  5. Separate evidence-based results from speculative interpretations. Conclusions should be derived from demonstrated facts and not from wishful thinking.
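The error analysis suggested in point 3 can be as simple as a Monte Carlo sweep over the literature range of each uncertain parameter. The sketch below is illustrative only, using the standard ABC recombination model for LED internal quantum efficiency; the parameter ranges are placeholders meant to mimic typical literature spreads, not recommended values:

```python
import random

def iqe(n, A, B, C):
    """Internal quantum efficiency from the ABC recombination model."""
    return B * n**2 / (A * n + B * n**2 + C * n**3)

# Illustrative parameter ranges (placeholders, not measured values):
# literature spreads for GaN-LED coefficients can span an order of magnitude.
A_range = (1e7, 1e8)      # 1/s, Shockley-Read-Hall coefficient
B_range = (1e-11, 1e-10)  # cm^3/s, radiative coefficient
C_range = (1e-31, 1e-30)  # cm^6/s, Auger coefficient

random.seed(0)
n = 1e19  # carrier density in cm^-3

# Sample the parameter space uniformly and collect the resulting IQE values.
samples = [
    iqe(n,
        random.uniform(*A_range),
        random.uniform(*B_range),
        random.uniform(*C_range))
    for _ in range(10_000)
]

mean = sum(samples) / len(samples)
spread = (min(samples), max(samples))
print(f"IQE at n = {n:.0e}: mean {mean:.2f}, range {spread[0]:.2f} to {spread[1]:.2f}")
```

If the resulting spread of the output is comparable to the effect the paper claims to predict, the claim is not supported without narrowing the parameter uncertainty first.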

However, be aware that some level of uncertainty always remains, since simulations always simplify the real world and measured data are always limited. Two recent publications investigate such uncertainties in GaN-LED simulations [1,2]. In fact, the large body of peer-reviewed but often contradictory GaN-LED simulation papers underlines the urgency of establishing quality guidelines in our field. I hope this blog post initiates a broader discussion on how to improve the general reputation and the practical impact of numerical simulations.

UPDATE 2/7/17: An updated list of recommendations is now available here.

[1] How to decide between competing efficiency droop models for GaN-based light-emitting diodes,  Appl. Phys. Lett. 107, 031101 (2015)

[2] On the uncertainty of the Auger recombination coefficient extracted from InGaN/GaN light-emitting diode efficiency droop measurements, Appl. Phys. Lett. 106, 101101 (2015)


9 responses to “How to get your simulation paper accepted”

  1. Wei LU 2017-01-08 at 08:31

I think that experimental evidence for the validity of the simulation is the most important part of a simulation paper. It keeps the paper on the right track for connecting theory and practice in optoelectronics.

  2. Matthias Auf der Maur 2017-01-09 at 03:43

    I agree completely. I would also like to note that fulfilling point 4 might not always be sufficient: sometimes models have many parameters, so that fitting to experimental data may not be unique, or parameters lump different physics together, so that the fitted values may be valid only for the specific device structure. In the best case, one has two or more device structures fitted by the same parameter set.
    The required accuracy of validation against measurement might also depend on the message of the article. Sometimes qualitative behavior is more important than exact quantitative values, although this still requires exact knowledge of what physics is in the models and what is not.
    I have often seen pure simulation papers, with extended parts discussing optimization, that do not address any of the points mentioned in the post. I think authors often have a poor understanding of the models and their applicability and limitations.
    I would also add one more point to the list of recommendations: it is often hard or impossible to reproduce published results because of a lack of detail (structure, parameters, etc.). It would be very helpful if each simulation paper, whenever possible, were accompanied by supplementary material containing all parameters, information on simulation tools and versions, input files, and perhaps also output data. In fact, some communities have initiatives in this direction (but I do not have a web link right now).

    • HW 2017-01-29 at 13:04

      I would like to stress the last point in particular. I often receive papers for review where it is impossible to understand how the authors obtained their results. This also holds for a portion of the summaries submitted to the NUSOD conference, in particular from authors in newly emerging economies, who ignore basic scientific principles that have been established in Europe and the US for more than 100 years. I think it is our responsibility, acting as reviewers, to hold the standards high.

  3. (Rod)erick MacKenzie 2017-01-10 at 08:23

    I agree with your post entirely; however, I think point 4 should be point 1, as many of the poorer papers never make an attempt to compare simulation with experiment. I always think that the purpose of a modeling paper is to bring meaning to experimental results; with no experimental results, the paper is ungrounded in reality and, more often than not, wrong.

  4. IK 2017-01-12 at 15:09

    Any published paper has to have a compelling and well-articulated reason to exist. There has to be an open question that it answers, or a hypothesis that it tests, or it should offer an important addition/correction to the state of the art in understanding the relevant physics. I agree that there is a proliferation of low-quality papers; some are really poor in that they make no attempt to position themselves with respect to the field or make a relevant connection to experiment. Some outright ignore whole bodies of work, likely trying to seem more novel than they really are. However, many papers are correct and even agree with experiment, but they present nothing new and don’t even attempt to discuss the physics (as in, “We wrote the code based on these well-known equations, applied it to this model system that is infinitesimally different from a bunch of other closely related systems that people have published on, and here are some curves we got. The end.”) The latter kind are hard to reject, because they represent a lot of work by conscientious authors, but I would argue that it is also the author’s duty to convince the community (and the peer reviewer, who is the community’s representative) that the work was actually worth doing and that it actually advances the field; if they cannot, the paper does not deserve a place in an archival publication.

    • (Rod)erick MacKenzie 2017-01-12 at 15:37

      @IK I would agree with your post entirely. However, it’s a lot easier to be wrong and novel, rather than correct and novel. So I would say: Threshold 1: Link your model to experimental data, Threshold 2: Is the work novel in this context?

      • IK 2017-01-12 at 15:54

        @Rod MacKenzie: Agreed. My view is that, if you (the rhetorical you) are asking a legitimate physics question that we don’t know the answer to, you will naturally strive to compare a theoretical model to as much experimental information as you can get your hands on. You do this 1) in order to tease out whether the question you are trying to ask is still open and interesting and 2) to figure out whether the answer your model offers actually makes sense. I have never found myself wishing for less experimental data in the literature, only for more! 🙂

      • (Rod)erick MacKenzie 2017-01-12 at 16:01

        I would add the caveat, one can never have enough *correct* experimental data :).

  5. Tomás González 2017-01-13 at 11:13

    I essentially agree. In recent years I have received for review or editing many papers in which the authors simulate devices with no connection to the real world and, what is worse, without understanding well the models they are including in the simulations, especially when commercial tools are used. In some cases, once a given group manages to publish a “seminal” paper about a “novel” device in a reputable journal, many more papers with very minor variations of the initial device are submitted here and there, always based on the initial paper and without any reference to measurements on real devices. Indeed, it becomes difficult to find references in the literature to such “devices” outside the group where they were “invented”.
    How to stop this? It is very difficult unless standard guidelines like those proposed here are followed by authors and required by editors and reviewers.
