How to decide between competing efficiency droop models for GaN-LEDs
I have been struggling with this question ever since I wrote my first review paper on efficiency droop in 2010. Five years and hundreds of research papers later, experts are still divided over the primary cause of the droop. The two competing explanations are electron leakage and Auger recombination, but very few direct measurements of either mechanism have been published, and none establishes a dominating magnitude. Modeling and simulation of measured characteristics have not settled the question so far, because parameter uncertainties often leave enough room to substantiate either mechanism.

Thus, I recently looked deeper into the widely reported reduction of GaN-LED efficiency at higher ambient temperature. Intuitively, both mechanisms seem to explain this temperature sensitivity. In particular, electron leakage is commonly believed to increase with temperature. But advanced simulations show the opposite trend: higher temperatures lead to a higher hole density in the p-doped layers, so that more holes reach the active layers and fewer electrons leak out. In my simulations, this results in a substantial efficiency enhancement (pictured). No such enhancement is seen in any measurements. In other words, electron leakage cannot be the main cause of the efficiency droop!
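To illustrate why fitting alone cannot separate the two mechanisms: in the textbook ABC recombination model, a droop curve produced by an Auger term (Cn³) can look much like one produced by a leakage loss channel. The sketch below uses purely illustrative order-of-magnitude coefficients (not values from my simulations), and the constant `leakage_fraction` is a deliberately crude assumption, just to show how both knobs bend the efficiency curve.

```python
# Textbook ABC model of internal quantum efficiency (IQE) with an
# optional leakage term. All coefficients are illustrative only.
A = 1e7     # Shockley-Read-Hall coefficient (1/s)
B = 1e-11   # radiative coefficient (cm^3/s)
C = 1e-30   # Auger coefficient (cm^6/s)

def iqe(n, leakage_fraction=0.0):
    """IQE at carrier density n (cm^-3).

    `leakage_fraction` crudely models electron leakage as a fixed
    fraction of the injected current lost outside the active region.
    """
    recomb = A * n + B * n**2 + C * n**3          # total recombination rate
    injected = recomb / (1.0 - leakage_fraction)  # leaked carriers inflate injection
    return B * n**2 / injected

for n in (1e17, 1e18, 3e18, 1e19):
    print(n, iqe(n), iqe(n, leakage_fraction=0.2))
```

With these numbers the Auger term alone makes the efficiency peak near n ≈ 3×10¹⁸ cm⁻³ and droop beyond it, while the leakage term scales the whole curve down; with realistic parameter uncertainties, measured curves can be fitted either way.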
What do you think? Will this end the debate about the droop?
Applied Physics Letters 107, 031101 (2015)