Examples, Support & Comments

Dear Community,

It will require effort for experimentalists to incorporate their analyses into a framework like RECAST, and that is much more likely to happen if there is a clear demonstration of demand in the form of concrete examples.  Many examples have been discussed, so we should document them.  If you support the idea of RECAST, please leave your name and a brief message in a comment below.

More details, including case studies and design considerations for RECAST, are available in the white paper.

9 Responses to Examples, Support & Comments

  1. Daniel Whiteson says:

    Very cool.

  2. First post (right?)! But seriously, this looks very interesting. Congrats.

  3. Patrick Meade says:

    It’s a great idea and would help a lot.

  4. Markus Luty says:

    As a theorist, I strongly urge that the LHC experimental community take this idea seriously and implement it.

    We do not know what model we are looking for at the LHC, but a search must be compared to a specific new physics model to be meaningful. Experimentalists are therefore most likely looking for the wrong model. RECAST will make those searches useful for the large number of new models that will be constructed as LHC data comes in. This is particularly obvious when there is a discovery, but even before then it will be very hard to know what models are allowed by existing searches.

    Converging on the right model of new physics at the LHC will be an iterative process, with theorists building new models in response to data, then new searches motivated by these new models, and so on. RECAST has the potential to significantly improve the efficiency of this cycle and thereby make an important contribution to the ultimate goal of the LHC physics program.

  5. Matt Reece says:

    I strongly support this idea. As a theorist, I’ve spent a lot of time trying to do the best efficiency estimates I can for various scenarios. This is a poor division of labor: without the experiment’s thorough detector simulation, we theorists can only do this at a crude level. Often that is enough for our purposes, especially when a signal involves cleanly identified objects and has a rate that falls quickly with mass. Sometimes, though, we need a more refined understanding (e.g., for exotic signatures, like the recent example of “lepton jets,” where detailed isolation efficiencies might matter).

    Of course, there is a reason that we do this work even though we’re bad at it: we can’t ask already-busy experimentalists to do our work for us. They should be producing new results instead of putting lots of effort into redoing old ones. But the nice thing about RECAST is that you’ve given thought to how to improve this division of labor to reuse old results without adding a huge amount to the experimentalists’ workload. And, as you emphasized, it can be used even within the experimental collaborations, so it’s not just theorists who benefit.

    • Daniel Whiteson says:

      If you really want to contribute to the understanding of new signatures (especially ones as different from classical objects as lepton jets are), then you could also consider joining a collaboration and contributing to the underlying description and tuning of the simulation…

  6. Andy Haas says:

    There’s no question that something like RECAST would be an extremely useful tool.
    I strongly support it and am interested in trying it out, probably on some D0 analyses.

    I do worry about one thing. As you know, detector simulations don’t always model the true detector response accurately in more extreme situations. (Some experiments’ simulations are better than others. ;) ) This may especially impact many creative, new signatures that theorists would likely want to use RECAST to study. For instance, if two muons or tracks get very close together, the detector simulation may say that you can reconstruct them both with some given efficiency and momentum resolution. But it’s very possible that the true detector performance is different. (The deficiency is usually in the simulation of the detector electronics and read-out.)
    For a “real” analysis, the detector simulation must always be validated on a calibration signal in data, like Z->mumu, over the full range of kinematics and topologies that the signal populates, and systematic uncertainties are assigned based on the (dis)agreement between the data calibration signal and the detector simulation.
    So I think some (human physicist) work is going to be required to make sure that new signals are not probing some less well-trusted corners of the detector simulation. I think this could easily happen, even if the same analysis cuts are being used.
    If the signal is significantly different, new signal systematic uncertainties would need to be derived. (And there may be situations where the simulation just can’t be properly tested with data – so the results will have to be simply taken “with a grain of salt”.)
    The trigger simulation is usually even less well modeled…
    I guess it would be up to the experimenters “reblessing” the analysis (did someone sneeze? :) ) to use their judgment and do the needed studies to see whether the detector simulation can still be trusted for the new signal being studied. And they would determine the new systematic uncertainties on the signal efficiency if necessary.
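The validation procedure described above (measure the efficiency on a calibration signal such as Z->mumu in both data and simulation, then assign a systematic uncertainty from the disagreement) can be sketched as a toy calculation. Everything here is hypothetical: the function name, the counts, the simple binomial errors, and the conservative choice of combining the scale-factor error with its deviation from unity are illustrative assumptions, not any experiment’s actual prescription.

```python
import math

def efficiency_scale_factor(pass_data, total_data, pass_mc, total_mc):
    """Toy data/MC efficiency scale factor from calibration-sample counts
    (e.g. Z->mumu tag-and-probe), with naive binomial errors.

    Hypothetical sketch only; real analyses use far more careful methods.
    """
    eff_d = pass_data / total_data          # efficiency measured in data
    eff_m = pass_mc / total_mc              # efficiency in the simulation
    err_d = math.sqrt(eff_d * (1 - eff_d) / total_data)
    err_m = math.sqrt(eff_m * (1 - eff_m) / total_mc)
    sf = eff_d / eff_m                      # correction applied to MC
    sf_err = sf * math.sqrt((err_d / eff_d) ** 2 + (err_m / eff_m) ** 2)
    # One conservative (assumed) convention: combine the statistical error
    # on the scale factor with its deviation from unity in quadrature.
    syst = math.hypot(sf_err, abs(sf - 1.0))
    return sf, syst
```

For example, 900/1000 probes passing in data against 950/1000 in simulation gives a scale factor near 0.95 with a systematic of a few percent; in a real analysis this would be derived in bins of the kinematics the signal populates, precisely so that unusual signals don’t silently leave the validated region.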

    • cranmer says:

      Hi Andy,

      I agree with your point completely. You have been involved in these unique analyses where the signal is unusual (lepton jets, searches for long-lived particles decaying between bunch crossings, etc.), so you are the perfect person to make the point. One of the good arguments against having something like RECAST become totally automated is that someone might submit a signal that is very different (e.g., lepton jets or some hidden valley model with hundreds of soft particles), where we don’t trust the signal efficiency even with GEANT.

      In those cases, the human component is very important. I think it can enter in at least three places. The first is that the collaboration/experimenter does not accept the RECAST request, based on the description of the signal and knowledge of the model, because they know that the signal efficiency may not be trustworthy without additional study. The second is the re-blessing procedure itself. The third is that the experimenter may recognize this issue and inflate the signal efficiency uncertainty to some reasonable value in the statistical analysis at the end.

      Thank you for your thoughtful comment.
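The third option above (inflating the signal efficiency uncertainty in the final statistical analysis) can be made concrete with a toy single-bin counting experiment. The sketch below is a minimal Bayesian grid calculation with invented numbers; the flat prior, the Gaussian efficiency nuisance, and the specific yields are assumptions for illustration only, not part of RECAST or any collaboration’s statistical machinery.

```python
import numpy as np
from math import factorial

def upper_limit(n_obs, bkg, eff_rel_unc, cl=0.95, seed=1):
    """Toy Bayesian upper limit on a signal yield s for a single-bin
    counting experiment, marginalizing a Gaussian relative uncertainty
    on the signal efficiency (flat prior on s >= 0, grid integration)."""
    s_grid = np.linspace(0.0, 60.0, 601)       # candidate signal yields
    rng = np.random.default_rng(seed)
    sf = rng.normal(1.0, eff_rel_unc, 4000)    # efficiency scale-factor toys
    sf = sf[sf > 0.0]                          # drop unphysical efficiencies
    mu = s_grid[:, None] * sf[None, :] + bkg   # expected counts per toy
    like = np.exp(-mu) * mu ** n_obs / factorial(n_obs)
    post = like.mean(axis=1)                   # marginalize the nuisance
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s_grid[np.searchsorted(cdf, cl)]

# Invented dataset: 5 events observed on an expected background of 3.
tight = upper_limit(n_obs=5, bkg=3.0, eff_rel_unc=0.05)
loose = upper_limit(n_obs=5, bkg=3.0, eff_rel_unc=0.30)
```

Rerunning with the efficiency uncertainty inflated from 5% to 30% weakens the limit, which is exactly the conservative behavior one wants when the detector simulation is not fully trusted for the new signal.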

  7. Slava Rychkov says:

    As a theorist, I think it’s a great idea.
