Coverage Report

Created: 2017-10-03 07:32

Source file:
/Users/buildslave/jenkins/sharedspace/clang-stage2-coverage-R@2/llvm/include/llvm/Analysis/CGSCCPassManager.h

Annotated source listing. Executable lines are prefixed with their recorded
execution count (for example "239|"); lines without a prefix have no recorded
count.
//===- CGSCCPassManager.h - Call graph pass management ----------*- C++ -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
/// \file
///
/// This header provides classes for managing passes over SCCs of the call
/// graph. These passes form an important component of LLVM's interprocedural
/// optimizations. Because they operate on the SCCs of the call graph, and they
/// traverse the graph in post-order, they can effectively do pair-wise
/// interprocedural optimizations for all call edges in the program while
/// incrementally refining it and improving the context of these pair-wise
/// optimizations. At each call site edge, the callee has already been
/// optimized as much as is possible. This in turn allows very accurate
/// analysis of it for IPO.
///
/// A secondary, more general goal is to be able to isolate optimization on
/// unrelated parts of the IR module. This is useful to ensure our
/// optimizations are principled and don't miss opportunities where refinement
/// of one part of the module influences transformations in another part of the
/// module. But this is also useful if we want to parallelize the optimizations
/// across common large module graph shapes which tend to be very wide and have
/// large regions of unrelated cliques.
///
/// To satisfy these goals, we use the LazyCallGraph which provides two graphs
/// nested inside each other (and built lazily from the bottom-up): the call
/// graph proper, and a reference graph. The reference graph is a superset of
/// the call graph and is a conservative approximation of what could, through
/// scalar or CGSCC transforms, *become* the call graph. Using this allows us to
/// ensure we optimize functions prior to them being introduced into the call
/// graph by devirtualization or other techniques, and thus ensures that
/// subsequent pair-wise interprocedural optimizations observe the optimized
/// form of these functions. The (potentially transitive) reference
/// reachability used by the reference graph is a conservative approximation
/// that still allows us to have independent regions of the graph.
///
/// FIXME: There is one major drawback of the reference graph: in its naive
/// form it is quadratic because it contains a distinct edge for each
/// (potentially indirect) reference, even if they are all through some common
/// global table of function pointers. This can be fixed in a number of ways
/// that essentially preserve enough of the normalization. While it isn't
/// expected to completely preclude the usability of this, it will need to be
/// addressed.
///
/// All of these issues are made substantially more complex in the face of
/// mutations to the call graph while optimization passes are being run. When
/// mutations to the call graph occur, we want to achieve two different things:
///
/// - We need to update the call graph in-flight and invalidate analyses
///   cached on entities in the graph. Because of the cache-based analysis
///   design of the pass manager, it is essential to have stable identities for
///   the elements of the IR that passes traverse, and to invalidate any
///   analyses cached on these elements as the mutations take place.
///
/// - We want to preserve the incremental and post-order traversal of the
///   graph even as it is refined and mutated. This means we want optimization
///   to observe the most refined form of the call graph and to do so in
///   post-order.
///
/// To address this, the CGSCC manager uses worklists that can be expanded by
/// passes which transform the IR, and provides invalidation tests to skip
/// entries that become dead. This extra data is provided to every SCC pass so
/// that it can carefully update the manager's traversal as the call graph
/// mutates.
///
/// We also provide support for running function passes within the CGSCC walk,
/// and there we provide automatic updating of the call graph, including the
/// pass manager, to reflect call graph changes that fall out naturally as part
/// of scalar transformations.
///
/// The patterns used to ensure the goals of post-order visitation of the fully
/// refined graph:
///
/// 1) Sink toward the "bottom" as the graph is refined. This means that any
///    iteration continues in some valid post-order sequence after the mutation
///    has altered the structure.
///
/// 2) Enqueue in post-order, including the current entity. If the current
///    entity's shape changes, it and everything after it in post-order needs
///    to be visited to observe that shape.
///
//===----------------------------------------------------------------------===//
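As an illustrative sketch of the composition this header enables (assuming the
createModuleToPostOrderCGSCCPassAdaptor helper that accompanies the adaptor
class further down in this header, plus the function-attrs and inliner passes
that appear in the coverage counts of this report), a module pipeline that
runs SCC passes in post-order over the call graph could be assembled roughly
like this:

// Sketch only; not part of this header. Assumes the
// createModuleToPostOrderCGSCCPassAdaptor helper declared alongside the
// adaptor class below and the PostOrderFunctionAttrs/Inliner passes seen in
// the coverage data of this report.
#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/IPO/FunctionAttrs.h"
#include "llvm/Transforms/IPO/Inliner.h"

static llvm::ModulePassManager buildExampleIPOPipeline() {
  using namespace llvm;

  // SCC passes run in post-order over the call graph, so callees are
  // visited (and optimized) before their callers.
  CGSCCPassManager CGPM;
  CGPM.addPass(PostOrderFunctionAttrsPass());
  CGPM.addPass(InlinerPass());

  // Nest the CGSCC walk inside a module pipeline via the adaptor.
  ModulePassManager MPM;
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(std::move(CGPM)));
  return MPM;
}

Because the adaptor visits callees before callers, an inliner placed in such a
pipeline always sees callees that the earlier SCC passes have already
processed, which is exactly the pair-wise refinement the comment above
describes.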

#ifndef LLVM_ANALYSIS_CGSCCPASSMANAGER_H
#define LLVM_ANALYSIS_CGSCCPASSMANAGER_H

#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/PriorityWorklist.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"
#include <algorithm>
#include <cassert>
#include <utility>

namespace llvm {

struct CGSCCUpdateResult;
class Module;

// Allow debug logging in this inline function.
#define DEBUG_TYPE "cgscc"

/// Extern template declaration for the analysis set for this IR unit.
extern template class AllAnalysesOn<LazyCallGraph::SCC>;

extern template class AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;

/// \brief The CGSCC analysis manager.
///
/// See the documentation for the AnalysisManager template for detailed
/// documentation. This type serves as a convenient way to refer to this
/// construct in the adaptors and proxies used to integrate this into the larger
/// pass manager infrastructure.
using CGSCCAnalysisManager =
    AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;

// Explicit specialization and instantiation declarations for the pass manager.
// See the comments on the definition of the specialization for details on how
// it differs from the primary template.
template <>
PreservedAnalyses
PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
            CGSCCUpdateResult &>::run(LazyCallGraph::SCC &InitialC,
                                      CGSCCAnalysisManager &AM,
                                      LazyCallGraph &G, CGSCCUpdateResult &UR);
extern template class PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager,
                                  LazyCallGraph &, CGSCCUpdateResult &>;

/// \brief The CGSCC pass manager.
///
/// See the documentation for the PassManager template for details. It runs
/// a sequence of SCC passes over each SCC that the manager is run over. This
/// type serves as a convenient way to refer to this construct.
using CGSCCPassManager =
    PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
                CGSCCUpdateResult &>;
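For orientation, the run signature that this pass manager invokes on each
contained pass is the one visible in the RequireAnalysisPass specialization
below and in the Pass.run(*C, CGAM, CG, UR) call inside the adaptor. A
minimal, hypothetical no-op SCC pass is just:

// Hypothetical example, not part of this header: the shape of an SCC pass as
// invoked by CGSCCPassManager. A pass that never mutates the call graph can
// simply ignore the CGSCCUpdateResult argument.
struct NoOpExampleSCCPass : llvm::PassInfoMixin<NoOpExampleSCCPass> {
  llvm::PreservedAnalyses run(llvm::LazyCallGraph::SCC &C,
                              llvm::CGSCCAnalysisManager &AM,
                              llvm::LazyCallGraph &CG,
                              llvm::CGSCCUpdateResult &UR) {
    (void)C; (void)AM; (void)CG; (void)UR;
    return llvm::PreservedAnalyses::all();
  }
};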

/// An explicit specialization of the require analysis template pass.
template <typename AnalysisT>
struct RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC, CGSCCAnalysisManager,
                           LazyCallGraph &, CGSCCUpdateResult &>
    : PassInfoMixin<RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC,
                                        CGSCCAnalysisManager, LazyCallGraph &,
                                        CGSCCUpdateResult &>> {
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
     9|                        LazyCallGraph &CG, CGSCCUpdateResult &) {
     9|    (void)AM.template getResult<AnalysisT>(C, CG);
     9|    return PreservedAnalyses::all();
     9|  }
      Instantiations of RequireAnalysisPass<...>::run (lines 160-163):
        PassBuilder.cpp: llvm::RequireAnalysisPass<(anonymous namespace)::NoOpCGSCCAnalysis, ...>::run -- executed 9 times
        Unexecuted instantiation: llvm::RequireAnalysisPass<llvm::FunctionAnalysisManagerCGSCCProxy, ...>::run
};
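A small usage sketch, assuming the FunctionAnalysisManagerCGSCCProxy analysis
declared later in this header (the second, unexecuted instantiation noted
above): forcing that analysis to be computed for every visited SCC amounts to
adding this specialization to a CGSCC pipeline.

// Usage sketch only. FunctionAnalysisManagerCGSCCProxy is assumed to be the
// analysis declared further down in this header; the explicit template
// arguments select the CGSCC specialization of RequireAnalysisPass above.
static void addRequireFAMProxy(llvm::CGSCCPassManager &CGPM) {
  using namespace llvm;
  CGPM.addPass(RequireAnalysisPass<FunctionAnalysisManagerCGSCCProxy,
                                   LazyCallGraph::SCC, CGSCCAnalysisManager,
                                   LazyCallGraph &, CGSCCUpdateResult &>());
}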

/// A proxy from a \c CGSCCAnalysisManager to a \c Module.
using CGSCCAnalysisManagerModuleProxy =
    InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;

/// We need a specialized result for the \c CGSCCAnalysisManagerModuleProxy so
/// it can have access to the call graph in order to walk all the SCCs when
/// invalidating things.
template <> class CGSCCAnalysisManagerModuleProxy::Result {
public:
  explicit Result(CGSCCAnalysisManager &InnerAM, LazyCallGraph &G)
   215|      : InnerAM(&InnerAM), G(&G) {}

  /// \brief Accessor for the analysis manager.
   239|  CGSCCAnalysisManager &getManager() { return *InnerAM; }

  /// \brief Handler for invalidation of the Module.
  ///
  /// If the proxy analysis itself is preserved, then we assume that the set of
  /// SCCs in the Module hasn't changed. Thus any pointers to SCCs in the
  /// CGSCCAnalysisManager are still valid, and we don't need to call \c clear
  /// on the CGSCCAnalysisManager.
  ///
  /// Regardless of whether this analysis is marked as preserved, all of the
  /// analyses in the \c CGSCCAnalysisManager are potentially invalidated based
  /// on the set of preserved analyses.
  bool invalidate(Module &M, const PreservedAnalyses &PA,
                  ModuleAnalysisManager::Invalidator &Inv);

private:
  CGSCCAnalysisManager *InnerAM;
  LazyCallGraph *G;
};

/// Provide a specialized run method for the \c CGSCCAnalysisManagerModuleProxy
/// so it can pass the lazy call graph to the result.
template <>
CGSCCAnalysisManagerModuleProxy::Result
CGSCCAnalysisManagerModuleProxy::run(Module &M, ModuleAnalysisManager &AM);

// Ensure the \c CGSCCAnalysisManagerModuleProxy is provided as an extern
// template.
extern template class InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;

extern template class OuterAnalysisManagerProxy<
    ModuleAnalysisManager, LazyCallGraph::SCC, LazyCallGraph &>;

/// A proxy from a \c ModuleAnalysisManager to an \c SCC.
using ModuleAnalysisManagerCGSCCProxy =
    OuterAnalysisManagerProxy<ModuleAnalysisManager, LazyCallGraph::SCC,
                              LazyCallGraph &>;
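A hedged sketch of how an SCC pass can reach module-level analyses through
this proxy. The choice of ProfileSummaryAnalysis as the queried analysis and
the use of getCachedResult (rather than recomputing) are illustrative
assumptions; the general pattern is to only consume module results that are
already cached, since the module analysis manager is not re-run from within an
SCC pass.

// Sketch only: fetch the outer (module) analysis manager from inside an SCC
// pass and query a cached module-level result. ProfileSummaryAnalysis stands
// in for any module analysis here.
#include "llvm/Analysis/ProfileSummaryInfo.h"

static llvm::ProfileSummaryInfo *
getCachedPSI(llvm::LazyCallGraph::SCC &C, llvm::CGSCCAnalysisManager &AM,
             llvm::LazyCallGraph &CG) {
  using namespace llvm;
  // Any function in the SCC belongs to the module we want to query.
  Module &M = *C.begin()->getFunction().getParent();
  const ModuleAnalysisManager &MAM =
      AM.getResult<ModuleAnalysisManagerCGSCCProxy>(C, CG).getManager();
  // Only cached results may be used; nullptr if the analysis was never run.
  return MAM.getCachedResult<ProfileSummaryAnalysis>(M);
}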

/// Support structure for SCC passes to communicate updates to the call graph
/// back to the CGSCC pass manager infrastructure.
///
/// The CGSCC pass manager runs SCC passes which are allowed to update the call
/// graph and SCC structures. This means the structure the pass manager works
/// on is mutating underneath it. In order to support that, there needs to be
/// careful communication about the precise nature and ramifications of these
/// updates to the pass management infrastructure.
///
/// All SCC passes will have to accept a reference to the management layer's
/// update result struct and use it to reflect the results of any CG updates
/// performed.
///
/// Passes which do not change the call graph structure in any way can just
/// ignore this argument to their run method.
struct CGSCCUpdateResult {
  /// Worklist of the RefSCCs queued for processing.
  ///
  /// When a pass refines the graph and creates new RefSCCs or causes them to
  /// have a different shape or set of component SCCs it should add the RefSCCs
  /// to this worklist so that we visit them in the refined form.
  ///
  /// This worklist is in reverse post-order, as we pop off the back in order
  /// to observe RefSCCs in post-order. When adding RefSCCs, clients should add
  /// them in reverse post-order.
  SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> &RCWorklist;

  /// Worklist of the SCCs queued for processing.
  ///
  /// When a pass refines the graph and creates new SCCs or causes them to have
  /// a different shape or set of component functions it should add the SCCs to
  /// this worklist so that we visit them in the refined form.
  ///
  /// Note that if the SCCs are part of a RefSCC that is added to the \c
  /// RCWorklist, they don't need to be added here as visiting the RefSCC will
  /// be sufficient to re-visit the SCCs within it.
  ///
  /// This worklist is in reverse post-order, as we pop off the back in order
  /// to observe SCCs in post-order. When adding SCCs, clients should add them
  /// in reverse post-order.
  SmallPriorityWorklist<LazyCallGraph::SCC *, 1> &CWorklist;

  /// The set of invalidated RefSCCs which should be skipped if they are found
  /// in \c RCWorklist.
  ///
  /// This is used to quickly prune out RefSCCs when they get deleted and
  /// happen to already be on the worklist. We use this primarily to avoid
  /// scanning the list and removing entries from it.
  SmallPtrSetImpl<LazyCallGraph::RefSCC *> &InvalidatedRefSCCs;

  /// The set of invalidated SCCs which should be skipped if they are found
  /// in \c CWorklist.
  ///
  /// This is used to quickly prune out SCCs when they get deleted and happen
  /// to already be on the worklist. We use this primarily to avoid scanning
  /// the list and removing entries from it.
  SmallPtrSetImpl<LazyCallGraph::SCC *> &InvalidatedSCCs;

  /// If non-null, the updated current \c RefSCC being processed.
  ///
  /// This is set when a graph refinement takes place and the "current" point in
  /// the graph moves "down" or earlier in the post-order walk. This will often
  /// cause the "current" RefSCC to be a newly created RefSCC object and the
  /// old one to be added to the above worklist. When that happens, this
  /// pointer is non-null and can be used to continue processing the "top" of
  /// the post-order walk.
  LazyCallGraph::RefSCC *UpdatedRC;

  /// If non-null, the updated current \c SCC being processed.
  ///
  /// This is set when a graph refinement takes place and the "current" point in
  /// the graph moves "down" or earlier in the post-order walk. This will often
  /// cause the "current" SCC to be a newly created SCC object and the old one
  /// to be added to the above worklist. When that happens, this pointer is
  /// non-null and can be used to continue processing the "top" of the
  /// post-order walk.
  LazyCallGraph::SCC *UpdatedC;

  /// A hacky area where the inliner can retain history about inlining
  /// decisions that mutated the call graph's SCC structure in order to avoid
  /// infinite inlining. See the comments in the inliner's CG update logic.
  ///
  /// FIXME: Keeping this here seems like a big layering issue, we should look
  /// for a better technique.
  SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
      &InlinedInternalEdges;
};
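To make the contract concrete, here is a hedged sketch (a hypothetical helper,
not part of this header) of how a graph-mutating SCC pass might report a
refinement through these fields. Real passes normally go through the update
helpers declared further down in this header rather than writing the fields
directly.

// Hypothetical helper illustrating the contract documented above: when a
// refinement splits the current SCC, the old SCC object is re-queued so it is
// revisited in its refined form, UpdatedC points at the SCC where the walk
// should resume, and an SCC that was deleted outright is only marked
// invalidated so stale worklist entries get skipped.
static void reportSCCRefinement(llvm::LazyCallGraph::SCC &ResumeC,
                                llvm::LazyCallGraph::SCC &RequeueC,
                                llvm::LazyCallGraph::SCC *DeletedC,
                                llvm::CGSCCUpdateResult &UR) {
  UR.CWorklist.insert(&RequeueC);
  UR.UpdatedC = &ResumeC;
  if (DeletedC)
    UR.InvalidatedSCCs.insert(DeletedC);
}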

/// \brief The core module pass which does a post-order walk of the SCCs and
/// runs a CGSCC pass over each one.
///
/// Designed to allow composition of a CGSCCPass(Manager) and
/// a ModulePassManager. Note that this pass must be run with a module analysis
/// manager as it uses the LazyCallGraph analysis. It will also run the
/// \c CGSCCAnalysisManagerModuleProxy analysis prior to running the CGSCC
/// pass over the module to enable a \c FunctionAnalysisManager to be used
/// within this run safely.
template <typename CGSCCPassT>
class ModuleToPostOrderCGSCCPassAdaptor
    : public PassInfoMixin<ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>> {
public:
  explicit ModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass)
   239|      : Pass(std::move(Pass)) {}

      Constructor instantiations (line 319): llvm::InlinerPass (7 executions),
      CGSCCPassManager (164), llvm::PostOrderFunctionAttrsPass (15),
      llvm::DevirtSCCRepeatedPass<CGSCCPassManager> (53).

  // We have to explicitly define all the special member functions because MSVC
  // refuses to generate them.
  ModuleToPostOrderCGSCCPassAdaptor(
      const ModuleToPostOrderCGSCCPassAdaptor &Arg)
      : Pass(Arg.Pass) {}

  ModuleToPostOrderCGSCCPassAdaptor(ModuleToPostOrderCGSCCPassAdaptor &&Arg)
   478|      : Pass(std::move(Arg.Pass)) {}

      Move-constructor instantiations (line 328): CGSCCPassManager (328
      executions), llvm::PostOrderFunctionAttrsPass (30), llvm::InlinerPass
      (14), llvm::DevirtSCCRepeatedPass<CGSCCPassManager> (106).

  friend void swap(ModuleToPostOrderCGSCCPassAdaptor &LHS,
                   ModuleToPostOrderCGSCCPassAdaptor &RHS) {
    std::swap(LHS.Pass, RHS.Pass);
  }

  ModuleToPostOrderCGSCCPassAdaptor &
  operator=(ModuleToPostOrderCGSCCPassAdaptor RHS) {
    swap(*this, RHS);
    return *this;
  }

  /// \brief Runs the CGSCC pass across every SCC in the module.
   239|  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
   239|    // Setup the CGSCC analysis manager from its proxy.
   239|    CGSCCAnalysisManager &CGAM =
   239|        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
   239|
   239|    // Get the call graph for this module.
   239|    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
   239|
   239|    // We keep worklists to allow us to push more work onto the pass manager as
   239|    // the passes are run.
   239|    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
   239|    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
   239|
   239|    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
   239|    // iterating off the worklists.
   239|    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
   239|    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
   239|
   239|    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
   239|        InlinedInternalEdges;
   239|
   239|    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
   239|                            InvalidSCCSet,       nullptr,   nullptr,
   239|                            InlinedInternalEdges};
   239|
   239|    PreservedAnalyses PA = PreservedAnalyses::all();
   239|    CG.buildRefSCCs();
   239|    for (auto RCI = CG.postorder_ref_scc_begin(),
   239|              RCE = CG.postorder_ref_scc_end();
 1.02k|         RCI != RCE;) {
   789|      assert(RCWorklist.empty() &&
   789|             "Should always start with an empty RefSCC worklist");
   789|      // The postorder_ref_sccs range we are walking is lazily constructed, so
   789|      // we only push the first one onto the worklist. The worklist allows us
   789|      // to capture *new* RefSCCs created during transformations.
   789|      //
   789|      // We really want to form RefSCCs lazily because that makes them cheaper
   789|      // to update as the program is simplified and allows us to have greater
   789|      // cache locality as forming a RefSCC touches all the parts of all the
   789|      // functions within that RefSCC.
   789|      //
   789|      // We also eagerly increment the iterator to the next position because
   789|      // the CGSCC passes below may delete the current RefSCC.
   789|      RCWorklist.insert(&*RCI++);
   789|
   814|      do {
   814|        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
   814|        if (InvalidRefSCCSet.count(RC)) {
     4|          DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
     4|          continue;
     4|        }
   814|
   810|        assert(CWorklist.empty() &&
   810|               "Should always start with an empty SCC worklist");
   810|
   810|        DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
   810|                     << "\n");
   810|
   810|        // Push the initial SCCs in reverse post-order as we'll pop off the
   810|        // back and so see this in post-order.
   810|        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
   843|          CWorklist.insert(&C);
   810|
   882|        do {
   882|          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
   882|          // Due to call graph mutations, we may have invalid SCCs or SCCs from
   882|          // other RefSCCs in the worklist. The invalid ones are dead and the
   882|          // other RefSCCs should be queued above, so we just need to skip both
   882|          // scenarios here.
   882|          if (InvalidSCCSet.count(C)) {
     5|            DEBUG(dbgs() << "Skipping an invalid SCC...\n");
     5|            continue;
     5|          }
   877|          if (&C->getOuterRefSCC() != RC) {
    20|            DEBUG(dbgs() << "Skipping an SCC that is now part of some other "
    20|                            "RefSCC...\n");
    20|            continue;
    20|          }
   877|
   857|          do {
   890|            // Check that we didn't miss any update scenario.
   890|            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
   890|            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
   890|            assert(&C->getOuterRefSCC() == RC &&
   890|                   "Processing an SCC in a different RefSCC!");
   890|
   890|            UR.UpdatedRC = nullptr;
   890|            UR.UpdatedC = nullptr;
   890|            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
   890|
   890|            // Update the SCC and RefSCC if necessary.
   890|            C = UR.UpdatedC ? UR.UpdatedC : C;
   890|            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
   890|
   890|            // If the CGSCC pass wasn't able to provide a valid updated SCC,
   890|            // the current SCC may simply need to be skipped if invalid.
   890|            if (UR.InvalidatedSCCs.count(C)) {
     2|              DEBUG(dbgs() << "Skipping invalidated root or island SCC!\n");
     2|              break;
     2|            }
   890|            // Check that we didn't miss any update scenario.
   888|            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
   888|
   888|            // We handle invalidating the CGSCC analysis manager's information
   888|            // for the (potentially updated) SCC here. Note that any other SCCs
   888|            // whose structure has changed should have been invalidated by
   888|            // whatever was updating the call graph. This SCC gets invalidated
   888|            // late as it contains the nodes that were actively being
   888|            // processed.
   888|            CGAM.invalidate(*C, PassPA);
   888|
   888|            // Then intersect the preserved set so that invalidation of module
   888|            // analyses will eventually occur when the module pass completes.
   888|            PA.intersect(std::move(PassPA));
   888|
   888|            // The pass may have restructured the call graph and refined the
   888|            // current SCC and/or RefSCC. We need to update our current SCC and
   888|            // RefSCC pointers to follow these. Also, when the current SCC is
   888|            // refined, re-run the SCC pass over the newly refined SCC in order
   888|            // to observe the most precise SCC model available. This inherently
   888|            // cannot cycle excessively as it only happens when we split SCCs
   888|            // apart, at most converging on a DAG of single nodes.
   888|            // FIXME: If we ever start having RefSCC passes, we'll want to
   888|            // iterate there too.
   888|            if (UR.UpdatedC)
   888|              DEBUG(dbgs() << "Re-running SCC passes after a refinement of the "
   890|                              "current SCC: "
   890|                           << *UR.UpdatedC << "\n");
   890|
   890|            // Note that both `C` and `RC` may at this point refer to deleted,
   890|            // invalid SCC and RefSCCs respectively. But we will short circuit
   890|            // the processing when we check them in the loop above.
   888|          } while (UR.UpdatedC);
   882|        } while (!CWorklist.empty());
   814|
   814|        // We only need to keep internal inlined edge information within
   814|        // a RefSCC; clear it to save on space and let the next time we visit
   814|        // any of these functions have a fresh start.
   814|        InlinedInternalEdges.clear();
   814|      } while (!RCWorklist.empty());
   789|    }
   239|
   239|    // By definition we preserve the call graph, all SCC analyses, and the
   239|    // analysis proxies by handling them above and in any nested pass managers.
   239|    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
   239|    PA.preserve<LazyCallGraphAnalysis>();
   239|    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
   239|    PA.preserve<FunctionAnalysisManagerModuleProxy>();
   239|    return PA;
   239|  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
53
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
53
    // Setup the CGSCC analysis manager from its proxy.
344
53
    CGSCCAnalysisManager &CGAM =
345
53
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
53
347
53
    // Get the call graph for this module.
348
53
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
53
350
53
    // We keep worklists to allow us to push more work onto the pass manager as
351
53
    // the passes are run.
352
53
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
53
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
53
355
53
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
53
    // iterating off the worklists.
357
53
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
53
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
53
360
53
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
53
        InlinedInternalEdges;
362
53
363
53
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
53
                            InvalidSCCSet,       nullptr,   nullptr,
365
53
                            InlinedInternalEdges};
366
53
367
53
    PreservedAnalyses PA = PreservedAnalyses::all();
368
53
    CG.buildRefSCCs();
369
53
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
53
              RCE = CG.postorder_ref_scc_end();
371
159
         
RCI != RCE159
;) {
372
106
      assert(RCWorklist.empty() &&
373
106
             "Should always start with an empty RefSCC worklist");
374
106
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
106
      // we only push the first one onto the worklist. The worklist allows us
376
106
      // to capture *new* RefSCCs created during transformations.
377
106
      //
378
106
      // We really want to form RefSCCs lazily because that makes them cheaper
379
106
      // to update as the program is simplified and allows us to have greater
380
106
      // cache locality as forming a RefSCC touches all the parts of all the
381
106
      // functions within that RefSCC.
382
106
      //
383
106
      // We also eagerly increment the iterator to the next position because
384
106
      // the CGSCC passes below may delete the current RefSCC.
385
106
      RCWorklist.insert(&*RCI++);
386
106
387
107
      do {
388
107
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
107
        if (
InvalidRefSCCSet.count(RC)107
) {
390
0
          DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
0
          continue;
392
0
        }
393
107
394
107
        assert(CWorklist.empty() &&
395
107
               "Should always start with an empty SCC worklist");
396
107
397
107
        DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
107
                     << "\n");
399
107
400
107
        // Push the initial SCCs in reverse post-order as we'll pop off the the
401
107
        // back and so see this in post-order.
402
107
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
107
          CWorklist.insert(&C);
404
107
405
108
        do {
406
108
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
108
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
108
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
108
          // other RefSCCs should be queued above, so we just need to skip both
410
108
          // scenarios here.
411
108
          if (
InvalidSCCSet.count(C)108
) {
412
0
            DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
0
            continue;
414
0
          }
415
108
          
if (108
&C->getOuterRefSCC() != RC108
) {
416
1
            DEBUG(dbgs() << "Skipping an SCC that is now part of some other "
417
1
                            "RefSCC...\n");
418
1
            continue;
419
1
          }
420
108
421
107
          
do 107
{
422
108
            // Check that we didn't miss any update scenario.
423
108
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
424
108
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
425
108
            assert(&C->getOuterRefSCC() == RC &&
426
108
                   "Processing an SCC in a different RefSCC!");
427
108
428
108
            UR.UpdatedRC = nullptr;
429
108
            UR.UpdatedC = nullptr;
430
108
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
431
108
432
108
            // Update the SCC and RefSCC if necessary.
433
108
            C = UR.UpdatedC ? 
UR.UpdatedC1
:
C107
;
434
108
            RC = UR.UpdatedRC ? 
UR.UpdatedRC1
:
RC107
;
435
108
436
108
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
437
108
            // the current SCC may simply need to be skipped if invalid.
438
108
            if (
UR.InvalidatedSCCs.count(C)108
) {
439
0
              DEBUG(dbgs() << "Skipping invalidated root or island SCC!\n");
440
0
              break;
441
0
            }
442
108
            // Check that we didn't miss any update scenario.
443
108
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
444
108
445
108
            // We handle invalidating the CGSCC analysis manager's information
446
108
            // for the (potentially updated) SCC here. Note that any other SCCs
447
108
            // whose structure has changed should have been invalidated by
448
108
            // whatever was updating the call graph. This SCC gets invalidated
449
108
            // late as it contains the nodes that were actively being
450
108
            // processed.
451
108
            CGAM.invalidate(*C, PassPA);
452
108
453
108
            // Then intersect the preserved set so that invalidation of module
454
108
            // analyses will eventually occur when the module pass completes.
455
108
            PA.intersect(std::move(PassPA));
456
108
457
108
            // The pass may have restructured the call graph and refined the
458
108
            // current SCC and/or RefSCC. We need to update our current SCC and
459
108
            // RefSCC pointers to follow these. Also, when the current SCC is
460
108
            // refined, re-run the SCC pass over the newly refined SCC in order
461
108
            // to observe the most precise SCC model available. This inherently
462
108
            // cannot cycle excessively as it only happens when we split SCCs
463
108
            // apart, at most converging on a DAG of single nodes.
464
108
            // FIXME: If we ever start having RefSCC passes, we'll want to
465
108
            // iterate there too.
466
108
            if (UR.UpdatedC)
467
108
              DEBUG(dbgs() << "Re-running SCC passes after a refinement of the "
468
108
                              "current SCC: "
469
108
                           << *UR.UpdatedC << "\n");
470
108
471
108
            // Note that both `C` and `RC` may at this point refer to deleted,
472
108
            // invalid SCC and RefSCCs respectively. But we will short circuit
473
108
            // the processing when we check them in the loop above.
474
108
          } while (UR.UpdatedC);
475
108
        } while (!CWorklist.empty());
476
107
477
107
        // We only need to keep internal inlined edge information within
478
107
        // a RefSCC, clear it to save on space and let the next time we visit
479
107
        // any of these functions have a fresh start.
480
107
        InlinedInternalEdges.clear();
481
107
      } while (!RCWorklist.empty());
482
106
    }
483
53
484
53
    // By definition we preserve the call garph, all SCC analyses, and the
485
53
    // analysis proxies by handling them above and in any nested pass managers.
486
53
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
487
53
    PA.preserve<LazyCallGraphAnalysis>();
488
53
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
489
53
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
490
53
    return PA;
491
53
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
164
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
164
    // Setup the CGSCC analysis manager from its proxy.
344
164
    CGSCCAnalysisManager &CGAM =
345
164
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
164
347
164
    // Get the call graph for this module.
348
164
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
164
350
164
    // We keep worklists to allow us to push more work onto the pass manager as
351
164
    // the passes are run.
352
164
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
164
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
164
355
164
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
164
    // iterating off the worklists.
357
164
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
164
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
164
360
164
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
164
        InlinedInternalEdges;
362
164
363
164
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
164
                            InvalidSCCSet,       nullptr,   nullptr,
365
164
                            InlinedInternalEdges};
366
164
367
164
    PreservedAnalyses PA = PreservedAnalyses::all();
368
164
    CG.buildRefSCCs();
369
164
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
164
              RCE = CG.postorder_ref_scc_end();
371
825
         
RCI != RCE825
;) {
372
661
      assert(RCWorklist.empty() &&
373
661
             "Should always start with an empty RefSCC worklist");
374
661
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
661
      // we only push the first one onto the worklist. The worklist allows us
376
661
      // to capture *new* RefSCCs created during transformations.
377
661
      //
378
661
      // We really want to form RefSCCs lazily because that makes them cheaper
379
661
      // to update as the program is simplified and allows us to have greater
380
661
      // cache locality as forming a RefSCC touches all the parts of all the
381
661
      // functions within that RefSCC.
382
661
      //
383
661
      // We also eagerly increment the iterator to the next position because
384
661
      // the CGSCC passes below may delete the current RefSCC.
385
661
      RCWorklist.insert(&*RCI++);
386
661
387
685
      do {
388
685
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
685
        if (
InvalidRefSCCSet.count(RC)685
) {
390
4
          DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
4
          continue;
392
4
        }
393
685
394
681
        assert(CWorklist.empty() &&
395
681
               "Should always start with an empty SCC worklist");
396
681
397
681
        DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
681
                     << "\n");
399
681
400
681
        // Push the initial SCCs in reverse post-order as we'll pop off the the
401
681
        // back and so see this in post-order.
402
681
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
714
          CWorklist.insert(&C);
404
681
405
752
        do {
406
752
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
752
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
752
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
752
          // other RefSCCs should be queued above, so we just need to skip both
410
752
          // scenarios here.
411
752
          if (
InvalidSCCSet.count(C)752
) {
412
5
            DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
5
            continue;
414
5
          }
415
747
          
if (747
&C->getOuterRefSCC() != RC747
) {
416
19
            DEBUG(dbgs() << "Skipping an SCC that is now part of some other "
417
19
                            "RefSCC...\n");
418
19
            continue;
419
19
          }
420
747
421
728
          
do 728
{
422
760
            // Check that we didn't miss any update scenario.
423
760
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
424
760
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
425
760
            assert(&C->getOuterRefSCC() == RC &&
426
760
                   "Processing an SCC in a different RefSCC!");
427
760
428
760
            UR.UpdatedRC = nullptr;
429
760
            UR.UpdatedC = nullptr;
430
760
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
431
760
432
760
            // Update the SCC and RefSCC if necessary.
433
760
            C = UR.UpdatedC ? 
UR.UpdatedC32
:
C728
;
434
760
            RC = UR.UpdatedRC ? 
UR.UpdatedRC17
:
RC743
;
435
760
436
760
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
437
760
            // the current SCC may simply need to be skipped if invalid.
438
760
            if (
UR.InvalidatedSCCs.count(C)760
) {
439
2
              DEBUG(dbgs() << "Skipping invalidated root or island SCC!\n");
440
2
              break;
441
2
            }
442
760
            // Check that we didn't miss any update scenario.
443
758
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
444
758
445
758
            // We handle invalidating the CGSCC analysis manager's information
446
758
            // for the (potentially updated) SCC here. Note that any other SCCs
447
758
            // whose structure has changed should have been invalidated by
448
758
            // whatever was updating the call graph. This SCC gets invalidated
449
758
            // late as it contains the nodes that were actively being
450
758
            // processed.
451
758
            CGAM.invalidate(*C, PassPA);
452
758
453
758
            // Then intersect the preserved set so that invalidation of module
454
758
            // analyses will eventually occur when the module pass completes.
455
758
            PA.intersect(std::move(PassPA));
456
758
457
758
            // The pass may have restructured the call graph and refined the
458
758
            // current SCC and/or RefSCC. We need to update our current SCC and
459
758
            // RefSCC pointers to follow these. Also, when the current SCC is
460
758
            // refined, re-run the SCC pass over the newly refined SCC in order
461
758
            // to observe the most precise SCC model available. This inherently
462
758
            // cannot cycle excessively as it only happens when we split SCCs
463
758
            // apart, at most converging on a DAG of single nodes.
464
758
            // FIXME: If we ever start having RefSCC passes, we'll want to
465
758
            // iterate there too.
466
758
            if (UR.UpdatedC)
467
758
              DEBUG(dbgs() << "Re-running SCC passes after a refinement of the "
468
760
                              "current SCC: "
469
760
                           << *UR.UpdatedC << "\n");
470
760
471
760
            // Note that both `C` and `RC` may at this point refer to deleted,
472
760
            // invalid SCC and RefSCCs respectively. But we will short circuit
473
760
            // the processing when we check them in the loop above.
474
758
          } while (UR.UpdatedC);
475
752
        } while (!CWorklist.empty());
476
685
477
685
        // We only need to keep internal inlined edge information within
478
685
        // a RefSCC, clear it to save on space and let the next time we visit
479
685
        // any of these functions have a fresh start.
480
685
        InlinedInternalEdges.clear();
481
685
      } while (!RCWorklist.empty());
482
661
    }
483
164
484
164
    // By definition we preserve the call garph, all SCC analyses, and the
485
164
    // analysis proxies by handling them above and in any nested pass managers.
486
164
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
487
164
    PA.preserve<LazyCallGraphAnalysis>();
488
164
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
489
164
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
490
164
    return PA;
491
164
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
7
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
7
    // Setup the CGSCC analysis manager from its proxy.
344
7
    CGSCCAnalysisManager &CGAM =
345
7
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
7
347
7
    // Get the call graph for this module.
348
7
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
7
350
7
    // We keep worklists to allow us to push more work onto the pass manager as
351
7
    // the passes are run.
352
7
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
7
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
7
355
7
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
7
    // iterating off the worklists.
357
7
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
7
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
7
360
7
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
7
        InlinedInternalEdges;
362
7
363
7
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
7
                            InvalidSCCSet,       nullptr,   nullptr,
365
7
                            InlinedInternalEdges};
366
7
367
7
    PreservedAnalyses PA = PreservedAnalyses::all();
368
7
    CG.buildRefSCCs();
369
7
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
7
              RCE = CG.postorder_ref_scc_end();
371
14
         
RCI != RCE14
;) {
372
7
      assert(RCWorklist.empty() &&
373
7
             "Should always start with an empty RefSCC worklist");
374
7
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
7
      // we only push the first one onto the worklist. The worklist allows us
376
7
      // to capture *new* RefSCCs created during transformations.
377
7
      //
378
7
      // We really want to form RefSCCs lazily because that makes them cheaper
379
7
      // to update as the program is simplified and allows us to have greater
380
7
      // cache locality as forming a RefSCC touches all the parts of all the
381
7
      // functions within that RefSCC.
382
7
      //
383
7
      // We also eagerly increment the iterator to the next position because
384
7
      // the CGSCC passes below may delete the current RefSCC.
385
7
      RCWorklist.insert(&*RCI++);
386
7
387
7
      do {
388
7
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
7
        if (
InvalidRefSCCSet.count(RC)7
) {
390
0
          DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
0
          continue;
392
0
        }
393
7
394
7
        assert(CWorklist.empty() &&
395
7
               "Should always start with an empty SCC worklist");
396
7
397
7
        DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
7
                     << "\n");
399
7
400
7
        // Push the initial SCCs in reverse post-order as we'll pop off the the
401
7
        // back and so see this in post-order.
402
7
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
7
          CWorklist.insert(&C);
404
7
405
7
        do {
406
7
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
7
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
7
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
7
          // other RefSCCs should be queued above, so we just need to skip both
410
7
          // scenarios here.
411
7
          if (
InvalidSCCSet.count(C)7
) {
412
0
            DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
0
            continue;
414
0
          }
415
7
          
if (7
&C->getOuterRefSCC() != RC7
) {
416
0
            DEBUG(dbgs() << "Skipping an SCC that is now part of some other "
417
0
                            "RefSCC...\n");
418
0
            continue;
419
0
          }
420
7
421
7
          
do 7
{
422
7
            // Check that we didn't miss any update scenario.
423
7
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
424
7
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
425
7
            assert(&C->getOuterRefSCC() == RC &&
426
7
                   "Processing an SCC in a different RefSCC!");
427
7
428
7
            UR.UpdatedRC = nullptr;
429
7
            UR.UpdatedC = nullptr;
430
7
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
431
7
432
7
            // Update the SCC and RefSCC if necessary.
433
7
            C = UR.UpdatedC ? 
UR.UpdatedC0
:
C7
;
434
7
            RC = UR.UpdatedRC ? 
UR.UpdatedRC0
:
RC7
;
435
7
436
7
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
437
7
            // the current SCC may simply need to be skipped if invalid.
438
7
            if (
UR.InvalidatedSCCs.count(C)7
) {
439
0
              DEBUG(dbgs() << "Skipping invalidated root or island SCC!\n");
440
0
              break;
441
0
            }
442
7
            // Check that we didn't miss any update scenario.
443
7
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
444
7
445
7
            // We handle invalidating the CGSCC analysis manager's information
446
7
            // for the (potentially updated) SCC here. Note that any other SCCs
447
7
            // whose structure has changed should have been invalidated by
448
7
            // whatever was updating the call graph. This SCC gets invalidated
449
7
            // late as it contains the nodes that were actively being
450
7
            // processed.
451
7
            CGAM.invalidate(*C, PassPA);
452
7
453
7
            // Then intersect the preserved set so that invalidation of module
454
7
            // analyses will eventually occur when the module pass completes.
455
7
            PA.intersect(std::move(PassPA));
456
7
457
7
            // The pass may have restructured the call graph and refined the
458
7
            // current SCC and/or RefSCC. We need to update our current SCC and
459
7
            // RefSCC pointers to follow these. Also, when the current SCC is
460
7
            // refined, re-run the SCC pass over the newly refined SCC in order
461
7
            // to observe the most precise SCC model available. This inherently
462
7
            // cannot cycle excessively as it only happens when we split SCCs
463
7
            // apart, at most converging on a DAG of single nodes.
464
7
            // FIXME: If we ever start having RefSCC passes, we'll want to
465
7
            // iterate there too.
466
7
            if (UR.UpdatedC)
467
7
              DEBUG(dbgs() << "Re-running SCC passes after a refinement of the "
468
7
                              "current SCC: "
469
7
                           << *UR.UpdatedC << "\n");
470
7
471
7
            // Note that both `C` and `RC` may at this point refer to deleted,
472
7
            // invalid SCC and RefSCCs respectively. But we will short circuit
473
7
            // the processing when we check them in the loop above.
474
7
          } while (UR.UpdatedC);
475
7
        } while (!CWorklist.empty());
476
7
477
7
        // We only need to keep internal inlined edge information within
478
7
        // a RefSCC, clear it to save on space and let the next time we visit
479
7
        // any of these functions have a fresh start.
480
7
        InlinedInternalEdges.clear();
481
7
      } while (!RCWorklist.empty());
482
7
    }
483
7
484
7
    // By definition we preserve the call garph, all SCC analyses, and the
485
7
    // analysis proxies by handling them above and in any nested pass managers.
486
7
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
487
7
    PA.preserve<LazyCallGraphAnalysis>();
488
7
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
489
7
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
490
7
    return PA;
491
7
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
15
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
15
    // Setup the CGSCC analysis manager from its proxy.
344
15
    CGSCCAnalysisManager &CGAM =
345
15
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
15
347
15
    // Get the call graph for this module.
348
15
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
15
350
15
    // We keep worklists to allow us to push more work onto the pass manager as
351
15
    // the passes are run.
352
15
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
15
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
15
355
15
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
15
    // iterating off the worklists.
357
15
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
15
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
15
360
15
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
15
        InlinedInternalEdges;
362
15
363
15
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
15
                            InvalidSCCSet,       nullptr,   nullptr,
365
15
                            InlinedInternalEdges};
366
15
367
15
    PreservedAnalyses PA = PreservedAnalyses::all();
368
15
    CG.buildRefSCCs();
369
15
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
15
              RCE = CG.postorder_ref_scc_end();
371
30
         
RCI != RCE30
;) {
372
15
      assert(RCWorklist.empty() &&
373
15
             "Should always start with an empty RefSCC worklist");
374
15
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
15
      // we only push the first one onto the worklist. The worklist allows us
376
15
      // to capture *new* RefSCCs created during transformations.
377
15
      //
378
15
      // We really want to form RefSCCs lazily because that makes them cheaper
379
15
      // to update as the program is simplified and allows us to have greater
380
15
      // cache locality as forming a RefSCC touches all the parts of all the
381
15
      // functions within that RefSCC.
382
15
      //
383
15
      // We also eagerly increment the iterator to the next position because
384
15
      // the CGSCC passes below may delete the current RefSCC.
385
15
      RCWorklist.insert(&*RCI++);
386
15
387
15
      do {
388
15
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
15
        if (InvalidRefSCCSet.count(RC)) {
390
0
          DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
0
          continue;
392
0
        }
393
15
394
15
        assert(CWorklist.empty() &&
395
15
               "Should always start with an empty SCC worklist");
396
15
397
15
        DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
15
                     << "\n");
399
15
400
15
        // Push the initial SCCs in reverse post-order as we'll pop off the
401
15
        // back and so see this in post-order.
402
15
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
15
          CWorklist.insert(&C);
404
15
405
15
        do {
406
15
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
15
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
15
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
15
          // other RefSCCs should be queued above, so we just need to skip both
410
15
          // scenarios here.
411
15
          if (InvalidSCCSet.count(C)) {
412
0
            DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
0
            continue;
414
0
          }
415
15
          if (&C->getOuterRefSCC() != RC) {
416
0
            DEBUG(dbgs() << "Skipping an SCC that is now part of some other "
417
0
                            "RefSCC...\n");
418
0
            continue;
419
0
          }
420
15
421
15
          do {
422
15
            // Check that we didn't miss any update scenario.
423
15
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
424
15
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
425
15
            assert(&C->getOuterRefSCC() == RC &&
426
15
                   "Processing an SCC in a different RefSCC!");
427
15
428
15
            UR.UpdatedRC = nullptr;
429
15
            UR.UpdatedC = nullptr;
430
15
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
431
15
432
15
            // Update the SCC and RefSCC if necessary.
433
15
            C = UR.UpdatedC ? UR.UpdatedC : C;
434
15
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
435
15
436
15
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
437
15
            // the current SCC may simply need to be skipped if invalid.
438
15
            if (UR.InvalidatedSCCs.count(C)) {
439
0
              DEBUG(dbgs() << "Skipping invalidated root or island SCC!\n");
440
0
              break;
441
0
            }
442
15
            // Check that we didn't miss any update scenario.
443
15
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
444
15
445
15
            // We handle invalidating the CGSCC analysis manager's information
446
15
            // for the (potentially updated) SCC here. Note that any other SCCs
447
15
            // whose structure has changed should have been invalidated by
448
15
            // whatever was updating the call graph. This SCC gets invalidated
449
15
            // late as it contains the nodes that were actively being
450
15
            // processed.
451
15
            CGAM.invalidate(*C, PassPA);
452
15
453
15
            // Then intersect the preserved set so that invalidation of module
454
15
            // analyses will eventually occur when the module pass completes.
455
15
            PA.intersect(std::move(PassPA));
456
15
457
15
            // The pass may have restructured the call graph and refined the
458
15
            // current SCC and/or RefSCC. We need to update our current SCC and
459
15
            // RefSCC pointers to follow these. Also, when the current SCC is
460
15
            // refined, re-run the SCC pass over the newly refined SCC in order
461
15
            // to observe the most precise SCC model available. This inherently
462
15
            // cannot cycle excessively as it only happens when we split SCCs
463
15
            // apart, at most converging on a DAG of single nodes.
464
15
            // FIXME: If we ever start having RefSCC passes, we'll want to
465
15
            // iterate there too.
466
15
            if (UR.UpdatedC)
467
15
              DEBUG(dbgs() << "Re-running SCC passes after a refinement of the "
468
15
                              "current SCC: "
469
15
                           << *UR.UpdatedC << "\n");
470
15
471
15
            // Note that both `C` and `RC` may at this point refer to deleted,
472
15
            // invalid SCC and RefSCCs respectively. But we will short circuit
473
15
            // the processing when we check them in the loop above.
474
15
          } while (UR.UpdatedC);
475
15
        } while (!CWorklist.empty());
476
15
477
15
        // We only need to keep internal inlined edge information within
478
15
        // a RefSCC; clear it to save on space and let the next time we visit
479
15
        // any of these functions have a fresh start.
480
15
        InlinedInternalEdges.clear();
481
15
      } while (!RCWorklist.empty());
482
15
    }
483
15
484
15
    // By definition we preserve the call graph, all SCC analyses, and the
485
15
    // analysis proxies by handling them above and in any nested pass managers.
486
15
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
487
15
    PA.preserve<LazyCallGraphAnalysis>();
488
15
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
489
15
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
490
15
    return PA;
491
15
  }
492
493
private:
494
  CGSCCPassT Pass;
495
};
496
497
/// \brief A function to deduce a function pass type and wrap it in the
498
/// templated adaptor.
499
template <typename CGSCCPassT>
500
ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>
501
239
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
502
239
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
503
239
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >(llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&>)
Line
Count
Source
501
164
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
502
164
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
503
164
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > > llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >(llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >)
Line
Count
Source
501
53
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
502
53
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
503
53
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass> llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>(llvm::PostOrderFunctionAttrsPass)
Line
Count
Source
501
15
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
502
15
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
503
15
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass> llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>(llvm::InlinerPass)
Line
Count
Source
501
7
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
502
7
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
503
7
}
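The instantiation breakdowns above show this factory wrapping a nested CGSCC pass manager, the devirtualization repeater, PostOrderFunctionAttrsPass, and InlinerPass. As a minimal sketch of the usual way such an adaptor enters a module pipeline (the headers, the helper name addCGSCCPipeline, and the choice of wrapped pass are illustrative assumptions, not taken from this report's data):

// Minimal sketch; assumes the analysis proxies are registered elsewhere
// (PassBuilder normally handles that).
#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/IPO/FunctionAttrs.h"

using namespace llvm;

static void addCGSCCPipeline(ModulePassManager &MPM) {
  // Build a CGSCC-level pipeline and wrap it so it runs as a module pass;
  // the adaptor performs the post-order RefSCC/SCC walk shown in run() above.
  CGSCCPassManager CGPM;
  CGPM.addPass(PostOrderFunctionAttrsPass());
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(std::move(CGPM)));
}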
504
505
/// A proxy from a \c FunctionAnalysisManager to an \c SCC.
506
///
507
/// When a module pass runs and triggers invalidation, both the CGSCC and
508
/// Function analysis manager proxies on the module get an invalidation event.
509
/// We don't want to fully duplicate responsibility for most of the
510
/// invalidation logic. Instead, this layer is only responsible for SCC-local
511
/// invalidation events. We work with the module's FunctionAnalysisManager to
512
/// invalidate function analyses.
513
class FunctionAnalysisManagerCGSCCProxy
514
    : public AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy> {
515
public:
516
  class Result {
517
  public:
518
754
    explicit Result(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
519
520
    /// \brief Accessor for the analysis manager.
521
1.20k
    FunctionAnalysisManager &getManager() { return *FAM; }
522
523
    bool invalidate(LazyCallGraph::SCC &C, const PreservedAnalyses &PA,
524
                    CGSCCAnalysisManager::Invalidator &Inv);
525
526
  private:
527
    FunctionAnalysisManager *FAM;
528
  };
529
530
  /// Computes the \c FunctionAnalysisManager and stores it in the result proxy.
531
  Result run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM, LazyCallGraph &);
532
533
private:
534
  friend AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy>;
535
536
  static AnalysisKey Key;
537
};
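The sketch below shows how a CGSCC pass would typically consume this proxy. ExampleSCCPass and its use of TargetLibraryAnalysis are hypothetical, chosen only for illustration; the getResult call itself mirrors the adaptor code later in this header.

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/Analysis/TargetLibraryInfo.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

// Hypothetical pass; shown only to illustrate reaching function analyses
// from SCC level through the proxy declared above.
struct ExampleSCCPass : PassInfoMixin<ExampleSCCPass> {
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
    // The proxy hands out the module's FunctionAnalysisManager while taking
    // care of SCC-local invalidation for us.
    FunctionAnalysisManager &FAM =
        AM.getResult<FunctionAnalysisManagerCGSCCProxy>(C, CG).getManager();
    for (LazyCallGraph::Node &N : C)
      (void)FAM.getResult<TargetLibraryAnalysis>(N.getFunction());
    return PreservedAnalyses::all();
  }
};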
538
539
extern template class OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;
540
541
/// A proxy from a \c CGSCCAnalysisManager to a \c Function.
542
using CGSCCAnalysisManagerFunctionProxy =
543
    OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;
544
545
/// Helper to update the call graph after running a function pass.
546
///
547
/// Function passes can only mutate the call graph in specific ways. This
548
/// routine provides a helper that updates the call graph in those ways
549
/// including returning whether any changes were made and populating a CG
550
/// update result struct for the overall CGSCC walk.
551
LazyCallGraph::SCC &updateCGAndAnalysisManagerForFunctionPass(
552
    LazyCallGraph &G, LazyCallGraph::SCC &C, LazyCallGraph::Node &N,
553
    CGSCCAnalysisManager &AM, CGSCCUpdateResult &UR);
554
555
/// \brief Adaptor that maps from a SCC to its functions.
556
///
557
/// Designed to allow composition of a FunctionPass(Manager) and
558
/// a CGSCCPassManager. Note that if this pass is constructed with a pointer
559
/// to a \c CGSCCAnalysisManager it will run the
560
/// \c FunctionAnalysisManagerCGSCCProxy analysis prior to running the function
561
/// pass over the SCC to enable a \c FunctionAnalysisManager to be used
562
/// within this run safely.
563
template <typename FunctionPassT>
564
class CGSCCToFunctionPassAdaptor
565
    : public PassInfoMixin<CGSCCToFunctionPassAdaptor<FunctionPassT>> {
566
public:
567
  explicit CGSCCToFunctionPassAdaptor(FunctionPassT Pass)
568
92
      : Pass(std::move(Pass)) {}
569
570
  // We have to explicitly define all the special member functions because MSVC
571
  // refuses to generate them.
572
  CGSCCToFunctionPassAdaptor(const CGSCCToFunctionPassAdaptor &Arg)
573
      : Pass(Arg.Pass) {}
574
575
  CGSCCToFunctionPassAdaptor(CGSCCToFunctionPassAdaptor &&Arg)
576
184
      : Pass(std::move(Arg.Pass)) {}
577
578
  friend void swap(CGSCCToFunctionPassAdaptor &LHS,
579
                   CGSCCToFunctionPassAdaptor &RHS) {
580
    std::swap(LHS.Pass, RHS.Pass);
581
  }
582
583
  CGSCCToFunctionPassAdaptor &operator=(CGSCCToFunctionPassAdaptor RHS) {
584
    swap(*this, RHS);
585
    return *this;
586
  }
587
588
  /// \brief Runs the function pass across every function in the module.
589
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
590
317
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
591
317
    // Set up the function analysis manager from its proxy.
592
317
    FunctionAnalysisManager &FAM =
593
317
        AM.getResult<FunctionAnalysisManagerCGSCCProxy>(C, CG).getManager();
594
317
595
317
    SmallVector<LazyCallGraph::Node *, 4> Nodes;
596
317
    for (LazyCallGraph::Node &N : C)
597
410
      Nodes.push_back(&N);
598
317
599
317
    // The SCC may get split while we are optimizing functions due to deleting
600
317
    // edges. If this happens, the current SCC can shift, so keep track of
601
317
    // a pointer we can overwrite.
602
317
    LazyCallGraph::SCC *CurrentC = &C;
603
317
604
317
    DEBUG(dbgs() << "Running function passes across an SCC: " << C << "\n");
605
317
606
317
    PreservedAnalyses PA = PreservedAnalyses::all();
607
410
    for (LazyCallGraph::Node *N : Nodes) {
608
410
      // Skip nodes from other SCCs. These may have been split out during
609
410
      // processing. We'll eventually visit those SCCs and pick up the nodes
610
410
      // there.
611
410
      if (CG.lookupSCC(*N) != CurrentC)
612
40
        continue;
613
410
614
370
      PreservedAnalyses PassPA = Pass.run(N->getFunction(), FAM);
615
370
616
370
      // We know that the function pass couldn't have invalidated any other
617
370
      // function's analyses (that's the contract of a function pass), so
618
370
      // directly handle the function analysis manager's invalidation here.
619
370
      FAM.invalidate(N->getFunction(), PassPA);
620
370
621
370
      // Then intersect the preserved set so that invalidation of module
622
370
      // analyses will eventually occur when the module pass completes.
623
370
      PA.intersect(std::move(PassPA));
624
370
625
370
      // If the call graph hasn't been preserved, update it based on this
626
370
      // function pass. This may also update the current SCC to point to
627
370
      // a smaller, more refined SCC.
628
370
      auto PAC = PA.getChecker<LazyCallGraphAnalysis>();
629
370
      if (!PAC.preserved() && !PAC.preservedSet<AllAnalysesOn<Module>>()) {
630
140
        CurrentC = &updateCGAndAnalysisManagerForFunctionPass(CG, *CurrentC, *N,
631
140
                                                              AM, UR);
632
140
        assert(
633
140
            CG.lookupSCC(*N) == CurrentC &&
634
140
            "Current SCC not updated to the SCC containing the current node!");
635
140
      }
636
410
    }
637
317
638
317
    // By definition we preserve the proxy. And we preserve all analyses on
639
317
    // Functions. This precludes *any* invalidation of function analyses by the
640
317
    // proxy, but that's OK because we've taken care to invalidate analyses in
641
317
    // the function analysis manager incrementally above.
642
317
    PA.preserveSet<AllAnalysesOn<Function>>();
643
317
    PA.preserve<FunctionAnalysisManagerCGSCCProxy>();
644
317
645
317
    // We've also ensured that we updated the call graph along the way.
646
317
    PA.preserve<LazyCallGraphAnalysis>();
647
317
648
317
    return PA;
649
317
  }
650
651
private:
652
  FunctionPassT Pass;
653
};
654
655
/// \brief A function to deduce a function pass type and wrap it in the
656
/// templated adaptor.
657
template <typename FunctionPassT>
658
CGSCCToFunctionPassAdaptor<FunctionPassT>
659
92
createCGSCCToFunctionPassAdaptor(FunctionPassT Pass) {
660
92
  return CGSCCToFunctionPassAdaptor<FunctionPassT>(std::move(Pass));
661
92
}
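As an illustration of the composition (InstCombinePass and the helper name are assumptions used only for this sketch), the two adaptors nest so that a function pass runs over every function of each SCC in bottom-up post-order:

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/InstCombine/InstCombine.h"

using namespace llvm;

static void addSCCOrderedFunctionPasses(ModulePassManager &MPM) {
  // Run InstCombine over each function, visiting functions in the
  // bottom-up SCC order provided by the outer adaptor.
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(
      createCGSCCToFunctionPassAdaptor(InstCombinePass())));
}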
662
663
/// A helper that repeats an SCC pass each time an indirect call is refined to
664
/// a direct call by that pass.
665
///
666
/// While the CGSCC pass manager works to re-visit SCCs and RefSCCs as they
667
/// change shape, we may also want to repeat an SCC pass if it simply refines
668
/// an indirect call to a direct call, even if doing so does not alter the
669
/// shape of the graph. Note that this only pertains to direct calls to
670
/// functions where IPO across the SCC may be able to compute more precise
671
/// results. For intrinsics, we assume scalar optimizations already can fully
672
/// reason about them.
673
///
674
/// This repetition has the potential to be very large, however, as each one
675
/// might refine a single call site. As a consequence, in practice we use an
676
/// upper bound on the number of repetitions to limit things.
677
template <typename PassT>
678
class DevirtSCCRepeatedPass
679
    : public PassInfoMixin<DevirtSCCRepeatedPass<PassT>> {
680
public:
681
  explicit DevirtSCCRepeatedPass(PassT Pass, int MaxIterations)
682
56
      : Pass(std::move(Pass)), MaxIterations(MaxIterations) {}
683
684
  /// Runs the wrapped pass up to \c MaxIterations on the SCC, iterating
685
  /// whenever an indirect call is refined.
686
  PreservedAnalyses run(LazyCallGraph::SCC &InitialC, CGSCCAnalysisManager &AM,
687
122
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
688
122
    PreservedAnalyses PA = PreservedAnalyses::all();
689
122
690
122
    // The SCC may be refined while we are running passes over it, so set up
691
122
    // a pointer that we can update.
692
122
    LazyCallGraph::SCC *C = &InitialC;
693
122
694
122
    // Collect value handles for all of the indirect call sites.
695
122
    SmallVector<WeakTrackingVH, 8> CallHandles;
696
122
697
122
    // Struct to track the counts of direct and indirect calls in each function
698
122
    // of the SCC.
699
122
    struct CallCount {
700
122
      int Direct;
701
122
      int Indirect;
702
122
    };
703
122
704
122
    // Put value handles on all of the indirect calls and return the number of
705
122
    // direct calls for each function in the SCC.
706
122
    auto ScanSCC = [](LazyCallGraph::SCC &C,
707
251
                      SmallVectorImpl<WeakTrackingVH> &CallHandles) {
708
251
      assert(CallHandles.empty() && "Must start with a clear set of handles.");
709
251
710
251
      SmallVector<CallCount, 4> CallCounts;
711
259
      for (LazyCallGraph::Node &N : C) {
712
259
        CallCounts.push_back({0, 0});
713
259
        CallCount &Count = CallCounts.back();
714
259
        for (Instruction &I : instructions(N.getFunction()))
715
1.11k
          if (auto CS = CallSite(&I)) {
716
237
            if (CS.getCalledFunction()) {
717
217
              ++Count.Direct;
718
237
            } else {
719
20
              ++Count.Indirect;
720
20
              CallHandles.push_back(WeakTrackingVH(&I));
721
20
            }
722
1.11k
          }
723
259
      }
724
251
725
251
      return CallCounts;
726
251
    };
727
122
728
122
    // Populate the initial call handles and get the initial call counts.
729
122
    auto CallCounts = ScanSCC(*C, CallHandles);
730
122
731
122
    for (int Iteration = 0;; ++Iteration) {
732
130
      PreservedAnalyses PassPA = Pass.run(*C, AM, CG, UR);
733
130
734
130
      // If the SCC structure has changed, bail immediately and let the outer
735
130
      // CGSCC layer handle any iteration to reflect the refined structure.
736
130
      if (UR.UpdatedC && UR.UpdatedC != C) {
737
1
        PA.intersect(std::move(PassPA));
738
1
        break;
739
1
      }
740
130
741
130
      // Check that we didn't miss any update scenario.
742
129
      assert(!UR.InvalidatedSCCs.count(C) && "Processing an invalid SCC!");
743
129
      assert(C->begin() != C->end() && "Cannot have an empty SCC!");
744
129
      assert((int)CallCounts.size() == C->size() &&
745
129
             "Cannot have changed the size of the SCC!");
746
129
747
129
      // Check whether any of the handles were devirtualized.
748
15
      auto IsDevirtualizedHandle = [&](WeakTrackingVH &CallH) {
749
15
        if (!CallH)
750
2
          return false;
751
13
        auto CS = CallSite(CallH);
752
13
        if (!CS)
753
0
          return false;
754
13
755
13
        // If the call is still indirect, leave it alone.
756
13
        Function *F = CS.getCalledFunction();
757
13
        if (!F)
758
6
          return false;
759
13
760
7
        DEBUG(dbgs() << "Found devirtualized call from "
761
7
                     << CS.getParent()->getParent()->getName() << " to "
762
7
                     << F->getName() << "\n");
763
7
764
7
        // We now have a direct call where previously we had an indirect call,
765
7
        // so iterate to process this devirtualization site.
766
7
        return true;
767
15
      };
768
129
      bool Devirt = llvm::any_of(CallHandles, IsDevirtualizedHandle);
769
129
770
129
      // Rescan to build up a new set of handles and count how many direct
771
129
      // calls remain. If we decide to iterate, this also sets up the input to
772
129
      // the next iteration.
773
129
      CallHandles.clear();
774
129
      auto NewCallCounts = ScanSCC(*C, CallHandles);
775
129
776
129
      // If we haven't found an explicit devirtualization already, see if we
777
129
      // have decreased the number of indirect calls and increased the number
778
129
      // of direct calls for any function in the SCC. This can be fooled by all
779
129
      // manner of transformations such as DCE and other things, but seems to
780
129
      // work well in practice.
781
129
      if (!Devirt)
782
243
        for (int i = 0, Size = C->size(); i < Size; ++i)
783
123
          if (CallCounts[i].Indirect > NewCallCounts[i].Indirect &&
784
123
              CallCounts[i].Direct < NewCallCounts[i].Direct) {
785
2
            Devirt = true;
786
2
            break;
787
2
          }
788
129
789
129
      if (!Devirt) {
790
120
        PA.intersect(std::move(PassPA));
791
120
        break;
792
120
      }
793
129
794
129
      // Otherwise, if we've already hit our max, we're done.
795
9
      if (Iteration >= MaxIterations) {
796
1
        DEBUG(dbgs() << "Found another devirtualization after hitting the max "
797
1
                        "number of repetitions ("
798
1
                     << MaxIterations << ") on SCC: " << *C << "\n");
799
1
        PA.intersect(std::move(PassPA));
800
1
        break;
801
1
      }
802
9
803
8
      DEBUG(dbgs()
804
8
            << "Repeating an SCC pass after finding a devirtualization in: "
805
8
            << *C << "\n");
806
8
807
8
      // Move over the new call counts in preparation for iterating.
808
8
      CallCounts = std::move(NewCallCounts);
809
8
810
8
      // Update the analysis manager with each run and intersect the total set
811
8
      // of preserved analyses so we're ready to iterate.
812
8
      AM.invalidate(*C, PassPA);
813
8
      PA.intersect(std::move(PassPA));
814
8
    }
815
122
816
122
    // Note that we don't add any preserved entries here unlike a more normal
817
122
    // "pass manager" because we only handle invalidation *between* iterations,
818
122
    // not after the last iteration.
819
122
    return PA;
820
122
  }
821
822
private:
823
  PassT Pass;
824
  int MaxIterations;
825
};
826
827
/// \brief A function to deduce a function pass type and wrap it in the
828
/// templated adaptor.
829
template <typename PassT>
830
DevirtSCCRepeatedPass<PassT> createDevirtSCCRepeatedPass(PassT Pass,
831
56
                                                         int MaxIterations) {
832
56
  return DevirtSCCRepeatedPass<PassT>(std::move(Pass), MaxIterations);
833
56
}
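A sketch of the usual composition, wrapping an inliner-only CGSCC pipeline so it repeats when an indirect call is resolved to a direct call. InlinerPass, its header, the helper name, and the limit of 4 are illustrative assumptions; the report above does show this repeater instantiated over a CGSCCPassManager.

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/IPO/Inliner.h"

using namespace llvm;

static void addRepeatedInliner(ModulePassManager &MPM) {
  CGSCCPassManager CGPM;
  CGPM.addPass(InlinerPass());
  // Re-run the SCC pipeline (here at most 4 extra times) whenever an
  // indirect call inside the SCC becomes a direct call.
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(
      createDevirtSCCRepeatedPass(std::move(CGPM), /*MaxIterations=*/4)));
}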
834
835
// Clear out the debug logging macro.
836
#undef DEBUG_TYPE
837
838
} // end namespace llvm
839
840
#endif // LLVM_ANALYSIS_CGSCCPASSMANAGER_H
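For completeness, the adaptors in this header only function once the module, CGSCC, and function analysis managers are cross-wired with the proxies declared above. The sketch below is an assumption about the minimal registration set (in practice PassBuilder::crossRegisterProxies performs it) and is included only to make those proxy relationships concrete:

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

// Sketch of the proxy wiring the adaptors rely on; real code should let
// PassBuilder perform this registration. The exact set here is an assumption.
void runCGSCCPipeline(Module &M) {
  FunctionAnalysisManager FAM;
  CGSCCAnalysisManager CGAM;
  ModuleAnalysisManager MAM;

  MAM.registerPass([&] { return LazyCallGraphAnalysis(); });
  MAM.registerPass([&] { return CGSCCAnalysisManagerModuleProxy(CGAM); });
  MAM.registerPass([&] { return FunctionAnalysisManagerModuleProxy(FAM); });
  CGAM.registerPass([&] { return FunctionAnalysisManagerCGSCCProxy(); });
  CGAM.registerPass([&] { return ModuleAnalysisManagerCGSCCProxy(MAM); });
  FAM.registerPass([&] { return CGSCCAnalysisManagerFunctionProxy(CGAM); });
  FAM.registerPass([&] { return ModuleAnalysisManagerFunctionProxy(MAM); });

  ModulePassManager MPM;
  // An empty CGSCC pipeline; still exercises the post-order walk above.
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassManager()));
  MPM.run(M, MAM);
}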