Coverage Report

Created: 2018-07-19 03:59

/Users/buildslave/jenkins/workspace/clang-stage2-coverage-R/llvm/include/llvm/Analysis/CGSCCPassManager.h
//===- CGSCCPassManager.h - Call graph pass management ----------*- C++ -*-===//
//
//                     The LLVM Compiler Infrastructure
//
// This file is distributed under the University of Illinois Open Source
// License. See LICENSE.TXT for details.
//
//===----------------------------------------------------------------------===//
/// \file
///
/// This header provides classes for managing passes over SCCs of the call
/// graph. These passes form an important component of LLVM's interprocedural
/// optimizations. Because they operate on the SCCs of the call graph, and they
/// traverse the graph in post-order, they can effectively do pair-wise
/// interprocedural optimizations for all call edges in the program while
/// incrementally refining it and improving the context of these pair-wise
/// optimizations. At each call site edge, the callee has already been
/// optimized as much as is possible. This in turn allows very accurate
/// analysis of it for IPO.
///
/// A secondary, more general goal is to be able to isolate optimization on
/// unrelated parts of the IR module. This is useful to ensure our
/// optimizations are principled and don't miss opportunities where refinement
/// of one part of the module influences transformations in another part of
/// the module. But this is also useful if we want to parallelize the
/// optimizations across common large module graph shapes which tend to be
/// very wide and have large regions of unrelated cliques.
///
/// To satisfy these goals, we use the LazyCallGraph which provides two graphs
/// nested inside each other (and built lazily from the bottom-up): the call
/// graph proper, and a reference graph. The reference graph is a superset of
/// the call graph and is a conservative approximation of what could, through
/// scalar or CGSCC transforms, *become* the call graph. Using this allows us
/// to ensure we optimize functions prior to them being introduced into the
/// call graph by devirtualization or other techniques, and thus ensures that
/// subsequent pair-wise interprocedural optimizations observe the optimized
/// form of these functions. The (potentially transitive) reference
/// reachability used by the reference graph is a conservative approximation
/// that still allows us to have independent regions of the graph.
///
/// FIXME: There is one major drawback of the reference graph: in its naive
/// form it is quadratic because it contains a distinct edge for each
/// (potentially indirect) reference, even if they are all through some common
/// global table of function pointers. This can be fixed in a number of ways
/// that essentially preserve enough of the normalization. While it isn't
/// expected to completely preclude the usability of this, it will need to be
/// addressed.
///
/// All of these issues are made substantially more complex in the face of
/// mutations to the call graph while optimization passes are being run. When
/// mutations to the call graph occur we want to achieve two different things:
///
/// - We need to update the call graph in-flight and invalidate analyses
///   cached on entities in the graph. Because of the cache-based analysis
///   design of the pass manager, it is essential to have stable identities for
///   the elements of the IR that passes traverse, and to invalidate any
///   analyses cached on these elements as the mutations take place.
///
/// - We want to preserve the incremental and post-order traversal of the
///   graph even as it is refined and mutated. This means we want optimization
///   to observe the most refined form of the call graph and to do so in
///   post-order.
///
/// To address this, the CGSCC manager uses both worklists that can be expanded
/// by passes which transform the IR, and provides invalidation tests to skip
/// entries that become dead. This extra data is provided to every SCC pass so
/// that it can carefully update the manager's traversal as the call graph
/// mutates.
///
/// We also provide support for running function passes within the CGSCC walk,
/// and there we provide automatic update of the call graph, including of the
/// pass manager, to reflect call graph changes that fall out naturally as
/// part of scalar transformations.
///
/// The patterns used to ensure the goals of post-order visitation of the
/// fully refined graph:
///
/// 1) Sink toward the "bottom" as the graph is refined. This means that any
///    iteration continues in some valid post-order sequence after the mutation
///    has altered the structure.
///
/// 2) Enqueue in post-order, including the current entity. If the current
///    entity's shape changes, it and everything after it in post-order needs
///    to be visited to observe that shape.
///
//===----------------------------------------------------------------------===//
#ifndef LLVM_ANALYSIS_CGSCCPASSMANAGER_H
#define LLVM_ANALYSIS_CGSCCPASSMANAGER_H

#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/PriorityWorklist.h"
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/CallSite.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstIterator.h"
#include "llvm/IR/PassManager.h"
#include "llvm/IR/ValueHandle.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/raw_ostream.h"
#include <algorithm>
#include <cassert>
#include <utility>
namespace llvm {

struct CGSCCUpdateResult;
class Module;

// Allow debug logging in this inline function.
#define DEBUG_TYPE "cgscc"

/// Extern template declaration for the analysis set for this IR unit.
extern template class AllAnalysesOn<LazyCallGraph::SCC>;

extern template class AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;

/// The CGSCC analysis manager.
///
/// See the documentation for the AnalysisManager template for detailed
/// documentation. This type serves as a convenient way to refer to this
/// construct in the adaptors and proxies used to integrate this into the larger
/// pass manager infrastructure.
using CGSCCAnalysisManager =
    AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;

// Explicit specialization and instantiation declarations for the pass manager.
// See the comments on the definition of the specialization for details on how
// it differs from the primary template.
template <>
PreservedAnalyses
PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
            CGSCCUpdateResult &>::run(LazyCallGraph::SCC &InitialC,
                                      CGSCCAnalysisManager &AM,
                                      LazyCallGraph &G, CGSCCUpdateResult &UR);
extern template class PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager,
                                  LazyCallGraph &, CGSCCUpdateResult &>;

/// The CGSCC pass manager.
///
/// See the documentation for the PassManager template for details. It runs
/// a sequence of SCC passes over each SCC that the manager is run over. This
/// type serves as a convenient way to refer to this construct.
using CGSCCPassManager =
    PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
                CGSCCUpdateResult &>;

/// An explicit specialization of the require analysis template pass.
template <typename AnalysisT>
struct RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC, CGSCCAnalysisManager,
                           LazyCallGraph &, CGSCCUpdateResult &>
    : PassInfoMixin<RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC,
                                        CGSCCAnalysisManager, LazyCallGraph &,
                                        CGSCCUpdateResult &>> {
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                        LazyCallGraph &CG, CGSCCUpdateResult &) {
    (void)AM.template getResult<AnalysisT>(C, CG);
    return PreservedAnalyses::all();
  }
};

/// A proxy from a \c CGSCCAnalysisManager to a \c Module.
using CGSCCAnalysisManagerModuleProxy =
    InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;

/// We need a specialized result for the \c CGSCCAnalysisManagerModuleProxy so
/// it can have access to the call graph in order to walk all the SCCs when
/// invalidating things.
template <> class CGSCCAnalysisManagerModuleProxy::Result {
public:
  explicit Result(CGSCCAnalysisManager &InnerAM, LazyCallGraph &G)
      : InnerAM(&InnerAM), G(&G) {}

  /// Accessor for the analysis manager.
  CGSCCAnalysisManager &getManager() { return *InnerAM; }

  /// Handler for invalidation of the Module.
  ///
  /// If the proxy analysis itself is preserved, then we assume that the set of
  /// SCCs in the Module hasn't changed. Thus any pointers to SCCs in the
  /// CGSCCAnalysisManager are still valid, and we don't need to call \c clear
  /// on the CGSCCAnalysisManager.
  ///
  /// Regardless of whether this analysis is marked as preserved, all of the
  /// analyses in the \c CGSCCAnalysisManager are potentially invalidated based
  /// on the set of preserved analyses.
  bool invalidate(Module &M, const PreservedAnalyses &PA,
                  ModuleAnalysisManager::Invalidator &Inv);

private:
  CGSCCAnalysisManager *InnerAM;
  LazyCallGraph *G;
};

/// Provide a specialized run method for the \c CGSCCAnalysisManagerModuleProxy
/// so it can pass the lazy call graph to the result.
template <>
CGSCCAnalysisManagerModuleProxy::Result
CGSCCAnalysisManagerModuleProxy::run(Module &M, ModuleAnalysisManager &AM);

// Ensure the \c CGSCCAnalysisManagerModuleProxy is provided as an extern
// template.
extern template class InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;

extern template class OuterAnalysisManagerProxy<
    ModuleAnalysisManager, LazyCallGraph::SCC, LazyCallGraph &>;

/// A proxy from a \c ModuleAnalysisManager to an \c SCC.
using ModuleAnalysisManagerCGSCCProxy =
    OuterAnalysisManagerProxy<ModuleAnalysisManager, LazyCallGraph::SCC,
                              LazyCallGraph &>;
/// Support structure for SCC passes to communicate updates to the call graph
/// back to the CGSCC pass manager infrastructure.
///
/// The CGSCC pass manager runs SCC passes which are allowed to update the call
/// graph and SCC structures. This means the structure the pass manager works
/// on is mutating underneath it. In order to support that, there needs to be
/// careful communication about the precise nature and ramifications of these
/// updates to the pass management infrastructure.
///
/// All SCC passes will have to accept a reference to the management layer's
/// update result struct and use it to reflect the results of any CG updates
/// performed.
///
/// Passes which do not change the call graph structure in any way can just
/// ignore this argument to their run method.
struct CGSCCUpdateResult {
  /// Worklist of the RefSCCs queued for processing.
  ///
  /// When a pass refines the graph and creates new RefSCCs or causes them to
  /// have a different shape or set of component SCCs it should add the RefSCCs
  /// to this worklist so that we visit them in the refined form.
  ///
  /// This worklist is in reverse post-order, as we pop off the back in order
  /// to observe RefSCCs in post-order. When adding RefSCCs, clients should add
  /// them in reverse post-order.
  SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> &RCWorklist;

  /// Worklist of the SCCs queued for processing.
  ///
  /// When a pass refines the graph and creates new SCCs or causes them to have
  /// a different shape or set of component functions it should add the SCCs to
  /// this worklist so that we visit them in the refined form.
  ///
  /// Note that if the SCCs are part of a RefSCC that is added to the \c
  /// RCWorklist, they don't need to be added here as visiting the RefSCC will
  /// be sufficient to re-visit the SCCs within it.
  ///
  /// This worklist is in reverse post-order, as we pop off the back in order
  /// to observe SCCs in post-order. When adding SCCs, clients should add them
  /// in reverse post-order.
  SmallPriorityWorklist<LazyCallGraph::SCC *, 1> &CWorklist;

  /// The set of invalidated RefSCCs which should be skipped if they are found
  /// in \c RCWorklist.
  ///
  /// This is used to quickly prune out RefSCCs when they get deleted and
  /// happen to already be on the worklist. We use this primarily to avoid
  /// scanning the list and removing entries from it.
  SmallPtrSetImpl<LazyCallGraph::RefSCC *> &InvalidatedRefSCCs;

  /// The set of invalidated SCCs which should be skipped if they are found
  /// in \c CWorklist.
  ///
  /// This is used to quickly prune out SCCs when they get deleted and happen
  /// to already be on the worklist. We use this primarily to avoid scanning
  /// the list and removing entries from it.
  SmallPtrSetImpl<LazyCallGraph::SCC *> &InvalidatedSCCs;

  /// If non-null, the updated current \c RefSCC being processed.
  ///
  /// This is set when a graph refinement takes place and the "current" point
  /// in the graph moves "down" or earlier in the post-order walk. This will
  /// often cause the "current" RefSCC to be a newly created RefSCC object and
  /// the old one to be added to the above worklist. When that happens, this
  /// pointer is non-null and can be used to continue processing the "top" of
  /// the post-order walk.
  LazyCallGraph::RefSCC *UpdatedRC;

  /// If non-null, the updated current \c SCC being processed.
  ///
  /// This is set when a graph refinement takes place and the "current" point
  /// in the graph moves "down" or earlier in the post-order walk. This will
  /// often cause the "current" SCC to be a newly created SCC object and the
  /// old one to be added to the above worklist. When that happens, this
  /// pointer is non-null and can be used to continue processing the "top" of
  /// the post-order walk.
  LazyCallGraph::SCC *UpdatedC;

  /// A hacky area where the inliner can retain history about inlining
  /// decisions that mutated the call graph's SCC structure in order to avoid
  /// infinite inlining. See the comments in the inliner's CG update logic.
  ///
  /// FIXME: Keeping this here seems like a big layering issue, we should look
  /// for a better technique.
  SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
      &InlinedInternalEdges;
};
/// The core module pass which does a post-order walk of the SCCs and
/// runs a CGSCC pass over each one.
///
/// Designed to allow composition of a CGSCCPass(Manager) and
/// a ModulePassManager. Note that this pass must be run with a module analysis
/// manager as it uses the LazyCallGraph analysis. It will also run the
/// \c CGSCCAnalysisManagerModuleProxy analysis prior to running the CGSCC
/// pass over the module to enable a \c FunctionAnalysisManager to be used
/// within this run safely.
template <typename CGSCCPassT>
class ModuleToPostOrderCGSCCPassAdaptor
    : public PassInfoMixin<ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>> {
public:
  explicit ModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass)
      : Pass(std::move(Pass)) {}
  // We have to explicitly define all the special member functions because MSVC
  // refuses to generate them.
  ModuleToPostOrderCGSCCPassAdaptor(
      const ModuleToPostOrderCGSCCPassAdaptor &Arg)
      : Pass(Arg.Pass) {}

  ModuleToPostOrderCGSCCPassAdaptor(ModuleToPostOrderCGSCCPassAdaptor &&Arg)
      : Pass(std::move(Arg.Pass)) {}
  friend void swap(ModuleToPostOrderCGSCCPassAdaptor &LHS,
                   ModuleToPostOrderCGSCCPassAdaptor &RHS) {
    std::swap(LHS.Pass, RHS.Pass);
  }

  ModuleToPostOrderCGSCCPassAdaptor &
  operator=(ModuleToPostOrderCGSCCPassAdaptor RHS) {
    swap(*this, RHS);
    return *this;
  }
  /// Runs the CGSCC pass across every SCC in the module.
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
    // Setup the CGSCC analysis manager from its proxy.
    CGSCCAnalysisManager &CGAM =
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();

    // Get the call graph for this module.
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);

    // We keep worklists to allow us to push more work onto the pass manager as
    // the passes are run.
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;

    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
    // iterating off the worklists.
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;

    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
        InlinedInternalEdges;

    CGSCCUpdateResult UR = {RCWorklist,    CWorklist, InvalidRefSCCSet,
                            InvalidSCCSet, nullptr,   nullptr,
                            InlinedInternalEdges};

    PreservedAnalyses PA = PreservedAnalyses::all();
    CG.buildRefSCCs();
    for (auto RCI = CG.postorder_ref_scc_begin(),
              RCE = CG.postorder_ref_scc_end();
         RCI != RCE;) {
      assert(RCWorklist.empty() &&
             "Should always start with an empty RefSCC worklist");
      // The postorder_ref_sccs range we are walking is lazily constructed, so
      // we only push the first one onto the worklist. The worklist allows us
      // to capture *new* RefSCCs created during transformations.
      //
      // We really want to form RefSCCs lazily because that makes them cheaper
      // to update as the program is simplified and allows us to have greater
      // cache locality as forming a RefSCC touches all the parts of all the
      // functions within that RefSCC.
      //
      // We also eagerly increment the iterator to the next position because
      // the CGSCC passes below may delete the current RefSCC.
      RCWorklist.insert(&*RCI++);

      do {
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
        if (InvalidRefSCCSet.count(RC)) {
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
          continue;
        }

        assert(CWorklist.empty() &&
               "Should always start with an empty SCC worklist");

        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
                          << "\n");

        // Push the initial SCCs in reverse post-order as we'll pop off the
        // back and so see this in post-order.
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
          CWorklist.insert(&C);

        do {
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
          // other RefSCCs in the worklist. The invalid ones are dead and the
          // other RefSCCs should be queued above, so we just need to skip both
          // scenarios here.
          if (InvalidSCCSet.count(C)) {
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
            continue;
          }
          if (&C->getOuterRefSCC() != RC) {
            LLVM_DEBUG(dbgs()
                       << "Skipping an SCC that is now part of some other "
                          "RefSCC...\n");
            continue;
          }

          do {
            // Check that we didn't miss any update scenario.
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
            assert(&C->getOuterRefSCC() == RC &&
                   "Processing an SCC in a different RefSCC!");

            UR.UpdatedRC = nullptr;
            UR.UpdatedC = nullptr;
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);

            // Update the SCC and RefSCC if necessary.
            C = UR.UpdatedC ? UR.UpdatedC : C;
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;

            // If the CGSCC pass wasn't able to provide a valid updated SCC,
            // the current SCC may simply need to be skipped if invalid.
            if (UR.InvalidatedSCCs.count(C)) {
              LLVM_DEBUG(dbgs()
                         << "Skipping invalidated root or island SCC!\n");
              break;
            }
            // Check that we didn't miss any update scenario.
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");

            // We handle invalidating the CGSCC analysis manager's information
            // for the (potentially updated) SCC here. Note that any other SCCs
            // whose structure has changed should have been invalidated by
            // whatever was updating the call graph. This SCC gets invalidated
            // late as it contains the nodes that were actively being
            // processed.
            CGAM.invalidate(*C, PassPA);

            // Then intersect the preserved set so that invalidation of module
            // analyses will eventually occur when the module pass completes.
            PA.intersect(std::move(PassPA));

            // The pass may have restructured the call graph and refined the
            // current SCC and/or RefSCC. We need to update our current SCC and
            // RefSCC pointers to follow these. Also, when the current SCC is
            // refined, re-run the SCC pass over the newly refined SCC in order
            // to observe the most precise SCC model available. This inherently
            // cannot cycle excessively as it only happens when we split SCCs
            // apart, at most converging on a DAG of single nodes.
            // FIXME: If we ever start having RefSCC passes, we'll want to
            // iterate there too.
            if (UR.UpdatedC)
              LLVM_DEBUG(dbgs()
                         << "Re-running SCC passes after a refinement of the "
                            "current SCC: "
                         << *UR.UpdatedC << "\n");

            // Note that both `C` and `RC` may at this point refer to deleted,
            // invalid SCC and RefSCCs respectively. But we will short circuit
            // the processing when we check them in the loop above.
          } while (UR.UpdatedC);
        } while (!CWorklist.empty());

        // We only need to keep internal inlined edge information within
        // a RefSCC, clear it to save on space and let the next time we visit
        // any of these functions have a fresh start.
        InlinedInternalEdges.clear();
      } while (!RCWorklist.empty());
    }

    // By definition we preserve the call graph, all SCC analyses, and the
    // analysis proxies by handling them above and in any nested pass managers.
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
    PA.preserve<LazyCallGraphAnalysis>();
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
    return PA;
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
199
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
199
    // Setup the CGSCC analysis manager from its proxy.
344
199
    CGSCCAnalysisManager &CGAM =
345
199
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
199
347
199
    // Get the call graph for this module.
348
199
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
199
350
199
    // We keep worklists to allow us to push more work onto the pass manager as
351
199
    // the passes are run.
352
199
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
199
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
199
355
199
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
199
    // iterating off the worklists.
357
199
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
199
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
199
360
199
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
199
        InlinedInternalEdges;
362
199
363
199
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
199
                            InvalidSCCSet,       nullptr,   nullptr,
365
199
                            InlinedInternalEdges};
366
199
367
199
    PreservedAnalyses PA = PreservedAnalyses::all();
368
199
    CG.buildRefSCCs();
369
199
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
199
              RCE = CG.postorder_ref_scc_end();
371
1.02k
         RCI != RCE;) {
372
827
      assert(RCWorklist.empty() &&
373
827
             "Should always start with an empty RefSCC worklist");
374
827
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
827
      // we only push the first one onto the worklist. The worklist allows us
376
827
      // to capture *new* RefSCCs created during transformations.
377
827
      //
378
827
      // We really want to form RefSCCs lazily because that makes them cheaper
379
827
      // to update as the program is simplified and allows us to have greater
380
827
      // cache locality as forming a RefSCC touches all the parts of all the
381
827
      // functions within that RefSCC.
382
827
      //
383
827
      // We also eagerly increment the iterator to the next position because
384
827
      // the CGSCC passes below may delete the current RefSCC.
385
827
      RCWorklist.insert(&*RCI++);
386
827
387
851
      do {
388
851
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
851
        if (InvalidRefSCCSet.count(RC)) {
390
4
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
4
          continue;
392
4
        }
393
847
394
847
        assert(CWorklist.empty() &&
395
847
               "Should always start with an empty SCC worklist");
396
847
397
847
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
847
                          << "\n");
399
847
400
847
        // Push the initial SCCs in reverse post-order as we'll pop off the
401
847
        // back and so see this in post-order.
402
847
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
880
          CWorklist.insert(&C);
404
847
405
918
        do {
406
918
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
918
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
918
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
918
          // other RefSCCs should be queued above, so we just need to skip both
410
918
          // scenarios here.
411
918
          if (InvalidSCCSet.count(C)) {
412
5
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
5
            continue;
414
5
          }
415
913
          if (&C->getOuterRefSCC() != RC) {
416
19
            LLVM_DEBUG(dbgs()
417
19
                       << "Skipping an SCC that is now part of some other "
418
19
                          "RefSCC...\n");
419
19
            continue;
420
19
          }
421
894
422
926
          do {
423
926
            // Check that we didn't miss any update scenario.
424
926
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
425
926
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
426
926
            assert(&C->getOuterRefSCC() == RC &&
427
926
                   "Processing an SCC in a different RefSCC!");
428
926
429
926
            UR.UpdatedRC = nullptr;
430
926
            UR.UpdatedC = nullptr;
431
926
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
432
926
433
926
            // Update the SCC and RefSCC if necessary.
434
926
            C = UR.UpdatedC ? UR.UpdatedC : C;
435
926
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
436
926
437
926
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
438
926
            // the current SCC may simply need to be skipped if invalid.
439
926
            if (UR.InvalidatedSCCs.count(C)) {
440
2
              LLVM_DEBUG(dbgs()
441
2
                         << "Skipping invalidated root or island SCC!\n");
442
2
              break;
443
2
            }
444
924
            // Check that we didn't miss any update scenario.
445
924
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
446
924
447
924
            // We handle invalidating the CGSCC analysis manager's information
448
924
            // for the (potentially updated) SCC here. Note that any other SCCs
449
924
            // whose structure has changed should have been invalidated by
450
924
            // whatever was updating the call graph. This SCC gets invalidated
451
924
            // late as it contains the nodes that were actively being
452
924
            // processed.
453
924
            CGAM.invalidate(*C, PassPA);
454
924
455
924
            // Then intersect the preserved set so that invalidation of module
456
924
            // analyses will eventually occur when the module pass completes.
457
924
            PA.intersect(std::move(PassPA));
458
924
459
924
            // The pass may have restructured the call graph and refined the
460
924
            // current SCC and/or RefSCC. We need to update our current SCC and
461
924
            // RefSCC pointers to follow these. Also, when the current SCC is
462
924
            // refined, re-run the SCC pass over the newly refined SCC in order
463
924
            // to observe the most precise SCC model available. This inherently
464
924
            // cannot cycle excessively as it only happens when we split SCCs
465
924
            // apart, at most converging on a DAG of single nodes.
466
924
            // FIXME: If we ever start having RefSCC passes, we'll want to
467
924
            // iterate there too.
468
924
            if (UR.UpdatedC)
469
924
              LLVM_DEBUG(dbgs()
470
924
                         << "Re-running SCC passes after a refinement of the "
471
924
                            "current SCC: "
472
924
                         << *UR.UpdatedC << "\n");
473
924
474
924
            // Note that both `C` and `RC` may at this point refer to deleted,
475
924
            // invalid SCC and RefSCCs respectively. But we will short circuit
476
924
            // the processing when we check them in the loop above.
477
924
          } while (UR.UpdatedC);
478
918
        } while (!CWorklist.empty());
479
847
480
847
        // We only need to keep internal inlined edge information within
481
847
        // a RefSCC, so clear it to save space and give the next visit to
482
847
        // any of these functions a fresh start.
483
847
        InlinedInternalEdges.clear();
484
851
      } while (!RCWorklist.empty());
485
827
    }
486
199
487
199
    // By definition we preserve the call graph, all SCC analyses, and the
488
199
    // analysis proxies by handling them above and in any nested pass managers.
489
199
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
490
199
    PA.preserve<LazyCallGraphAnalysis>();
491
199
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
492
199
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
493
199
    return PA;
494
199
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
65
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
65
    // Set up the CGSCC analysis manager from its proxy.
344
65
    CGSCCAnalysisManager &CGAM =
345
65
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
65
347
65
    // Get the call graph for this module.
348
65
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
65
350
65
    // We keep worklists to allow us to push more work onto the pass manager as
351
65
    // the passes are run.
352
65
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
65
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
65
355
65
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
65
    // iterating off the worklists.
357
65
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
65
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
65
360
65
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
65
        InlinedInternalEdges;
362
65
363
65
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
65
                            InvalidSCCSet,       nullptr,   nullptr,
365
65
                            InlinedInternalEdges};
366
65
367
65
    PreservedAnalyses PA = PreservedAnalyses::all();
368
65
    CG.buildRefSCCs();
369
65
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
65
              RCE = CG.postorder_ref_scc_end();
371
192
         RCI != RCE;) {
372
127
      assert(RCWorklist.empty() &&
373
127
             "Should always start with an empty RefSCC worklist");
374
127
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
127
      // we only push the first one onto the worklist. The worklist allows us
376
127
      // to capture *new* RefSCCs created during transformations.
377
127
      //
378
127
      // We really want to form RefSCCs lazily because that makes them cheaper
379
127
      // to update as the program is simplified and allows us to have greater
380
127
      // cache locality as forming a RefSCC touches all the parts of all the
381
127
      // functions within that RefSCC.
382
127
      //
383
127
      // We also eagerly increment the iterator to the next position because
384
127
      // the CGSCC passes below may delete the current RefSCC.
385
127
      RCWorklist.insert(&*RCI++);
386
127
387
128
      do {
388
128
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
128
        if (InvalidRefSCCSet.count(RC)) {
390
0
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
0
          continue;
392
0
        }
393
128
394
128
        assert(CWorklist.empty() &&
395
128
               "Should always start with an empty SCC worklist");
396
128
397
128
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
128
                          << "\n");
399
128
400
128
        // Push the initial SCCs in reverse post-order as we'll pop off the
401
128
        // back and so see this in post-order.
402
128
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
128
          CWorklist.insert(&C);
404
128
405
129
        do {
406
129
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
129
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
129
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
129
          // other RefSCCs should be queued above, so we just need to skip both
410
129
          // scenarios here.
411
129
          if (InvalidSCCSet.count(C)) {
412
0
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
0
            continue;
414
0
          }
415
129
          if (&C->getOuterRefSCC() != RC) {
416
1
            LLVM_DEBUG(dbgs()
417
1
                       << "Skipping an SCC that is now part of some other "
418
1
                          "RefSCC...\n");
419
1
            continue;
420
1
          }
421
128
422
129
          do {
423
129
            // Check that we didn't miss any update scenario.
424
129
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
425
129
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
426
129
            assert(&C->getOuterRefSCC() == RC &&
427
129
                   "Processing an SCC in a different RefSCC!");
428
129
429
129
            UR.UpdatedRC = nullptr;
430
129
            UR.UpdatedC = nullptr;
431
129
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
432
129
433
129
            // Update the SCC and RefSCC if necessary.
434
129
            C = UR.UpdatedC ? UR.UpdatedC : C;
435
129
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
436
129
437
129
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
438
129
            // the current SCC may simply need to be skipped if invalid.
439
129
            if (UR.InvalidatedSCCs.count(C)) {
440
0
              LLVM_DEBUG(dbgs()
441
0
                         << "Skipping invalidated root or island SCC!\n");
442
0
              break;
443
0
            }
444
129
            // Check that we didn't miss any update scenario.
445
129
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
446
129
447
129
            // We handle invalidating the CGSCC analysis manager's information
448
129
            // for the (potentially updated) SCC here. Note that any other SCCs
449
129
            // whose structure has changed should have been invalidated by
450
129
            // whatever was updating the call graph. This SCC gets invalidated
451
129
            // late as it contains the nodes that were actively being
452
129
            // processed.
453
129
            CGAM.invalidate(*C, PassPA);
454
129
455
129
            // Then intersect the preserved set so that invalidation of module
456
129
            // analyses will eventually occur when the module pass completes.
457
129
            PA.intersect(std::move(PassPA));
458
129
459
129
            // The pass may have restructured the call graph and refined the
460
129
            // current SCC and/or RefSCC. We need to update our current SCC and
461
129
            // RefSCC pointers to follow these. Also, when the current SCC is
462
129
            // refined, re-run the SCC pass over the newly refined SCC in order
463
129
            // to observe the most precise SCC model available. This inherently
464
129
            // cannot cycle excessively as it only happens when we split SCCs
465
129
            // apart, at most converging on a DAG of single nodes.
466
129
            // FIXME: If we ever start having RefSCC passes, we'll want to
467
129
            // iterate there too.
468
129
            if (UR.UpdatedC)
469
129
              LLVM_DEBUG(dbgs()
470
129
                         << "Re-running SCC passes after a refinement of the "
471
129
                            "current SCC: "
472
129
                         << *UR.UpdatedC << "\n");
473
129
474
129
            // Note that both `C` and `RC` may at this point refer to deleted,
475
129
            // invalid SCC and RefSCCs respectively. But we will short circuit
476
129
            // the processing when we check them in the loop above.
477
129
          } while (UR.UpdatedC);
478
129
        } while (!CWorklist.empty());
479
128
480
128
        // We only need to keep internal inlined edge information within
481
128
        // a RefSCC, so clear it to save space and give the next visit to
482
128
        // any of these functions a fresh start.
483
128
        InlinedInternalEdges.clear();
484
128
      } while (!RCWorklist.empty());
485
127
    }
486
65
487
65
    // By definition we preserve the call graph, all SCC analyses, and the
488
65
    // analysis proxies by handling them above and in any nested pass managers.
489
65
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
490
65
    PA.preserve<LazyCallGraphAnalysis>();
491
65
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
492
65
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
493
65
    return PA;
494
65
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
27
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
27
    // Set up the CGSCC analysis manager from its proxy.
344
27
    CGSCCAnalysisManager &CGAM =
345
27
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
27
347
27
    // Get the call graph for this module.
348
27
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
27
350
27
    // We keep worklists to allow us to push more work onto the pass manager as
351
27
    // the passes are run.
352
27
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
27
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
27
355
27
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
27
    // iterating off the worklists.
357
27
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
27
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
27
360
27
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
27
        InlinedInternalEdges;
362
27
363
27
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
27
                            InvalidSCCSet,       nullptr,   nullptr,
365
27
                            InlinedInternalEdges};
366
27
367
27
    PreservedAnalyses PA = PreservedAnalyses::all();
368
27
    CG.buildRefSCCs();
369
27
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
27
              RCE = CG.postorder_ref_scc_end();
371
42
         RCI != RCE;) {
372
15
      assert(RCWorklist.empty() &&
373
15
             "Should always start with an empty RefSCC worklist");
374
15
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
15
      // we only push the first one onto the worklist. The worklist allows us
376
15
      // to capture *new* RefSCCs created during transformations.
377
15
      //
378
15
      // We really want to form RefSCCs lazily because that makes them cheaper
379
15
      // to update as the program is simplified and allows us to have greater
380
15
      // cache locality as forming a RefSCC touches all the parts of all the
381
15
      // functions within that RefSCC.
382
15
      //
383
15
      // We also eagerly increment the iterator to the next position because
384
15
      // the CGSCC passes below may delete the current RefSCC.
385
15
      RCWorklist.insert(&*RCI++);
386
15
387
15
      do {
388
15
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
15
        if (InvalidRefSCCSet.count(RC)) {
390
0
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
0
          continue;
392
0
        }
393
15
394
15
        assert(CWorklist.empty() &&
395
15
               "Should always start with an empty SCC worklist");
396
15
397
15
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
15
                          << "\n");
399
15
400
15
        // Push the initial SCCs in reverse post-order as we'll pop off the
401
15
        // back and so see this in post-order.
402
15
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
15
          CWorklist.insert(&C);
404
15
405
15
        do {
406
15
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
15
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
15
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
15
          // other RefSCCs should be queued above, so we just need to skip both
410
15
          // scenarios here.
411
15
          if (InvalidSCCSet.count(C)) {
412
0
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
0
            continue;
414
0
          }
415
15
          if (&C->getOuterRefSCC() != RC) {
416
0
            LLVM_DEBUG(dbgs()
417
0
                       << "Skipping an SCC that is now part of some other "
418
0
                          "RefSCC...\n");
419
0
            continue;
420
0
          }
421
15
422
15
          do {
423
15
            // Check that we didn't miss any update scenario.
424
15
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
425
15
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
426
15
            assert(&C->getOuterRefSCC() == RC &&
427
15
                   "Processing an SCC in a different RefSCC!");
428
15
429
15
            UR.UpdatedRC = nullptr;
430
15
            UR.UpdatedC = nullptr;
431
15
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
432
15
433
15
            // Update the SCC and RefSCC if necessary.
434
15
            C = UR.UpdatedC ? UR.UpdatedC : C;
435
15
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
436
15
437
15
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
438
15
            // the current SCC may simply need to be skipped if invalid.
439
15
            if (UR.InvalidatedSCCs.count(C)) {
440
0
              LLVM_DEBUG(dbgs()
441
0
                         << "Skipping invalidated root or island SCC!\n");
442
0
              break;
443
0
            }
444
15
            // Check that we didn't miss any update scenario.
445
15
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
446
15
447
15
            // We handle invalidating the CGSCC analysis manager's information
448
15
            // for the (potentially updated) SCC here. Note that any other SCCs
449
15
            // whose structure has changed should have been invalidated by
450
15
            // whatever was updating the call graph. This SCC gets invalidated
451
15
            // late as it contains the nodes that were actively being
452
15
            // processed.
453
15
            CGAM.invalidate(*C, PassPA);
454
15
455
15
            // Then intersect the preserved set so that invalidation of module
456
15
            // analyses will eventually occur when the module pass completes.
457
15
            PA.intersect(std::move(PassPA));
458
15
459
15
            // The pass may have restructured the call graph and refined the
460
15
            // current SCC and/or RefSCC. We need to update our current SCC and
461
15
            // RefSCC pointers to follow these. Also, when the current SCC is
462
15
            // refined, re-run the SCC pass over the newly refined SCC in order
463
15
            // to observe the most precise SCC model available. This inherently
464
15
            // cannot cycle excessively as it only happens when we split SCCs
465
15
            // apart, at most converging on a DAG of single nodes.
466
15
            // FIXME: If we ever start having RefSCC passes, we'll want to
467
15
            // iterate there too.
468
15
            if (UR.UpdatedC)
469
15
              LLVM_DEBUG(dbgs()
470
15
                         << "Re-running SCC passes after a refinement of the "
471
15
                            "current SCC: "
472
15
                         << *UR.UpdatedC << "\n");
473
15
474
15
            // Note that both `C` and `RC` may at this point refer to deleted,
475
15
            // invalid SCC and RefSCCs respectively. But we will short circuit
476
15
            // the processing when we check them in the loop above.
477
15
          } while (UR.UpdatedC);
478
15
        } while (!CWorklist.empty());
479
15
480
15
        // We only need to keep internal inlined edge information within
481
15
        // a RefSCC, so clear it to save space and give the next visit to
482
15
        // any of these functions a fresh start.
483
15
        InlinedInternalEdges.clear();
484
15
      } while (!RCWorklist.empty());
485
15
    }
486
27
487
27
    // By definition we preserve the call graph, all SCC analyses, and the
488
27
    // analysis proxies by handling them above and in any nested pass managers.
489
27
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
490
27
    PA.preserve<LazyCallGraphAnalysis>();
491
27
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
492
27
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
493
27
    return PA;
494
27
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
342
13
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
343
13
    // Set up the CGSCC analysis manager from its proxy.
344
13
    CGSCCAnalysisManager &CGAM =
345
13
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
346
13
347
13
    // Get the call graph for this module.
348
13
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
349
13
350
13
    // We keep worklists to allow us to push more work onto the pass manager as
351
13
    // the passes are run.
352
13
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
353
13
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
354
13
355
13
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
356
13
    // iterating off the worklists.
357
13
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
358
13
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
359
13
360
13
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
361
13
        InlinedInternalEdges;
362
13
363
13
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
364
13
                            InvalidSCCSet,       nullptr,   nullptr,
365
13
                            InlinedInternalEdges};
366
13
367
13
    PreservedAnalyses PA = PreservedAnalyses::all();
368
13
    CG.buildRefSCCs();
369
13
    for (auto RCI = CG.postorder_ref_scc_begin(),
370
13
              RCE = CG.postorder_ref_scc_end();
371
20
         RCI != RCE;) {
372
7
      assert(RCWorklist.empty() &&
373
7
             "Should always start with an empty RefSCC worklist");
374
7
      // The postorder_ref_sccs range we are walking is lazily constructed, so
375
7
      // we only push the first one onto the worklist. The worklist allows us
376
7
      // to capture *new* RefSCCs created during transformations.
377
7
      //
378
7
      // We really want to form RefSCCs lazily because that makes them cheaper
379
7
      // to update as the program is simplified and allows us to have greater
380
7
      // cache locality as forming a RefSCC touches all the parts of all the
381
7
      // functions within that RefSCC.
382
7
      //
383
7
      // We also eagerly increment the iterator to the next position because
384
7
      // the CGSCC passes below may delete the current RefSCC.
385
7
      RCWorklist.insert(&*RCI++);
386
7
387
7
      do {
388
7
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
389
7
        if (InvalidRefSCCSet.count(RC)) {
390
0
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
391
0
          continue;
392
0
        }
393
7
394
7
        assert(CWorklist.empty() &&
395
7
               "Should always start with an empty SCC worklist");
396
7
397
7
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
398
7
                          << "\n");
399
7
400
7
        // Push the initial SCCs in reverse post-order as we'll pop off the
401
7
        // back and so see this in post-order.
402
7
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
403
7
          CWorklist.insert(&C);
404
7
405
7
        do {
406
7
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
407
7
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
408
7
          // other RefSCCs in the worklist. The invalid ones are dead and the
409
7
          // other RefSCCs should be queued above, so we just need to skip both
410
7
          // scenarios here.
411
7
          if (InvalidSCCSet.count(C)) {
412
0
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
413
0
            continue;
414
0
          }
415
7
          if (&C->getOuterRefSCC() != RC) {
416
0
            LLVM_DEBUG(dbgs()
417
0
                       << "Skipping an SCC that is now part of some other "
418
0
                          "RefSCC...\n");
419
0
            continue;
420
0
          }
421
7
422
7
          do {
423
7
            // Check that we didn't miss any update scenario.
424
7
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
425
7
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
426
7
            assert(&C->getOuterRefSCC() == RC &&
427
7
                   "Processing an SCC in a different RefSCC!");
428
7
429
7
            UR.UpdatedRC = nullptr;
430
7
            UR.UpdatedC = nullptr;
431
7
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
432
7
433
7
            // Update the SCC and RefSCC if necessary.
434
7
            C = UR.UpdatedC ? UR.UpdatedC : C;
435
7
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
436
7
437
7
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
438
7
            // the current SCC may simply need to be skipped if invalid.
439
7
            if (UR.InvalidatedSCCs.count(C)) {
440
0
              LLVM_DEBUG(dbgs()
441
0
                         << "Skipping invalidated root or island SCC!\n");
442
0
              break;
443
0
            }
444
7
            // Check that we didn't miss any update scenario.
445
7
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
446
7
447
7
            // We handle invalidating the CGSCC analysis manager's information
448
7
            // for the (potentially updated) SCC here. Note that any other SCCs
449
7
            // whose structure has changed should have been invalidated by
450
7
            // whatever was updating the call graph. This SCC gets invalidated
451
7
            // late as it contains the nodes that were actively being
452
7
            // processed.
453
7
            CGAM.invalidate(*C, PassPA);
454
7
455
7
            // Then intersect the preserved set so that invalidation of module
456
7
            // analyses will eventually occur when the module pass completes.
457
7
            PA.intersect(std::move(PassPA));
458
7
459
7
            // The pass may have restructured the call graph and refined the
460
7
            // current SCC and/or RefSCC. We need to update our current SCC and
461
7
            // RefSCC pointers to follow these. Also, when the current SCC is
462
7
            // refined, re-run the SCC pass over the newly refined SCC in order
463
7
            // to observe the most precise SCC model available. This inherently
464
7
            // cannot cycle excessively as it only happens when we split SCCs
465
7
            // apart, at most converging on a DAG of single nodes.
466
7
            // FIXME: If we ever start having RefSCC passes, we'll want to
467
7
            // iterate there too.
468
7
            if (UR.UpdatedC)
469
7
              LLVM_DEBUG(dbgs()
470
7
                         << "Re-running SCC passes after a refinement of the "
471
7
                            "current SCC: "
472
7
                         << *UR.UpdatedC << "\n");
473
7
474
7
            // Note that both `C` and `RC` may at this point refer to deleted,
475
7
            // invalid SCC and RefSCCs respectively. But we will short circuit
476
7
            // the processing when we check them in the loop above.
477
7
          } while (UR.UpdatedC);
478
7
        } while (!CWorklist.empty());
479
7
480
7
        // We only need to keep internal inlined edge information within
481
7
        // a RefSCC, so clear it to save space and give the next visit to
482
7
        // any of these functions a fresh start.
483
7
        InlinedInternalEdges.clear();
484
7
      } while (!RCWorklist.empty());
485
7
    }
486
13
487
13
    // By definition we preserve the call graph, all SCC analyses, and the
488
13
    // analysis proxies by handling them above and in any nested pass managers.
489
13
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
490
13
    PA.preserve<LazyCallGraphAnalysis>();
491
13
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
492
13
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
493
13
    return PA;
494
13
  }
495
496
private:
497
  CGSCCPassT Pass;
498
};
499
500
/// A function to deduce a function pass type and wrap it in the
501
/// templated adaptor.
502
template <typename CGSCCPassT>
503
ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>
504
304
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
505
304
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
506
304
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >(llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&>)
Line
Count
Source
504
199
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
505
199
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
506
199
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > > llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >(llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >)
Line
Count
Source
504
65
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
505
65
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
506
65
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass> llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>(llvm::PostOrderFunctionAttrsPass)
Line
Count
Source
504
27
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
505
27
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
506
27
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass> llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>(llvm::InlinerPass)
Line
Count
Source
504
13
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
505
13
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
506
13
}
507
508
/// A proxy from a \c FunctionAnalysisManager to an \c SCC.
509
///
510
/// When a module pass runs and triggers invalidation, both the CGSCC and
511
/// Function analysis manager proxies on the module get an invalidation event.
512
/// We don't want to fully duplicate responsibility for most of the
513
/// invalidation logic. Instead, this layer is only responsible for SCC-local
514
/// invalidation events. We work with the module's FunctionAnalysisManager to
515
/// invalidate function analyses.
516
class FunctionAnalysisManagerCGSCCProxy
517
    : public AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy> {
518
public:
519
  class Result {
520
  public:
521
1.10k
    explicit Result(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
522
523
    /// Accessor for the analysis manager.
524
2.10k
    FunctionAnalysisManager &getManager() { return *FAM; }
525
526
    bool invalidate(LazyCallGraph::SCC &C, const PreservedAnalyses &PA,
527
                    CGSCCAnalysisManager::Invalidator &Inv);
528
529
  private:
530
    FunctionAnalysisManager *FAM;
531
  };
532
533
  /// Computes the \c FunctionAnalysisManager and stores it in the result proxy.
534
  Result run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM, LazyCallGraph &);
535
536
private:
537
  friend AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy>;
538
539
  static AnalysisKey Key;
540
};
541
542
extern template class OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;
543
544
/// A proxy from a \c CGSCCAnalysisManager to a \c Function.
545
using CGSCCAnalysisManagerFunctionProxy =
546
    OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;
547
548
/// Helper to update the call graph after running a function pass.
549
///
550
/// Function passes can only mutate the call graph in specific ways. This
551
/// routine provides a helper that updates the call graph in those ways
552
/// including returning whether any changes were made and populating a CG
553
/// update result struct for the overall CGSCC walk.
554
LazyCallGraph::SCC &updateCGAndAnalysisManagerForFunctionPass(
555
    LazyCallGraph &G, LazyCallGraph::SCC &C, LazyCallGraph::Node &N,
556
    CGSCCAnalysisManager &AM, CGSCCUpdateResult &UR);
557
558
/// Adaptor that maps from an SCC to its functions.
559
///
560
/// Designed to allow composition of a FunctionPass(Manager) and
561
/// a CGSCCPassManager. Note that if this pass is constructed with a pointer
562
/// to a \c CGSCCAnalysisManager it will run the
563
/// \c FunctionAnalysisManagerCGSCCProxy analysis prior to running the function
564
/// pass over the SCC to enable a \c FunctionAnalysisManager to be used
565
/// within this run safely.
566
template <typename FunctionPassT>
567
class CGSCCToFunctionPassAdaptor
568
    : public PassInfoMixin<CGSCCToFunctionPassAdaptor<FunctionPassT>> {
569
public:
570
  explicit CGSCCToFunctionPassAdaptor(FunctionPassT Pass)
571
112
      : Pass(std::move(Pass)) {}
572
573
  // We have to explicitly define all the special member functions because MSVC
574
  // refuses to generate them.
575
  CGSCCToFunctionPassAdaptor(const CGSCCToFunctionPassAdaptor &Arg)
576
      : Pass(Arg.Pass) {}
577
578
  CGSCCToFunctionPassAdaptor(CGSCCToFunctionPassAdaptor &&Arg)
579
224
      : Pass(std::move(Arg.Pass)) {}
580
581
  friend void swap(CGSCCToFunctionPassAdaptor &LHS,
582
                   CGSCCToFunctionPassAdaptor &RHS) {
583
    std::swap(LHS.Pass, RHS.Pass);
584
  }
585
586
  CGSCCToFunctionPassAdaptor &operator=(CGSCCToFunctionPassAdaptor RHS) {
587
    swap(*this, RHS);
588
    return *this;
589
  }
590
591
  /// Runs the function pass across every function in the SCC.
592
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
593
363
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
594
363
    // Setup the function analysis manager from its proxy.
595
363
    FunctionAnalysisManager &FAM =
596
363
        AM.getResult<FunctionAnalysisManagerCGSCCProxy>(C, CG).getManager();
597
363
598
363
    SmallVector<LazyCallGraph::Node *, 4> Nodes;
599
363
    for (LazyCallGraph::Node &N : C)
600
456
      Nodes.push_back(&N);
601
363
602
363
    // The SCC may get split while we are optimizing functions due to deleting
603
363
    // edges. If this happens, the current SCC can shift, so keep track of
604
363
    // a pointer we can overwrite.
605
363
    LazyCallGraph::SCC *CurrentC = &C;
606
363
607
363
    LLVM_DEBUG(dbgs() << "Running function passes across an SCC: " << C
608
363
                      << "\n");
609
363
610
363
    PreservedAnalyses PA = PreservedAnalyses::all();
611
456
    for (LazyCallGraph::Node *N : Nodes) {
612
456
      // Skip nodes from other SCCs. These may have been split out during
613
456
      // processing. We'll eventually visit those SCCs and pick up the nodes
614
456
      // there.
615
456
      if (CG.lookupSCC(*N) != CurrentC)
616
40
        continue;
617
416
618
416
      PreservedAnalyses PassPA = Pass.run(N->getFunction(), FAM);
619
416
620
416
      // We know that the function pass couldn't have invalidated any other
621
416
      // function's analyses (that's the contract of a function pass), so
622
416
      // directly handle the function analysis manager's invalidation here.
623
416
      FAM.invalidate(N->getFunction(), PassPA);
624
416
625
416
      // Then intersect the preserved set so that invalidation of module
626
416
      // analyses will eventually occur when the module pass completes.
627
416
      PA.intersect(std::move(PassPA));
628
416
629
416
      // If the call graph hasn't been preserved, update it based on this
630
416
      // function pass. This may also update the current SCC to point to
631
416
      // a smaller, more refined SCC.
632
416
      auto PAC = PA.getChecker<LazyCallGraphAnalysis>();
633
416
      if (!PAC.preserved() && !PAC.preservedSet<AllAnalysesOn<Module>>()) {
634
159
        CurrentC = &updateCGAndAnalysisManagerForFunctionPass(CG, *CurrentC, *N,
635
159
                                                              AM, UR);
636
159
        assert(
637
159
            CG.lookupSCC(*N) == CurrentC &&
638
159
            "Current SCC not updated to the SCC containing the current node!");
639
159
      }
640
416
    }
641
363
642
363
    // By definition we preserve the proxy. And we preserve all analyses on
643
363
    // Functions. This precludes *any* invalidation of function analyses by the
644
363
    // proxy, but that's OK because we've taken care to invalidate analyses in
645
363
    // the function analysis manager incrementally above.
646
363
    PA.preserveSet<AllAnalysesOn<Function>>();
647
363
    PA.preserve<FunctionAnalysisManagerCGSCCProxy>();
648
363
649
363
    // We've also ensured that we updated the call graph along the way.
650
363
    PA.preserve<LazyCallGraphAnalysis>();
651
363
652
363
    return PA;
653
363
  }
654
655
private:
656
  FunctionPassT Pass;
657
};
658
659
/// A function to deduce a function pass type and wrap it in the
660
/// templated adaptor.
661
template <typename FunctionPassT>
662
CGSCCToFunctionPassAdaptor<FunctionPassT>
663
112
createCGSCCToFunctionPassAdaptor(FunctionPassT Pass) {
664
112
  return CGSCCToFunctionPassAdaptor<FunctionPassT>(std::move(Pass));
665
112
}
666
667
/// A helper that repeats an SCC pass each time an indirect call is refined to
668
/// a direct call by that pass.
669
///
670
/// While the CGSCC pass manager works to re-visit SCCs and RefSCCs as they
671
/// change shape, we may also want to repeat an SCC pass if it simply refines
672
/// an indirect call to a direct call, even if doing so does not alter the
673
/// shape of the graph. Note that this only pertains to direct calls to
674
/// functions where IPO across the SCC may be able to compute more precise
675
/// results. For intrinsics, we assume scalar optimizations already can fully
676
/// reason about them.
677
///
678
/// This repetition has the potential to be very large however, as each one
679
/// might refine a single call site. As a consequence, in practice we use an
680
/// upper bound on the number of repetitions to limit things.
681
template <typename PassT>
682
class DevirtSCCRepeatedPass
683
    : public PassInfoMixin<DevirtSCCRepeatedPass<PassT>> {
684
public:
685
  explicit DevirtSCCRepeatedPass(PassT Pass, int MaxIterations)
686
68
      : Pass(std::move(Pass)), MaxIterations(MaxIterations) {}
687
688
  /// Runs the wrapped pass up to \c MaxIterations on the SCC, iterating
689
  /// whenever an indirect call is refined.
690
  PreservedAnalyses run(LazyCallGraph::SCC &InitialC, CGSCCAnalysisManager &AM,
691
143
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
692
143
    PreservedAnalyses PA = PreservedAnalyses::all();
693
143
694
143
    // The SCC may be refined while we are running passes over it, so set up
695
143
    // a pointer that we can update.
696
143
    LazyCallGraph::SCC *C = &InitialC;
697
143
698
143
    // Collect value handles for all of the indirect call sites.
699
143
    SmallVector<WeakTrackingVH, 8> CallHandles;
700
143
701
143
    // Struct to track the counts of direct and indirect calls in each function
702
143
    // of the SCC.
703
143
    struct CallCount {
704
143
      int Direct;
705
143
      int Indirect;
706
143
    };
707
143
708
143
    // Put value handles on all of the indirect calls and return the number of
709
143
    // direct calls for each function in the SCC.
710
143
    auto ScanSCC = [](LazyCallGraph::SCC &C,
711
293
                      SmallVectorImpl<WeakTrackingVH> &CallHandles) {
712
293
      assert(CallHandles.empty() && "Must start with a clear set of handles.");
713
293
714
293
      SmallVector<CallCount, 4> CallCounts;
715
301
      for (LazyCallGraph::Node &N : C) {
716
301
        CallCounts.push_back({0, 0});
717
301
        CallCount &Count = CallCounts.back();
718
301
        for (Instruction &I : instructions(N.getFunction()))
719
1.40k
          if (auto CS = CallSite(&I)) {
720
280
            if (CS.getCalledFunction()) {
721
258
              ++Count.Direct;
722
258
            } else {
723
22
              ++Count.Indirect;
724
22
              CallHandles.push_back(WeakTrackingVH(&I));
725
22
            }
726
280
          }
727
301
      }
728
293
729
293
      return CallCounts;
730
293
    };
731
143
732
143
    // Populate the initial call handles and get the initial call counts.
733
143
    auto CallCounts = ScanSCC(*C, CallHandles);
734
143
735
151
    for (int Iteration = 0;; ++Iteration) {
736
151
      PreservedAnalyses PassPA = Pass.run(*C, AM, CG, UR);
737
151
738
151
      // If the SCC structure has changed, bail immediately and let the outer
739
151
      // CGSCC layer handle any iteration to reflect the refined structure.
740
151
      if (UR.UpdatedC && UR.UpdatedC != C) {
741
1
        PA.intersect(std::move(PassPA));
742
1
        break;
743
1
      }
744
150
745
150
      // Check that we didn't miss any update scenario.
746
150
      assert(!UR.InvalidatedSCCs.count(C) && "Processing an invalid SCC!");
747
150
      assert(C->begin() != C->end() && "Cannot have an empty SCC!");
748
150
      assert((int)CallCounts.size() == C->size() &&
749
150
             "Cannot have changed the size of the SCC!");
750
150
751
150
      // Check whether any of the handles were devirtualized.
752
150
      auto IsDevirtualizedHandle = [&](WeakTrackingVH &CallH) {
753
16
        if (!CallH)
754
2
          return false;
755
14
        auto CS = CallSite(CallH);
756
14
        if (!CS)
757
0
          return false;
758
14
759
14
        // If the call is still indirect, leave it alone.
760
14
        Function *F = CS.getCalledFunction();
761
14
        if (!F)
762
7
          return false;
763
7
764
7
        LLVM_DEBUG(dbgs() << "Found devirtualized call from "
765
7
                          << CS.getParent()->getParent()->getName() << " to "
766
7
                          << F->getName() << "\n");
767
7
768
7
        // We now have a direct call where previously we had an indirect call,
769
7
        // so iterate to process this devirtualization site.
770
7
        return true;
771
7
      };
772
150
      bool Devirt = llvm::any_of(CallHandles, IsDevirtualizedHandle);
773
150
774
150
      // Rescan to build up a new set of handles and count how many direct
775
150
      // calls remain. If we decide to iterate, this also sets up the input to
776
150
      // the next iteration.
777
150
      CallHandles.clear();
778
150
      auto NewCallCounts = ScanSCC(*C, CallHandles);
779
150
780
150
      // If we haven't found an explicit devirtualization already see if we
781
150
      // have decreased the number of indirect calls and increased the number
782
150
      // of direct calls for any function in the SCC. This can be fooled by all
783
150
      // manner of transformations such as DCE and other things, but seems to
784
150
      // work well in practice.
785
150
      if (!Devirt)
786
285
        for (int i = 0, Size = C->size(); i < Size; ++i)
787
144
          if (CallCounts[i].Indirect > NewCallCounts[i].Indirect &&
788
144
              CallCounts[i].Direct < NewCallCounts[i].Direct) {
789
2
            Devirt = true;
790
2
            break;
791
2
          }
792
150
793
150
      if (!Devirt) {
794
141
        PA.intersect(std::move(PassPA));
795
141
        break;
796
141
      }
797
9
798
9
      // Otherwise, if we've already hit our max, we're done.
799
9
      if (Iteration >= MaxIterations) {
800
1
        LLVM_DEBUG(
801
1
            dbgs() << "Found another devirtualization after hitting the max "
802
1
                      "number of repetitions ("
803
1
                   << MaxIterations << ") on SCC: " << *C << "\n");
804
1
        PA.intersect(std::move(PassPA));
805
1
        break;
806
1
      }
807
8
808
8
      LLVM_DEBUG(
809
8
          dbgs()
810
8
          << "Repeating an SCC pass after finding a devirtualization in: " << *C
811
8
          << "\n");
812
8
813
8
      // Move over the new call counts in preparation for iterating.
814
8
      CallCounts = std::move(NewCallCounts);
815
8
816
8
      // Update the analysis manager with each run and intersect the total set
817
8
      // of preserved analyses so we're ready to iterate.
818
8
      AM.invalidate(*C, PassPA);
819
8
      PA.intersect(std::move(PassPA));
820
8
    }
821
143
822
143
    // Note that we don't add any preserved entries here unlike a more normal
823
143
    // "pass manager" because we only handle invalidation *between* iterations,
824
143
    // not after the last iteration.
825
143
    return PA;
826
143
  }
827
828
private:
829
  PassT Pass;
830
  int MaxIterations;
831
};
832
833
/// A function to deduce an SCC pass type and wrap it in the
834
/// templated adaptor.
835
template <typename PassT>
836
DevirtSCCRepeatedPass<PassT> createDevirtSCCRepeatedPass(PassT Pass,
837
68
                                                         int MaxIterations) {
838
68
  return DevirtSCCRepeatedPass<PassT>(std::move(Pass), MaxIterations);
839
68
}
840
841
// Clear out the debug logging macro.
842
#undef DEBUG_TYPE
843
844
} // end namespace llvm
845
846
#endif // LLVM_ANALYSIS_CGSCCPASSMANAGER_H