Coverage Report

Created: 2019-02-20 07:29

/Users/buildslave/jenkins/workspace/clang-stage2-coverage-R/llvm/include/llvm/Analysis/CGSCCPassManager.h
Line
Count
Source
1
//===- CGSCCPassManager.h - Call graph pass management ----------*- C++ -*-===//
2
//
3
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4
// See https://llvm.org/LICENSE.txt for license information.
5
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6
//
7
//===----------------------------------------------------------------------===//
8
/// \file
9
///
10
/// This header provides classes for managing passes over SCCs of the call
11
/// graph. These passes form an important component of LLVM's interprocedural
12
/// optimizations. Because they operate on the SCCs of the call graph, and they
13
/// traverse the graph in post-order, they can effectively do pair-wise
14
/// interprocedural optimizations for all call edges in the program while
15
/// incrementally refining it and improving the context of these pair-wise
16
/// optimizations. At each call site edge, the callee has already been
17
/// optimized as much as is possible. This in turn allows very accurate
18
/// analysis of it for IPO.
19
///
20
/// A secondary, more general goal is to be able to isolate optimization on
21
/// unrelated parts of the IR module. This is useful to ensure our
22
/// optimizations are principled and don't miss opportunities where refinement
23
/// of one part of the module influences transformations in another part of the
24
/// module. But this is also useful if we want to parallelize the optimizations
25
/// across common large module graph shapes which tend to be very wide and have
26
/// large regions of unrelated cliques.
27
///
28
/// To satisfy these goals, we use the LazyCallGraph which provides two graphs
29
/// nested inside each other (and built lazily from the bottom-up): the call
30
/// graph proper, and a reference graph. The reference graph is a superset of
31
/// the call graph and is a conservative approximation of what could, through
32
/// scalar or CGSCC transforms, *become* the call graph. Using this allows us to
33
/// ensure we optimize functions prior to them being introduced into the call
34
/// graph by devirtualization or other techniques, and thus ensures that
35
/// subsequent pair-wise interprocedural optimizations observe the optimized
36
/// form of these functions. The (potentially transitive) reference
37
/// reachability used by the reference graph is a conservative approximation
38
/// that still allows us to have independent regions of the graph.
39
///
40
/// FIXME: There is one major drawback of the reference graph: in its naive
41
/// form it is quadratic because it contains a distinct edge for each
42
/// (potentially indirect) reference, even if they are all through some common
43
/// global table of function pointers. This can be fixed in a number of ways
44
/// that essentially preserve enough of the normalization. While it isn't
45
/// expected to completely preclude the usability of this approach, it will need to be
46
/// addressed.
47
///
48
///
49
/// All of these issues are made substantially more complex in the face of
50
/// mutations to the call graph while optimization passes are being run. When
51
/// mutations to the call graph occur we want to achieve two different things:
52
///
53
/// - We need to update the call graph in-flight and invalidate analyses
54
///   cached on entities in the graph. Because of the cache-based analysis
55
///   design of the pass manager, it is essential to have stable identities for
56
///   the elements of the IR that passes traverse, and to invalidate any
57
///   analyses cached on these elements as the mutations take place.
58
///
59
/// - We want to preserve the incremental and post-order traversal of the
60
///   graph even as it is refined and mutated. This means we want optimization
61
///   to observe the most refined form of the call graph and to do so in
62
///   post-order.
63
///
64
/// To address this, the CGSCC manager uses both worklists that can be expanded
65
/// by passes which transform the IR, and provides invalidation tests to skip
66
/// entries that become dead. This extra data is provided to every SCC pass so
67
/// that it can carefully update the manager's traversal as the call graph
68
/// mutates.
69
///
70
/// We also provide support for running function passes within the CGSCC walk,
71
/// and there we provide automatic updates of the call graph, including of the
72
/// pass manager's traversal, to reflect call graph changes that fall out
73
/// naturally as part of scalar transformations.
74
///
75
/// The patterns used to ensure the goals of post-order visitation of the fully
76
/// refined graph:
77
///
78
/// 1) Sink toward the "bottom" as the graph is refined. This means that any
79
///    iteration continues in some valid post-order sequence after the mutation
80
///    has altered the structure.
81
///
82
/// 2) Enqueue in post-order, including the current entity. If the current
83
///    entity's shape changes, it and everything after it in post-order needs
84
///    to be visited to observe that shape.
85
///
86
//===----------------------------------------------------------------------===//
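For orientation, here is a minimal sketch of how a pipeline is typically composed on top of these pieces, assuming the analysis managers have already been created and cross-registered (as PassBuilder normally does); PostOrderFunctionAttrsPass is just one in-tree CGSCC pass picked for illustration:

    // Sketch only: run a CGSCC pipeline from a module pass manager.
    #include "llvm/Analysis/CGSCCPassManager.h"
    #include "llvm/IR/PassManager.h"
    #include "llvm/Transforms/IPO/FunctionAttrs.h"
    using namespace llvm;

    void runExampleCGSCCPipeline(Module &M, ModuleAnalysisManager &MAM) {
      CGSCCPassManager CGPM;
      CGPM.addPass(PostOrderFunctionAttrsPass());

      ModulePassManager MPM;
      MPM.addPass(
          ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassManager>(std::move(CGPM)));
      MPM.run(M, MAM);
    }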
87
88
#ifndef LLVM_ANALYSIS_CGSCCPASSMANAGER_H
89
#define LLVM_ANALYSIS_CGSCCPASSMANAGER_H
90
91
#include "llvm/ADT/DenseSet.h"
92
#include "llvm/ADT/PriorityWorklist.h"
93
#include "llvm/ADT/STLExtras.h"
94
#include "llvm/ADT/SmallPtrSet.h"
95
#include "llvm/ADT/SmallVector.h"
96
#include "llvm/Analysis/LazyCallGraph.h"
97
#include "llvm/IR/CallSite.h"
98
#include "llvm/IR/Function.h"
99
#include "llvm/IR/InstIterator.h"
100
#include "llvm/IR/PassManager.h"
101
#include "llvm/IR/ValueHandle.h"
102
#include "llvm/Support/Debug.h"
103
#include "llvm/Support/raw_ostream.h"
104
#include <algorithm>
105
#include <cassert>
106
#include <utility>
107
108
namespace llvm {
109
110
struct CGSCCUpdateResult;
111
class Module;
112
113
// Allow debug logging in this inline function.
114
#define DEBUG_TYPE "cgscc"
115
116
/// Extern template declaration for the analysis set for this IR unit.
117
extern template class AllAnalysesOn<LazyCallGraph::SCC>;
118
119
extern template class AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;
120
121
/// The CGSCC analysis manager.
122
///
123
/// See the documentation for the AnalysisManager template for detailed
124
/// documentation. This type serves as a convenient way to refer to this
125
/// construct in the adaptors and proxies used to integrate this into the larger
126
/// pass manager infrastructure.
127
using CGSCCAnalysisManager =
128
    AnalysisManager<LazyCallGraph::SCC, LazyCallGraph &>;
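For illustration, a CGSCC analysis keyed by this manager follows the usual AnalysisInfoMixin pattern with LazyCallGraph & as the extra run argument; the analysis below is hypothetical and merely counts the nodes of an SCC:

    // Hypothetical analysis sketch; not part of LLVM.
    class SCCSizeAnalysis : public AnalysisInfoMixin<SCCSizeAnalysis> {
      friend AnalysisInfoMixin<SCCSizeAnalysis>;
      static AnalysisKey Key; // define "AnalysisKey SCCSizeAnalysis::Key;" in a .cpp file
    public:
      using Result = unsigned;
      Result run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                 LazyCallGraph &CG) {
        unsigned NumNodes = 0;
        for (LazyCallGraph::Node &N : C) {
          (void)N;
          ++NumNodes;
        }
        return NumNodes;
      }
    };
    // Registration (normally routed through PassBuilder callbacks):
    //   CGAM.registerPass([] { return SCCSizeAnalysis(); });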
129
130
// Explicit specialization and instantiation declarations for the pass manager.
131
// See the comments on the definition of the specialization for details on how
132
// it differs from the primary template.
133
template <>
134
PreservedAnalyses
135
PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
136
            CGSCCUpdateResult &>::run(LazyCallGraph::SCC &InitialC,
137
                                      CGSCCAnalysisManager &AM,
138
                                      LazyCallGraph &G, CGSCCUpdateResult &UR);
139
extern template class PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager,
140
                                  LazyCallGraph &, CGSCCUpdateResult &>;
141
142
/// The CGSCC pass manager.
143
///
144
/// See the documentation for the PassManager template for details. It runs
145
/// a sequence of SCC passes over each SCC that the manager is run over. This
146
/// type serves as a convenient way to refer to this construct.
147
using CGSCCPassManager =
148
    PassManager<LazyCallGraph::SCC, CGSCCAnalysisManager, LazyCallGraph &,
149
                CGSCCUpdateResult &>;
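Any pass added to this manager needs the four-argument run signature used throughout this header; a hypothetical no-op pass makes the shape concrete:

    // Hypothetical pass sketch illustrating the CGSCC run signature.
    struct NoChangeSCCPass : PassInfoMixin<NoChangeSCCPass> {
      PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                            LazyCallGraph &CG, CGSCCUpdateResult &UR) {
        // Inspect C here; a pass that changes nothing preserves everything.
        return PreservedAnalyses::all();
      }
    };
    // Added like any other pass: CGPM.addPass(NoChangeSCCPass());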
150
151
/// An explicit specialization of the require analysis template pass.
152
template <typename AnalysisT>
153
struct RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC, CGSCCAnalysisManager,
154
                           LazyCallGraph &, CGSCCUpdateResult &>
155
    : PassInfoMixin<RequireAnalysisPass<AnalysisT, LazyCallGraph::SCC,
156
                                        CGSCCAnalysisManager, LazyCallGraph &,
157
                                        CGSCCUpdateResult &>> {
158
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
159
9
                        LazyCallGraph &CG, CGSCCUpdateResult &) {
160
9
    (void)AM.template getResult<AnalysisT>(C, CG);
161
9
    return PreservedAnalyses::all();
162
9
  }
PassBuilder.cpp:llvm::RequireAnalysisPass<(anonymous namespace)::NoOpCGSCCAnalysis, llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&>::run(llvm::LazyCallGraph::SCC&, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>&, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&)
Line
Count
Source
159
9
                        LazyCallGraph &CG, CGSCCUpdateResult &) {
160
9
    (void)AM.template getResult<AnalysisT>(C, CG);
161
9
    return PreservedAnalyses::all();
162
9
  }
Unexecuted instantiation: llvm::RequireAnalysisPass<llvm::FunctionAnalysisManagerCGSCCProxy, llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&>::run(llvm::LazyCallGraph::SCC&, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>&, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&)
Unexecuted instantiation: llvm::RequireAnalysisPass<llvm::PassInstrumentationAnalysis, llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&>::run(llvm::LazyCallGraph::SCC&, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>&, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&)
163
};
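Forcing an analysis to be computed for every SCC then amounts to adding an instance of this specialization to the CGSCC pipeline, for example with a hypothetical analysis such as the SCCSizeAnalysis sketched earlier:

    // Sketch only; the template arguments mirror the specialization above.
    CGPM.addPass(RequireAnalysisPass<SCCSizeAnalysis, LazyCallGraph::SCC,
                                     CGSCCAnalysisManager, LazyCallGraph &,
                                     CGSCCUpdateResult &>());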
164
165
/// A proxy from a \c CGSCCAnalysisManager to a \c Module.
166
using CGSCCAnalysisManagerModuleProxy =
167
    InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;
168
169
/// We need a specialized result for the \c CGSCCAnalysisManagerModuleProxy so
170
/// it can have access to the call graph in order to walk all the SCCs when
171
/// invalidating things.
172
template <> class CGSCCAnalysisManagerModuleProxy::Result {
173
public:
174
  explicit Result(CGSCCAnalysisManager &InnerAM, LazyCallGraph &G)
175
317
      : InnerAM(&InnerAM), G(&G) {}
176
177
  /// Accessor for the analysis manager.
178
351
  CGSCCAnalysisManager &getManager() { return *InnerAM; }
179
180
  /// Handler for invalidation of the Module.
181
  ///
182
  /// If the proxy analysis itself is preserved, then we assume that the set of
183
  /// SCCs in the Module hasn't changed. Thus any pointers to SCCs in the
184
  /// CGSCCAnalysisManager are still valid, and we don't need to call \c clear
185
  /// on the CGSCCAnalysisManager.
186
  ///
187
  /// Regardless of whether this analysis is marked as preserved, all of the
188
  /// analyses in the \c CGSCCAnalysisManager are potentially invalidated based
189
  /// on the set of preserved analyses.
190
  bool invalidate(Module &M, const PreservedAnalyses &PA,
191
                  ModuleAnalysisManager::Invalidator &Inv);
192
193
private:
194
  CGSCCAnalysisManager *InnerAM;
195
  LazyCallGraph *G;
196
};
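In a hand-rolled setup (PassBuilder::crossRegisterProxies normally does this wiring), the proxy is registered on the module analysis manager and later handed back out via getManager(); a sketch:

    // Sketch of manual proxy registration, as the unit tests do it.
    CGSCCAnalysisManager CGAM;
    ModuleAnalysisManager MAM;
    MAM.registerPass([&] { return CGSCCAnalysisManagerModuleProxy(CGAM); });
    // LazyCallGraphAnalysis must be registered on MAM as well, since the
    // specialized run method below needs the graph.
    // Later, inside a module pass:
    //   CGSCCAnalysisManager &InnerAM =
    //       AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();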
197
198
/// Provide a specialized run method for the \c CGSCCAnalysisManagerModuleProxy
199
/// so it can pass the lazy call graph to the result.
200
template <>
201
CGSCCAnalysisManagerModuleProxy::Result
202
CGSCCAnalysisManagerModuleProxy::run(Module &M, ModuleAnalysisManager &AM);
203
204
// Ensure the \c CGSCCAnalysisManagerModuleProxy is provided as an extern
205
// template.
206
extern template class InnerAnalysisManagerProxy<CGSCCAnalysisManager, Module>;
207
208
extern template class OuterAnalysisManagerProxy<
209
    ModuleAnalysisManager, LazyCallGraph::SCC, LazyCallGraph &>;
210
211
/// A proxy from a \c ModuleAnalysisManager to an \c SCC.
212
using ModuleAnalysisManagerCGSCCProxy =
213
    OuterAnalysisManagerProxy<ModuleAnalysisManager, LazyCallGraph::SCC,
214
                              LazyCallGraph &>;
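From inside an SCC pass this proxy only hands out cached module-level results, since an outer analysis cannot safely be recomputed in the middle of the CGSCC walk; a sketch, where SomeModuleAnalysis is a stand-in name for any module analysis:

    // Inside an SCC pass's run method (sketch):
    const ModuleAnalysisManager &MAM =
        AM.getResult<ModuleAnalysisManagerCGSCCProxy>(C, CG).getManager();
    Module &M = *C.begin()->getFunction().getParent();
    if (auto *R = MAM.getCachedResult<SomeModuleAnalysis>(M)) {
      // Use *R. getCachedResult returns null when the module analysis wasn't
      // computed before the CGSCC walk, hence the check.
    }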
215
216
/// Support structure for SCC passes to communicate updates to the call graph back
217
/// to the CGSCC pass manager infrastructure.
218
///
219
/// The CGSCC pass manager runs SCC passes which are allowed to update the call
220
/// graph and SCC structures. This means the structure the pass manager works
221
/// on is mutating underneath it. In order to support that, there needs to be
222
/// careful communication about the precise nature and ramifications of these
223
/// updates to the pass management infrastructure.
224
///
225
/// All SCC passes will have to accept a reference to the management layer's
226
/// update result struct and use it to reflect the results of any CG updates
227
/// performed.
228
///
229
/// Passes which do not change the call graph structure in any way can just
230
/// ignore this argument to their run method.
231
struct CGSCCUpdateResult {
232
  /// Worklist of the RefSCCs queued for processing.
233
  ///
234
  /// When a pass refines the graph and creates new RefSCCs or causes them to
235
  /// have a different shape or set of component SCCs it should add the RefSCCs
236
  /// to this worklist so that we visit them in the refined form.
237
  ///
238
  /// This worklist is in reverse post-order, as we pop off the back in order
239
  /// to observe RefSCCs in post-order. When adding RefSCCs, clients should add
240
  /// them in reverse post-order.
241
  SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> &RCWorklist;
242
243
  /// Worklist of the SCCs queued for processing.
244
  ///
245
  /// When a pass refines the graph and creates new SCCs or causes them to have
246
  /// a different shape or set of component functions it should add the SCCs to
247
  /// this worklist so that we visit them in the refined form.
248
  ///
249
  /// Note that if the SCCs are part of a RefSCC that is added to the \c
250
  /// RCWorklist, they don't need to be added here as visiting the RefSCC will
251
  /// be sufficient to re-visit the SCCs within it.
252
  ///
253
  /// This worklist is in reverse post-order, as we pop off the back in order
254
  /// to observe SCCs in post-order. When adding SCCs, clients should add them
255
  /// in reverse post-order.
256
  SmallPriorityWorklist<LazyCallGraph::SCC *, 1> &CWorklist;
257
258
  /// The set of invalidated RefSCCs which should be skipped if they are found
259
  /// in \c RCWorklist.
260
  ///
261
  /// This is used to quickly prune out RefSCCs when they get deleted and
262
  /// happen to already be on the worklist. We use this primarily to avoid
263
  /// scanning the list and removing entries from it.
264
  SmallPtrSetImpl<LazyCallGraph::RefSCC *> &InvalidatedRefSCCs;
265
266
  /// The set of invalidated SCCs which should be skipped if they are found
267
  /// in \c CWorklist.
268
  ///
269
  /// This is used to quickly prune out SCCs when they get deleted and happen
270
  /// to already be on the worklist. We use this primarily to avoid scanning
271
  /// the list and removing entries from it.
272
  SmallPtrSetImpl<LazyCallGraph::SCC *> &InvalidatedSCCs;
273
274
  /// If non-null, the updated current \c RefSCC being processed.
275
  ///
276
  /// This is set when a graph refinement takes place and the "current" point in
277
  /// the graph moves "down" or earlier in the post-order walk. This will often
278
  /// cause the "current" RefSCC to be a newly created RefSCC object and the
279
  /// old one to be added to the above worklist. When that happens, this
280
  /// pointer is non-null and can be used to continue processing the "top" of
281
  /// the post-order walk.
282
  LazyCallGraph::RefSCC *UpdatedRC;
283
284
  /// If non-null, the updated current \c SCC being processed.
285
  ///
286
  /// This is set when a graph refinement takes place and the "current" point in
287
  /// the graph moves "down" or earlier in the post-order walk. This will often
288
  /// cause the "current" SCC to be a newly created SCC object and the old one
289
  /// to be added to the above worklist. When that happens, this pointer is
290
  /// non-null and can be used to continue processing the "top" of the
291
  /// post-order walk.
292
  LazyCallGraph::SCC *UpdatedC;
293
294
  /// A hacky area where the inliner can retain history about inlining
295
  /// decisions that mutated the call graph's SCC structure in order to avoid
296
  /// infinite inlining. See the comments in the inliner's CG update logic.
297
  ///
298
  /// FIXME: Keeping this here seems like a big layering issue, we should look
299
  /// for a better technique.
300
  SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
301
      &InlinedInternalEdges;
302
};
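A graph-mutating SCC pass reports its refinements back through these fields (in practice via the update helpers this infrastructure provides rather than by hand); a hypothetical outline:

    // Hypothetical outline; the names and the mutation itself are illustrative.
    struct SplitterSCCPass : PassInfoMixin<SplitterSCCPass> {
      PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
                            LazyCallGraph &CG, CGSCCUpdateResult &UR) {
        // ... delete or redirect call edges in C, refining the graph ...
        // Newly split SCCs would be queued in reverse post-order:
        //   UR.CWorklist.insert(&NewSCC);
        // and the manager pointed at the refined "current" SCC:
        //   UR.UpdatedC = &RefinedC;
        return PreservedAnalyses::none();
      }
    };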
303
304
/// The core module pass which does a post-order walk of the SCCs and
305
/// runs a CGSCC pass over each one.
306
///
307
/// Designed to allow composition of a CGSCCPass(Manager) and
308
/// a ModulePassManager. Note that this pass must be run with a module analysis
309
/// manager as it uses the LazyCallGraph analysis. It will also run the
310
/// \c CGSCCAnalysisManagerModuleProxy analysis prior to running the CGSCC
311
/// pass over the module to enable a \c FunctionAnalysisManager to be used
312
/// within this run safely.
313
template <typename CGSCCPassT>
314
class ModuleToPostOrderCGSCCPassAdaptor
315
    : public PassInfoMixin<ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>> {
316
public:
317
  explicit ModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass)
318
351
      : Pass(std::move(Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >::ModuleToPostOrderCGSCCPassAdaptor(llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&>)
Line
Count
Source
318
212
      : Pass(std::move(Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >::ModuleToPostOrderCGSCCPassAdaptor(llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >)
Line
Count
Source
318
87
      : Pass(std::move(Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>::ModuleToPostOrderCGSCCPassAdaptor(llvm::PostOrderFunctionAttrsPass)
Line
Count
Source
318
35
      : Pass(std::move(Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>::ModuleToPostOrderCGSCCPassAdaptor(llvm::InlinerPass)
Line
Count
Source
318
17
      : Pass(std::move(Pass)) {}
319
320
  // We have to explicitly define all the special member functions because MSVC
321
  // refuses to generate them.
322
  ModuleToPostOrderCGSCCPassAdaptor(
323
      const ModuleToPostOrderCGSCCPassAdaptor &Arg)
324
      : Pass(Arg.Pass) {}
325
326
  ModuleToPostOrderCGSCCPassAdaptor(ModuleToPostOrderCGSCCPassAdaptor &&Arg)
327
702
      : Pass(std::move(Arg.Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >::ModuleToPostOrderCGSCCPassAdaptor(llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >&&)
Line
Count
Source
327
424
      : Pass(std::move(Arg.Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >::ModuleToPostOrderCGSCCPassAdaptor(llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >&&)
Line
Count
Source
327
174
      : Pass(std::move(Arg.Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>::ModuleToPostOrderCGSCCPassAdaptor(llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>&&)
Line
Count
Source
327
70
      : Pass(std::move(Arg.Pass)) {}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>::ModuleToPostOrderCGSCCPassAdaptor(llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>&&)
Line
Count
Source
327
34
      : Pass(std::move(Arg.Pass)) {}
328
329
  friend void swap(ModuleToPostOrderCGSCCPassAdaptor &LHS,
330
                   ModuleToPostOrderCGSCCPassAdaptor &RHS) {
331
    std::swap(LHS.Pass, RHS.Pass);
332
  }
333
334
  ModuleToPostOrderCGSCCPassAdaptor &
335
  operator=(ModuleToPostOrderCGSCCPassAdaptor RHS) {
336
    swap(*this, RHS);
337
    return *this;
338
  }
339
340
  /// Runs the CGSCC pass across every SCC in the module.
341
351
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
342
351
    // Setup the CGSCC analysis manager from its proxy.
343
351
    CGSCCAnalysisManager &CGAM =
344
351
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
345
351
346
351
    // Get the call graph for this module.
347
351
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
348
351
349
351
    // We keep worklists to allow us to push more work onto the pass manager as
350
351
    // the passes are run.
351
351
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
352
351
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
353
351
354
351
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
355
351
    // iterating off the worklists.
356
351
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
357
351
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
358
351
359
351
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
360
351
        InlinedInternalEdges;
361
351
362
351
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
363
351
                            InvalidSCCSet,       nullptr,   nullptr,
364
351
                            InlinedInternalEdges};
365
351
366
351
    // Request PassInstrumentation from analysis manager, will use it to run
367
351
    // instrumenting callbacks for the passes later.
368
351
    PassInstrumentation PI = AM.getResult<PassInstrumentationAnalysis>(M);
369
351
370
351
    PreservedAnalyses PA = PreservedAnalyses::all();
371
351
    CG.buildRefSCCs();
372
351
    for (auto RCI = CG.postorder_ref_scc_begin(),
373
351
              RCE = CG.postorder_ref_scc_end();
374
1.41k
         RCI != RCE;) {
375
1.06k
      assert(RCWorklist.empty() &&
376
1.06k
             "Should always start with an empty RefSCC worklist");
377
1.06k
      // The postorder_ref_sccs range we are walking is lazily constructed, so
378
1.06k
      // we only push the first one onto the worklist. The worklist allows us
379
1.06k
      // to capture *new* RefSCCs created during transformations.
380
1.06k
      //
381
1.06k
      // We really want to form RefSCCs lazily because that makes them cheaper
382
1.06k
      // to update as the program is simplified and allows us to have greater
383
1.06k
      // cache locality as forming a RefSCC touches all the parts of all the
384
1.06k
      // functions within that RefSCC.
385
1.06k
      //
386
1.06k
      // We also eagerly increment the iterator to the next position because
387
1.06k
      // the CGSCC passes below may delete the current RefSCC.
388
1.06k
      RCWorklist.insert(&*RCI++);
389
1.06k
390
1.08k
      do {
391
1.08k
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
392
1.08k
        if (InvalidRefSCCSet.count(RC)) {
393
6
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
394
6
          continue;
395
6
        }
396
1.08k
397
1.08k
        assert(CWorklist.empty() &&
398
1.08k
               "Should always start with an empty SCC worklist");
399
1.08k
400
1.08k
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
401
1.08k
                          << "\n");
402
1.08k
403
1.08k
        // Push the initial SCCs in reverse post-order as we'll pop off the
404
1.08k
        // back and so see this in post-order.
405
1.08k
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
406
1.11k
          CWorklist.insert(&C);
407
1.08k
408
1.15k
        do {
409
1.15k
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
410
1.15k
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
411
1.15k
          // other RefSCCs in the worklist. The invalid ones are dead and the
412
1.15k
          // other RefSCCs should be queued above, so we just need to skip both
413
1.15k
          // scenarios here.
414
1.15k
          if (InvalidSCCSet.count(C)) {
415
7
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
416
7
            continue;
417
7
          }
418
1.14k
          if (&C->getOuterRefSCC() != RC) {
419
18
            LLVM_DEBUG(dbgs()
420
18
                       << "Skipping an SCC that is now part of some other "
421
18
                          "RefSCC...\n");
422
18
            continue;
423
18
          }
424
1.13k
425
1.16k
          
          do {
426
1.16k
            // Check that we didn't miss any update scenario.
427
1.16k
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
428
1.16k
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
429
1.16k
            assert(&C->getOuterRefSCC() == RC &&
430
1.16k
                   "Processing an SCC in a different RefSCC!");
431
1.16k
432
1.16k
            UR.UpdatedRC = nullptr;
433
1.16k
            UR.UpdatedC = nullptr;
434
1.16k
435
1.16k
            // Check the PassInstrumentation's BeforePass callbacks before
436
1.16k
            // running the pass, skip its execution completely if asked to
437
1.16k
            // (callback returns false).
438
1.16k
            if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
439
0
              continue;
440
1.16k
441
1.16k
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
442
1.16k
443
1.16k
            if (UR.InvalidatedSCCs.count(C))
444
16
              PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
445
1.14k
            else
446
1.14k
              PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);
447
1.16k
448
1.16k
            // Update the SCC and RefSCC if necessary.
449
1.16k
            C = UR.UpdatedC ? UR.UpdatedC : C;
450
1.16k
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
451
1.16k
452
1.16k
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
453
1.16k
            // the current SCC may simply need to be skipped if invalid.
454
1.16k
            if (UR.InvalidatedSCCs.count(C)) {
455
3
              LLVM_DEBUG(dbgs()
456
3
                         << "Skipping invalidated root or island SCC!\n");
457
3
              break;
458
3
            }
459
1.16k
            // Check that we didn't miss any update scenario.
460
1.16k
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
461
1.16k
462
1.16k
            // We handle invalidating the CGSCC analysis manager's information
463
1.16k
            // for the (potentially updated) SCC here. Note that any other SCCs
464
1.16k
            // whose structure has changed should have been invalidated by
465
1.16k
            // whatever was updating the call graph. This SCC gets invalidated
466
1.16k
            // late as it contains the nodes that were actively being
467
1.16k
            // processed.
468
1.16k
            CGAM.invalidate(*C, PassPA);
469
1.16k
470
1.16k
            // Then intersect the preserved set so that invalidation of module
471
1.16k
            // analyses will eventually occur when the module pass completes.
472
1.16k
            PA.intersect(std::move(PassPA));
473
1.16k
474
1.16k
            // The pass may have restructured the call graph and refined the
475
1.16k
            // current SCC and/or RefSCC. We need to update our current SCC and
476
1.16k
            // RefSCC pointers to follow these. Also, when the current SCC is
477
1.16k
            // refined, re-run the SCC pass over the newly refined SCC in order
478
1.16k
            // to observe the most precise SCC model available. This inherently
479
1.16k
            // cannot cycle excessively as it only happens when we split SCCs
480
1.16k
            // apart, at most converging on a DAG of single nodes.
481
1.16k
            // FIXME: If we ever start having RefSCC passes, we'll want to
482
1.16k
            // iterate there too.
483
1.16k
            if (UR.UpdatedC)
484
1.16k
              LLVM_DEBUG(dbgs()
485
1.16k
                         << "Re-running SCC passes after a refinement of the "
486
1.16k
                            "current SCC: "
487
1.16k
                         << *UR.UpdatedC << "\n");
488
1.16k
489
1.16k
            // Note that both `C` and `RC` may at this point refer to deleted,
490
1.16k
            // invalid SCC and RefSCCs respectively. But we will short circuit
491
1.16k
            // the processing when we check them in the loop above.
492
1.16k
          } while (UR.UpdatedC);
493
1.15k
        } while (!CWorklist.empty());
494
1.08k
495
1.08k
        // We only need to keep internal inlined edge information within
496
1.08k
        // a RefSCC, clear it to save on space and let the next time we visit
497
1.08k
        // any of these functions have a fresh start.
498
1.08k
        InlinedInternalEdges.clear();
499
1.08k
      } while (!RCWorklist.empty());
500
1.06k
    }
501
351
502
351
    // By definition we preserve the call graph, all SCC analyses, and the
503
351
    // analysis proxies by handling them above and in any nested pass managers.
504
351
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
505
351
    PA.preserve<LazyCallGraphAnalysis>();
506
351
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
507
351
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
508
351
    return PA;
509
351
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
341
212
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
342
212
    // Setup the CGSCC analysis manager from its proxy.
343
212
    CGSCCAnalysisManager &CGAM =
344
212
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
345
212
346
212
    // Get the call graph for this module.
347
212
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
348
212
349
212
    // We keep worklists to allow us to push more work onto the pass manager as
350
212
    // the passes are run.
351
212
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
352
212
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
353
212
354
212
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
355
212
    // iterating off the worklists.
356
212
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
357
212
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
358
212
359
212
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
360
212
        InlinedInternalEdges;
361
212
362
212
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
363
212
                            InvalidSCCSet,       nullptr,   nullptr,
364
212
                            InlinedInternalEdges};
365
212
366
212
    // Request PassInstrumentation from analysis manager, will use it to run
367
212
    // instrumenting callbacks for the passes later.
368
212
    PassInstrumentation PI = AM.getResult<PassInstrumentationAnalysis>(M);
369
212
370
212
    PreservedAnalyses PA = PreservedAnalyses::all();
371
212
    CG.buildRefSCCs();
372
212
    for (auto RCI = CG.postorder_ref_scc_begin(),
373
212
              RCE = CG.postorder_ref_scc_end();
374
1.08k
         RCI != RCE;) {
375
877
      assert(RCWorklist.empty() &&
376
877
             "Should always start with an empty RefSCC worklist");
377
877
      // The postorder_ref_sccs range we are walking is lazily constructed, so
378
877
      // we only push the first one onto the worklist. The worklist allows us
379
877
      // to capture *new* RefSCCs created during transformations.
380
877
      //
381
877
      // We really want to form RefSCCs lazily because that makes them cheaper
382
877
      // to update as the program is simplified and allows us to have greater
383
877
      // cache locality as forming a RefSCC touches all the parts of all the
384
877
      // functions within that RefSCC.
385
877
      //
386
877
      // We also eagerly increment the iterator to the next position because
387
877
      // the CGSCC passes below may delete the current RefSCC.
388
877
      RCWorklist.insert(&*RCI++);
389
877
390
901
      do {
391
901
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
392
901
        if (InvalidRefSCCSet.count(RC)) {
393
6
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
394
6
          continue;
395
6
        }
396
895
397
895
        assert(CWorklist.empty() &&
398
895
               "Should always start with an empty SCC worklist");
399
895
400
895
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
401
895
                          << "\n");
402
895
403
895
        // Push the initial SCCs in reverse post-order as we'll pop off the
404
895
        // back and so see this in post-order.
405
895
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
406
928
          CWorklist.insert(&C);
407
895
408
968
        do {
409
968
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
410
968
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
411
968
          // other RefSCCs in the worklist. The invalid ones are dead and the
412
968
          // other RefSCCs should be queued above, so we just need to skip both
413
968
          // scenarios here.
414
968
          if (InvalidSCCSet.count(C)) {
415
7
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
416
7
            continue;
417
7
          }
418
961
          if (&C->getOuterRefSCC() != RC) {
419
17
            LLVM_DEBUG(dbgs()
420
17
                       << "Skipping an SCC that is now part of some other "
421
17
                          "RefSCC...\n");
422
17
            continue;
423
17
          }
424
944
425
977
          
          do {
426
977
            // Check that we didn't miss any update scenario.
427
977
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
428
977
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
429
977
            assert(&C->getOuterRefSCC() == RC &&
430
977
                   "Processing an SCC in a different RefSCC!");
431
977
432
977
            UR.UpdatedRC = nullptr;
433
977
            UR.UpdatedC = nullptr;
434
977
435
977
            // Check the PassInstrumentation's BeforePass callbacks before
436
977
            // running the pass, skip its execution completely if asked to
437
977
            // (callback returns false).
438
977
            if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
439
0
              continue;
440
977
441
977
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
442
977
443
977
            if (UR.InvalidatedSCCs.count(C))
444
16
              PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
445
961
            else
446
961
              PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);
447
977
448
977
            // Update the SCC and RefSCC if necessary.
449
977
            C = UR.UpdatedC ? UR.UpdatedC : C;
450
977
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
451
977
452
977
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
453
977
            // the current SCC may simply need to be skipped if invalid.
454
977
            if (UR.InvalidatedSCCs.count(C)) {
455
3
              LLVM_DEBUG(dbgs()
456
3
                         << "Skipping invalidated root or island SCC!\n");
457
3
              break;
458
3
            }
459
974
            // Check that we didn't miss any update scenario.
460
974
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
461
974
462
974
            // We handle invalidating the CGSCC analysis manager's information
463
974
            // for the (potentially updated) SCC here. Note that any other SCCs
464
974
            // whose structure has changed should have been invalidated by
465
974
            // whatever was updating the call graph. This SCC gets invalidated
466
974
            // late as it contains the nodes that were actively being
467
974
            // processed.
468
974
            CGAM.invalidate(*C, PassPA);
469
974
470
974
            // Then intersect the preserved set so that invalidation of module
471
974
            // analyses will eventually occur when the module pass completes.
472
974
            PA.intersect(std::move(PassPA));
473
974
474
974
            // The pass may have restructured the call graph and refined the
475
974
            // current SCC and/or RefSCC. We need to update our current SCC and
476
974
            // RefSCC pointers to follow these. Also, when the current SCC is
477
974
            // refined, re-run the SCC pass over the newly refined SCC in order
478
974
            // to observe the most precise SCC model available. This inherently
479
974
            // cannot cycle excessively as it only happens when we split SCCs
480
974
            // apart, at most converging on a DAG of single nodes.
481
974
            // FIXME: If we ever start having RefSCC passes, we'll want to
482
974
            // iterate there too.
483
974
            if (UR.UpdatedC)
484
974
              LLVM_DEBUG(dbgs()
485
974
                         << "Re-running SCC passes after a refinement of the "
486
974
                            "current SCC: "
487
974
                         << *UR.UpdatedC << "\n");
488
974
489
974
            // Note that both `C` and `RC` may at this point refer to deleted,
490
974
            // invalid SCC and RefSCCs respectively. But we will short circuit
491
974
            // the processing when we check them in the loop above.
492
974
          } while (UR.UpdatedC);
493
968
        } while (!CWorklist.empty());
494
895
495
895
        // We only need to keep internal inlined edge information within
496
895
        // a RefSCC, clear it to save on space and let the next time we visit
497
895
        // any of these functions have a fresh start.
498
895
        InlinedInternalEdges.clear();
499
901
      } while (!RCWorklist.empty());
500
877
    }
501
212
502
212
    // By definition we preserve the call graph, all SCC analyses, and the
503
212
    // analysis proxies by handling them above and in any nested pass managers.
504
212
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
505
212
    PA.preserve<LazyCallGraphAnalysis>();
506
212
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
507
212
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
508
212
    return PA;
509
212
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
341
87
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
342
87
    // Setup the CGSCC analysis manager from its proxy.
343
87
    CGSCCAnalysisManager &CGAM =
344
87
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
345
87
346
87
    // Get the call graph for this module.
347
87
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
348
87
349
87
    // We keep worklists to allow us to push more work onto the pass manager as
350
87
    // the passes are run.
351
87
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
352
87
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
353
87
354
87
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
355
87
    // iterating off the worklists.
356
87
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
357
87
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
358
87
359
87
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
360
87
        InlinedInternalEdges;
361
87
362
87
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
363
87
                            InvalidSCCSet,       nullptr,   nullptr,
364
87
                            InlinedInternalEdges};
365
87
366
87
    // Request PassInstrumentation from analysis manager, will use it to run
367
87
    // instrumenting callbacks for the passes later.
368
87
    PassInstrumentation PI = AM.getResult<PassInstrumentationAnalysis>(M);
369
87
370
87
    PreservedAnalyses PA = PreservedAnalyses::all();
371
87
    CG.buildRefSCCs();
372
87
    for (auto RCI = CG.postorder_ref_scc_begin(),
373
87
              RCE = CG.postorder_ref_scc_end();
374
241
         RCI != RCE;) {
375
154
      assert(RCWorklist.empty() &&
376
154
             "Should always start with an empty RefSCC worklist");
377
154
      // The postorder_ref_sccs range we are walking is lazily constructed, so
378
154
      // we only push the first one onto the worklist. The worklist allows us
379
154
      // to capture *new* RefSCCs created during transformations.
380
154
      //
381
154
      // We really want to form RefSCCs lazily because that makes them cheaper
382
154
      // to update as the program is simplified and allows us to have greater
383
154
      // cache locality as forming a RefSCC touches all the parts of all the
384
154
      // functions within that RefSCC.
385
154
      //
386
154
      // We also eagerly increment the iterator to the next position because
387
154
      // the CGSCC passes below may delete the current RefSCC.
388
154
      RCWorklist.insert(&*RCI++);
389
154
390
155
      do {
391
155
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
392
155
        if (InvalidRefSCCSet.count(RC)) {
393
0
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
394
0
          continue;
395
0
        }
396
155
397
155
        assert(CWorklist.empty() &&
398
155
               "Should always start with an empty SCC worklist");
399
155
400
155
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
401
155
                          << "\n");
402
155
403
155
        // Push the initial SCCs in reverse post-order as we'll pop off the
404
155
        // back and so see this in post-order.
405
155
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
406
155
          CWorklist.insert(&C);
407
155
408
156
        do {
409
156
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
410
156
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
411
156
          // other RefSCCs in the worklist. The invalid ones are dead and the
412
156
          // other RefSCCs should be queued above, so we just need to skip both
413
156
          // scenarios here.
414
156
          if (InvalidSCCSet.count(C)) {
415
0
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
416
0
            continue;
417
0
          }
418
156
          if (&C->getOuterRefSCC() != RC) {
419
1
            LLVM_DEBUG(dbgs()
420
1
                       << "Skipping an SCC that is now part of some other "
421
1
                          "RefSCC...\n");
422
1
            continue;
423
1
          }
424
155
425
156
          
          do {
426
156
            // Check that we didn't miss any update scenario.
427
156
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
428
156
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
429
156
            assert(&C->getOuterRefSCC() == RC &&
430
156
                   "Processing an SCC in a different RefSCC!");
431
156
432
156
            UR.UpdatedRC = nullptr;
433
156
            UR.UpdatedC = nullptr;
434
156
435
156
            // Check the PassInstrumentation's BeforePass callbacks before
436
156
            // running the pass, skip its execution completely if asked to
437
156
            // (callback returns false).
438
156
            if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
439
0
              continue;
440
156
441
156
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
442
156
443
156
            if (UR.InvalidatedSCCs.count(C))
444
0
              PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
445
156
            else
446
156
              PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);
447
156
448
156
            // Update the SCC and RefSCC if necessary.
449
156
            C = UR.UpdatedC ? UR.UpdatedC : C;
450
156
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
451
156
452
156
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
453
156
            // the current SCC may simply need to be skipped if invalid.
454
156
            if (UR.InvalidatedSCCs.count(C)) {
455
0
              LLVM_DEBUG(dbgs()
456
0
                         << "Skipping invalidated root or island SCC!\n");
457
0
              break;
458
0
            }
459
156
            // Check that we didn't miss any update scenario.
460
156
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
461
156
462
156
            // We handle invalidating the CGSCC analysis manager's information
463
156
            // for the (potentially updated) SCC here. Note that any other SCCs
464
156
            // whose structure has changed should have been invalidated by
465
156
            // whatever was updating the call graph. This SCC gets invalidated
466
156
            // late as it contains the nodes that were actively being
467
156
            // processed.
468
156
            CGAM.invalidate(*C, PassPA);
469
156
470
156
            // Then intersect the preserved set so that invalidation of module
471
156
            // analyses will eventually occur when the module pass completes.
472
156
            PA.intersect(std::move(PassPA));
473
156
474
156
            // The pass may have restructured the call graph and refined the
475
156
            // current SCC and/or RefSCC. We need to update our current SCC and
476
156
            // RefSCC pointers to follow these. Also, when the current SCC is
477
156
            // refined, re-run the SCC pass over the newly refined SCC in order
478
156
            // to observe the most precise SCC model available. This inherently
479
156
            // cannot cycle excessively as it only happens when we split SCCs
480
156
            // apart, at most converging on a DAG of single nodes.
481
156
            // FIXME: If we ever start having RefSCC passes, we'll want to
482
156
            // iterate there too.
483
156
            if (UR.UpdatedC)
484
156
              LLVM_DEBUG(dbgs()
485
156
                         << "Re-running SCC passes after a refinement of the "
486
156
                            "current SCC: "
487
156
                         << *UR.UpdatedC << "\n");
488
156
489
156
            // Note that both `C` and `RC` may at this point refer to deleted,
490
156
            // invalid SCC and RefSCCs respectively. But we will short circuit
491
156
            // the processing when we check them in the loop above.
492
156
          } while (UR.UpdatedC);
493
156
        } while (!CWorklist.empty());
494
155
495
155
        // We only need to keep internal inlined edge information within
496
155
        // a RefSCC, clear it to save on space and let the next time we visit
497
155
        // any of these functions have a fresh start.
498
155
        InlinedInternalEdges.clear();
499
155
      } while (!RCWorklist.empty());
500
154
    }
501
87
502
87
    // By definition we preserve the call graph, all SCC analyses, and the
503
87
    // analysis proxies by handling them above and in any nested pass managers.
504
87
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
505
87
    PA.preserve<LazyCallGraphAnalysis>();
506
87
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
507
87
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
508
87
    return PA;
509
87
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
341
35
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
342
35
    // Setup the CGSCC analysis manager from its proxy.
343
35
    CGSCCAnalysisManager &CGAM =
344
35
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
345
35
346
35
    // Get the call graph for this module.
347
35
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
348
35
349
35
    // We keep worklists to allow us to push more work onto the pass manager as
350
35
    // the passes are run.
351
35
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
352
35
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
353
35
354
35
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
355
35
    // iterating off the worklists.
356
35
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
357
35
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
358
35
359
35
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
360
35
        InlinedInternalEdges;
361
35
362
35
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
363
35
                            InvalidSCCSet,       nullptr,   nullptr,
364
35
                            InlinedInternalEdges};
365
35
366
35
    // Request PassInstrumentation from analysis manager, will use it to run
367
35
    // instrumenting callbacks for the passes later.
368
35
    PassInstrumentation PI = AM.getResult<PassInstrumentationAnalysis>(M);
369
35
370
35
    PreservedAnalyses PA = PreservedAnalyses::all();
371
35
    CG.buildRefSCCs();
372
35
    for (auto RCI = CG.postorder_ref_scc_begin(),
373
35
              RCE = CG.postorder_ref_scc_end();
374
55
         RCI != RCE;) {
375
20
      assert(RCWorklist.empty() &&
376
20
             "Should always start with an empty RefSCC worklist");
377
20
      // The postorder_ref_sccs range we are walking is lazily constructed, so
378
20
      // we only push the first one onto the worklist. The worklist allows us
379
20
      // to capture *new* RefSCCs created during transformations.
380
20
      //
381
20
      // We really want to form RefSCCs lazily because that makes them cheaper
382
20
      // to update as the program is simplified and allows us to have greater
383
20
      // cache locality as forming a RefSCC touches all the parts of all the
384
20
      // functions within that RefSCC.
385
20
      //
386
20
      // We also eagerly increment the iterator to the next position because
387
20
      // the CGSCC passes below may delete the current RefSCC.
388
20
      RCWorklist.insert(&*RCI++);
389
20
390
20
      do {
391
20
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
392
20
        if (InvalidRefSCCSet.count(RC)) {
393
0
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
394
0
          continue;
395
0
        }
396
20
397
20
        assert(CWorklist.empty() &&
398
20
               "Should always start with an empty SCC worklist");
399
20
400
20
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
401
20
                          << "\n");
402
20
403
20
        // Push the initial SCCs in reverse post-order as we'll pop off the
404
20
        // back and so see this in post-order.
405
20
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
406
20
          CWorklist.insert(&C);
407
20
408
20
        do {
409
20
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
410
20
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
411
20
          // other RefSCCs in the worklist. The invalid ones are dead and the
412
20
          // other RefSCCs should be queued above, so we just need to skip both
413
20
          // scenarios here.
414
20
          if (InvalidSCCSet.count(C)) {
415
0
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
416
0
            continue;
417
0
          }
418
20
          if (&C->getOuterRefSCC() != RC) {
419
0
            LLVM_DEBUG(dbgs()
420
0
                       << "Skipping an SCC that is now part of some other "
421
0
                          "RefSCC...\n");
422
0
            continue;
423
0
          }
424
20
425
20
          do {
426
20
            // Check that we didn't miss any update scenario.
427
20
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
428
20
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
429
20
            assert(&C->getOuterRefSCC() == RC &&
430
20
                   "Processing an SCC in a different RefSCC!");
431
20
432
20
            UR.UpdatedRC = nullptr;
433
20
            UR.UpdatedC = nullptr;
434
20
435
20
            // Check the PassInstrumentation's BeforePass callbacks before
436
20
            // running the pass, skip its execution completely if asked to
437
20
            // (callback returns false).
438
20
            if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
439
0
              continue;
440
20
441
20
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
442
20
443
20
            if (UR.InvalidatedSCCs.count(C))
444
0
              PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
445
20
            else
446
20
              PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);
447
20
448
20
            // Update the SCC and RefSCC if necessary.
449
20
            C = UR.UpdatedC ? UR.UpdatedC : C;
450
20
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
451
20
452
20
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
453
20
            // the current SCC may simply need to be skipped if invalid.
454
20
            if (UR.InvalidatedSCCs.count(C)) {
455
0
              LLVM_DEBUG(dbgs()
456
0
                         << "Skipping invalidated root or island SCC!\n");
457
0
              break;
458
0
            }
459
20
            // Check that we didn't miss any update scenario.
460
20
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
461
20
462
20
            // We handle invalidating the CGSCC analysis manager's information
463
20
            // for the (potentially updated) SCC here. Note that any other SCCs
464
20
            // whose structure has changed should have been invalidated by
465
20
            // whatever was updating the call graph. This SCC gets invalidated
466
20
            // late as it contains the nodes that were actively being
467
20
            // processed.
468
20
            CGAM.invalidate(*C, PassPA);
469
20
470
20
            // Then intersect the preserved set so that invalidation of module
471
20
            // analyses will eventually occur when the module pass completes.
472
20
            PA.intersect(std::move(PassPA));
473
20
474
20
            // The pass may have restructured the call graph and refined the
475
20
            // current SCC and/or RefSCC. We need to update our current SCC and
476
20
            // RefSCC pointers to follow these. Also, when the current SCC is
477
20
            // refined, re-run the SCC pass over the newly refined SCC in order
478
20
            // to observe the most precise SCC model available. This inherently
479
20
            // cannot cycle excessively as it only happens when we split SCCs
480
20
            // apart, at most converging on a DAG of single nodes.
481
20
            // FIXME: If we ever start having RefSCC passes, we'll want to
482
20
            // iterate there too.
483
20
            if (UR.UpdatedC)
484
20
              LLVM_DEBUG(dbgs()
485
20
                         << "Re-running SCC passes after a refinement of the "
486
20
                            "current SCC: "
487
20
                         << *UR.UpdatedC << "\n");
488
20
489
20
            // Note that both `C` and `RC` may at this point refer to deleted,
490
20
            // invalid SCC and RefSCCs respectively. But we will short circuit
491
20
            // the processing when we check them in the loop above.
492
20
          } while (UR.UpdatedC);
493
20
        } while (!CWorklist.empty());
494
20
495
20
        // We only need to keep internal inlined edge information within
496
20
        // a RefSCC, clear it to save on space and let the next time we visit
497
20
        // any of these functions have a fresh start.
498
20
        InlinedInternalEdges.clear();
499
20
      } while (!RCWorklist.empty());
500
20
    }
501
35
502
35
    // By definition we preserve the call graph, all SCC analyses, and the
503
35
    // analysis proxies by handling them above and in any nested pass managers.
504
35
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
505
35
    PA.preserve<LazyCallGraphAnalysis>();
506
35
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
507
35
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
508
35
    return PA;
509
35
  }
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>::run(llvm::Module&, llvm::AnalysisManager<llvm::Module>&)
Line
Count
Source
341
17
  PreservedAnalyses run(Module &M, ModuleAnalysisManager &AM) {
342
17
    // Setup the CGSCC analysis manager from its proxy.
343
17
    CGSCCAnalysisManager &CGAM =
344
17
        AM.getResult<CGSCCAnalysisManagerModuleProxy>(M).getManager();
345
17
346
17
    // Get the call graph for this module.
347
17
    LazyCallGraph &CG = AM.getResult<LazyCallGraphAnalysis>(M);
348
17
349
17
    // We keep worklists to allow us to push more work onto the pass manager as
350
17
    // the passes are run.
351
17
    SmallPriorityWorklist<LazyCallGraph::RefSCC *, 1> RCWorklist;
352
17
    SmallPriorityWorklist<LazyCallGraph::SCC *, 1> CWorklist;
353
17
354
17
    // Keep sets for invalidated SCCs and RefSCCs that should be skipped when
355
17
    // iterating off the worklists.
356
17
    SmallPtrSet<LazyCallGraph::RefSCC *, 4> InvalidRefSCCSet;
357
17
    SmallPtrSet<LazyCallGraph::SCC *, 4> InvalidSCCSet;
358
17
359
17
    SmallDenseSet<std::pair<LazyCallGraph::Node *, LazyCallGraph::SCC *>, 4>
360
17
        InlinedInternalEdges;
361
17
362
17
    CGSCCUpdateResult UR = {RCWorklist,          CWorklist, InvalidRefSCCSet,
363
17
                            InvalidSCCSet,       nullptr,   nullptr,
364
17
                            InlinedInternalEdges};
365
17
366
17
    // Request PassInstrumentation from analysis manager, will use it to run
367
17
    // instrumenting callbacks for the passes later.
368
17
    PassInstrumentation PI = AM.getResult<PassInstrumentationAnalysis>(M);
369
17
370
17
    PreservedAnalyses PA = PreservedAnalyses::all();
371
17
    CG.buildRefSCCs();
372
17
    for (auto RCI = CG.postorder_ref_scc_begin(),
373
17
              RCE = CG.postorder_ref_scc_end();
374
28
         RCI != RCE;) {
375
11
      assert(RCWorklist.empty() &&
376
11
             "Should always start with an empty RefSCC worklist");
377
11
      // The postorder_ref_sccs range we are walking is lazily constructed, so
378
11
      // we only push the first one onto the worklist. The worklist allows us
379
11
      // to capture *new* RefSCCs created during transformations.
380
11
      //
381
11
      // We really want to form RefSCCs lazily because that makes them cheaper
382
11
      // to update as the program is simplified and allows us to have greater
383
11
      // cache locality as forming a RefSCC touches all the parts of all the
384
11
      // functions within that RefSCC.
385
11
      //
386
11
      // We also eagerly increment the iterator to the next position because
387
11
      // the CGSCC passes below may delete the current RefSCC.
388
11
      RCWorklist.insert(&*RCI++);
389
11
390
11
      do {
391
11
        LazyCallGraph::RefSCC *RC = RCWorklist.pop_back_val();
392
11
        if (InvalidRefSCCSet.count(RC)) {
393
0
          LLVM_DEBUG(dbgs() << "Skipping an invalid RefSCC...\n");
394
0
          continue;
395
0
        }
396
11
397
11
        assert(CWorklist.empty() &&
398
11
               "Should always start with an empty SCC worklist");
399
11
400
11
        LLVM_DEBUG(dbgs() << "Running an SCC pass across the RefSCC: " << *RC
401
11
                          << "\n");
402
11
403
11
        // Push the initial SCCs in reverse post-order as we'll pop off the
404
11
        // back and so see this in post-order.
405
11
        for (LazyCallGraph::SCC &C : llvm::reverse(*RC))
406
11
          CWorklist.insert(&C);
407
11
408
11
        do {
409
11
          LazyCallGraph::SCC *C = CWorklist.pop_back_val();
410
11
          // Due to call graph mutations, we may have invalid SCCs or SCCs from
411
11
          // other RefSCCs in the worklist. The invalid ones are dead and the
412
11
          // other RefSCCs should be queued above, so we just need to skip both
413
11
          // scenarios here.
414
11
          if (InvalidSCCSet.count(C)) {
415
0
            LLVM_DEBUG(dbgs() << "Skipping an invalid SCC...\n");
416
0
            continue;
417
0
          }
418
11
          if (&C->getOuterRefSCC() != RC) {
419
0
            LLVM_DEBUG(dbgs()
420
0
                       << "Skipping an SCC that is now part of some other "
421
0
                          "RefSCC...\n");
422
0
            continue;
423
0
          }
424
11
425
11
          do {
426
11
            // Check that we didn't miss any update scenario.
427
11
            assert(!InvalidSCCSet.count(C) && "Processing an invalid SCC!");
428
11
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
429
11
            assert(&C->getOuterRefSCC() == RC &&
430
11
                   "Processing an SCC in a different RefSCC!");
431
11
432
11
            UR.UpdatedRC = nullptr;
433
11
            UR.UpdatedC = nullptr;
434
11
435
11
            // Check the PassInstrumentation's BeforePass callbacks before
436
11
            // running the pass, skip its execution completely if asked to
437
11
            // (callback returns false).
438
11
            if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
439
0
              continue;
440
11
441
11
            PreservedAnalyses PassPA = Pass.run(*C, CGAM, CG, UR);
442
11
443
11
            if (UR.InvalidatedSCCs.count(C))
444
0
              PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
445
11
            else
446
11
              PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);
447
11
448
11
            // Update the SCC and RefSCC if necessary.
449
11
            C = UR.UpdatedC ? UR.UpdatedC : C;
450
11
            RC = UR.UpdatedRC ? UR.UpdatedRC : RC;
451
11
452
11
            // If the CGSCC pass wasn't able to provide a valid updated SCC,
453
11
            // the current SCC may simply need to be skipped if invalid.
454
11
            if (UR.InvalidatedSCCs.count(C)) {
455
0
              LLVM_DEBUG(dbgs()
456
0
                         << "Skipping invalidated root or island SCC!\n");
457
0
              break;
458
0
            }
459
11
            // Check that we didn't miss any update scenario.
460
11
            assert(C->begin() != C->end() && "Cannot have an empty SCC!");
461
11
462
11
            // We handle invalidating the CGSCC analysis manager's information
463
11
            // for the (potentially updated) SCC here. Note that any other SCCs
464
11
            // whose structure has changed should have been invalidated by
465
11
            // whatever was updating the call graph. This SCC gets invalidated
466
11
            // late as it contains the nodes that were actively being
467
11
            // processed.
468
11
            CGAM.invalidate(*C, PassPA);
469
11
470
11
            // Then intersect the preserved set so that invalidation of module
471
11
            // analyses will eventually occur when the module pass completes.
472
11
            PA.intersect(std::move(PassPA));
473
11
474
11
            // The pass may have restructured the call graph and refined the
475
11
            // current SCC and/or RefSCC. We need to update our current SCC and
476
11
            // RefSCC pointers to follow these. Also, when the current SCC is
477
11
            // refined, re-run the SCC pass over the newly refined SCC in order
478
11
            // to observe the most precise SCC model available. This inherently
479
11
            // cannot cycle excessively as it only happens when we split SCCs
480
11
            // apart, at most converging on a DAG of single nodes.
481
11
            // FIXME: If we ever start having RefSCC passes, we'll want to
482
11
            // iterate there too.
483
11
            if (UR.UpdatedC)
484
11
              LLVM_DEBUG(dbgs()
485
11
                         << "Re-running SCC passes after a refinement of the "
486
11
                            "current SCC: "
487
11
                         << *UR.UpdatedC << "\n");
488
11
489
11
            // Note that both `C` and `RC` may at this point refer to deleted,
490
11
            // invalid SCC and RefSCCs respectively. But we will short circuit
491
11
            // the processing when we check them in the loop above.
492
11
          } while (UR.UpdatedC);
493
11
        } while (!CWorklist.empty());
494
11
495
11
        // We only need to keep internal inlined edge information within
496
11
        // a RefSCC, clear it to save on space and let the next time we visit
497
11
        // any of these functions have a fresh start.
498
11
        InlinedInternalEdges.clear();
499
11
      } while (!RCWorklist.empty());
500
11
    }
501
17
502
17
    // By definition we preserve the call graph, all SCC analyses, and the
503
17
    // analysis proxies by handling them above and in any nested pass managers.
504
17
    PA.preserveSet<AllAnalysesOn<LazyCallGraph::SCC>>();
505
17
    PA.preserve<LazyCallGraphAnalysis>();
506
17
    PA.preserve<CGSCCAnalysisManagerModuleProxy>();
507
17
    PA.preserve<FunctionAnalysisManagerModuleProxy>();
508
17
    return PA;
509
17
  }
510
511
private:
512
  CGSCCPassT Pass;
513
};
514
515
/// A function to deduce a CGSCC pass type and wrap it in the
516
/// templated adaptor.
517
template <typename CGSCCPassT>
518
ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>
519
351
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
520
351
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
521
351
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >(llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&>)
Line
Count
Source
519
212
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
520
212
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
521
212
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > > llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> > >(llvm::DevirtSCCRepeatedPass<llvm::PassManager<llvm::LazyCallGraph::SCC, llvm::AnalysisManager<llvm::LazyCallGraph::SCC, llvm::LazyCallGraph&>, llvm::LazyCallGraph&, llvm::CGSCCUpdateResult&> >)
Line
Count
Source
519
87
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
520
87
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
521
87
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass> llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::PostOrderFunctionAttrsPass>(llvm::PostOrderFunctionAttrsPass)
Line
Count
Source
519
35
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
520
35
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
521
35
}
llvm::ModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass> llvm::createModuleToPostOrderCGSCCPassAdaptor<llvm::InlinerPass>(llvm::InlinerPass)
Line
Count
Source
519
17
createModuleToPostOrderCGSCCPassAdaptor(CGSCCPassT Pass) {
520
17
  return ModuleToPostOrderCGSCCPassAdaptor<CGSCCPassT>(std::move(Pass));
521
17
}
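The deduction helper above, together with the adaptor, is how a CGSCC pipeline is normally nested inside a module pipeline. The following is a minimal usage sketch, not part of this header or report; the helper name buildInlinerPipeline is purely illustrative, and it assumes the listed new-pass-manager headers are available.

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/IPO/Inliner.h"

using namespace llvm;

// Illustrative helper (not from LLVM): build a module pipeline that walks the
// call graph's SCCs in post-order and runs the inliner on each one, the same
// nesting exercised by the InlinerPass instantiation recorded above.
static ModulePassManager buildInlinerPipeline() {
  CGSCCPassManager CGPM;
  CGPM.addPass(InlinerPass());

  ModulePassManager MPM;
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(std::move(CGPM)));
  return MPM;
}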
522
523
/// A proxy from a \c FunctionAnalysisManager to an \c SCC.
524
///
525
/// When a module pass runs and triggers invalidation, both the CGSCC and
526
/// Function analysis manager proxies on the module get an invalidation event.
527
/// We don't want to fully duplicate responsibility for most of the
528
/// invalidation logic. Instead, this layer is only responsible for SCC-local
529
/// invalidation events. We work with the module's FunctionAnalysisManager to
530
/// invalidate function analyses.
531
class FunctionAnalysisManagerCGSCCProxy
532
    : public AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy> {
533
public:
534
  class Result {
535
  public:
536
1.19k
    explicit Result(FunctionAnalysisManager &FAM) : FAM(&FAM) {}
537
538
    /// Accessor for the analysis manager.
539
2.28k
    FunctionAnalysisManager &getManager() { return *FAM; }
540
541
    bool invalidate(LazyCallGraph::SCC &C, const PreservedAnalyses &PA,
542
                    CGSCCAnalysisManager::Invalidator &Inv);
543
544
  private:
545
    FunctionAnalysisManager *FAM;
546
  };
547
548
  /// Computes the \c FunctionAnalysisManager and stores it in the result proxy.
549
  Result run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM, LazyCallGraph &);
550
551
private:
552
  friend AnalysisInfoMixin<FunctionAnalysisManagerCGSCCProxy>;
553
554
  static AnalysisKey Key;
555
};
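As a usage sketch of the proxy above: a CGSCC pass reaches cached function analyses through this proxy rather than through the module layer. The pass name SCCDomTreeQueryPass and its body are hypothetical and only illustrate the query pattern; DominatorTreeAnalysis is a stock LLVM function analysis assumed to be registered.

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/Analysis/LazyCallGraph.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"

namespace {

// Hypothetical CGSCC pass: for every function in the SCC, query a cached
// function analysis via the FunctionAnalysisManagerCGSCCProxy result.
struct SCCDomTreeQueryPass : llvm::PassInfoMixin<SCCDomTreeQueryPass> {
  llvm::PreservedAnalyses run(llvm::LazyCallGraph::SCC &C,
                              llvm::CGSCCAnalysisManager &AM,
                              llvm::LazyCallGraph &CG,
                              llvm::CGSCCUpdateResult &UR) {
    // The proxy result hands back the module's FunctionAnalysisManager,
    // already wired up for SCC-local invalidation.
    llvm::FunctionAnalysisManager &FAM =
        AM.getResult<llvm::FunctionAnalysisManagerCGSCCProxy>(C, CG)
            .getManager();
    for (llvm::LazyCallGraph::Node &N : C) {
      llvm::Function &F = N.getFunction();
      // Read-only use of a function analysis; nothing is invalidated.
      (void)FAM.getResult<llvm::DominatorTreeAnalysis>(F);
    }
    return llvm::PreservedAnalyses::all();
  }
};

} // end anonymous namespace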
556
557
extern template class OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;
558
559
/// A proxy from a \c CGSCCAnalysisManager to a \c Function.
560
using CGSCCAnalysisManagerFunctionProxy =
561
    OuterAnalysisManagerProxy<CGSCCAnalysisManager, Function>;
562
563
/// Helper to update the call graph after running a function pass.
564
///
565
/// Function passes can only mutate the call graph in specific ways. This
566
/// routine provides a helper that updates the call graph in those ways
567
/// including returning whether any changes were made and populating a CG
568
/// update result struct for the overall CGSCC walk.
569
LazyCallGraph::SCC &updateCGAndAnalysisManagerForFunctionPass(
570
    LazyCallGraph &G, LazyCallGraph::SCC &C, LazyCallGraph::Node &N,
571
    CGSCCAnalysisManager &AM, CGSCCUpdateResult &UR);
572
573
/// Adaptor that maps from a SCC to its functions.
574
///
575
/// Designed to allow composition of a FunctionPass(Manager) and
576
/// a CGSCCPassManager. Note that if this pass is constructed with a pointer
577
/// to a \c CGSCCAnalysisManager it will run the
578
/// \c FunctionAnalysisManagerCGSCCProxy analysis prior to running the function
579
/// pass over the SCC to enable a \c FunctionAnalysisManager to be used
580
/// within this run safely.
581
template <typename FunctionPassT>
582
class CGSCCToFunctionPassAdaptor
583
    : public PassInfoMixin<CGSCCToFunctionPassAdaptor<FunctionPassT>> {
584
public:
585
  explicit CGSCCToFunctionPassAdaptor(FunctionPassT Pass)
586
136
      : Pass(std::move(Pass)) {}
587
588
  // We have to explicitly define all the special member functions because MSVC
589
  // refuses to generate them.
590
  CGSCCToFunctionPassAdaptor(const CGSCCToFunctionPassAdaptor &Arg)
591
      : Pass(Arg.Pass) {}
592
593
  CGSCCToFunctionPassAdaptor(CGSCCToFunctionPassAdaptor &&Arg)
594
272
      : Pass(std::move(Arg.Pass)) {}
595
596
  friend void swap(CGSCCToFunctionPassAdaptor &LHS,
597
                   CGSCCToFunctionPassAdaptor &RHS) {
598
    std::swap(LHS.Pass, RHS.Pass);
599
  }
600
601
  CGSCCToFunctionPassAdaptor &operator=(CGSCCToFunctionPassAdaptor RHS) {
602
    swap(*this, RHS);
603
    return *this;
604
  }
605
606
  /// Runs the function pass across every function in the SCC.
607
  PreservedAnalyses run(LazyCallGraph::SCC &C, CGSCCAnalysisManager &AM,
608
400
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
609
400
    // Setup the function analysis manager from its proxy.
610
400
    FunctionAnalysisManager &FAM =
611
400
        AM.getResult<FunctionAnalysisManagerCGSCCProxy>(C, CG).getManager();
612
400
613
400
    SmallVector<LazyCallGraph::Node *, 4> Nodes;
614
400
    for (LazyCallGraph::Node &N : C)
615
499
      Nodes.push_back(&N);
616
400
617
400
    // The SCC may get split while we are optimizing functions due to deleting
618
400
    // edges. If this happens, the current SCC can shift, so keep track of
619
400
    // a pointer we can overwrite.
620
400
    LazyCallGraph::SCC *CurrentC = &C;
621
400
622
400
    LLVM_DEBUG(dbgs() << "Running function passes across an SCC: " << C
623
400
                      << "\n");
624
400
625
400
    PreservedAnalyses PA = PreservedAnalyses::all();
626
499
    for (LazyCallGraph::Node *N : Nodes) {
627
499
      // Skip nodes from other SCCs. These may have been split out during
628
499
      // processing. We'll eventually visit those SCCs and pick up the nodes
629
499
      // there.
630
499
      if (CG.lookupSCC(*N) != CurrentC)
631
40
        continue;
632
459
633
459
      Function &F = N->getFunction();
634
459
635
459
      PassInstrumentation PI = FAM.getResult<PassInstrumentationAnalysis>(F);
636
459
      if (!PI.runBeforePass<Function>(Pass, F))
637
0
        continue;
638
459
639
459
      PreservedAnalyses PassPA = Pass.run(F, FAM);
640
459
641
459
      PI.runAfterPass<Function>(Pass, F);
642
459
643
459
      // We know that the function pass couldn't have invalidated any other
644
459
      // function's analyses (that's the contract of a function pass), so
645
459
      // directly handle the function analysis manager's invalidation here.
646
459
      FAM.invalidate(F, PassPA);
647
459
648
459
      // Then intersect the preserved set so that invalidation of module
649
459
      // analyses will eventually occur when the module pass completes.
650
459
      PA.intersect(std::move(PassPA));
651
459
652
459
      // If the call graph hasn't been preserved, update it based on this
653
459
      // function pass. This may also update the current SCC to point to
654
459
      // a smaller, more refined SCC.
655
459
      auto PAC = PA.getChecker<LazyCallGraphAnalysis>();
656
459
      if (!PAC.preserved() && !PAC.preservedSet<AllAnalysesOn<Module>>()) {
657
172
        CurrentC = &updateCGAndAnalysisManagerForFunctionPass(CG, *CurrentC, *N,
658
172
                                                              AM, UR);
659
172
        assert(
660
172
            CG.lookupSCC(*N) == CurrentC &&
661
172
            "Current SCC not updated to the SCC containing the current node!");
662
172
      }
663
459
    }
664
400
665
400
    // By definition we preserve the proxy. And we preserve all analyses on
666
400
    // Functions. This precludes *any* invalidation of function analyses by the
667
400
    // proxy, but that's OK because we've taken care to invalidate analyses in
668
400
    // the function analysis manager incrementally above.
669
400
    PA.preserveSet<AllAnalysesOn<Function>>();
670
400
    PA.preserve<FunctionAnalysisManagerCGSCCProxy>();
671
400
672
400
    // We've also ensured that we updated the call graph along the way.
673
400
    PA.preserve<LazyCallGraphAnalysis>();
674
400
675
400
    return PA;
676
400
  }
677
678
private:
679
  FunctionPassT Pass;
680
};
681
682
/// A function to deduce a function pass type and wrap it in the
683
/// templated adaptor.
684
template <typename FunctionPassT>
685
CGSCCToFunctionPassAdaptor<FunctionPassT>
686
136
createCGSCCToFunctionPassAdaptor(FunctionPassT Pass) {
687
136
  return CGSCCToFunctionPassAdaptor<FunctionPassT>(std::move(Pass));
688
136
}
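A minimal sketch of composing a function pipeline under the CGSCC walk using the adaptor above; the helper name is illustrative, and SimplifyCFGPass is simply a convenient stock function pass assumed to be linked in.

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/Scalar/SimplifyCFG.h"

using namespace llvm;

// Illustrative helper (not from LLVM): run a function pass over each function
// of every SCC; the adaptor keeps the call graph and analyses up to date via
// updateCGAndAnalysisManagerForFunctionPass, as its run() above shows.
static ModulePassManager buildFunctionSimplificationPipeline() {
  FunctionPassManager FPM;
  FPM.addPass(SimplifyCFGPass());

  CGSCCPassManager CGPM;
  CGPM.addPass(createCGSCCToFunctionPassAdaptor(std::move(FPM)));

  ModulePassManager MPM;
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(std::move(CGPM)));
  return MPM;
}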
689
690
/// A helper that repeats an SCC pass each time an indirect call is refined to
691
/// a direct call by that pass.
692
///
693
/// While the CGSCC pass manager works to re-visit SCCs and RefSCCs as they
694
/// change shape, we may also want to repeat an SCC pass if it simply refines
695
/// an indirect call to a direct call, even if doing so does not alter the
696
/// shape of the graph. Note that this only pertains to direct calls to
697
/// functions where IPO across the SCC may be able to compute more precise
698
/// results. For intrinsics, we assume scalar optimizations already can fully
699
/// reason about them.
700
///
701
/// This repetition has the potential to be very large however, as each one
702
/// might refine a single call site. As a consequence, in practice we use an
703
/// upper bound on the number of repetitions to limit things.
704
template <typename PassT>
705
class DevirtSCCRepeatedPass
706
    : public PassInfoMixin<DevirtSCCRepeatedPass<PassT>> {
707
public:
708
  explicit DevirtSCCRepeatedPass(PassT Pass, int MaxIterations)
709
90
      : Pass(std::move(Pass)), MaxIterations(MaxIterations) {}
710
711
  /// Runs the wrapped pass up to \c MaxIterations on the SCC, iterating
712
  /// whenever an indirect call is refined.
713
  PreservedAnalyses run(LazyCallGraph::SCC &InitialC, CGSCCAnalysisManager &AM,
714
170
                        LazyCallGraph &CG, CGSCCUpdateResult &UR) {
715
170
    PreservedAnalyses PA = PreservedAnalyses::all();
716
170
    PassInstrumentation PI =
717
170
        AM.getResult<PassInstrumentationAnalysis>(InitialC, CG);
718
170
719
170
    // The SCC may be refined while we are running passes over it, so set up
720
170
    // a pointer that we can update.
721
170
    LazyCallGraph::SCC *C = &InitialC;
722
170
723
170
    // Collect value handles for all of the indirect call sites.
724
170
    SmallVector<WeakTrackingVH, 8> CallHandles;
725
170
726
170
    // Struct to track the counts of direct and indirect calls in each function
727
170
    // of the SCC.
728
170
    struct CallCount {
729
170
      int Direct;
730
170
      int Indirect;
731
170
    };
732
170
733
170
    // Put value handles on all of the indirect calls and return the number of
734
170
    // direct calls for each function in the SCC.
735
170
    auto ScanSCC = [](LazyCallGraph::SCC &C,
736
347
                      SmallVectorImpl<WeakTrackingVH> &CallHandles) {
737
347
      assert(CallHandles.empty() && "Must start with a clear set of handles.");
738
347
739
347
      SmallVector<CallCount, 4> CallCounts;
740
355
      for (LazyCallGraph::Node &N : C) {
741
355
        CallCounts.push_back({0, 0});
742
355
        CallCount &Count = CallCounts.back();
743
355
        for (Instruction &I : instructions(N.getFunction()))
744
1.57k
          if (auto CS = CallSite(&I)) {
745
316
            if (CS.getCalledFunction()) {
746
288
              ++Count.Direct;
747
288
            } else {
748
28
              ++Count.Indirect;
749
28
              CallHandles.push_back(WeakTrackingVH(&I));
750
28
            }
751
316
          }
752
355
      }
753
347
754
347
      return CallCounts;
755
347
    };
756
170
757
170
    // Populate the initial call handles and get the initial call counts.
758
170
    auto CallCounts = ScanSCC(*C, CallHandles);
759
170
760
178
    for (int Iteration = 0;; ++Iteration) {
761
178
762
178
      if (!PI.runBeforePass<LazyCallGraph::SCC>(Pass, *C))
763
0
        continue;
764
178
765
178
      PreservedAnalyses PassPA = Pass.run(*C, AM, CG, UR);
766
178
767
178
      if (UR.InvalidatedSCCs.count(C))
768
0
        PI.runAfterPassInvalidated<LazyCallGraph::SCC>(Pass);
769
178
      else
770
178
        PI.runAfterPass<LazyCallGraph::SCC>(Pass, *C);
771
178
772
178
      // If the SCC structure has changed, bail immediately and let the outer
773
178
      // CGSCC layer handle any iteration to reflect the refined structure.
774
178
      if (UR.UpdatedC && UR.UpdatedC != C) {
775
1
        PA.intersect(std::move(PassPA));
776
1
        break;
777
1
      }
778
177
779
177
      // Check that we didn't miss any update scenario.
780
177
      assert(!UR.InvalidatedSCCs.count(C) && "Processing an invalid SCC!");
781
177
      assert(C->begin() != C->end() && "Cannot have an empty SCC!");
782
177
      assert((int)CallCounts.size() == C->size() &&
783
177
             "Cannot have changed the size of the SCC!");
784
177
785
177
      // Check whether any of the handles were devirtualized.
786
177
      auto IsDevirtualizedHandle = [&](WeakTrackingVH &CallH) {
787
19
        if (!CallH)
788
2
          return false;
789
17
        auto CS = CallSite(CallH);
790
17
        if (!CS)
791
0
          return false;
792
17
793
17
        // If the call is still indirect, leave it alone.
794
17
        Function *F = CS.getCalledFunction();
795
17
        if (!F)
796
10
          return false;
797
7
798
7
        LLVM_DEBUG(dbgs() << "Found devirutalized call from "
799
7
                          << CS.getParent()->getParent()->getName() << " to "
800
7
                          << F->getName() << "\n");
801
7
802
7
        // We now have a direct call where previously we had an indirect call,
803
7
        // so iterate to process this devirtualization site.
804
7
        return true;
805
7
      };
806
177
      bool Devirt = llvm::any_of(CallHandles, IsDevirtualizedHandle);
807
177
808
177
      // Rescan to build up a new set of handles and count how many direct
809
177
      // calls remain. If we decide to iterate, this also sets up the input to
810
177
      // the next iteration.
811
177
      CallHandles.clear();
812
177
      auto NewCallCounts = ScanSCC(*C, CallHandles);
813
177
814
177
      // If we haven't found an explicit devirtualization already see if we
815
177
      // have decreased the number of indirect calls and increased the number
816
177
      // of direct calls for any function in the SCC. This can be fooled by all
817
177
      // manner of transformations such as DCE and other things, but seems to
818
177
      // work well in practice.
819
177
      if (!Devirt)
820
339
        for (int i = 0, Size = C->size(); i < Size; ++i)
821
171
          if (CallCounts[i].Indirect > NewCallCounts[i].Indirect &&
822
171
              CallCounts[i].Direct < NewCallCounts[i].Direct) {
823
2
            Devirt = true;
824
2
            break;
825
2
          }
826
177
827
177
      if (!Devirt) {
828
168
        PA.intersect(std::move(PassPA));
829
168
        break;
830
168
      }
831
9
832
9
      // Otherwise, if we've already hit our max, we're done.
833
9
      if (Iteration >= MaxIterations) {
834
1
        LLVM_DEBUG(
835
1
            dbgs() << "Found another devirtualization after hitting the max "
836
1
                      "number of repetitions ("
837
1
                   << MaxIterations << ") on SCC: " << *C << "\n");
838
1
        PA.intersect(std::move(PassPA));
839
1
        break;
840
1
      }
841
8
842
8
      LLVM_DEBUG(
843
8
          dbgs()
844
8
          << "Repeating an SCC pass after finding a devirtualization in: " << *C
845
8
          << "\n");
846
8
847
8
      // Move over the new call counts in preparation for iterating.
848
8
      CallCounts = std::move(NewCallCounts);
849
8
850
8
      // Update the analysis manager with each run and intersect the total set
851
8
      // of preserved analyses so we're ready to iterate.
852
8
      AM.invalidate(*C, PassPA);
853
8
      PA.intersect(std::move(PassPA));
854
8
    }
855
170
856
170
    // Note that we don't add any preserved entries here unlike a more normal
857
170
    // "pass manager" because we only handle invalidation *between* iterations,
858
170
    // not after the last iteration.
859
170
    return PA;
860
170
  }
861
862
private:
863
  PassT Pass;
864
  int MaxIterations;
865
};
866
867
/// A function to deduce a CGSCC pass type and wrap it in the
868
/// templated adaptor.
869
template <typename PassT>
870
DevirtSCCRepeatedPass<PassT> createDevirtSCCRepeatedPass(PassT Pass,
871
90
                                                         int MaxIterations) {
872
90
  return DevirtSCCRepeatedPass<PassT>(std::move(Pass), MaxIterations);
873
90
}
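A minimal sketch of the usual wrapping order for the devirtualization repeater above; the iteration cap of 4 and the use of InlinerPass as the wrapped pipeline are illustrative assumptions, matching the ModuleToPostOrderCGSCCPassAdaptor<DevirtSCCRepeatedPass<...>> instantiation recorded earlier in this report only in shape.

#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/IPO/Inliner.h"

using namespace llvm;

// Illustrative helper (not from LLVM): repeat the CGSCC pipeline whenever it
// devirtualizes a call, up to a fixed cap, then drive it across the module's
// SCCs in post-order.
static ModulePassManager buildDevirtRepeatedPipeline() {
  CGSCCPassManager CGPM;
  CGPM.addPass(InlinerPass());

  ModulePassManager MPM;
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(
      createDevirtSCCRepeatedPass(std::move(CGPM), /*MaxIterations=*/4)));
  return MPM;
}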
874
875
// Clear out the debug logging macro.
876
#undef DEBUG_TYPE
877
878
} // end namespace llvm
879
880
#endif // LLVM_ANALYSIS_CGSCCPASSMANAGER_H