MetisPartitioner


Section: C Library Functions (3) Updated: Thu Apr 7 2011
 

NAME

MetisPartitioner - Partitions the elements of a MeshBase using the Metis graph partitioner.

SYNOPSIS


#include <metis_partitioner.h>

Inherits Partitioner.  

Public Member Functions


MetisPartitioner ()

virtual AutoPtr< Partitioner > clone () const

void partition (MeshBase &mesh, const unsigned int n=libMesh::n_processors())

void repartition (MeshBase &mesh, const unsigned int n=libMesh::n_processors())
 

Static Public Member Functions


static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n=libMesh::n_processors())

static void set_parent_processor_ids (MeshBase &mesh)

static void set_node_processor_ids (MeshBase &mesh)
 

Protected Member Functions


virtual void _do_partition (MeshBase &mesh, const unsigned int n)

void single_partition (MeshBase &mesh)

virtual void _do_repartition (MeshBase &mesh, const unsigned int n)
 

Static Protected Attributes


static const unsigned int communication_blocksize = 1000000
 

Detailed Description

The MetisPartitioner uses the Metis graph partitioner to partition the elements.

Definition at line 39 of file metis_partitioner.h.  

Constructor & Destructor Documentation

 

MetisPartitioner::MetisPartitioner () [inline]

Constructor.

Definition at line 46 of file metis_partitioner.h.

Referenced by clone().

{}
 

Member Function Documentation

 

void MetisPartitioner::_do_partition (MeshBase &mesh, const unsigned int n_pieces) [protected, virtual]

Partition the MeshBase into n_pieces subdomains.

Implements Partitioner.

Definition at line 51 of file metis_partitioner.C.

References Elem::active(), MeshBase::active_elements_begin(), MeshBase::active_elements_end(), Elem::active_family_tree(), MeshTools::bounding_box(), MeshCommunication::find_global_indices(), MeshBase::is_serial(), MeshBase::n_active_elem(), Elem::n_neighbors(), Elem::n_nodes(), Elem::neighbor(), Partitioner::partition(), DofObject::processor_id(), Partitioner::single_partition(), and Elem::which_neighbor_am_i().

{
  libmesh_assert (n_pieces > 0);
  libmesh_assert (mesh.is_serial());

  // Check for an easy return
  if (n_pieces == 1)
    {
      this->single_partition (mesh);
      return;
    }
  
// What to do if the Metis library IS NOT present
#ifndef LIBMESH_HAVE_METIS

  libmesh_here();
  std::cerr << "ERROR: The library has been built without"    << std::endl
            << "Metis support.  Using a space-filling curve"  << std::endl
            << "partitioner instead!"                         << std::endl;

  SFCPartitioner sfcp;

  sfcp.partition (mesh, n_pieces);
  
// What to do if the Metis library IS present
#else
  
  START_LOG("partition()", "MetisPartitioner");

  const unsigned int n_active_elem = mesh.n_active_elem();
  
  // build the graph  
  std::vector<int> options(5);
  std::vector<int> vwgt(n_active_elem);
  std::vector<int> part(n_active_elem);
  
  int
    n = static_cast<int>(n_active_elem),  // number of 'nodes' (elements)
                                          //   in the graph
    wgtflag = 2,                          // weights on vertices only,
                                          //   none on edges
    numflag = 0,                          // C-style 0-based numbering
    nparts  = static_cast<int>(n_pieces), // number of subdomains to create
    edgecut = 0;                          // the numbers of edges cut by the
                                          //   resulting partition
  
  // Set the options
  options[0] = 0; // use default options

  // Metis will only consider the active elements.
  // We need to map the active element ids into a
  // contiguous range.  Further, we want the unique range indexing to be
  // independent of the element ordering, otherwise a circular dependency
  // can result in which the partitioning depends on the ordering which 
  // depends on the partitioning...
  std::map<const Elem*, unsigned int> global_index_map;
  {
    std::vector<unsigned int> global_index;
    
    MeshBase::element_iterator       it  = mesh.active_elements_begin();
    const MeshBase::element_iterator end = mesh.active_elements_end(); 
    
    MeshCommunication().find_global_indices (MeshTools::bounding_box(mesh),
                                             it, end, global_index);
    
    libmesh_assert (global_index.size() == n_active_elem);    
    
    for (unsigned int cnt=0; it != end; ++it)
      {
        const Elem *elem = *it;
        libmesh_assert (!global_index_map.count(elem));
        
        global_index_map[elem]  = global_index[cnt++];
      }
    libmesh_assert (global_index_map.size() == n_active_elem);    
  }
  
  
  // build the graph in CSR format.  Note that
  // the edges in the graph will correspond to
  // face neighbors
  std::vector<int> xadj, adjncy;
  {
    std::vector<const Elem*> neighbors_offspring;    
    
    MeshBase::element_iterator       elem_it  = mesh.active_elements_begin();
    const MeshBase::element_iterator elem_end = mesh.active_elements_end(); 
    
    // This will be exact when there is no refinement and all the
    // elements are of the same type.
    unsigned int graph_size=0;
    std::vector<std::vector<unsigned int> > graph(n_active_elem);
    
    for (; elem_it != elem_end; ++elem_it)
      {
        const Elem* elem = *elem_it;
        
        libmesh_assert (global_index_map.count(elem));
        
        const unsigned int elem_global_index = 
          global_index_map[elem];
        
        libmesh_assert (elem_global_index < vwgt.size());
        libmesh_assert (elem_global_index < graph.size());

        // maybe there is a better weight?
        // The weight is used to define what a balanced graph is
        vwgt[elem_global_index] = elem->n_nodes(); 

        // Loop over the element's neighbors.  An element
        // adjacency corresponds to a face neighbor
        for (unsigned int ms=0; ms<elem->n_neighbors(); ms++)
          {
            const Elem* neighbor = elem->neighbor(ms);
            
            if (neighbor != NULL)
              {  
                // If the neighbor is active treat it
                // as a connection
                if (neighbor->active())
                  {
                    libmesh_assert (global_index_map.count(neighbor));
                    
                    const unsigned int neighbor_global_index = 
                      global_index_map[neighbor];
                    
                    graph[elem_global_index].push_back(neighbor_global_index);
                    graph_size++;
                  }
  
#ifdef LIBMESH_ENABLE_AMR
                
                // Otherwise we need to find all of the
                // neighbor's children that are connected to
                // us and add them
                else
                  {
                    // The side of the neighbor to which
                    // we are connected
                    const unsigned int ns =
                      neighbor->which_neighbor_am_i (elem);
                    libmesh_assert (ns < neighbor->n_neighbors());
                    
                    // Get all the active children (& grandchildren, etc...)
                    // of the neighbor.
                    neighbor->active_family_tree (neighbors_offspring);
                    
                    // Get all the neighbor's children that
                    // live on that side and are thus connected
                    // to us
                    for (unsigned int nc=0; nc<neighbors_offspring.size(); nc++)
                      {
                        const Elem* child =
                          neighbors_offspring[nc];
                        
                        // This does not assume a level-1 mesh.
                        // Note that since children have sides numbered
                        // coincident with the parent then this is a sufficient test.
                        if (child->neighbor(ns) == elem)
                          {
                            libmesh_assert (child->active());
                            libmesh_assert (global_index_map.count(child));

                            const unsigned int child_global_index =
                              global_index_map[child];
                            
                            graph[elem_global_index].push_back(child_global_index);
                            graph_size++;
                          }
                      }
                  }

#endif /* ifdef LIBMESH_ENABLE_AMR */

              }
          }
      }
    
    // Convert the graph into the format Metis wants
    xadj.reserve(n_active_elem+1);
    adjncy.reserve(graph_size);
    
    for (unsigned int r=0; r<graph.size(); r++)
      {
        xadj.push_back(adjncy.size());
        std::vector<unsigned int> graph_row; // build this empty
        graph_row.swap(graph[r]); // this will deallocate at the end of scope
        adjncy.insert(adjncy.end(),
                      graph_row.begin(),
                      graph_row.end());
      }

    // The end of the adjacency array for the last elem
    xadj.push_back(adjncy.size());
    
    libmesh_assert (adjncy.size() == graph_size);
    libmesh_assert (xadj.size() == n_active_elem+1);    
  } // done building the graph
  
  
  if (adjncy.empty())
    adjncy.push_back(0);
  
  // Select which type of partitioning to create
  
  // Use recursive if the number of partitions is less than or equal to 8
  if (n_pieces <= 8)
    Metis::METIS_PartGraphRecursive(&n, &xadj[0], &adjncy[0], &vwgt[0], NULL,
                                    &wgtflag, &numflag, &nparts, &options[0],
                                    &edgecut, &part[0]);
  
  // Otherwise  use kway
  else
    Metis::METIS_PartGraphKway(&n, &xadj[0], &adjncy[0], &vwgt[0], NULL,
                               &wgtflag, &numflag, &nparts, &options[0],
                               &edgecut, &part[0]);
  
  
  // Assign the returned processor ids.  The part array contains
  // the processor id for each active element, but in terms of
  // the contiguous indexing we defined above
  {
    MeshBase::element_iterator       it  = mesh.active_elements_begin();
    const MeshBase::element_iterator end = mesh.active_elements_end(); 
    
    for (; it!=end; ++it)
      {
        Elem* elem = *it;
        
        libmesh_assert (global_index_map.count(elem));
        
        const unsigned int elem_global_index = 
          global_index_map[elem];
        
        libmesh_assert (elem_global_index < part.size());
        const unsigned int elem_procid =  static_cast<short int>(part[elem_global_index]);
    
        elem->processor_id() = elem_procid;
      }
  }

  STOP_LOG("partition()", "MetisPartitioner");
#endif
}
 

virtual void Partitioner::_do_repartition (MeshBase &mesh, const unsigned int n) [inline, protected, virtual, inherited]

This is the actual re-partitioning method, which can be overloaded in derived classes. Note that the default behavior is to simply call the partition function.

Reimplemented in ParmetisPartitioner.

Definition at line 133 of file partitioner.h.

References Partitioner::_do_partition().

Referenced by Partitioner::repartition().

                                                      { this->_do_partition (mesh, n); }
 

virtual AutoPtr<Partitioner> MetisPartitioner::clone () const [inline, virtual]

Creates a new partitioner of this type and returns it in an AutoPtr.

Implements Partitioner.

Definition at line 52 of file metis_partitioner.h.

References MetisPartitioner().

                                              {
    AutoPtr<Partitioner> cloned_partitioner
      (new MetisPartitioner());
    return cloned_partitioner;
  }
 

void Partitioner::partition (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [inherited]

Partition the MeshBase into n parts. If the user does not specify the number of pieces into which the mesh should be partitioned, the partitioner defaults to the number of processors returned by libMesh::n_processors(). The partitioner currently does not modify the subdomain_id of each element; that number is reserved for things like material properties.

Definition at line 43 of file partitioner.C.

References Partitioner::_do_partition(), MeshBase::is_serial(), std::min(), MeshBase::n_active_elem(), Partitioner::partition_unpartitioned_elements(), MeshBase::set_n_partitions(), Partitioner::set_node_processor_ids(), Partitioner::set_parent_processor_ids(), and Partitioner::single_partition().

Referenced by SFCPartitioner::_do_partition(), _do_partition(), and ParmetisPartitioner::_do_repartition().

{
  // BSK - temporary fix while redistribution is integrated 6/26/2008
  // Uncomment this to not repartition in parallel
   if (!mesh.is_serial())
     return;

  // we cannot partition into more pieces than we have
  // active elements!
  const unsigned int n_parts =
    std::min(mesh.n_active_elem(), n);
  
  // Set the number of partitions in the mesh
  mesh.set_n_partitions()=n_parts;

  if (n_parts == 1)
    {
      this->single_partition (mesh);
      return;
    }
  
  // First assign a temporary partitioning to any unpartitioned elements
  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
  
  // Call the partitioning function
  this->_do_partition(mesh,n_parts);

  // Set the parent's processor ids
  Partitioner::set_parent_processor_ids(mesh);

  // Set the node's processor ids
  Partitioner::set_node_processor_ids(mesh);
}
 

void Partitioner::partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n_subdomains = libMesh::n_processors()) [static, inherited]

This function assigns a temporary partitioning to any elements that have not yet been assigned a processor id, distributing them as evenly as possible across the subdomains.

Definition at line 139 of file partitioner.C.

References MeshTools::bounding_box(), MeshCommunication::find_global_indices(), MeshTools::n_elem(), libMesh::n_processors(), DofObject::processor_id(), MeshBase::unpartitioned_elements_begin(), and MeshBase::unpartitioned_elements_end().

Referenced by Partitioner::partition(), and Partitioner::repartition().

{
  MeshBase::const_element_iterator       it  = mesh.unpartitioned_elements_begin();
  const MeshBase::const_element_iterator end = mesh.unpartitioned_elements_end();

  const unsigned int n_unpartitioned_elements = MeshTools::n_elem (it, end);

  // the unpartitioned elements must exist on all processors. If the range is empty on one
  // it is empty on all, and we can quit right here.
  if (!n_unpartitioned_elements) return;
         
  // find the target subdomain sizes
  std::vector<unsigned int> subdomain_bounds(libMesh::n_processors());

  for (unsigned int pid=0; pid<libMesh::n_processors(); pid++)
    {
      unsigned int tgt_subdomain_size = 0;

      // watch out for the case that n_subdomains < n_processors
      if (pid < n_subdomains)
        {
          tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
      
          if (pid < n_unpartitioned_elements%n_subdomains)
            tgt_subdomain_size++;

        }
      
      //std::cout << 'pid, #= ' << pid << ', ' << tgt_subdomain_size << std::endl;
      if (pid == 0)
        subdomain_bounds[0] = tgt_subdomain_size;
      else
        subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
    }
  
  libmesh_assert (subdomain_bounds.back() == n_unpartitioned_elements);  
  
  // create the unique mapping for all unpartitioned elements independent of partitioning
  // determine the global indexing for all the unpartitioned elements
  std::vector<unsigned int> global_indices;
    
  // Calling this on all processors constructs a unique range in [0,n_unpartitioned_elements).
  // Only the indices for the elements we pass in are returned in the array.
  MeshCommunication().find_global_indices (MeshTools::bounding_box(mesh), it, end, 
                                           global_indices);
  
  for (unsigned int cnt=0; it != end; ++it)
    {
      Elem *elem = *it;
      
      libmesh_assert (cnt < global_indices.size());
      const unsigned int global_index =
        global_indices[cnt++];
      
      libmesh_assert (global_index < subdomain_bounds.back());
      libmesh_assert (global_index < n_unpartitioned_elements);

      const unsigned int subdomain_id =
        std::distance(subdomain_bounds.begin(),
                      std::upper_bound(subdomain_bounds.begin(),
                                       subdomain_bounds.end(),
                                       global_index));
      libmesh_assert (subdomain_id < n_subdomains);
     
      elem->processor_id() = subdomain_id;              
      //std::cout << 'assigning ' << global_index << ' to ' << subdomain_id << std::endl;             
    }
}
 

void Partitioner::repartition (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [inherited]

Repartitions the MeshBase into n parts. This is useful since some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch. The default behavior is to simply call this->partition(n).

Definition at line 82 of file partitioner.C.

References Partitioner::_do_repartition(), std::min(), MeshBase::n_active_elem(), Partitioner::partition_unpartitioned_elements(), MeshBase::set_n_partitions(), Partitioner::set_node_processor_ids(), Partitioner::set_parent_processor_ids(), and Partitioner::single_partition().

{
  // we cannot partition into more pieces than we have
  // active elements!
  const unsigned int n_parts =
    std::min(mesh.n_active_elem(), n);
  
  // Set the number of partitions in the mesh
  mesh.set_n_partitions()=n_parts;

  if (n_parts == 1)
    {
      this->single_partition (mesh);
      return;
    }
  
  // First assign a temporary partitioning to any unpartitioned elements
  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
  
  // Call the partitioning function
  this->_do_repartition(mesh,n_parts);
  
  // Set the parent's processor ids
  Partitioner::set_parent_processor_ids(mesh);
  
  // Set the node's processor ids
  Partitioner::set_node_processor_ids(mesh);
}
 

void Partitioner::set_node_processor_ids (MeshBase &mesh) [static, inherited]

This function is called after partitioning to set the processor IDs for the nodes. By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.

Definition at line 353 of file partitioner.C.

References MeshBase::active_elements_begin(), MeshBase::active_elements_end(), MeshBase::elements_begin(), MeshBase::elements_end(), Elem::get_node(), DofObject::id(), DofObject::invalid_processor_id, DofObject::invalidate_processor_id(), std::min(), MeshTools::n_elem(), Elem::n_nodes(), MeshBase::n_partitions(), libMesh::n_processors(), MeshBase::node_ptr(), MeshBase::nodes_begin(), MeshBase::nodes_end(), MeshBase::not_active_elements_begin(), MeshBase::not_active_elements_end(), libMesh::processor_id(), DofObject::processor_id(), MeshBase::subactive_elements_begin(), MeshBase::subactive_elements_end(), MeshBase::unpartitioned_elements_begin(), and MeshBase::unpartitioned_elements_end().

Referenced by Partitioner::partition(), XdrIO::read(), Partitioner::repartition(), and BoundaryInfo::sync().

{
  START_LOG("set_node_processor_ids()","Partitioner");

  // This function must be run on all processors at once
  parallel_only();

  // If we have any unpartitioned elements at this 
  // stage there is a problem
  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
                            mesh.unpartitioned_elements_end()) == 0);


//   const unsigned int orig_n_local_nodes = mesh.n_local_nodes();

//   std::cerr << '[' << libMesh::processor_id() << ']: orig_n_local_nodes='
//          << orig_n_local_nodes << std::endl;

  // Build up request sets.  Each node is currently owned by a processor because
  // it is connected to an element owned by that processor.  However, during the
  // repartitioning phase that element may have been assigned a new processor id, but
  // it is still resident on the original processor.  We need to know where to look
  // for new ids before assigning new ids, otherwise we may be asking the wrong processors
  // for the wrong information.
  //
  // The only remaining issue is what to do with unpartitioned nodes.  Since they are required
  // to live on all processors we can simply rely on ourselves to number them properly.
  std::vector<std::vector<unsigned int> >
    requested_node_ids(libMesh::n_processors());

  // Loop over all the nodes, count the ones on each processor.  We can skip ourself
  std::vector<unsigned int> ghost_nodes_from_proc(libMesh::n_processors(), 0);

  MeshBase::node_iterator       node_it  = mesh.nodes_begin();
  const MeshBase::node_iterator node_end = mesh.nodes_end();
  
  for (; node_it != node_end; ++node_it)
    {
      Node *node = *node_it;
      libmesh_assert(node);
      const unsigned int current_pid = node->processor_id();
      if (current_pid != libMesh::processor_id() &&
          current_pid != DofObject::invalid_processor_id)
        {
          libmesh_assert(current_pid < ghost_nodes_from_proc.size());
          ghost_nodes_from_proc[current_pid]++;
        }
    }

  // We know how many objects live on each processor, so reserve()
  // space for each.
  for (unsigned int pid=0; pid != libMesh::n_processors(); ++pid)
    requested_node_ids[pid].reserve(ghost_nodes_from_proc[pid]);

  // We need to get the new pid for each node from the processor
  // which *currently* owns the node.  We can safely skip ourself
  for (node_it = mesh.nodes_begin(); node_it != node_end; ++node_it)
    {
      Node *node = *node_it;
      libmesh_assert(node);
      const unsigned int current_pid = node->processor_id();      
      if (current_pid != libMesh::processor_id() &&
          current_pid != DofObject::invalid_processor_id)
        {
          libmesh_assert(current_pid < requested_node_ids.size());
          libmesh_assert(requested_node_ids[current_pid].size() <
                 ghost_nodes_from_proc[current_pid]);
          requested_node_ids[current_pid].push_back(node->id());
        }
      
      // Unset any previously-set node processor ids
      node->invalidate_processor_id();
    }
  
  // Loop over all the active elements
  MeshBase::element_iterator       elem_it  = mesh.active_elements_begin();
  const MeshBase::element_iterator elem_end = mesh.active_elements_end(); 
  
  for ( ; elem_it != elem_end; ++elem_it)
    {
      Elem* elem = *elem_it;
      libmesh_assert(elem);

      libmesh_assert (elem->processor_id() != DofObject::invalid_processor_id);
      
      // For each node, set the processor ID to the min of
      // its current value and this Element's processor id.
      for (unsigned int n=0; n<elem->n_nodes(); ++n)
        elem->get_node(n)->processor_id() = std::min(elem->get_node(n)->processor_id(),
                                                     elem->processor_id());
    }

  // And loop over the subactive elements, but don't reassign
  // nodes that are already active on another processor.
  MeshBase::element_iterator       sub_it  = mesh.subactive_elements_begin();
  const MeshBase::element_iterator sub_end = mesh.subactive_elements_end(); 
  
  for ( ; sub_it != sub_end; ++sub_it)
    {
      Elem* elem = *sub_it;
      libmesh_assert(elem);

      libmesh_assert (elem->processor_id() != DofObject::invalid_processor_id);
      
      for (unsigned int n=0; n<elem->n_nodes(); ++n)
        if (elem->get_node(n)->processor_id() == DofObject::invalid_processor_id)
          elem->get_node(n)->processor_id() = elem->processor_id();
    }

  // Same for the inactive elements -- we will have already gotten most of these
  // nodes, *except* for the case of a parent with a subset of children which are
  // ghost elements.  In that case some of the parent nodes will not have been
  // properly handled yet
  MeshBase::element_iterator       not_it  = mesh.not_active_elements_begin();
  const MeshBase::element_iterator not_end = mesh.not_active_elements_end(); 
  
  for ( ; not_it != not_end; ++not_it)
    {
      Elem* elem = *not_it;
      libmesh_assert(elem);

      libmesh_assert (elem->processor_id() != DofObject::invalid_processor_id);
      
      for (unsigned int n=0; n<elem->n_nodes(); ++n)
        if (elem->get_node(n)->processor_id() == DofObject::invalid_processor_id)
          elem->get_node(n)->processor_id() = elem->processor_id();
    }

#ifndef NDEBUG
  {
    // make sure all the nodes connected to any element have received a
    // valid processor id
    std::set<const Node*> used_nodes;
    MeshBase::element_iterator       all_it  = mesh.elements_begin();
    const MeshBase::element_iterator all_end = mesh.elements_end(); 
  
    for ( ; all_it != all_end; ++all_it)
      {
        Elem* elem = *all_it;
        libmesh_assert(elem);
        libmesh_assert(elem->processor_id() != DofObject::invalid_processor_id);
        for (unsigned int n=0; n<elem->n_nodes(); ++n)
          used_nodes.insert(elem->get_node(n));
      }

    for (node_it = mesh.nodes_begin(); node_it != node_end; ++node_it)
      {
        Node *node = *node_it;
        libmesh_assert(node);
        libmesh_assert(used_nodes.count(node));
        libmesh_assert(node->processor_id() != DofObject::invalid_processor_id);
      }
  }
#endif

  // Next set node ids from other processors, excluding self
  for (unsigned int p=1; p != libMesh::n_processors(); ++p)
    {
      // Trade my requests with processor procup and procdown
      unsigned int procup = (libMesh::processor_id() + p) %
                             libMesh::n_processors();
      unsigned int procdown = (libMesh::n_processors() +
                               libMesh::processor_id() - p) %
                               libMesh::n_processors();
      std::vector<unsigned int> request_to_fill;
      Parallel::send_receive(procup, requested_node_ids[procup],
                             procdown, request_to_fill);

      // Fill those requests in-place
      for (unsigned int i=0; i != request_to_fill.size(); ++i)
        {
          Node *node = mesh.node_ptr(request_to_fill[i]);
          libmesh_assert(node);
          const unsigned int new_pid = node->processor_id();
          libmesh_assert (new_pid != DofObject::invalid_processor_id);
          libmesh_assert (new_pid < mesh.n_partitions()); // this is the correct test --
          request_to_fill[i] = new_pid;           //  the number of partitions may
        }                                         //  not equal the number of processors

      // Trade back the results
      std::vector<unsigned int> filled_request;
      Parallel::send_receive(procdown, request_to_fill,
                             procup,   filled_request);
      libmesh_assert(filled_request.size() == requested_node_ids[procup].size());
      
      // And copy the id changes we've now been informed of
      for (unsigned int i=0; i != filled_request.size(); ++i)
        {
          Node *node = mesh.node_ptr(requested_node_ids[procup][i]);
          libmesh_assert(node);
          libmesh_assert(filled_request[i] < mesh.n_partitions()); // this is the correct test --
          node->processor_id(filled_request[i]);           //  the number of partitions may
        }                                                  //  not equal the number of processors
    }
  
  STOP_LOG("set_node_processor_ids()","Partitioner");
}
 

void Partitioner::set_parent_processor_ids (MeshBase &mesh) [static, inherited]

This function is called after partitioning to set the processor IDs for the inactive parent elements. A parent's processor ID is the same as its first child.

Definition at line 211 of file partitioner.C.

References MeshBase::active_elements_begin(), MeshBase::active_elements_end(), MeshBase::ancestor_elements_begin(), MeshBase::ancestor_elements_end(), Elem::child(), Partitioner::communication_blocksize, DofObject::id(), DofObject::invalid_processor_id, DofObject::invalidate_processor_id(), Elem::is_remote(), MeshBase::is_serial(), MeshBase::max_elem_id(), std::min(), Elem::n_children(), MeshTools::n_elem(), Elem::parent(), DofObject::processor_id(), MeshBase::unpartitioned_elements_begin(), and MeshBase::unpartitioned_elements_end().

Referenced by Partitioner::partition(), and Partitioner::repartition().

{
  START_LOG("set_parent_processor_ids()","Partitioner");
  
  // If the mesh is serial we have access to all the elements,
  // in particular all the active ones.  We can therefore set
  // the parent processor ids indirectly through their children.
  // By convention a parent is assigned to the minimum processor
  // of all its children.
  if (mesh.is_serial())
    {
      // Loop over all the active elements in the mesh  
      MeshBase::element_iterator       it  = mesh.active_elements_begin();
      const MeshBase::element_iterator end = mesh.active_elements_end();
      
      for ( ; it!=end; ++it)
        {
#ifdef LIBMESH_ENABLE_AMR
          Elem *child  = *it;
          Elem *parent = child->parent();

          while (parent)
            {
              // invalidate the parent id, otherwise the min below
              // will not work if the current parent id is less
              // than all the children!
              parent->invalidate_processor_id();
              
              for(unsigned int c=0; c<parent->n_children(); c++)
                {
                  child = parent->child(c);
                  libmesh_assert(child);
                  libmesh_assert(!child->is_remote());
                  libmesh_assert(child->processor_id() != DofObject::invalid_processor_id);
                  parent->processor_id() = std::min(parent->processor_id(),
                                                    child->processor_id());
                }             
              parent = parent->parent();
            }
#else
          libmesh_assert ((*it)->level() == 0);
#endif
          
        }
    }

  // When the mesh is parallel we cannot guarantee that parents have access to
  // all their children.
  else
    {
      // We will use a brute-force approach here.  Each processor finds its parent
      // elements and sets the parent pid to the minimum of its local children.
      // A global reduction is then performed to make sure the true minimum is found.
      // As noted, this is required because we cannot guarantee that a parent has
      // access to all its children on any single processor.
      parallel_only();
      libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
                               mesh.unpartitioned_elements_end()) == 0);

      const unsigned int max_elem_id = mesh.max_elem_id();

      std::vector<unsigned short int>
        parent_processor_ids (std::min(communication_blocksize,
                                       max_elem_id));
      
      for (unsigned int blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
        {
          const unsigned int first_elem_id = blk*communication_blocksize;
          last_elem_id = std::min((blk+1)*communication_blocksize, max_elem_id);

          std::fill (parent_processor_ids.begin(),
                     parent_processor_ids.end(),
                     DofObject::invalid_processor_id);

          // first build up local contributions to parent_processor_ids
          MeshBase::element_iterator       not_it  = mesh.ancestor_elements_begin();
          const MeshBase::element_iterator not_end = mesh.ancestor_elements_end(); 

          bool have_parent_in_block = false;
          
          for ( ; not_it != not_end; ++not_it)
            {
#ifdef LIBMESH_ENABLE_AMR
              Elem *parent = *not_it;

              const unsigned int parent_idx = parent->id();
              libmesh_assert (parent_idx < max_elem_id);

              if ((parent_idx >= first_elem_id) &&
                  (parent_idx <  last_elem_id))
                {
                  have_parent_in_block = true;
                  unsigned short int parent_pid = DofObject::invalid_processor_id;

                  for (unsigned int c=0; c<parent->n_children(); c++)
                    parent_pid = std::min (parent_pid, parent->child(c)->processor_id());
                  
                  const unsigned int packed_idx = parent_idx - first_elem_id;
                  libmesh_assert (packed_idx < parent_processor_ids.size());

                  parent_processor_ids[packed_idx] = parent_pid;
                }
#else
              // without AMR there should be no inactive elements
              libmesh_error();
#endif
            }

          // then find the global minimum
          Parallel::min (parent_processor_ids);

          // and assign the ids, if we have a parent in this block.
          if (have_parent_in_block)
            for (not_it = mesh.ancestor_elements_begin();
                 not_it != not_end; ++not_it)
              {
                Elem *parent = *not_it;
                
                const unsigned int parent_idx = parent->id();
                
                if ((parent_idx >= first_elem_id) &&
                    (parent_idx <  last_elem_id))
                  {
                    const unsigned int packed_idx = parent_idx - first_elem_id;
                    libmesh_assert (packed_idx < parent_processor_ids.size());
                    
                    const unsigned short int parent_pid =
                      parent_processor_ids[packed_idx];
                    
                    libmesh_assert (parent_pid != DofObject::invalid_processor_id);
                    
                    parent->processor_id() = parent_pid;
                  }
              }
        }
    }
  
  STOP_LOG("set_parent_processor_ids()", "Partitioner");
}
 

void Partitioner::single_partition (MeshBase &mesh) [protected, inherited]Trivially 'partitions' the mesh for one processor. Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.

Definition at line 116 of file partitioner.C.

References MeshBase::elements_begin(), MeshBase::elements_end(), MeshBase::nodes_begin(), and MeshBase::nodes_end().

Referenced by SFCPartitioner::_do_partition(), _do_partition(), LinearPartitioner::_do_partition(), CentroidPartitioner::_do_partition(), ParmetisPartitioner::_do_repartition(), Partitioner::partition(), and Partitioner::repartition().

{
  START_LOG("single_partition()", "Partitioner");
  
  // Loop over all the elements and assign them to processor 0.
  MeshBase::element_iterator       elem_it  = mesh.elements_begin();
  const MeshBase::element_iterator elem_end = mesh.elements_end(); 

  for ( ; elem_it != elem_end; ++elem_it)
    (*elem_it)->processor_id() = 0;

  // For a single partition, all the nodes are on processor 0
  MeshBase::node_iterator       node_it  = mesh.nodes_begin();
  const MeshBase::node_iterator node_end = mesh.nodes_end();
  
  for ( ; node_it != node_end; ++node_it)
    (*node_it)->processor_id() = 0;

  STOP_LOG("single_partition()", "Partitioner");
}
 

Member Data Documentation

 

const unsigned int Partitioner::communication_blocksize = 1000000 [static, protected, inherited]The blocksize to use when doing blocked parallel communication. This limits the maximum vector size which can be used in a single communication step.

Definition at line 140 of file partitioner.h.

Referenced by Partitioner::set_parent_processor_ids().

 

Author

Generated automatically by Doxygen for libMesh from the source code.


 

Index

NAME
SYNOPSIS
Public Member Functions
Static Public Member Functions
Protected Member Functions
Static Protected Attributes
Detailed Description
Constructor & Destructor Documentation
MetisPartitioner::MetisPartitioner () [inline]Constructor.
Member Function Documentation
void MetisPartitioner::_do_partition (MeshBase &mesh, const unsigned int n) [protected, virtual]Partition the MeshBase into n subdomains.
virtual void Partitioner::_do_repartition (MeshBase &mesh, const unsigned int n) [inline, protected, virtual, inherited]This is the actual re-partitioning method which can be overloaded in derived classes. Note that the default behavior is to simply call the partition function.
virtual AutoPtr<Partitioner> MetisPartitioner::clone () const [inline, virtual]Creates a new partitioner of this type and returns it in an AutoPtr.
void Partitioner::partition (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [inherited]Partition the MeshBase into n parts. If the user does not specify a number of pieces into which the mesh should be partitioned, then the default behavior of the partitioner is to partition according to the number of processors defined in libMesh::n_processors(). The partitioner currently does not modify the subdomain_id of each element. This number is reserved for things like material properties, etc.
void Partitioner::partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [static, inherited]This function partitions any currently unpartitioned elements in the mesh.
void Partitioner::repartition (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [inherited]Repartitions the MeshBase into n parts. This is required since some partitioning algorithms can repartition more efficiently than computing a new partitioning from scratch. The default behavior is to simply call this->partition(n).
void Partitioner::set_node_processor_ids (MeshBase &mesh) [static, inherited]This function is called after partitioning to set the processor IDs for the nodes. By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.
void Partitioner::set_parent_processor_ids (MeshBase &mesh) [static, inherited]This function is called after partitioning to set the processor IDs for the inactive parent elements. A parent's processor ID is set to the minimum processor ID of its children.
void Partitioner::single_partition (MeshBase &mesh) [protected, inherited]Trivially 'partitions' the mesh for one processor. Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.
Member Data Documentation
const unsigned int Partitioner::communication_blocksize = 1000000 [static, protected, inherited]The blocksize to use when doing blocked parallel communication. This limits the maximum vector size which can be used in a single communication step.
Author

This document was created by man2html, using the manual pages.
Time: 21:50:11 GMT, April 16, 2011