CentroidPartitioner

Section: C Library Functions (3) Updated: Thu Apr 7 2011

NAME

CentroidPartitioner - partitions a mesh based on the locations of element centroids

SYNOPSIS


#include <centroid_partitioner.h>

Inherits Partitioner.  

Public Types


enum CentroidSortMethod { X = 0, Y, Z, RADIAL, INVALID_METHOD }
 

Public Member Functions


CentroidPartitioner (const CentroidSortMethod sm=X)

virtual AutoPtr< Partitioner > clone () const

CentroidSortMethod sort_method () const

void set_sort_method (const CentroidSortMethod sm)

void partition (MeshBase &mesh, const unsigned int n=libMesh::n_processors())

void repartition (MeshBase &mesh, const unsigned int n=libMesh::n_processors())
 

Static Public Member Functions


static void partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n=libMesh::n_processors())

static void set_parent_processor_ids (MeshBase &mesh)

static void set_node_processor_ids (MeshBase &mesh)
 

Protected Member Functions


virtual void _do_partition (MeshBase &mesh, const unsigned int n)

void single_partition (MeshBase &mesh)

virtual void _do_repartition (MeshBase &mesh, const unsigned int n)
 

Static Protected Attributes


static const unsigned int communication_blocksize = 1000000
 

Private Member Functions


void compute_centroids (MeshBase &mesh)
 

Static Private Member Functions


static bool sort_x (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs)

static bool sort_y (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs)

static bool sort_z (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs)

static bool sort_radial (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs)
 

Private Attributes


CentroidSortMethod _sort_method

std::vector< std::pair< Point, Elem * > > _elem_centroids
 

Detailed Description

The centroid partitioner partitions simply based on the locations of element centroids. You must define what you mean by 'less than' for the list of element centroids, e.g. if you only care about distance in the z-direction, you would define 'less than' differently than if you cared about radial distance.

Author:

John W. Peterson and Benjamin S. Kirk, 2003

Definition at line 51 of file centroid_partitioner.h.  
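As a rough illustration of the interface documented below, here is a minimal usage sketch (hypothetical driver code, not part of this page; the MeshBase header name is an assumption) that partitions an existing mesh by the radial position of its element centroids:

#include <centroid_partitioner.h>
#include <mesh_base.h>   // assumed location of the MeshBase declaration

// Hypothetical helper: partition an already-built mesh by the radial
// distance of its element centroids, one piece per processor.
void partition_by_radius (MeshBase &mesh)
{
  // Sort element centroids by their distance from the origin.
  CentroidPartitioner partitioner (CentroidPartitioner::RADIAL);

  // Uses the default n = libMesh::n_processors().
  partitioner.partition (mesh);
}

The sort method may also be changed after construction with set_sort_method(), as documented below.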

Member Enumeration Documentation

 

enum CentroidPartitioner::CentroidSortMethod

An enumeration reserved only for use within this class. If X is chosen, then centroid locations will be sorted according to their X-location, etc...

Enumerator:

X
Y
Z
RADIAL
INVALID_METHOD

Definition at line 61 of file centroid_partitioner.h.

                          {X=0,
                           Y,
                           Z,
                           RADIAL,
                           INVALID_METHOD};
 

Constructor & Destructor Documentation

 

CentroidPartitioner::CentroidPartitioner (const CentroidSortMethod sm = X) [inline]

Constructor. Takes the CentroidSortMethod to use, which defaults to X ordering.

Definition at line 71 of file centroid_partitioner.h.

Referenced by clone().

: _sort_method(sm) {}
 

Member Function Documentation

 

void CentroidPartitioner::_do_partition (MeshBase &mesh, const unsigned int n) [protected, virtual]

Partitions the mesh into n subdomains. This is a required interface for the class.

Implements Partitioner.

Definition at line 31 of file centroid_partitioner.C.

References _elem_centroids, compute_centroids(), std::min(), MeshTools::n_elem(), MeshBase::n_elem(), DofObject::processor_id(), RADIAL, Partitioner::single_partition(), sort_method(), sort_radial(), sort_x(), sort_y(), sort_z(), X, Y, and Z.

{
  // Check for an easy return
  if (n == 1)
    {
      this->single_partition (mesh);
      return;
    }


  // Possibly reconstruct centroids
  if (mesh.n_elem() != _elem_centroids.size())
    this->compute_centroids (mesh);


  
  switch (this->sort_method())
    {
    case X:
      {
        std::sort(_elem_centroids.begin(),
                  _elem_centroids.end(),
                  CentroidPartitioner::sort_x);
        
        break;
      }

      
    case Y:
      {
        std::sort(_elem_centroids.begin(),
                  _elem_centroids.end(),
                  CentroidPartitioner::sort_y);
        
        break;
        
      }

      
    case Z:
      {
        std::sort(_elem_centroids.begin(),
                  _elem_centroids.end(),
                  CentroidPartitioner::sort_z);
        
        break;
      }

      
     case RADIAL:
      {
        std::sort(_elem_centroids.begin(),
                  _elem_centroids.end(),
                  CentroidPartitioner::sort_radial);
        
        break;
      } 
    default:
      libmesh_error();
    }

  
  // Make sure the user has not handed us an
  // invalid number of partitions.
  libmesh_assert (n > 0);

  // the number of elements, e.g. 1000
  const unsigned int n_elem      = mesh.n_elem();
  // the number of elements per processor, e.g 400
  const unsigned int target_size = n_elem / n;

  // Make sure the mesh hasn't changed since the
  // last time we computed the centroids.
  libmesh_assert (mesh.n_elem() == _elem_centroids.size());

  for (unsigned int i=0; i<n_elem; i++)
    {
      Elem* elem = _elem_centroids[i].second;

      elem->processor_id() = std::min (i / target_size, n-1);
    }   
}
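
To make the final loop above concrete, here is a small self-contained sketch (illustrative numbers only, not libMesh code) of the contiguous-chunk assignment i / target_size, clamped to the last partition:

#include <algorithm>
#include <iostream>

int main ()
{
  const unsigned int n_elem = 1000, n = 4;
  const unsigned int target_size = n_elem / n;   // 250 elements per chunk

  // A few sorted element indices and the processor each one lands on.
  const unsigned int samples[] = {0, 249, 250, 612, 999};
  for (unsigned int s = 0; s < 5; ++s)
    {
      const unsigned int i = samples[s];
      std::cout << "element " << i << " -> processor "
                << std::min (i / target_size, n - 1) << std::endl;
    }
  // Prints: 0 -> 0, 249 -> 0, 250 -> 1, 612 -> 2, 999 -> 3
  return 0;
}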
 

virtual void Partitioner::_do_repartition (MeshBase &mesh, const unsigned int n) [inline, protected, virtual, inherited]

This is the actual re-partitioning method which can be overloaded in derived classes. Note that the default behavior is to simply call the partition function.

Reimplemented in ParmetisPartitioner.

Definition at line 133 of file partitioner.h.

References Partitioner::_do_partition().

Referenced by Partitioner::repartition().

                                                      { this->_do_partition (mesh, n); }
 

virtual AutoPtr<Partitioner> CentroidPartitioner::clone () const [inline, virtual]

Creates a new partitioner of this type and returns it in an AutoPtr.

Implements Partitioner.

Definition at line 77 of file centroid_partitioner.h.

References CentroidPartitioner(), and sort_method().

                                              {
    AutoPtr<Partitioner> cloned_partitioner
      (new CentroidPartitioner(sort_method()));
    return cloned_partitioner;
  }
 

void CentroidPartitioner::compute_centroids (MeshBase &mesh) [private]

Computes a list of element centroids for the mesh. This list will be kept around in case a repartition is desired.

Definition at line 122 of file centroid_partitioner.C.

References _elem_centroids, Elem::centroid(), MeshBase::elements_begin(), MeshBase::elements_end(), and MeshBase::n_elem().

Referenced by _do_partition().

{
  _elem_centroids.clear();
  _elem_centroids.reserve(mesh.n_elem());
  
//   elem_iterator it(mesh.elements_begin());
//   const elem_iterator it_end(mesh.elements_end());

  MeshBase::element_iterator       it     = mesh.elements_begin();
  const MeshBase::element_iterator it_end = mesh.elements_end(); 

  for (; it != it_end; ++it)
    {
      Elem* elem = *it;

      _elem_centroids.push_back(std::make_pair(elem->centroid(), elem));
    }
}
 

void Partitioner::partition (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [inherited]

Partitions the MeshBase into n parts. If the user does not specify a number of pieces into which the mesh should be partitioned, the default behavior is to partition according to the number of processors defined by libMesh::n_processors(). The partitioner currently does not modify the subdomain_id of each element; that number is reserved for things like material properties.

Definition at line 43 of file partitioner.C.

References Partitioner::_do_partition(), MeshBase::is_serial(), std::min(), MeshBase::n_active_elem(), Partitioner::partition_unpartitioned_elements(), MeshBase::set_n_partitions(), Partitioner::set_node_processor_ids(), Partitioner::set_parent_processor_ids(), and Partitioner::single_partition().

Referenced by SFCPartitioner::_do_partition(), MetisPartitioner::_do_partition(), and ParmetisPartitioner::_do_repartition().

{
  // BSK - temporary fix while redistribution is integrated 6/26/2008
  // Uncomment this to not repartition in parallel
   if (!mesh.is_serial())
     return;

  // we cannot partition into more pieces than we have
  // active elements!
  const unsigned int n_parts =
    std::min(mesh.n_active_elem(), n);
  
  // Set the number of partitions in the mesh
  mesh.set_n_partitions()=n_parts;

  if (n_parts == 1)
    {
      this->single_partition (mesh);
      return;
    }
  
  // First assign a temporary partitioning to any unpartitioned elements
  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
  
  // Call the partitioning function
  this->_do_partition(mesh,n_parts);

  // Set the parent's processor ids
  Partitioner::set_parent_processor_ids(mesh);

  // Set the node's processor ids
  Partitioner::set_node_processor_ids(mesh);
}
 

void Partitioner::partition_unpartitioned_elements (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [static, inherited]

This function assigns a temporary, evenly distributed partitioning to any elements that are currently unpartitioned.

Definition at line 139 of file partitioner.C.

References MeshTools::bounding_box(), MeshCommunication::find_global_indices(), MeshTools::n_elem(), libMesh::n_processors(), DofObject::processor_id(), MeshBase::unpartitioned_elements_begin(), and MeshBase::unpartitioned_elements_end().

Referenced by Partitioner::partition(), and Partitioner::repartition().

{
  MeshBase::const_element_iterator       it  = mesh.unpartitioned_elements_begin();
  const MeshBase::const_element_iterator end = mesh.unpartitioned_elements_end();

  const unsigned int n_unpartitioned_elements = MeshTools::n_elem (it, end);

  // the unpartitioned elements must exist on all processors. If the range is empty on one
  // it is empty on all, and we can quit right here.
  if (!n_unpartitioned_elements) return;
         
  // find the target subdomain sizes
  std::vector<unsigned int> subdomain_bounds(libMesh::n_processors());

  for (unsigned int pid=0; pid<libMesh::n_processors(); pid++)
    {
      unsigned int tgt_subdomain_size = 0;

      // watch out for the case that n_subdomains < n_processors
      if (pid < n_subdomains)
        {
          tgt_subdomain_size = n_unpartitioned_elements/n_subdomains;
      
          if (pid < n_unpartitioned_elements%n_subdomains)
            tgt_subdomain_size++;

        }
      
      //std::cout << "pid, #= " << pid << ", " << tgt_subdomain_size << std::endl;
      if (pid == 0)
        subdomain_bounds[0] = tgt_subdomain_size;
      else
        subdomain_bounds[pid] = subdomain_bounds[pid-1] + tgt_subdomain_size;
    }
  
  libmesh_assert (subdomain_bounds.back() == n_unpartitioned_elements);  
  
  // create the unique mapping for all unpartitioned elements independent of partitioning
  // determine the global indexing for all the unpartitioned elements
  std::vector<unsigned int> global_indices;
    
  // Calling this on all processors constructs a unique range in [0,n_unpartitioned_elements).
  // Only the indices for the elements we pass in are returned in the array.
  MeshCommunication().find_global_indices (MeshTools::bounding_box(mesh), it, end, 
                                           global_indices);
  
  for (unsigned int cnt=0; it != end; ++it)
    {
      Elem *elem = *it;
      
      libmesh_assert (cnt < global_indices.size());
      const unsigned int global_index =
        global_indices[cnt++];
      
      libmesh_assert (global_index < subdomain_bounds.back());
      libmesh_assert (global_index < n_unpartitioned_elements);

      const unsigned int subdomain_id =
        std::distance(subdomain_bounds.begin(),
                      std::upper_bound(subdomain_bounds.begin(),
                                       subdomain_bounds.end(),
                                       global_index));
      libmesh_assert (subdomain_id < n_subdomains);
     
      elem->processor_id() = subdomain_id;              
      //std::cout << "assigning " << global_index << " to " << subdomain_id << std::endl;
    }
}
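
The subdomain_bounds / std::upper_bound logic above can be checked with a standalone sketch (illustrative counts, not libMesh code): with 10 unpartitioned elements and 4 target subdomains the cumulative bounds are {3, 6, 8, 10}, and each global index maps to the first bound that exceeds it.

#include <algorithm>
#include <iostream>
#include <vector>

int main ()
{
  const unsigned int n_unpartitioned = 10, n_subdomains = 4;

  // Cumulative target sizes, built exactly as in the loop above: {3, 6, 8, 10}.
  std::vector<unsigned int> subdomain_bounds (n_subdomains);
  for (unsigned int pid = 0; pid < n_subdomains; ++pid)
    {
      unsigned int tgt = n_unpartitioned / n_subdomains;
      if (pid < n_unpartitioned % n_subdomains)
        tgt++;
      subdomain_bounds[pid] = (pid == 0) ? tgt : subdomain_bounds[pid-1] + tgt;
    }

  // Map each global index to the first bound strictly greater than it.
  for (unsigned int global_index = 0; global_index < n_unpartitioned; ++global_index)
    {
      const unsigned int subdomain_id =
        std::distance (subdomain_bounds.begin(),
                       std::upper_bound (subdomain_bounds.begin(),
                                         subdomain_bounds.end(),
                                         global_index));
      std::cout << global_index << " -> " << subdomain_id << std::endl;
    }
  // Indices 0-2 -> 0, 3-5 -> 1, 6-7 -> 2, 8-9 -> 3.
  return 0;
}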
 

void Partitioner::repartition (MeshBase &mesh, const unsigned int n = libMesh::n_processors()) [inherited]

Repartitions the MeshBase into n parts. This interface exists because some partitioning algorithms can repartition more efficiently than they can compute a new partitioning from scratch. The default behavior is to simply call this->partition(n).

Definition at line 82 of file partitioner.C.

References Partitioner::_do_repartition(), std::min(), MeshBase::n_active_elem(), Partitioner::partition_unpartitioned_elements(), MeshBase::set_n_partitions(), Partitioner::set_node_processor_ids(), Partitioner::set_parent_processor_ids(), and Partitioner::single_partition().

{
  // we cannot partition into more pieces than we have
  // active elements!
  const unsigned int n_parts =
    std::min(mesh.n_active_elem(), n);
  
  // Set the number of partitions in the mesh
  mesh.set_n_partitions()=n_parts;

  if (n_parts == 1)
    {
      this->single_partition (mesh);
      return;
    }
  
  // First assign a temporary partitioning to any unpartitioned elements
  Partitioner::partition_unpartitioned_elements(mesh, n_parts);
  
  // Call the partitioning function
  this->_do_repartition(mesh,n_parts);
  
  // Set the parent's processor ids
  Partitioner::set_parent_processor_ids(mesh);
  
  // Set the node's processor ids
  Partitioner::set_node_processor_ids(mesh);
}
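
A typical call site would partition once and repartition after the element set changes. The following sketch assumes libMesh's Mesh and MeshRefinement classes, whose headers and exact interfaces are not documented on this page:

#include <centroid_partitioner.h>
#include <mesh.h>              // assumed header for the Mesh class
#include <mesh_refinement.h>   // assumed header for MeshRefinement

// Hypothetical driver: partition, refine, then repartition.
void refine_and_repartition (Mesh &mesh)
{
  CentroidPartitioner partitioner (CentroidPartitioner::X);
  partitioner.partition (mesh);

  MeshRefinement refinement (mesh);
  refinement.uniformly_refine (1);   // the element set has now changed

  // For CentroidPartitioner this recomputes and re-sorts the centroids;
  // other partitioners may reuse the previous partitioning more cheaply.
  partitioner.repartition (mesh);
}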
 

void Partitioner::set_node_processor_ids (MeshBase &mesh) [static, inherited]

This function is called after partitioning to set the processor IDs for the nodes. By definition, a Node's processor ID is the minimum processor ID for all of the elements which share the node.

Definition at line 353 of file partitioner.C.

References MeshBase::active_elements_begin(), MeshBase::active_elements_end(), MeshBase::elements_begin(), MeshBase::elements_end(), Elem::get_node(), DofObject::id(), DofObject::invalid_processor_id, DofObject::invalidate_processor_id(), std::min(), MeshTools::n_elem(), Elem::n_nodes(), MeshBase::n_partitions(), libMesh::n_processors(), MeshBase::node_ptr(), MeshBase::nodes_begin(), MeshBase::nodes_end(), MeshBase::not_active_elements_begin(), MeshBase::not_active_elements_end(), libMesh::processor_id(), DofObject::processor_id(), MeshBase::subactive_elements_begin(), MeshBase::subactive_elements_end(), MeshBase::unpartitioned_elements_begin(), and MeshBase::unpartitioned_elements_end().

Referenced by Partitioner::partition(), XdrIO::read(), Partitioner::repartition(), and BoundaryInfo::sync().

{
  START_LOG("set_node_processor_ids()", "Partitioner");

  // This function must be run on all processors at once
  parallel_only();

  // If we have any unpartitioned elements at this 
  // stage there is a problem
  libmesh_assert (MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
                            mesh.unpartitioned_elements_end()) == 0);


//   const unsigned int orig_n_local_nodes = mesh.n_local_nodes();

//   std::cerr << "[" << libMesh::processor_id() << "]: orig_n_local_nodes="
//          << orig_n_local_nodes << std::endl;

  // Build up request sets.  Each node is currently owned by a processor because
  // it is connected to an element owned by that processor.  However, during the
  // repartitioning phase that element may have been assigned a new processor id, but
  // it is still resident on the original processor.  We need to know where to look
  // for new ids before assigning new ids, otherwise we may be asking the wrong processors
  // for the wrong information.
  //
  // The only remaining issue is what to do with unpartitioned nodes.  Since they are required
  // to live on all processors we can simply rely on ourselves to number them properly.
  std::vector<std::vector<unsigned int> >
    requested_node_ids(libMesh::n_processors());

  // Loop over all the nodes, count the ones on each processor.  We can skip ourself
  std::vector<unsigned int> ghost_nodes_from_proc(libMesh::n_processors(), 0);

  MeshBase::node_iterator       node_it  = mesh.nodes_begin();
  const MeshBase::node_iterator node_end = mesh.nodes_end();
  
  for (; node_it != node_end; ++node_it)
    {
      Node *node = *node_it;
      libmesh_assert(node);
      const unsigned int current_pid = node->processor_id();
      if (current_pid != libMesh::processor_id() &&
          current_pid != DofObject::invalid_processor_id)
        {
          libmesh_assert(current_pid < ghost_nodes_from_proc.size());
          ghost_nodes_from_proc[current_pid]++;
        }
    }

  // We know how many objects live on each processor, so reserve()
  // space for each.
  for (unsigned int pid=0; pid != libMesh::n_processors(); ++pid)
    requested_node_ids[pid].reserve(ghost_nodes_from_proc[pid]);

  // We need to get the new pid for each node from the processor
  // which *currently* owns the node.  We can safely skip ourself
  for (node_it = mesh.nodes_begin(); node_it != node_end; ++node_it)
    {
      Node *node = *node_it;
      libmesh_assert(node);
      const unsigned int current_pid = node->processor_id();      
      if (current_pid != libMesh::processor_id() &&
          current_pid != DofObject::invalid_processor_id)
        {
          libmesh_assert(current_pid < requested_node_ids.size());
          libmesh_assert(requested_node_ids[current_pid].size() <
                 ghost_nodes_from_proc[current_pid]);
          requested_node_ids[current_pid].push_back(node->id());
        }
      
      // Unset any previously-set node processor ids
      node->invalidate_processor_id();
    }
  
  // Loop over all the active elements
  MeshBase::element_iterator       elem_it  = mesh.active_elements_begin();
  const MeshBase::element_iterator elem_end = mesh.active_elements_end(); 
  
  for ( ; elem_it != elem_end; ++elem_it)
    {
      Elem* elem = *elem_it;
      libmesh_assert(elem);

      libmesh_assert (elem->processor_id() != DofObject::invalid_processor_id);
      
      // For each node, set the processor ID to the min of
      // its current value and this Element's processor id.
      for (unsigned int n=0; n<elem->n_nodes(); ++n)
        elem->get_node(n)->processor_id() = std::min(elem->get_node(n)->processor_id(),
                                                     elem->processor_id());
    }

  // And loop over the subactive elements, but don't reassign
  // nodes that are already active on another processor.
  MeshBase::element_iterator       sub_it  = mesh.subactive_elements_begin();
  const MeshBase::element_iterator sub_end = mesh.subactive_elements_end(); 
  
  for ( ; sub_it != sub_end; ++sub_it)
    {
      Elem* elem = *sub_it;
      libmesh_assert(elem);

      libmesh_assert (elem->processor_id() != DofObject::invalid_processor_id);
      
      for (unsigned int n=0; n<elem->n_nodes(); ++n)
        if (elem->get_node(n)->processor_id() == DofObject::invalid_processor_id)
          elem->get_node(n)->processor_id() = elem->processor_id();
    }

  // Same for the inactive elements -- we will have already gotten most of these
  // nodes, *except* for the case of a parent with a subset of children which are
  // ghost elements.  In that case some of the parent nodes will not have been
  // properly handled yet
  MeshBase::element_iterator       not_it  = mesh.not_active_elements_begin();
  const MeshBase::element_iterator not_end = mesh.not_active_elements_end(); 
  
  for ( ; not_it != not_end; ++not_it)
    {
      Elem* elem = *not_it;
      libmesh_assert(elem);

      libmesh_assert (elem->processor_id() != DofObject::invalid_processor_id);
      
      for (unsigned int n=0; n<elem->n_nodes(); ++n)
        if (elem->get_node(n)->processor_id() == DofObject::invalid_processor_id)
          elem->get_node(n)->processor_id() = elem->processor_id();
    }

#ifndef NDEBUG
  {
    // make sure all the nodes connected to any element have received a
    // valid processor id
    std::set<const Node*> used_nodes;
    MeshBase::element_iterator       all_it  = mesh.elements_begin();
    const MeshBase::element_iterator all_end = mesh.elements_end(); 
  
    for ( ; all_it != all_end; ++all_it)
      {
        Elem* elem = *all_it;
        libmesh_assert(elem);
        libmesh_assert(elem->processor_id() != DofObject::invalid_processor_id);
        for (unsigned int n=0; n<elem->n_nodes(); ++n)
          used_nodes.insert(elem->get_node(n));
      }

    for (node_it = mesh.nodes_begin(); node_it != node_end; ++node_it)
      {
        Node *node = *node_it;
        libmesh_assert(node);
        libmesh_assert(used_nodes.count(node));
        libmesh_assert(node->processor_id() != DofObject::invalid_processor_id);
      }
  }
#endif

  // Next set node ids from other processors, excluding self
  for (unsigned int p=1; p != libMesh::n_processors(); ++p)
    {
      // Trade my requests with processor procup and procdown
      unsigned int procup = (libMesh::processor_id() + p) %
                             libMesh::n_processors();
      unsigned int procdown = (libMesh::n_processors() +
                               libMesh::processor_id() - p) %
                               libMesh::n_processors();
      std::vector<unsigned int> request_to_fill;
      Parallel::send_receive(procup, requested_node_ids[procup],
                             procdown, request_to_fill);

      // Fill those requests in-place
      for (unsigned int i=0; i != request_to_fill.size(); ++i)
        {
          Node *node = mesh.node_ptr(request_to_fill[i]);
          libmesh_assert(node);
          const unsigned int new_pid = node->processor_id();
          libmesh_assert (new_pid != DofObject::invalid_processor_id);
          libmesh_assert (new_pid < mesh.n_partitions()); // this is the correct test --
          request_to_fill[i] = new_pid;           //  the number of partitions may
        }                                         //  not equal the number of processors

      // Trade back the results
      std::vector<unsigned int> filled_request;
      Parallel::send_receive(procdown, request_to_fill,
                             procup,   filled_request);
      libmesh_assert(filled_request.size() == requested_node_ids[procup].size());
      
      // And copy the id changes we've now been informed of
      for (unsigned int i=0; i != filled_request.size(); ++i)
        {
          Node *node = mesh.node_ptr(requested_node_ids[procup][i]);
          libmesh_assert(node);
          libmesh_assert(filled_request[i] < mesh.n_partitions()); // this is the correct test --
          node->processor_id(filled_request[i]);           //  the number of partitions may
        }                                                  //  not equal the number of processors
    }
  
  STOP_LOG("set_node_processor_ids()", "Partitioner");
}
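
The ownership rule itself (a node belongs to the minimum processor id over the elements that touch it) is easy to illustrate with a toy serial sketch using plain data structures, not the libMesh classes:

#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

int main ()
{
  // Three elements owned by processors 2, 0 and 1, sharing four nodes in a row.
  const unsigned int elem_pid[] = {2, 0, 1};
  const unsigned int elem_nodes[][2] = { {0, 1}, {1, 2}, {2, 3} };

  // Start every node as "invalid", then take the min over adjacent elements.
  std::vector<unsigned int> node_pid (4, std::numeric_limits<unsigned int>::max());
  for (std::size_t e = 0; e < 3; ++e)
    for (std::size_t k = 0; k < 2; ++k)
      {
        const unsigned int node = elem_nodes[e][k];
        node_pid[node] = std::min (node_pid[node], elem_pid[e]);
      }

  // Result: node 0 -> 2, node 1 -> 0, node 2 -> 0, node 3 -> 1.
  return 0;
}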
 

void Partitioner::set_parent_processor_ids (MeshBase &mesh) [static, inherited]

This function is called after partitioning to set the processor IDs for the inactive parent elements. A parent's processor ID is set to the minimum processor ID among its children.

Definition at line 211 of file partitioner.C.

References MeshBase::active_elements_begin(), MeshBase::active_elements_end(), MeshBase::ancestor_elements_begin(), MeshBase::ancestor_elements_end(), Elem::child(), Partitioner::communication_blocksize, DofObject::id(), DofObject::invalid_processor_id, DofObject::invalidate_processor_id(), Elem::is_remote(), MeshBase::is_serial(), MeshBase::max_elem_id(), std::min(), Elem::n_children(), MeshTools::n_elem(), Elem::parent(), DofObject::processor_id(), MeshBase::unpartitioned_elements_begin(), and MeshBase::unpartitioned_elements_end().

Referenced by Partitioner::partition(), and Partitioner::repartition().

{
  START_LOG("set_parent_processor_ids()", "Partitioner");
  
  // If the mesh is serial we have access to all the elements,
  // in particular all the active ones.  We can therefore set
  // the parent processor ids indirecly through their children.
  // By convention a parent is assigned to the minimum processor
  // of all its children.
  if (mesh.is_serial())
    {
      // Loop over all the active elements in the mesh  
      MeshBase::element_iterator       it  = mesh.active_elements_begin();
      const MeshBase::element_iterator end = mesh.active_elements_end();
      
      for ( ; it!=end; ++it)
        {
#ifdef LIBMESH_ENABLE_AMR
          Elem *child  = *it;
          Elem *parent = child->parent();

          while (parent)
            {
              // invalidate the parent id, otherwise the min below
              // will not work if the current parent id is less
              // than all the children!
              parent->invalidate_processor_id();
              
              for(unsigned int c=0; c<parent->n_children(); c++)
                {
                  child = parent->child(c);
                  libmesh_assert(child);
                  libmesh_assert(!child->is_remote());
                  libmesh_assert(child->processor_id() != DofObject::invalid_processor_id);
                  parent->processor_id() = std::min(parent->processor_id(),
                                                    child->processor_id());
                }             
              parent = parent->parent();
            }
#else
          libmesh_assert ((*it)->level() == 0);
#endif
          
        }
    }

  // When the mesh is parallel we cannot guarantee that parents have access to
  // all their children.
  else
    {
      // We will use a brute-force approach here.  Each processor finds its parent
      // elements and sets the parent pid to the minimum of its local children.
      // A global reduction is then performed to make sure the true minimum is found.
      // As noted, this is required because we cannot guarantee that a parent has
      // access to all its children on any single processor.
      parallel_only();
      libmesh_assert(MeshTools::n_elem(mesh.unpartitioned_elements_begin(),
                               mesh.unpartitioned_elements_end()) == 0);

      const unsigned int max_elem_id = mesh.max_elem_id();

      std::vector<unsigned short int>
        parent_processor_ids (std::min(communication_blocksize,
                                       max_elem_id));
      
      for (unsigned int blk=0, last_elem_id=0; last_elem_id<max_elem_id; blk++)
        {
          last_elem_id = std::min((blk+1)*communication_blocksize, max_elem_id);
          const unsigned int first_elem_id = blk*communication_blocksize;

          std::fill (parent_processor_ids.begin(),
                     parent_processor_ids.end(),
                     DofObject::invalid_processor_id);

          // first build up local contributions to parent_processor_ids
          MeshBase::element_iterator       not_it  = mesh.ancestor_elements_begin();
          const MeshBase::element_iterator not_end = mesh.ancestor_elements_end(); 

          bool have_parent_in_block = false;
          
          for ( ; not_it != not_end; ++not_it)
            {
#ifdef LIBMESH_ENABLE_AMR
              Elem *parent = *not_it;

              const unsigned int parent_idx = parent->id();
              libmesh_assert (parent_idx < max_elem_id);

              if ((parent_idx >= first_elem_id) &&
                  (parent_idx <  last_elem_id))
                {
                  have_parent_in_block = true;
                  unsigned short int parent_pid = DofObject::invalid_processor_id;

                  for (unsigned int c=0; c<parent->n_children(); c++)
                    parent_pid = std::min (parent_pid, parent->child(c)->processor_id());
                  
                  const unsigned int packed_idx = parent_idx - first_elem_id;
                  libmesh_assert (packed_idx < parent_processor_ids.size());

                  parent_processor_ids[packed_idx] = parent_pid;
                }
#else
              // without AMR there should be no inactive elements
              libmesh_error();
#endif
            }

          // then find the global minimum
          Parallel::min (parent_processor_ids);

          // and assign the ids, if we have a parent in this block.
          if (have_parent_in_block)
            for (not_it = mesh.ancestor_elements_begin();
                 not_it != not_end; ++not_it)
              {
                Elem *parent = *not_it;
                
                const unsigned int parent_idx = parent->id();
                
                if ((parent_idx >= first_elem_id) &&
                    (parent_idx <  last_elem_id))
                  {
                    const unsigned int packed_idx = parent_idx - first_elem_id;
                    libmesh_assert (packed_idx < parent_processor_ids.size());
                    
                    const unsigned short int parent_pid =
                      parent_processor_ids[packed_idx];
                    
                    libmesh_assert (parent_pid != DofObject::invalid_processor_id);
                    
                    parent->processor_id() = parent_pid;
                  }
              }
        }
    }
  
  STOP_LOG("set_parent_processor_ids()", "Partitioner");
}
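
The block loop above walks the parent element ids in windows of communication_blocksize so that no single Parallel::min call operates on a vector larger than the blocksize. The windowing itself can be sketched in isolation (illustrative element count, not libMesh code):

#include <algorithm>
#include <iostream>

int main ()
{
  const unsigned int communication_blocksize = 1000000;
  const unsigned int max_elem_id = 2500000;   // illustrative only

  for (unsigned int blk = 0, last_elem_id = 0; last_elem_id < max_elem_id; blk++)
    {
      last_elem_id = std::min ((blk+1)*communication_blocksize, max_elem_id);
      const unsigned int first_elem_id = blk*communication_blocksize;

      std::cout << "block " << blk << ": ids [" << first_elem_id
                << ", " << last_elem_id << ")" << std::endl;
    }
  // Prints: block 0: ids [0, 1000000)   block 1: ids [1000000, 2000000)
  //         block 2: ids [2000000, 2500000)
  return 0;
}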
 

void CentroidPartitioner::set_sort_method (const CentroidSortMethod sm) [inline]

Change how the elements will be sorted.

Definition at line 91 of file centroid_partitioner.h.

References _sort_method.

{_sort_method = sm; }
 

void Partitioner::single_partition (MeshBase &mesh) [protected, inherited]

Trivially 'partitions' the mesh for one processor. Simply loops through the elements and assigns all of them to processor 0. It is provided as a separate function so that derived classes may use it without reimplementing it.

Definition at line 116 of file partitioner.C.

References MeshBase::elements_begin(), MeshBase::elements_end(), MeshBase::nodes_begin(), and MeshBase::nodes_end().

Referenced by SFCPartitioner::_do_partition(), MetisPartitioner::_do_partition(), LinearPartitioner::_do_partition(), _do_partition(), ParmetisPartitioner::_do_repartition(), Partitioner::partition(), and Partitioner::repartition().

{
  START_LOG("single_partition()", "Partitioner");
  
  // Loop over all the elements and assign them to processor 0.
  MeshBase::element_iterator       elem_it  = mesh.elements_begin();
  const MeshBase::element_iterator elem_end = mesh.elements_end(); 

  for ( ; elem_it != elem_end; ++elem_it)
    (*elem_it)->processor_id() = 0;

  // For a single partition, all the nodes are on processor 0
  MeshBase::node_iterator       node_it  = mesh.nodes_begin();
  const MeshBase::node_iterator node_end = mesh.nodes_end();
  
  for ( ; node_it != node_end; ++node_it)
    (*node_it)->processor_id() = 0;

  STOP_LOG("single_partition()", "Partitioner");
}
 

CentroidSortMethod CentroidPartitioner::sort_method () const [inline]

Returns the method currently used to sort the elements.

Definition at line 86 of file centroid_partitioner.h.

References _sort_method.

Referenced by _do_partition(), and clone().

{ return _sort_method; }
 

bool CentroidPartitioner::sort_radial (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs) [static, private]

Comparison function for sorting the list of centroids by the radial position (distance from the origin) of the centroid. It may be passed to the std::sort routine for sorting the elements by centroid.

Definition at line 171 of file centroid_partitioner.C.

Referenced by _do_partition().

{
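  // Note: Point::size() here is the vector norm, i.e. the centroid's distance from the origin.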
  return (lhs.first.size() < rhs.first.size());
}
 

bool CentroidPartitioner::sort_x (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs) [static, private]

Comparison function for sorting the list of centroids by the x-coordinate of the centroid. It may be passed to the std::sort routine for sorting the elements by centroid.

Definition at line 144 of file centroid_partitioner.C.

Referenced by _do_partition().

{
  return (lhs.first(0) < rhs.first(0));
}
 

bool CentroidPartitioner::sort_y (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs) [static, private]

Comparison function for sorting the list of centroids by the y-coordinate of the centroid. It may be passed to the std::sort routine for sorting the elements by centroid.

Definition at line 153 of file centroid_partitioner.C.

Referenced by _do_partition().

{
  return (lhs.first(1) < rhs.first(1));
}
 

bool CentroidPartitioner::sort_z (const std::pair< Point, Elem * > &lhs, const std::pair< Point, Elem * > &rhs) [static, private]

Comparison function for sorting the list of centroids by the z-coordinate of the centroid. It may be passed to the std::sort routine for sorting the elements by centroid.

Definition at line 163 of file centroid_partitioner.C.

Referenced by _do_partition().

{
  return (lhs.first(2) < rhs.first(2));
}
 

Member Data Documentation

 

std::vector<std::pair<Point, Elem*> > CentroidPartitioner::_elem_centroids [private]

Vector which holds pairs of centroids and their respective element pointers.

Definition at line 158 of file centroid_partitioner.h.

Referenced by _do_partition(), and compute_centroids().  

CentroidSortMethod CentroidPartitioner::_sort_method [private]

Stores a flag which tells which type of sort method we are using.

Definition at line 152 of file centroid_partitioner.h.

Referenced by set_sort_method(), and sort_method().  

const unsigned int Partitioner::communication_blocksize = 1000000 [static, protected, inherited]

The blocksize to use when doing blocked parallel communication. This limits the maximum vector size which can be used in a single communication step.

Definition at line 140 of file partitioner.h.

Referenced by Partitioner::set_parent_processor_ids().

 

Author

Generated automatically by Doxygen for libMesh from the source code.


 
