Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::internal::generic_scheduler Class Reference [abstract]

Work stealing task scheduler. More...

#include <scheduler.h>

Inheritance diagram for tbb::internal::generic_scheduler:
Collaboration diagram for tbb::internal::generic_scheduler:

Public Member Functions

bool is_task_pool_published () const
 
bool is_local_task_pool_quiescent () const
 
bool is_quiescent_local_task_pool_empty () const
 
bool is_quiescent_local_task_pool_reset () const
 
void attach_mailbox (affinity_id id)
 
void init_stack_info ()
 Sets up the data necessary for the stealing limiting heuristics. More...
 
bool can_steal ()
 Returns true if stealing is allowed. More...
 
void publish_task_pool ()
 Used by workers to enter the task pool. More...
 
void leave_task_pool ()
 Leave the task pool. More...
 
void reset_task_pool_and_leave ()
 Resets head and tail indices to 0, and leaves task pool. More...
 
task ** lock_task_pool (arena_slot *victim_arena_slot) const
 Locks victim's task pool, and returns pointer to it. The pointer can be NULL. More...
 
void unlock_task_pool (arena_slot *victim_arena_slot, task **victim_task_pool) const
 Unlocks victim's task pool. More...
 
void acquire_task_pool () const
 Locks the local task pool. More...
 
void release_task_pool () const
 Unlocks the local task pool. More...
 
task * prepare_for_spawning (task *t)
 Checks if t is affinitized to another thread, and if so, bundles it as proxy. More...
 
void commit_spawned_tasks (size_t new_tail)
 Makes newly spawned tasks visible to thieves. More...
 
void commit_relocated_tasks (size_t new_tail)
 Makes relocated tasks visible to thieves and releases the local task pool. More...
 
task * get_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Get a task from the local pool. More...
 
task * get_task (size_t T)
 Get a task from the local pool at specified location T. More...
 
task * get_mailbox_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempt to get a task from the mailbox. More...
 
task * steal_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempts to steal a task from a randomly chosen thread/scheduler. More...
 
task * steal_task_from (__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
 Steal task from another scheduler's ready pool. More...
 
size_t prepare_task_pool (size_t n)
 Makes sure that the task pool can accommodate at least n more elements. More...
 
bool cleanup_master (bool blocking_terminate)
 Perform necessary cleanup when a master thread stops using TBB. More...
 
void assert_task_pool_valid () const
 
void attach_arena (arena *, size_t index, bool is_master)
 
void nested_arena_entry (arena *, size_t)
 
void nested_arena_exit ()
 
void wait_until_empty ()
 
void spawn (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void spawn_root_and_wait (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void enqueue (task &, void *reserved) __TBB_override
 For internal use only. More...
 
void local_spawn (task *first, task *&next)
 
void local_spawn_root_and_wait (task *first, task *&next)
 
virtual void local_wait_for_all (task &parent, task *child)=0
 
void destroy ()
 Destroy and deallocate this scheduler object. More...
 
void cleanup_scheduler ()
 Cleans up this scheduler (the scheduler might be destroyed). More...
 
task & allocate_task (size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
 Allocate task object, either from the heap or a free list. More...
 
template<free_task_hint h>
void free_task (task &t)
 Put task on free list. More...
 
void deallocate_task (task &t)
 Return task object to the memory allocator. More...
 
bool is_worker () const
 True if running on a worker thread, false otherwise. More...
 
bool outermost_level () const
 True if the scheduler is on the outermost dispatch level. More...
 
bool master_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a master thread. More...
 
bool worker_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a worker thread. More...
 
unsigned max_threads_in_arena ()
 Returns the concurrency limit of the current arena. More...
 
virtual task * receive_or_steal_task (__TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation))=0
 Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption). More...
 
void free_nonlocal_small_task (task &t)
 Free a small task t that was allocated by a different scheduler. More...
 
- Public Member Functions inherited from tbb::internal::scheduler
virtual void wait_for_all (task &parent, task *child)=0
 For internal use only. More...
 
virtual ~scheduler ()=0
 Pure virtual destructor. More...
 

Static Public Member Functions

static bool is_version_3_task (task &t)
 
static bool is_proxy (const task &t)
 True if t is a task_proxy. More...
 
static generic_scheduler * create_master (arena *a)
 Initialize a scheduler for a master thread. More...
 
static generic_scheduler * create_worker (market &m, size_t index, bool genuine)
 Initialize a scheduler for a worker thread. More...
 
static void cleanup_worker (void *arg, bool worker)
 Perform necessary cleanup when a worker thread finishes. More...
 
static task * plugged_return_list ()
 Special value used to mark my_return_list as not taking any more entries. More...
 

Public Attributes

uintptr_t my_stealing_threshold
 Position in the call stack specifying its maximal filling when stealing is still allowed. More...
 
market * my_market
 The market I am in. More...
 
FastRandom my_random
 Random number generator used for picking a random victim from which to steal. More...
 
task * my_free_list
 Free list of small tasks that can be reused. More...
 
task * my_dummy_task
 Fake root task created by slave threads. More...
 
long my_ref_count
 Reference count for scheduler. More...
 
bool my_auto_initialized
 True if *this was created by automatic TBB initialization. More...
 
__TBB_atomic intptr_t my_small_task_count
 Number of small tasks that have been allocated by this scheduler. More...
 
task * my_return_list
 List of small tasks that have been returned to this scheduler by other schedulers. More...
 
- Public Attributes inherited from tbb::internal::intrusive_list_node
intrusive_list_node * my_prev_node
 
intrusive_list_node * my_next_node
 
- Public Attributes inherited from tbb::internal::scheduler_state
size_t my_arena_index
 Index of the arena slot the scheduler occupies now, or occupied last time. More...
 
arena_slot * my_arena_slot
 Pointer to the slot in the arena we own at the moment. More...
 
arena * my_arena
 The arena that I own (if master) or am servicing at the moment (if worker) More...
 
task * my_innermost_running_task
 Innermost task whose task::execute() is running. A dummy task on the outermost level. More...
 
mail_inbox my_inbox
 
affinity_id my_affinity_id
 The mailbox id assigned to this scheduler. More...
 
scheduler_properties my_properties
 

Static Public Attributes

static const size_t quick_task_size = 256-task_prefix_reservation_size
 If sizeof(task) is <= quick_task_size, the task is handled on a free list instead of being malloc'd. More...
 
static const size_t null_arena_index = ~size_t(0)
 
static const size_t min_task_pool_size = 64
 

Protected Member Functions

 generic_scheduler (market &, bool)
 

Friends

template<typename SchedulerTraits >
class custom_scheduler
 

Detailed Description

Work stealing task scheduler.

None of the fields here are ever read or written by threads other than the thread that creates the instance.

Class generic_scheduler is an abstract base class that contains most of the scheduler, except for tweaks specific to processors and tools (e.g. VTune(TM) Performance Tools). The derived template class custom_scheduler<SchedulerTraits> fills in the tweaks.

Definition at line 137 of file scheduler.h.
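
The split described above follows the classic "abstract base holds the shared machinery, a derived template supplies the policy" pattern. Below is a minimal, purely illustrative sketch of that shape; the names toy_scheduler_base, toy_scheduler and spin_traits are hypothetical and are not part of TBB.

#include <cstdio>

struct toy_scheduler_base {                 // plays the role of generic_scheduler
    virtual ~toy_scheduler_base() {}
    // Pure virtual hook the derived template must provide,
    // analogous to local_wait_for_all()/receive_or_steal_task().
    virtual void wait_for_work() = 0;
    void run() { wait_for_work(); }         // shared machinery lives in the base
};

template<typename Traits>                   // plays the role of custom_scheduler<SchedulerTraits>
struct toy_scheduler : toy_scheduler_base {
    void wait_for_work() {
        if (Traits::itt_possible)
            std::printf("tool-aware waiting path\n");
        else
            std::printf("plain waiting path\n");
    }
};

struct spin_traits { static const bool itt_possible = false; };

int main() {
    toy_scheduler<spin_traits> s;
    s.run();
}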

Constructor & Destructor Documentation

◆ generic_scheduler()

tbb::internal::generic_scheduler::generic_scheduler ( market &  m,
bool  genuine 
)
protected

Definition at line 84 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_CONTEXT_ARG, tbb::internal::__TBB_load_relaxed(), acquire_task_pool(), allocate_task(), assert_task_pool_valid(), tbb::internal::assert_task_valid(), tbb::internal::es_task_proxy, tbb::internal::arena_slot_line1::head, tbb::internal::governor::is_set(), ITT_SYNC_CREATE, min_task_pool_size, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::scheduler_state::my_properties, my_return_list, tbb::internal::arena_slot_line2::my_task_pool_size, tbb::internal::scheduler_properties::outermost, tbb::task::prefix(), tbb::task::ready, tbb::internal::task_prefix::ref_count, release_task_pool(), tbb::internal::suppress_unused_warning(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

85  : my_market(&m)
86  , my_random(this)
87  , my_ref_count(1)
88 #if __TBB_PREVIEW_RESUMABLE_TASKS
89  , my_co_context(m.worker_stack_size(), genuine ? NULL : this)
90 #endif
91  , my_small_task_count(1) // Extra 1 is a guard reference
92 #if __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT
93  , my_cilk_state(cs_none)
94 #endif /* __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT */
95 {
96  __TBB_ASSERT( !my_arena_index, "constructor expects the memory being zero-initialized" );
97  __TBB_ASSERT( governor::is_set(NULL), "scheduler is already initialized for this thread" );
98 
99  my_innermost_running_task = my_dummy_task = &allocate_task( sizeof(task), __TBB_CONTEXT_ARG(NULL, &the_dummy_context) );
100 #if __TBB_PREVIEW_CRITICAL_TASKS
101  my_properties.has_taken_critical_task = false;
102 #endif
103 #if __TBB_PREVIEW_RESUMABLE_TASKS
104  my_properties.genuine = genuine;
105  my_current_is_recalled = NULL;
106  my_post_resume_action = PRA_NONE;
107  my_post_resume_arg = NULL;
108  my_wait_task = NULL;
109 #else
110  suppress_unused_warning(genuine);
111 #endif
112  my_properties.outermost = true;
113 #if __TBB_TASK_PRIORITY
114  my_ref_top_priority = &m.my_global_top_priority;
115  my_ref_reload_epoch = &m.my_global_reload_epoch;
116 #endif /* __TBB_TASK_PRIORITY */
117 #if __TBB_TASK_GROUP_CONTEXT
118  // Sync up the local cancellation state with the global one. No need for fence here.
119  my_context_state_propagation_epoch = the_context_state_propagation_epoch;
120  my_context_list_head.my_prev = &my_context_list_head;
121  my_context_list_head.my_next = &my_context_list_head;
122  ITT_SYNC_CREATE(&my_context_list_mutex, SyncType_Scheduler, SyncObj_ContextsList);
123 #endif /* __TBB_TASK_GROUP_CONTEXT */
124  ITT_SYNC_CREATE(&my_dummy_task->prefix().ref_count, SyncType_Scheduler, SyncObj_WorkerLifeCycleMgmt);
125  ITT_SYNC_CREATE(&my_return_list, SyncType_Scheduler, SyncObj_TaskReturnList);
126 }
Here is the call graph for this function:

Member Function Documentation

◆ acquire_task_pool()

void tbb::internal::generic_scheduler::acquire_task_pool ( ) const
inline

Locks the local task pool.

Garbles my_arena_slot->task_pool for the duration of the lock. Requires correctly set my_arena_slot->task_pool_ptr.

ATTENTION: This method is mostly the same as generic_scheduler::lock_task_pool(), with slightly different logic for the slot state checks (the slot is either locked or points to our own task pool). Thus, if either of them is changed, consider changing the counterpart as well.

Definition at line 491 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::as_atomic(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, tbb::internal::atomic_backoff::pause(), tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), enqueue(), generic_scheduler(), get_task(), and prepare_task_pool().

491  {
492  if ( !is_task_pool_published() )
493  return; // we are not in arena - nothing to lock
494  bool sync_prepare_done = false;
495  for( atomic_backoff b;;b.pause() ) {
496 #if TBB_USE_ASSERT
497  __TBB_ASSERT( my_arena_slot == my_arena->my_slots + my_arena_index, "invalid arena slot index" );
498  // Local copy of the arena slot task pool pointer is necessary for the next
499  // assertion to work correctly to exclude asynchronous state transition effect.
500  task** tp = my_arena_slot->task_pool;
501  __TBB_ASSERT( tp == LockedTaskPool || tp == my_arena_slot->task_pool_ptr, "slot ownership corrupt?" );
502 #endif
505  {
506  // We acquired our own slot
507  ITT_NOTIFY(sync_acquired, my_arena_slot);
508  break;
509  }
510  else if( !sync_prepare_done ) {
511  // Start waiting
512  ITT_NOTIFY(sync_prepare, my_arena_slot);
513  sync_prepare_done = true;
514  }
515  // Someone else acquired a lock, so pause and do exponential backoff.
516  }
517  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "not really acquired task pool" );
518 } // generic_scheduler::acquire_task_pool
Here is the call graph for this function:
Here is the caller graph for this function:
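
The listing above omits the source lines that perform the actual acquisition (lines 503-504 of scheduler.cpp), so the following is a generic, hypothetical sketch of the pattern the method relies on: spin with exponential backoff until a compare-and-swap replaces the published task-pool pointer with the LockedTaskPool sentinel. The sentinel value, names and backoff policy here are assumptions, not the TBB source.

#include <atomic>
#include <cstdint>
#include <thread>

static void* const LockedPool = reinterpret_cast<void*>(~std::uintptr_t(0)); // sentinel (assumption)

inline void lock_pool(std::atomic<void*>& slot_pool, void* my_pool_ptr) {
    int backoff = 1;
    for (;;) {
        void* expected = my_pool_ptr;                       // slot must point to our own pool
        if (slot_pool.compare_exchange_strong(expected, LockedPool))
            break;                                          // we own the lock now
        for (int i = 0; i < backoff; ++i)
            std::this_thread::yield();                      // exponential backoff while someone else holds it
        if (backoff < 16) backoff *= 2;
    }
}

inline void unlock_pool(std::atomic<void*>& slot_pool, void* my_pool_ptr) {
    slot_pool.store(my_pool_ptr, std::memory_order_release); // re-publish the pool
}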

◆ allocate_task()

task & tbb::internal::generic_scheduler::allocate_task ( size_t  number_of_bytes,
__TBB_CONTEXT_ARG(task *parent, task_group_context *context)   
)

Allocate task object, either from the heap or a free list.

Returns uninitialized task object with initialized prefix.

Definition at line 335 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_cl_prefetch, __TBB_ISOLATION_EXPR, tbb::internal::task_prefix::affinity, tbb::task::allocated, tbb::internal::task_prefix::context, tbb::internal::task_prefix::depth, tbb::internal::task_prefix::extra_state, tbb::task::freed, GATHER_STATISTIC, tbb::internal::task_prefix::isolation, ITT_NOTIFY, my_free_list, my_return_list, my_small_task_count, tbb::internal::task_prefix::next, tbb::internal::NFS_Allocate(), tbb::internal::no_isolation, tbb::internal::task_prefix::origin, tbb::internal::task_prefix::owner, p, parent, tbb::internal::task_prefix::parent, tbb::task::prefix(), quick_task_size, tbb::internal::task_prefix::ref_count, tbb::internal::task_prefix::state, tbb::task::state(), and tbb::internal::task_prefix_reservation_size.

Referenced by tbb::internal::allocate_additional_child_of_proxy::allocate(), tbb::internal::allocate_root_proxy::allocate(), tbb::internal::allocate_continuation_proxy::allocate(), tbb::internal::allocate_child_proxy::allocate(), tbb::internal::allocate_root_proxy::free(), generic_scheduler(), prepare_for_spawning(), and wait_until_empty().

336  {
337  GATHER_STATISTIC(++my_counters.active_tasks);
338  task *t;
339  if( number_of_bytes<=quick_task_size ) {
340 #if __TBB_HOARD_NONLOCAL_TASKS
341  if( (t = my_nonlocal_free_list) ) {
342  GATHER_STATISTIC(--my_counters.free_list_length);
343  __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
344  my_nonlocal_free_list = t->prefix().next;
345  } else
346 #endif
347  if( (t = my_free_list) ) {
348  GATHER_STATISTIC(--my_counters.free_list_length);
349  __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
350  my_free_list = t->prefix().next;
351  } else if( my_return_list ) {
352  // No fence required for read of my_return_list above, because __TBB_FetchAndStoreW has a fence.
353  t = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 ); // with acquire
354  __TBB_ASSERT( t, "another thread emptied the my_return_list" );
355  __TBB_ASSERT( t->prefix().origin==this, "task returned to wrong my_return_list" );
356  ITT_NOTIFY( sync_acquired, &my_return_list );
357  my_free_list = t->prefix().next;
358  } else {
360 #if __TBB_COUNT_TASK_NODES
361  ++my_task_node_count;
362 #endif /* __TBB_COUNT_TASK_NODES */
363  t->prefix().origin = this;
364  t->prefix().next = 0;
366  }
367 #if __TBB_PREFETCHING
368  task *t_next = t->prefix().next;
369  if( !t_next ) { // the task was last in the list
370 #if __TBB_HOARD_NONLOCAL_TASKS
371  if( my_free_list )
372  t_next = my_free_list;
373  else
374 #endif
375  if( my_return_list ) // enable prefetching, gives speedup
376  t_next = my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 );
377  }
378  if( t_next ) { // gives speedup for both cache lines
379  __TBB_cl_prefetch(t_next);
380  __TBB_cl_prefetch(&t_next->prefix());
381  }
382 #endif /* __TBB_PREFETCHING */
383  } else {
384  GATHER_STATISTIC(++my_counters.big_tasks);
385  t = (task*)((char*)NFS_Allocate( 1, task_prefix_reservation_size+number_of_bytes, NULL ) + task_prefix_reservation_size );
386 #if __TBB_COUNT_TASK_NODES
387  ++my_task_node_count;
388 #endif /* __TBB_COUNT_TASK_NODES */
389  t->prefix().origin = NULL;
390  }
391  task_prefix& p = t->prefix();
392 #if __TBB_TASK_GROUP_CONTEXT
393  p.context = context;
394 #endif /* __TBB_TASK_GROUP_CONTEXT */
395  // Obsolete. But still in use, so has to be assigned correct value here.
396  p.owner = this;
397  p.ref_count = 0;
398  // Obsolete. Assign some not outrageously out-of-place value for a while.
399  p.depth = 0;
400  p.parent = parent;
401  // In TBB 2.1 and later, the constructor for task sets extra_state to indicate the version of the tbb/task.h header.
402  // In TBB 2.0 and earlier, the constructor leaves extra_state as zero.
403  p.extra_state = 0;
404  p.affinity = 0;
405  p.state = task::allocated;
406  __TBB_ISOLATION_EXPR( p.isolation = no_isolation );
407  return *t;
408 }
Here is the call graph for this function:
Here is the caller graph for this function:
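
The core idea in allocate_task() is a size-gated free list: objects no larger than quick_task_size are recycled from a per-scheduler list, everything else goes to the heap. A minimal stand-in for that idea (illustrative only, independent of the real task/task_prefix layout and of NFS_Allocate):

#include <cstddef>
#include <cstdlib>

struct node { node* next; };

struct small_object_pool {
    static const std::size_t quick_size = 256;   // stands in for quick_task_size
    node* free_list = nullptr;

    void* allocate(std::size_t bytes) {
        if (bytes <= quick_size && free_list) {  // reuse a previously freed block
            node* n = free_list;
            free_list = n->next;
            return n;
        }
        // fall back to the heap (TBB uses the cache-aligned NFS_Allocate here)
        return std::malloc(bytes < sizeof(node) ? sizeof(node) : bytes);
    }

    void release(void* p, std::size_t bytes) {
        if (bytes <= quick_size) {               // small blocks go back on the free list
            node* n = static_cast<node*>(p);
            n->next = free_list;
            free_list = n;
        } else {
            std::free(p);
        }
    }
};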

◆ assert_task_pool_valid()

void tbb::internal::generic_scheduler::assert_task_pool_valid ( ) const
inline

Definition at line 398 of file scheduler.h.

References __TBB_CONTEXT_ARG, __TBB_override, tbb::internal::first(), and parent.

Referenced by tbb::internal::custom_scheduler< SchedulerTraits >::allocate_scheduler(), enqueue(), generic_scheduler(), local_spawn(), prepare_task_pool(), and tbb::task::self().

398 {}
Here is the call graph for this function:
Here is the caller graph for this function:

◆ attach_arena()

void tbb::internal::generic_scheduler::attach_arena ( arena *  a,
size_t  index,
bool  is_master 
)

Definition at line 36 of file arena.cpp.

References __TBB_ASSERT, attach_mailbox(), tbb::internal::task_prefix::context, tbb::internal::mail_inbox::is_idle_state(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, tbb::internal::scheduler_state::my_inbox, my_market, tbb::internal::arena_base::my_market, tbb::internal::arena::my_slots, tbb::task::prefix(), and tbb::internal::mail_inbox::set_is_idle().

Referenced by create_master(), tbb::internal::governor::init_scheduler(), nested_arena_entry(), and tbb::internal::arena::process().

36  {
37  __TBB_ASSERT( a->my_market == my_market, NULL );
38  my_arena = a;
39  my_arena_index = index;
40  my_arena_slot = a->my_slots + index;
41  attach_mailbox( affinity_id(index+1) );
42  if ( is_master && my_inbox.is_idle_state( true ) ) {
43  // Master enters an arena with its own task to be executed. It means that master is not
44  // going to enter stealing loop and take affinity tasks.
45  my_inbox.set_is_idle( false );
46  }
47 #if __TBB_TASK_GROUP_CONTEXT
48  // Context to be used by root tasks by default (if the user has not specified one).
49  if( !is_master )
50  my_dummy_task->prefix().context = a->my_default_ctx;
51 #endif /* __TBB_TASK_GROUP_CONTEXT */
52 #if __TBB_TASK_PRIORITY
53  // In the current implementation master threads continue processing even when
54  // there are other masters with higher priority. Only TBB worker threads are
55  // redistributed between arenas based on the latters' priority. Thus master
56  // threads use arena's top priority as a reference point (in contrast to workers
57  // that use my_market->my_global_top_priority).
58  if( is_master ) {
59  my_ref_top_priority = &a->my_top_priority;
60  my_ref_reload_epoch = &a->my_reload_epoch;
61  }
62  my_local_reload_epoch = *my_ref_reload_epoch;
63  __TBB_ASSERT( !my_offloaded_tasks, NULL );
64 #endif /* __TBB_TASK_PRIORITY */
65 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ attach_mailbox()

void tbb::internal::generic_scheduler::attach_mailbox ( affinity_id  id)
inline

Definition at line 667 of file scheduler.h.

References __TBB_ASSERT, and id.

Referenced by attach_arena(), and free_task().

667  {
668  __TBB_ASSERT(id>0,NULL);
670  my_affinity_id = id;
671 }
Here is the caller graph for this function:

◆ can_steal()

bool tbb::internal::generic_scheduler::can_steal ( )
inline

Returns true if stealing is allowed.

Definition at line 270 of file scheduler.h.

References __TBB_get_bsp(), and __TBB_ISOLATION_EXPR.

270  {
271  int anchor;
272  // TODO IDEA: Add performance warning?
273 #if __TBB_ipf
274  return my_stealing_threshold < (uintptr_t)&anchor && (uintptr_t)__TBB_get_bsp() < my_rsb_stealing_threshold;
275 #else
276  return my_stealing_threshold < (uintptr_t)&anchor;
277 #endif
278  }
Here is the call graph for this function:
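
The heuristic shown above estimates the current stack depth from the address of a local variable and refuses to steal once the stack has filled past my_stealing_threshold. A self-contained sketch of that technique follows; the threshold computation is an assumption for demonstration and does not reproduce init_stack_info().

#include <cstddef>
#include <cstdint>

struct stack_guard {
    std::uintptr_t stealing_threshold;   // lowest stack address at which stealing is still allowed

    // Called once near the thread's stack base (stacks grow downward on common platforms).
    void init(std::uintptr_t stack_base, std::size_t stack_size) {
        stealing_threshold = stack_base - stack_size / 2;   // allow roughly half the stack
    }

    bool can_steal() const {
        int anchor;                                          // its address marks the current depth
        return stealing_threshold < reinterpret_cast<std::uintptr_t>(&anchor);
    }
};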

◆ cleanup_master()

bool tbb::internal::generic_scheduler::cleanup_master ( bool  blocking_terminate)

Perform necessary cleanup when a master thread stops using TBB.

Definition at line 1337 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), cleanup_scheduler(), EmptyTaskPool, tbb::internal::arena_slot_line1::head, tbb::internal::governor::is_set(), is_task_pool_published(), leave_task_pool(), local_wait_for_all(), lock, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_market, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, tbb::internal::NFS_Free(), tbb::internal::arena::on_thread_leaving(), tbb::internal::arena::ref_external, tbb::internal::market::release(), release_task_pool(), tbb::task::set_ref_count(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line1::task_pool.

Referenced by tbb::internal::governor::auto_terminate(), and tbb::internal::governor::terminate_scheduler().

1337  {
1338  arena* const a = my_arena;
1339  market * const m = my_market;
1340  __TBB_ASSERT( my_market, NULL );
1341  if( a && is_task_pool_published() ) {
1345  {
1346  // Local task pool is empty
1347  leave_task_pool();
1348  }
1349  else {
1350  // Master's local task pool may e.g. contain proxies of affinitized tasks.
1352  __TBB_ASSERT ( governor::is_set(this), "TLS slot is cleared before the task pool cleanup" );
1353  // Set refcount to make the following dispatch loop infinite (it is interrupted by the cleanup logic).
1357  __TBB_ASSERT ( governor::is_set(this), "Other thread reused our TLS key during the task pool cleanup" );
1358  }
1359  }
1360 #if __TBB_ARENA_OBSERVER
1361  if( a )
1362  a->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
1363 #endif
1364 #if __TBB_SCHEDULER_OBSERVER
1365  the_global_observer_list.notify_exit_observers( my_last_global_observer, /*worker=*/false );
1366 #endif /* __TBB_SCHEDULER_OBSERVER */
1367 #if _WIN32||_WIN64
1368  m->unregister_master( master_exec_resource );
1369 #endif /* _WIN32||_WIN64 */
1370  if( a ) {
1371  __TBB_ASSERT(a->my_slots+0 == my_arena_slot, NULL);
1372 #if __TBB_STATISTICS
1373  *my_arena_slot->my_counters += my_counters;
1374 #endif /* __TBB_STATISTICS */
1376  }
1377 #if __TBB_TASK_GROUP_CONTEXT
1378  else { // task_group_context ownership was not transferred to arena
1379  default_context()->~task_group_context();
1380  NFS_Free(default_context());
1381  }
1382  context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1383  my_market->my_masters.remove( *this );
1384  lock.release();
1385 #endif /* __TBB_TASK_GROUP_CONTEXT */
1386  my_arena_slot = NULL; // detached from slot
1387  cleanup_scheduler(); // do not use scheduler state after this point
1388 
1389  if( a )
1390  a->on_thread_leaving<arena::ref_external>();
1391  // If there was an associated arena, it added a public market reference
1392  return m->release( /*is_public*/ a != NULL, blocking_terminate );
1393 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ cleanup_scheduler()

void tbb::internal::generic_scheduler::cleanup_scheduler ( )

Cleans up this scheduler (the scheduler might be destroyed).

Definition at line 294 of file scheduler.cpp.

References __TBB_ASSERT, deallocate_task(), destroy(), free_nonlocal_small_task(), tbb::internal::scheduler_state::my_arena_slot, my_free_list, my_market, tbb::internal::scheduler_state::my_properties, my_return_list, my_small_task_count, tbb::internal::task_prefix::next, tbb::internal::task_prefix::origin, p, plugged_return_list(), tbb::task::prefix(), and tbb::internal::governor::sign_off().

Referenced by cleanup_master(), and cleanup_worker().

294  {
295  __TBB_ASSERT( !my_arena_slot, NULL );
296  __TBB_ASSERT( my_offloaded_tasks == NULL, NULL );
297 #if __TBB_PREVIEW_CRITICAL_TASKS
298  __TBB_ASSERT( !my_properties.has_taken_critical_task, "Critical tasks miscount." );
299 #endif
300 #if __TBB_TASK_GROUP_CONTEXT
301  cleanup_local_context_list();
302 #endif /* __TBB_TASK_GROUP_CONTEXT */
303  free_task<small_local_task>( *my_dummy_task );
304 
305 #if __TBB_HOARD_NONLOCAL_TASKS
306  while( task* t = my_nonlocal_free_list ) {
307  task_prefix& p = t->prefix();
308  my_nonlocal_free_list = p.next;
309  __TBB_ASSERT( p.origin && p.origin!=this, NULL );
311  }
312 #endif
313  // k accounts for a guard reference and each task that we deallocate.
314  intptr_t k = 1;
315  for(;;) {
316  while( task* t = my_free_list ) {
317  my_free_list = t->prefix().next;
318  deallocate_task(*t);
319  ++k;
320  }
322  break;
323  my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, (intptr_t)plugged_return_list() );
324  }
325 #if __TBB_COUNT_TASK_NODES
326  my_market->update_task_node_count( my_task_node_count );
327 #endif /* __TBB_COUNT_TASK_NODES */
328  // Update my_small_task_count last. Doing so sooner might cause another thread to free *this.
329  __TBB_ASSERT( my_small_task_count>=k, "my_small_task_count corrupted" );
330  governor::sign_off(this);
331  if( __TBB_FetchAndAddW( &my_small_task_count, -k )==k )
332  destroy();
333 }
Here is the call graph for this function:
Here is the caller graph for this function:
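
cleanup_scheduler() defers destruction until the last outstanding small task is gone: the scheduler keeps a guard reference in my_small_task_count, and whichever thread drops the counter to zero calls destroy(). A simplified, hypothetical stand-in for that pattern (not the TBB code; the object must be heap-allocated because the last release deletes it):

#include <atomic>
#include <cstdio>

struct deferred_owner {
    std::atomic<long> refs{1};                  // the guard reference taken at construction

    void add_ref() { refs.fetch_add(1, std::memory_order_relaxed); }

    void release() {
        if (refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            destroy();                          // last reference: safe to reclaim now
    }

    void destroy() {
        std::printf("reclaimed\n");
        delete this;                            // requires heap allocation
    }
};

Typical use would be (new deferred_owner())->release() for the guard itself, with extra add_ref()/release() pairs coming from remotely freed tasks.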

◆ cleanup_worker()

void tbb::internal::generic_scheduler::cleanup_worker ( void *  arg,
bool  worker 
)
static

Perform necessary cleanup when a worker thread finishes.

Definition at line 1327 of file scheduler.cpp.

References __TBB_ASSERT, cleanup_scheduler(), tbb::internal::scheduler_state::my_arena_slot, and s.

Referenced by tbb::internal::market::cleanup().

1327  {
1329  __TBB_ASSERT( !s.my_arena_slot, "cleaning up attached worker" );
1330 #if __TBB_SCHEDULER_OBSERVER
1331  if ( worker ) // can be called by master for worker, do not notify master twice
1332  the_global_observer_list.notify_exit_observers( s.my_last_global_observer, /*worker=*/true );
1333 #endif /* __TBB_SCHEDULER_OBSERVER */
1334  s.cleanup_scheduler();
1335 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ commit_relocated_tasks()

void tbb::internal::generic_scheduler::commit_relocated_tasks ( size_t  new_tail)
inline

Makes relocated tasks visible to thieves and releases the local task pool.

Obviously, the task pool must be locked when calling this method.

Definition at line 719 of file scheduler.h.

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), and __TBB_store_release.

Referenced by prepare_task_pool().

719  {
721  "Task pool must be locked when calling commit_relocated_tasks()" );
723  // Tail is updated last to minimize probability of a thread making arena
724  // snapshot being misguided into thinking that this task pool is empty.
725  __TBB_store_release( my_arena_slot->tail, new_tail );
727 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ commit_spawned_tasks()

void tbb::internal::generic_scheduler::commit_spawned_tasks ( size_t  new_tail)
inline

Makes newly spawned tasks visible to thieves.

Definition at line 710 of file scheduler.h.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, and sync_releasing.

Referenced by local_spawn().

710  {
711  __TBB_ASSERT ( new_tail <= my_arena_slot->my_task_pool_size, "task deque end was overwritten" );
712  // emit "task was released" signal
713  ITT_NOTIFY(sync_releasing, (void*)((uintptr_t)my_arena_slot+sizeof(uintptr_t)));
714  // Release fence is necessary to make sure that previously stored task pointers
715  // are visible to thieves.
717 }
Here is the call graph for this function:
Here is the caller graph for this function:
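
The publication protocol behind commit_spawned_tasks() (and commit_relocated_tasks()) is: store the task pointers first, then store the new tail with release semantics so a thief that reads tail with acquire is guaranteed to see the pointers. A minimal sketch of that ordering, with illustrative types rather than the real arena_slot layout:

#include <atomic>
#include <cstddef>

struct toy_slot {
    void*                    pool[64];   // task pointers
    std::atomic<std::size_t> tail{0};    // index one past the last ready task
};

inline void commit_spawned(toy_slot& slot, void* task_ptr) {
    std::size_t t = slot.tail.load(std::memory_order_relaxed);
    slot.pool[t] = task_ptr;                               // plain store of the task pointer
    slot.tail.store(t + 1, std::memory_order_release);     // release: makes the store above visible
}

inline void* try_peek(const toy_slot& slot, std::size_t index) {
    // A thief pairs the release above with an acquire load of tail.
    if (index < slot.tail.load(std::memory_order_acquire))
        return slot.pool[index];
    return nullptr;
}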

◆ create_master()

generic_scheduler * tbb::internal::generic_scheduler::create_master ( arena *  a)
static

Initialize a scheduler for a master thread.

Definition at line 1283 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), attach_arena(), tbb::internal::task_prefix::context, tbb::task_group_context::default_traits, tbb::internal::market::global_market(), init_stack_info(), tbb::task_group_context::isolated, lock, tbb::internal::scheduler_properties::master, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_market, tbb::internal::scheduler_state::my_properties, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::NFS_Allocate(), tbb::task::prefix(), tbb::internal::task_prefix::ref_count, tbb::internal::market::release(), s, tbb::internal::governor::sign_on(), and tbb::internal::scheduler_properties::type.

Referenced by tbb::internal::governor::init_scheduler(), and tbb::internal::governor::init_scheduler_weak().

1283  {
1284  // add an internal market reference; the public reference is possibly added in create_arena
1285  generic_scheduler* s = allocate_scheduler( market::global_market(/*is_public=*/false), /* genuine = */ true );
1286  __TBB_ASSERT( !s->my_arena, NULL );
1287  __TBB_ASSERT( s->my_market, NULL );
1288  task& t = *s->my_dummy_task;
1289  s->my_properties.type = scheduler_properties::master;
1290  t.prefix().ref_count = 1;
1291 #if __TBB_TASK_GROUP_CONTEXT
1292  t.prefix().context = new ( NFS_Allocate(1, sizeof(task_group_context), NULL) )
1294 #if __TBB_FP_CONTEXT
1295  s->default_context()->capture_fp_settings();
1296 #endif
1297  // Do not call init_stack_info before the scheduler is set as master or worker.
1298  s->init_stack_info();
1299  context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1300  s->my_market->my_masters.push_front( *s );
1301  lock.release();
1302 #endif /* __TBB_TASK_GROUP_CONTEXT */
1303  if( a ) {
1304  // Master thread always occupies the first slot
1305  s->attach_arena( a, /*index*/0, /*is_master*/true );
1306  s->my_arena_slot->my_scheduler = s;
1307 #if __TBB_TASK_GROUP_CONTEXT
1308  a->my_default_ctx = s->default_context(); // also transfers implied ownership
1309 #endif
1310  }
1311  __TBB_ASSERT( s->my_arena_index == 0, "Master thread must occupy the first slot in its arena" );
1312  governor::sign_on(s);
1313 
1314 #if _WIN32||_WIN64
1315  s->my_market->register_master( s->master_exec_resource );
1316 #endif /* _WIN32||_WIN64 */
1317  // Process any existing observers.
1318 #if __TBB_ARENA_OBSERVER
1319  __TBB_ASSERT( !a || a->my_observers.empty(), "Just created arena cannot have any observers associated with it" );
1320 #endif
1321 #if __TBB_SCHEDULER_OBSERVER
1322  the_global_observer_list.notify_entry_observers( s->my_last_global_observer, /*worker=*/false );
1323 #endif /* __TBB_SCHEDULER_OBSERVER */
1324  return s;
1325 }
Here is the call graph for this function:
Here is the caller graph for this function:
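
The internal create_master() path is normally reached from the public API: in this TBB generation a master scheduler is created for a thread either lazily on first use of the library or explicitly through tbb::task_scheduler_init. A minimal usage sketch of the public side (illustrative only):

#include "tbb/task_scheduler_init.h"
#include "tbb/parallel_for.h"

int main() {
    tbb::task_scheduler_init init(4);       // explicit init: master scheduler for this thread
    tbb::parallel_for(0, 100, [](int) {});  // work that the scheduler dispatches
    return 0;
}                                           // init's destructor triggers the cleanup path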

◆ create_worker()

generic_scheduler * tbb::internal::generic_scheduler::create_worker ( market &  m,
size_t  index,
bool  genuine 
)
static

Initialize a scheduler for a worker thread.

Definition at line 1269 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), init_stack_info(), tbb::internal::scheduler_state::my_arena_index, my_dummy_task, tbb::internal::scheduler_state::my_properties, tbb::task::prefix(), tbb::internal::task_prefix::ref_count, s, tbb::internal::governor::sign_on(), tbb::internal::scheduler_properties::type, and tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::create_one_job(), and wait_until_empty().

1269  {
1270  generic_scheduler* s = allocate_scheduler( m, genuine );
1271  __TBB_ASSERT(!genuine || index, "workers should have index > 0");
1272  s->my_arena_index = index; // index is not a real slot in arena yet
1273  s->my_dummy_task->prefix().ref_count = 2;
1274  s->my_properties.type = scheduler_properties::worker;
1275  // Do not call init_stack_info before the scheduler is set as master or worker.
1276  if (genuine)
1277  s->init_stack_info();
1278  governor::sign_on(s);
1279  return s;
1280 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ deallocate_task()

void tbb::internal::generic_scheduler::deallocate_task ( task &  t)
inline

Return task object to the memory allocator.

Definition at line 683 of file scheduler.h.

References tbb::internal::task_prefix::extra_state, tbb::internal::task_prefix::next, tbb::internal::NFS_Free(), p, tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::task_prefix::state, and tbb::internal::task_prefix_reservation_size.

Referenced by cleanup_scheduler(), and free_nonlocal_small_task().

683  {
684 #if TBB_USE_ASSERT
685  task_prefix& p = t.prefix();
686  p.state = 0xFF;
687  p.extra_state = 0xFF;
688  poison_pointer(p.next);
689 #endif /* TBB_USE_ASSERT */
691 #if __TBB_COUNT_TASK_NODES
692  --my_task_node_count;
693 #endif /* __TBB_COUNT_TASK_NODES */
694 }
Here is the call graph for this function:
Here is the caller graph for this function:
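
deallocate_task() has to undo the layout that allocate_task() sets up on the heap path: the task_prefix sits immediately before the task object in a single allocation, so freeing must step the pointer back by the reservation size. A sketch of that layout with illustrative types (the listing above omits the actual NFS_Free call, so this is an assumption-based stand-in):

#include <cstddef>
#include <cstdlib>

struct toy_prefix { void* owner; long ref_count; };          // stands in for task_prefix
const std::size_t prefix_reservation = sizeof(toy_prefix);   // stands in for task_prefix_reservation_size

void* allocate_with_prefix(std::size_t object_bytes) {
    char* raw = static_cast<char*>(std::malloc(prefix_reservation + object_bytes));
    return raw + prefix_reservation;                         // caller sees only the object part
}

void deallocate_with_prefix(void* object) {
    std::free(static_cast<char*>(object) - prefix_reservation); // step back to the real block start
}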

◆ destroy()

void tbb::internal::generic_scheduler::destroy ( )

Destroy and deallocate this scheduler object.

Definition at line 285 of file scheduler.cpp.

References __TBB_ASSERT, my_small_task_count, and tbb::internal::NFS_Free().

Referenced by cleanup_scheduler().

285  {
286  __TBB_ASSERT(my_small_task_count == 0, "The scheduler is still in use.");
287  this->~generic_scheduler();
288 #if TBB_USE_DEBUG
289  memset((void*)this, -1, sizeof(generic_scheduler));
290 #endif
291  NFS_Free(this);
292 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ enqueue()

void tbb::internal::generic_scheduler::enqueue ( task &  t,
void *  reserved 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 745 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_ISOLATION_ARG, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), acquire_task_pool(), tbb::internal::arena::advertise_new_work(), assert_task_pool_valid(), tbb::internal::assert_task_valid(), tbb::internal::fast_reverse_vector< T, max_segments >::copy_memory(), tbb::internal::arena::enqueue_task(), tbb::internal::arena_slot::fill_with_canary_pattern(), GATHER_STATISTIC, get_task(), tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), is_proxy(), is_task_pool_published(), leave_task_pool(), tbb::internal::governor::local_scheduler(), tbb::internal::max(), min_task_pool_size, tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, my_market, tbb::internal::arena_base::my_num_workers_requested, my_random, tbb::internal::task_prefix::next_offloaded, tbb::task::note_affinity(), tbb::internal::num_priority_levels, tbb::internal::task_prefix::owner, p, tbb::internal::poison_pointer(), tbb::task::prefix(), prepare_task_pool(), publish_task_pool(), tbb::internal::fast_reverse_vector< T, max_segments >::push_back(), tbb::task::ready, release_task_pool(), s, tbb::internal::fast_reverse_vector< T, max_segments >::size(), tbb::internal::task_prefix::state, tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line2::task_pool_ptr, tbb::internal::arena::wakeup, and tbb::internal::arena::work_spawned.

745  {
747  // these redirections are due to bw-compatibility, consider reworking some day
748  __TBB_ASSERT( s->my_arena, "thread is not in any arena" );
749  s->my_arena->enqueue_task(t, (intptr_t)prio, s->my_random );
750 }
Here is the call graph for this function:

◆ free_nonlocal_small_task()

void tbb::internal::generic_scheduler::free_nonlocal_small_task ( task &  t)

Free a small task t that was allocated by a different scheduler.

Definition at line 410 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_cl_evict, __TBB_FetchAndDecrementWrelease, tbb::internal::as_atomic(), deallocate_task(), tbb::task::freed, ITT_NOTIFY, tbb::internal::task_prefix::next, tbb::internal::task_prefix::origin, plugged_return_list(), tbb::task::prefix(), s, tbb::task::state(), and sync_releasing.

Referenced by cleanup_scheduler().

410  {
411  __TBB_ASSERT( t.state()==task::freed, NULL );
412  generic_scheduler& s = *static_cast<generic_scheduler*>(t.prefix().origin);
413  __TBB_ASSERT( &s!=this, NULL );
414  for(;;) {
415  task* old = s.my_return_list;
416  if( old==plugged_return_list() )
417  break;
418  // Atomically insert t at head of s.my_return_list
419  t.prefix().next = old;
420  ITT_NOTIFY( sync_releasing, &s.my_return_list );
421  if( as_atomic(s.my_return_list).compare_and_swap(&t, old )==old ) {
422 #if __TBB_PREFETCHING
423  __TBB_cl_evict(&t.prefix());
424  __TBB_cl_evict(&t);
425 #endif
426  return;
427  }
428  }
429  deallocate_task(t);
430  if( __TBB_FetchAndDecrementWrelease( &s.my_small_task_count )==1 ) {
431  // We freed the last task allocated by scheduler s, so it's our responsibility
432  // to free the scheduler.
433  s.destroy();
434  }
435 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ free_task()

template<free_task_hint hint>
void tbb::internal::generic_scheduler::free_task ( task & t )

Put task on free list.

Does not call destructor.

Definition at line 730 of file scheduler.h.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), tbb::task::allocated, tbb::internal::governor::assume_scheduler(), attach_mailbox(), tbb::internal::co_local_wait_for_all(), tbb::internal::task_prefix::depth, tbb::task::executing, tbb::task::freed, GATHER_STATISTIC, tbb::internal::cpu_ctl_env::get_env(), h, init_stack_info(), tbb::internal::governor::is_set(), ITT_TASK_BEGIN, ITT_TASK_END, tbb::internal::local_task, local_wait_for_all(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::task_group_context::my_cpu_ctl_env, my_dummy_task, tbb::internal::scheduler_state::my_innermost_running_task, tbb::task_group_context::my_name, tbb::internal::scheduler_state::my_properties, tbb::internal::task_prefix::next, tbb::internal::task_prefix::next_offloaded, tbb::internal::no_cache, tbb::internal::arena::on_thread_leaving(), tbb::internal::task_prefix::origin, tbb::internal::task_prefix::owner, p, tbb::internal::poison_pointer(), poison_value, tbb::task::prefix(), tbb::internal::punned_cast(), tbb::task::ready, tbb::internal::task_prefix::ref_count, tbb::internal::arena::ref_external, tbb::relaxed, tbb::internal::co_context::resume(), s, tbb::internal::cpu_ctl_env::set_env(), tbb::internal::small_local_task, tbb::internal::small_task, tbb::internal::task_prefix::state, tbb::task::state(), and worker_outermost_level().

Referenced by tbb::interface5::internal::task_base::destroy(), tbb::internal::allocate_additional_child_of_proxy::free(), tbb::internal::allocate_root_proxy::free(), tbb::internal::allocate_continuation_proxy::free(), tbb::internal::allocate_child_proxy::free(), and tbb::internal::auto_empty_task::~auto_empty_task().

730  {
731 #if __TBB_HOARD_NONLOCAL_TASKS
732  static const int h = hint&(~local_task);
733 #else
734  static const free_task_hint h = hint;
735 #endif
736  GATHER_STATISTIC(--my_counters.active_tasks);
737  task_prefix& p = t.prefix();
738  // Verify that optimization hints are correct.
739  __TBB_ASSERT( h!=small_local_task || p.origin==this, NULL );
740  __TBB_ASSERT( !(h&small_task) || p.origin, NULL );
741  __TBB_ASSERT( !(h&local_task) || (!p.origin || uintptr_t(p.origin) > uintptr_t(4096)), "local_task means allocated");
742  poison_value(p.depth);
743  poison_value(p.ref_count);
744  poison_pointer(p.owner);
745 #if __TBB_PREVIEW_RESUMABLE_TASKS
746  __TBB_ASSERT(1L << t.state() & (1L << task::executing | 1L << task::allocated | 1 << task::to_resume), NULL);
747 #else
748  __TBB_ASSERT(1L << t.state() & (1L << task::executing | 1L << task::allocated), NULL);
749 #endif
750  p.state = task::freed;
751  if( h==small_local_task || p.origin==this ) {
752  GATHER_STATISTIC(++my_counters.free_list_length);
753  p.next = my_free_list;
754  my_free_list = &t;
755  } else if( !(h&local_task) && p.origin && uintptr_t(p.origin) < uintptr_t(4096) ) {
756  // a special value reserved for future use, do nothing since
757  // origin is not pointing to a scheduler instance
758  } else if( !(h&local_task) && p.origin ) {
759  GATHER_STATISTIC(++my_counters.free_list_length);
760 #if __TBB_HOARD_NONLOCAL_TASKS
761  if( !(h&no_cache) ) {
762  p.next = my_nonlocal_free_list;
763  my_nonlocal_free_list = &t;
764  } else
765 #endif
766  free_nonlocal_small_task(t);
767  } else {
768  GATHER_STATISTIC(--my_counters.big_tasks);
769  deallocate_task(t);
770  }
771 }
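The branches above dispatch on the free_task_hint: small tasks owned by this scheduler go onto my_free_list, small tasks owned by another scheduler go to that scheduler's return list (or a local hoard), and everything else is returned to the allocator. A simplified, hypothetical sketch of the "small local task" fast path, i.e. a per-thread intrusive free list (not TBB code):

    #include <cstdlib>

    struct block { block* next; };

    static thread_local block* my_free_list = nullptr;   // per-thread cache

    void free_small_local( block* b ) {
        // Push onto the owner's free list; no synchronization is needed because
        // only the owning thread ever touches this list.
        b->next = my_free_list;
        my_free_list = b;
    }

    block* allocate_small_local() {
        if ( block* b = my_free_list ) {      // reuse a cached block when available
            my_free_list = b->next;
            return b;
        }
        return static_cast<block*>( std::malloc( sizeof(block) ) );
    }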
Here is the call graph for this function:
Here is the caller graph for this function:

◆ get_mailbox_task()

task * tbb::internal::generic_scheduler::get_mailbox_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempt to get a task from the mailbox.

Gets a task only if it has not been executed by its sender or a thief that has stolen it from the sender's task pool. Otherwise returns NULL.

This method is intended to be used only by the thread extracting the proxy from its mailbox (in contrast to the local task pool, a mailbox can be read only by its owner).

Definition at line 1230 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_ISOLATION_EXPR, tbb::internal::es_task_is_stolen, ITT_NOTIFY, tbb::internal::task_proxy::mailbox_bit, tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_inbox, and tbb::internal::mail_inbox::pop().

1230  {
1231  __TBB_ASSERT( my_affinity_id>0, "not in arena" );
1232  while ( task_proxy* const tp = my_inbox.pop( __TBB_ISOLATION_EXPR( isolation ) ) ) {
1233  if ( task* result = tp->extract_task<task_proxy::mailbox_bit>() ) {
1234  ITT_NOTIFY( sync_acquired, my_inbox.outbox() );
1235  result->prefix().extra_state |= es_task_is_stolen;
1236  return result;
1237  }
1238  // We have exclusive access to the proxy, and can destroy it.
1239  free_task<no_cache_small_task>(*tp);
1240  }
1241  return NULL;
1242 }
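The proxy popped here lives in two places at once: the sender's task pool and this mailbox. extract_task() claims the task from one location and leaves the other location responsible for freeing the emptied proxy. A hypothetical standalone sketch of that dual-location protocol (illustrative names and std::atomic, not TBB's task_proxy):

    #include <atomic>
    #include <cstdint>

    struct task;   // opaque payload

    struct proxy {
        static constexpr std::intptr_t pool_bit      = 1;
        static constexpr std::intptr_t mailbox_bit   = 2;
        static constexpr std::intptr_t location_mask = pool_bit | mailbox_bit;
        std::atomic<std::intptr_t> task_and_tag;   // task pointer ORed with location bits

        // Called with the bit of the location being consumed from. Returns the task
        // if this call claimed it; returns nullptr if the other side already took it,
        // in which case freeing the now-empty proxy becomes the caller's job.
        task* extract_task( std::intptr_t from_bit ) {
            std::intptr_t tat = task_and_tag.load( std::memory_order_acquire );
            if ( tat != from_bit ) {                            // proxy still carries a task
                std::intptr_t cleaner_bit = location_mask & ~from_bit;
                if ( task_and_tag.compare_exchange_strong( tat, cleaner_bit ) )
                    return reinterpret_cast<task*>( tat & ~location_mask );
            }
            return nullptr;                                     // already claimed elsewhere
        }
    };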
Here is the call graph for this function:

◆ get_task() [1/2]

task * tbb::internal::generic_scheduler::get_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )
inline

Get a task from the local pool.

Called only by the pool owner. Returns the pointer to the task or NULL if a suitable task is not found. Resets the pool if it is empty.

Definition at line 1008 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), tbb::internal::arena::advertise_new_work(), tbb::internal::assert_task_valid(), tbb::atomic_fence(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::task::note_affinity(), tbb::internal::poison_pointer(), publish_task_pool(), release_task_pool(), reset_task_pool_and_leave(), tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::wakeup.

Referenced by enqueue().

1008  {
1010  // The current task position in the task pool.
1011  size_t T0 = __TBB_load_relaxed( my_arena_slot->tail );
1012  // The bounds of available tasks in the task pool. H0 is only used when the head bound is reached.
1013  size_t H0 = (size_t)-1, T = T0;
1014  task* result = NULL;
1015  bool task_pool_empty = false;
1016  __TBB_ISOLATION_EXPR( bool tasks_omitted = false );
1017  do {
1018  __TBB_ASSERT( !result, NULL );
1020  atomic_fence();
1021  if ( (intptr_t)__TBB_load_relaxed( my_arena_slot->head ) > (intptr_t)T ) {
1024  if ( (intptr_t)H0 > (intptr_t)T ) {
1025  // The thief has not backed off - nothing to grab.
1028  && H0 == T + 1, "victim/thief arbitration algorithm failure" );
1030  // No tasks in the task pool.
1031  task_pool_empty = true;
1032  break;
1033  } else if ( H0 == T ) {
1034  // There is only one task in the task pool.
1036  task_pool_empty = true;
1037  } else {
1038  // Release task pool if there are still some tasks.
1039  // After the release, the tail will be less than T, thus a thief
1040  // will not attempt to get a task at position T.
1041  release_task_pool();
1042  }
1043  }
1044  __TBB_control_consistency_helper(); // on my_arena_slot->head
1045 #if __TBB_TASK_ISOLATION
1046  result = get_task( T, isolation, tasks_omitted );
1047  if ( result ) {
1049  break;
1050  } else if ( !tasks_omitted ) {
1052  __TBB_ASSERT( T0 == T+1, NULL );
1053  T0 = T;
1054  }
1055 #else
1056  result = get_task( T );
1057 #endif /* __TBB_TASK_ISOLATION */
1058  } while ( !result && !task_pool_empty );
1059 
1060 #if __TBB_TASK_ISOLATION
1061  if ( tasks_omitted ) {
1062  if ( task_pool_empty ) {
1063  // All tasks have been checked. The task pool should be in reset state.
1064  // We just restore the bounds for the available tasks.
1065  // TODO: Does it have sense to move them to the beginning of the task pool?
1067  if ( result ) {
1068  // If we have a task, it should be at H0 position.
1069  __TBB_ASSERT( H0 == T, NULL );
1070  ++H0;
1071  }
1072  __TBB_ASSERT( H0 <= T0, NULL );
1073  if ( H0 < T0 ) {
1074  // Restore the task pool if there are some tasks.
1077  // The release fence is used in publish_task_pool.
1079  // Synchronize with snapshot as we published some tasks.
1081  }
1082  } else {
1083  // A task has been obtained. We need to make a hole in position T.
1085  __TBB_ASSERT( result, NULL );
1086  my_arena_slot->task_pool_ptr[T] = NULL;
1088  // Synchronize with snapshot as we published some tasks.
1089  // TODO: consider some approach not to call wakeup for each time. E.g. check if the tail reached the head.
1091  }
1092 
1093  // Now it is safe to call note_affinity because the task pool is restored.
1094  if ( my_innermost_running_task == result ) {
1095  assert_task_valid( result );
1096  result->note_affinity( my_affinity_id );
1097  }
1098  }
1099 #endif /* __TBB_TASK_ISOLATION */
1100  __TBB_ASSERT( (intptr_t)__TBB_load_relaxed( my_arena_slot->tail ) >= 0, NULL );
1101  __TBB_ASSERT( result || __TBB_ISOLATION_EXPR( tasks_omitted || ) is_quiescent_local_task_pool_reset(), NULL );
1102  return result;
1103 } // generic_scheduler::get_task
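The loop above is the owner-side pop with thief arbitration: tentatively claim the tail slot, issue a full fence, then compare against head, racing for the last element if necessary. A simplified standalone sketch of that arbitration in the spirit of the Chase-Lev / Cilk "THE" protocol (TBB's real code additionally handles proxies, isolation and pool resizing):

    #include <atomic>
    #include <cstddef>

    struct task;   // opaque

    struct work_deque {
        std::atomic<std::ptrdiff_t> head;   // advanced by thieves
        std::atomic<std::ptrdiff_t> tail;   // retreated by the owner on pop
        task** pool;

        task* owner_pop() {
            std::ptrdiff_t t = tail.load( std::memory_order_relaxed ) - 1;
            tail.store( t, std::memory_order_relaxed );          // tentatively claim slot t
            std::atomic_thread_fence( std::memory_order_seq_cst );
            std::ptrdiff_t h = head.load( std::memory_order_relaxed );
            if ( h > t ) {                                       // a thief got there first
                tail.store( t + 1, std::memory_order_relaxed );  // back off
                return nullptr;                                  // pool is empty
            }
            task* result = pool[t];
            if ( h == t ) {
                // Last element: race the thieves for it by advancing head.
                if ( !head.compare_exchange_strong( h, h + 1 ) )
                    result = nullptr;                            // lost the race
                tail.store( t + 1, std::memory_order_relaxed );
            }
            return result;
        }
    };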
Here is the call graph for this function:
Here is the caller graph for this function:

◆ get_task() [2/2]

task * tbb::internal::generic_scheduler::get_task ( size_t  T)
inline

Get a task from the local pool at specified location T.

Returns a pointer to the task, or NULL if the task cannot be executed (e.g. the proxy has already been deallocated, or the isolation constraint is not met). The tasks_omitted flag reports whether some tasks were skipped. Called only by the pool owner. The caller must guarantee that position T is not available to a thief.

Definition at line 957 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, is_local_task_pool_quiescent(), is_proxy(), is_version_3_task(), tbb::internal::task_prefix::isolation, tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::no_isolation, tbb::task::note_affinity(), tbb::internal::poison_pointer(), tbb::internal::task_proxy::pool_bit, tbb::task::prefix(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

959 {
961  || is_local_task_pool_quiescent(), "Is it safe to get a task at position T?" );
962 
963  task* result = my_arena_slot->task_pool_ptr[T];
964  __TBB_ASSERT( !is_poisoned( result ), "The poisoned task is going to be processed" );
965 #if __TBB_TASK_ISOLATION
966  if ( !result )
967  return NULL;
968 
969  bool omit = isolation != no_isolation && isolation != result->prefix().isolation;
970  if ( !omit && !is_proxy( *result ) )
971  return result;
972  else if ( omit ) {
973  tasks_omitted = true;
974  return NULL;
975  }
976 #else
978  if ( !result || !is_proxy( *result ) )
979  return result;
980 #endif /* __TBB_TASK_ISOLATION */
981 
982  task_proxy& tp = static_cast<task_proxy&>(*result);
983  if ( task *t = tp.extract_task<task_proxy::pool_bit>() ) {
984  GATHER_STATISTIC( ++my_counters.proxies_executed );
985  // Following assertion should be true because TBB 2.0 tasks never specify affinity, and hence are not proxied.
986  __TBB_ASSERT( is_version_3_task( *t ), "backwards compatibility with TBB 2.0 broken" );
987  my_innermost_running_task = t; // prepare for calling note_affinity()
988 #if __TBB_TASK_ISOLATION
989  // Task affinity has changed. Postpone calling note_affinity because the task pool is in invalid state.
990  if ( !tasks_omitted )
991 #endif /* __TBB_TASK_ISOLATION */
992  {
994  t->note_affinity( my_affinity_id );
995  }
996  return t;
997  }
998 
999  // Proxy was empty, so it's our responsibility to free it
1000  free_task<small_task>( tp );
1001 #if __TBB_TASK_ISOLATION
1002  if ( tasks_omitted )
1003  my_arena_slot->task_pool_ptr[T] = NULL;
1004 #endif /* __TBB_TASK_ISOLATION */
1005  return NULL;
1006 }
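At the user level, the isolation tag examined here is attached to tasks by tbb::this_task_arena::isolate(): a thread blocked inside the isolated region will not pick up unrelated outer tasks from its local pool. A usage-level sketch, assuming the public tbb/task_arena.h API of this TBB generation:

    #include "tbb/task_arena.h"
    #include "tbb/parallel_for.h"

    void isolated_nested_loop( int n ) {
        tbb::parallel_for( 0, n, []( int ) {
            tbb::this_task_arena::isolate( [] {
                // Tasks spawned here carry a distinct isolation tag.
                tbb::parallel_for( 0, 100, []( int ) { /* nested work */ } );
            } );
        } );
    }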
Here is the call graph for this function:

◆ init_stack_info()

void tbb::internal::generic_scheduler::init_stack_info ( )

Sets up the data necessary for the stealing limiting heuristics.

Definition at line 158 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_get_bsp(), __TBB_get_object_ref, tbb::internal::__TBB_load_relaxed(), tbb::spin_mutex::scoped_lock::acquire(), tbb::internal::as_atomic(), tbb::atomic_fence(), tbb::task_group_context::binding_required, tbb::task_group_context::detached, tbb::task_group_context::dying, is_worker(), lock, tbb::internal::MByte, tbb::task_group_context::my_kind, my_market, tbb::internal::context_list_node_t::my_next, tbb::internal::scheduler_state::my_properties, my_stealing_threshold, tbb::task_group_context::my_version_and_traits, tbb::relaxed, tbb::release, tbb::internal::spin_wait_until_eq(), and tbb::internal::market::worker_stack_size().

Referenced by create_master(), create_worker(), and free_task().

158  {
159  // Stacks are growing top-down. Highest address is called "stack base",
160  // and the lowest is "stack limit".
161  __TBB_ASSERT( !my_stealing_threshold, "Stealing threshold has already been calculated" );
162  size_t stack_size = my_market->worker_stack_size();
163 #if USE_WINTHREAD
164 #if defined(_MSC_VER)&&_MSC_VER<1400 && !_WIN64
165  NT_TIB *pteb;
166  __asm mov eax, fs:[0x18]
167  __asm mov pteb, eax
168 #else
169  NT_TIB *pteb = (NT_TIB*)NtCurrentTeb();
170 #endif
171  __TBB_ASSERT( &pteb < pteb->StackBase && &pteb > pteb->StackLimit, "invalid stack info in TEB" );
172  __TBB_ASSERT( stack_size >0, "stack_size not initialized?" );
173  // When a thread is created with the attribute STACK_SIZE_PARAM_IS_A_RESERVATION, stack limit
174  // in the TIB points to the committed part of the stack only. This renders the expression
175  // "(uintptr_t)pteb->StackBase / 2 + (uintptr_t)pteb->StackLimit / 2" virtually useless.
176  // Thus for worker threads we use the explicit stack size we used while creating them.
177  // And for master threads we rely on the following fact and assumption:
178  // - the default stack size of a master thread on Windows is 1M;
179  // - if it was explicitly set by the application it is at least as large as the size of a worker stack.
180  if ( is_worker() || stack_size < MByte )
181  my_stealing_threshold = (uintptr_t)pteb->StackBase - stack_size / 2;
182  else
183  my_stealing_threshold = (uintptr_t)pteb->StackBase - MByte / 2;
184 #else /* USE_PTHREAD */
185  // There is no portable way to get stack base address in Posix, so we use
186  // non-portable method (on all modern Linux) or the simplified approach
187  // based on the common sense assumptions. The most important assumption
188  // is that the main thread's stack size is not less than that of other threads.
189  // See also comment 3 at the end of this file
190  void *stack_base = &stack_size;
191 #if __linux__ && !__bg__
192 #if __TBB_ipf
193  void *rsb_base = __TBB_get_bsp();
194 #endif
195  size_t np_stack_size = 0;
196  // Points to the lowest addressable byte of a stack.
197  void *stack_limit = NULL;
198 
199 #if __TBB_PREVIEW_RESUMABLE_TASKS
200  if ( !my_properties.genuine ) {
201  stack_limit = my_co_context.get_stack_limit();
202  __TBB_ASSERT( (uintptr_t)stack_base > (uintptr_t)stack_limit, "stack size must be positive" );
203  // Size of the stack free part
204  stack_size = size_t((char*)stack_base - (char*)stack_limit);
205  }
206 #endif
207 
208  pthread_attr_t np_attr_stack;
209  if( !stack_limit && 0 == pthread_getattr_np(pthread_self(), &np_attr_stack) ) {
210  if ( 0 == pthread_attr_getstack(&np_attr_stack, &stack_limit, &np_stack_size) ) {
211 #if __TBB_ipf
212  pthread_attr_t attr_stack;
213  if ( 0 == pthread_attr_init(&attr_stack) ) {
214  if ( 0 == pthread_attr_getstacksize(&attr_stack, &stack_size) ) {
215  if ( np_stack_size < stack_size ) {
216  // We are in a secondary thread. Use reliable data.
217  // IA-64 architecture stack is split into RSE backup and memory parts
218  rsb_base = stack_limit;
219  stack_size = np_stack_size/2;
220  // Limit of the memory part of the stack
221  stack_limit = (char*)stack_limit + stack_size;
222  }
223  // We are either in the main thread or this thread stack
224  // is bigger than that of the main one. As we cannot discern
225  // these cases we fall back to the default (heuristic) values.
226  }
227  pthread_attr_destroy(&attr_stack);
228  }
229  // IA-64 architecture stack is split into RSE backup and memory parts
230  my_rsb_stealing_threshold = (uintptr_t)((char*)rsb_base + stack_size/2);
231 #endif /* __TBB_ipf */
232  // TODO: pthread_attr_getstack cannot be used with Intel(R) Cilk(TM) Plus
233  // __TBB_ASSERT( (uintptr_t)stack_base > (uintptr_t)stack_limit, "stack size must be positive" );
234  // Size of the stack free part
235  stack_size = size_t((char*)stack_base - (char*)stack_limit);
236  }
237  pthread_attr_destroy(&np_attr_stack);
238  }
239 #endif /* __linux__ */
240  __TBB_ASSERT( stack_size>0, "stack size must be positive" );
241  my_stealing_threshold = (uintptr_t)((char*)stack_base - stack_size/2);
242 #endif /* USE_PTHREAD */
243 }
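A minimal Linux-only sketch of the same heuristic, assuming glibc's pthread_getattr_np as used in the listing (the helper names are hypothetical, not TBB code): the stealing threshold sits roughly half-way up the thread's stack, and stealing is refused once the stack has grown below it.

    #define _GNU_SOURCE 1
    #include <pthread.h>
    #include <cstddef>
    #include <cstdint>

    static std::uintptr_t compute_stealing_threshold() {
        void*  stack_limit = nullptr;    // lowest addressable byte of the stack
        size_t stack_size  = 0;
        pthread_attr_t attr;
        if ( pthread_getattr_np( pthread_self(), &attr ) == 0 ) {
            pthread_attr_getstack( &attr, &stack_limit, &stack_size );
            pthread_attr_destroy( &attr );
        }
        // Stacks grow downwards, so the threshold is half-way between the
        // stack limit and the stack base.
        return reinterpret_cast<std::uintptr_t>( stack_limit ) + stack_size / 2;
    }

    static bool can_steal_here( std::uintptr_t threshold ) {
        int anchor;   // its address approximates the current stack pointer
        return reinterpret_cast<std::uintptr_t>( &anchor ) > threshold;
    }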
Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_local_task_pool_quiescent()

bool tbb::internal::generic_scheduler::is_local_task_pool_quiescent ( ) const
inline

Definition at line 633 of file scheduler.h.

References __TBB_ASSERT, EmptyTaskPool, and LockedTaskPool.

Referenced by enqueue(), and get_task().

633  {
635  task** tp = my_arena_slot->task_pool;
636  return tp == EmptyTaskPool || tp == LockedTaskPool;
637 }
Here is the caller graph for this function:

◆ is_proxy()

static bool tbb::internal::generic_scheduler::is_proxy ( const task & t )
inline static

True if t is a task_proxy.

Definition at line 348 of file scheduler.h.

References __TBB_ISOLATION_ARG, __TBB_ISOLATION_EXPR, tbb::internal::es_task_proxy, tbb::internal::task_prefix::extra_state, and tbb::task::prefix().

Referenced by enqueue(), get_task(), steal_task(), and steal_task_from().

348  {
349  return t.prefix().extra_state==es_task_proxy;
350  }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_quiescent_local_task_pool_empty()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_empty ( ) const
inline

Definition at line 639 of file scheduler.h.

References __TBB_ASSERT, and tbb::internal::__TBB_load_relaxed().

Referenced by leave_task_pool().

639  {
640  __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
641  return __TBB_load_relaxed( my_arena_slot->head ) == __TBB_load_relaxed( my_arena_slot->tail );
642 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_quiescent_local_task_pool_reset()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_reset ( ) const
inline

Definition at line 644 of file scheduler.h.

References __TBB_ASSERT, and tbb::internal::__TBB_load_relaxed().

Referenced by get_task(), prepare_task_pool(), and tbb::internal::arena::process().

644  {
645  __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
646  return __TBB_load_relaxed( my_arena_slot->head ) == 0 && __TBB_load_relaxed( my_arena_slot->tail ) == 0;
647 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_task_pool_published()

bool tbb::internal::generic_scheduler::is_task_pool_published ( ) const
inline

Definition at line 628 of file scheduler.h.

References __TBB_ASSERT, and EmptyTaskPool.

Referenced by acquire_task_pool(), cleanup_master(), enqueue(), get_task(), leave_task_pool(), local_spawn(), prepare_task_pool(), and release_task_pool().

628  {
630  return my_arena_slot->task_pool != EmptyTaskPool;
631 }
Here is the caller graph for this function:

◆ is_version_3_task()

static bool tbb::internal::generic_scheduler::is_version_3_task ( task & t )
inline static

Definition at line 146 of file scheduler.h.

References tbb::internal::task_prefix::extra_state, and tbb::task::prefix().

Referenced by get_task(), prepare_for_spawning(), and steal_task().

146  {
147 #if __TBB_PREVIEW_CRITICAL_TASKS
148  return (t.prefix().extra_state & 0x7)>=0x1;
149 #else
150  return (t.prefix().extra_state & 0x0F)>=0x1;
151 #endif
152  }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ is_worker()

bool tbb::internal::generic_scheduler::is_worker ( ) const
inline

True if running on a worker thread, false otherwise.

Definition at line 673 of file scheduler.h.

References tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::cleanup(), init_stack_info(), nested_arena_entry(), nested_arena_exit(), tbb::internal::governor::tls_value_of(), and wait_until_empty().

673  {
674  return my_properties.type == scheduler_properties::worker;
675 }
Here is the caller graph for this function:

◆ leave_task_pool()

void tbb::internal::generic_scheduler::leave_task_pool ( )
inline

Leave the task pool.

Leaving task pool automatically releases the task pool if it is locked.

Definition at line 1256 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), EmptyTaskPool, is_quiescent_local_task_pool_empty(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by cleanup_master(), and enqueue().

1256  {
1257  __TBB_ASSERT( is_task_pool_published(), "Not in arena" );
1258  // Do not reset my_arena_index. It will be used to (attempt to) re-acquire the slot next time
1259  __TBB_ASSERT( &my_arena->my_slots[my_arena_index] == my_arena_slot, "arena slot and slot index mismatch" );
1260  __TBB_ASSERT ( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when leaving arena" );
1261  __TBB_ASSERT ( is_quiescent_local_task_pool_empty(), "Cannot leave arena when the task pool is not empty" );
1262  ITT_NOTIFY(sync_releasing, my_arena_slot);
1263  // No release fence is necessary here as this assignment precludes external
1264  // accesses to the local task pool when it becomes visible. Thus it is harmless
1265  // if it gets hoisted above preceding local bookkeeping manipulations.
1266  __TBB_store_relaxed( my_arena_slot->task_pool, EmptyTaskPool );
1267 }
Here is the call graph for this function:
Here is the caller graph for this function:

◆ local_spawn()

void tbb::internal::generic_scheduler::local_spawn ( task * first,
task *&  next 
)

Conceptually, this method should be a member of class scheduler. But doing so would force us to publish class scheduler in the headers.

Definition at line 649 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::arena::advertise_new_work(), assert_task_pool_valid(), commit_spawned_tasks(), tbb::internal::fast_reverse_vector< T, max_segments >::copy_memory(), end, tbb::internal::governor::is_set(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::task_prefix::next, tbb::task::prefix(), prepare_for_spawning(), prepare_task_pool(), publish_task_pool(), tbb::internal::fast_reverse_vector< T, max_segments >::push_back(), tbb::internal::fast_reverse_vector< T, max_segments >::size(), tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::work_spawned.

Referenced by local_spawn_root_and_wait(), spawn(), and tbb::task::spawn_and_wait_for_all().

649  {
650  __TBB_ASSERT( first, NULL );
651  __TBB_ASSERT( governor::is_set(this), NULL );
652 #if __TBB_TODO
653  // We need to consider capping the max task pool size and switching
654  // to in-place task execution whenever it is reached.
655 #endif
656  if ( &first->prefix().next == &next ) {
657  // Single task is being spawned
658 #if __TBB_TODO
659  // TODO:
660  // In the future we need to add overloaded spawn method for a single task,
661  // and a method accepting an array of task pointers (we may also want to
662  // change the implementation of the task_list class). But since such changes
663  // may affect the binary compatibility, we postpone them for a while.
664 #endif
665 #if __TBB_PREVIEW_CRITICAL_TASKS
666  if( !handled_as_critical( *first ) )
667 #endif
668  {
669  size_t T = prepare_task_pool( 1 );
670  my_arena_slot->task_pool_ptr[T] = prepare_for_spawning( first );
671  commit_spawned_tasks( T + 1 );
672  if ( !is_task_pool_published() )
673  publish_task_pool();
674  }
675  }
676  else {
677  // Task list is being spawned
678 #if __TBB_TODO
679  // TODO: add task_list::front() and implement&document the local execution ordering which is
680  // opposite to the current implementation. The idea is to remove hackish fast_reverse_vector
681  // and use push_back/push_front when accordingly LIFO and FIFO order of local execution is
682  // desired. It also requires refactoring of the reload_tasks method and my_offloaded_tasks list.
683  // Additional benefit may come from adding counter to the task_list so that it can reserve enough
684  // space in the task pool in advance and move all the tasks directly without any intermediate
685  // storages. But it requires dealing with backward compatibility issues and still supporting
686  // counter-less variant (though not necessarily fast implementation).
687 #endif
688  task *arr[min_task_pool_size];
689  fast_reverse_vector<task*> tasks(arr, min_task_pool_size);
690  task *t_next = NULL;
691  for( task* t = first; ; t = t_next ) {
692  // If t is affinitized to another thread, it may already be executed
693  // and destroyed by the time prepare_for_spawning returns.
694  // So milk it while it is alive.
695  bool end = &t->prefix().next == &next;
696  t_next = t->prefix().next;
697 #if __TBB_PREVIEW_CRITICAL_TASKS
698  if( !handled_as_critical( *t ) )
699 #endif
700  tasks.push_back( prepare_for_spawning(t) );
701  if( end )
702  break;
703  }
704  if( size_t num_tasks = tasks.size() ) {
705  size_t T = prepare_task_pool( num_tasks );
706  tasks.copy_memory( my_arena_slot->task_pool_ptr + T );
707  commit_spawned_tasks( T + num_tasks );
708  if ( !is_task_pool_published() )
709  publish_task_pool();
710  }
711  }
712  my_arena->advertise_new_work<arena::work_spawned>();
713  assert_task_pool_valid();
714 }
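At the user level, the "task list is being spawned" branch above is exercised by spawning a tbb::task_list, whose elements are gathered into a fast_reverse_vector and copied into the pool in one transaction. A usage sketch assuming only the public tbb/task.h API (leaf_task is hypothetical):

    #include "tbb/task.h"

    struct leaf_task : tbb::task {
        tbb::task* execute() { /* do a piece of work */ return NULL; }
    };

    void spawn_batch( tbb::task& parent, int n ) {
        tbb::task_list batch;
        parent.set_ref_count( n + 1 );              // n children plus the wait itself
        for ( int i = 0; i < n; ++i )
            batch.push_back( *new( parent.allocate_child() ) leaf_task );
        parent.spawn_and_wait_for_all( batch );     // spawns the whole list, then waits
    }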
Here is the call graph for this function:
Here is the caller graph for this function:

◆ local_spawn_root_and_wait()

void tbb::internal::generic_scheduler::local_spawn_root_and_wait ( task * first,
task *&  next 
)

Definition at line 716 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_CONTEXT_ARG, tbb::internal::task_prefix::context, tbb::internal::governor::is_set(), local_spawn(), local_wait_for_all(), tbb::internal::task_prefix::next, tbb::internal::task_prefix::parent, tbb::internal::auto_empty_task::prefix(), tbb::task::prefix(), and tbb::internal::task_prefix::ref_count.

Referenced by spawn_root_and_wait().

716  {
717  __TBB_ASSERT( governor::is_set(this), NULL );
718  __TBB_ASSERT( first, NULL );
719  auto_empty_task dummy( __TBB_CONTEXT_ARG(this, first->prefix().context) );
720  reference_count n = 0;
721  for( task* t=first; ; t=t->prefix().next ) {
722  ++n;
723  __TBB_ASSERT( !t->prefix().parent, "not a root task, or already running" );
724  t->prefix().parent = &dummy;
725  if( &t->prefix().next==&next ) break;
726 #if __TBB_TASK_GROUP_CONTEXT
727  __TBB_ASSERT( t->prefix().context == t->prefix().next->prefix().context,
728  "all the root tasks in list must share the same context");
729 #endif /* __TBB_TASK_GROUP_CONTEXT */
730  }
731  dummy.prefix().ref_count = n+1;
732  if( n>1 )
733  local_spawn( first->prefix().next, next );
734  local_wait_for_all( dummy, first );
735 }
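A usage sketch of the public entry point that reaches this member, assuming only tbb/task.h (root_task is hypothetical). Note that all roots in the list must share the same context, as asserted above:

    #include "tbb/task.h"

    struct root_task : tbb::task {
        tbb::task* execute() { /* independent root work */ return NULL; }
    };

    void run_roots( int n ) {
        tbb::task_list roots;
        for ( int i = 0; i < n; ++i )
            roots.push_back( *new( tbb::task::allocate_root() ) root_task );
        tbb::task::spawn_root_and_wait( roots );    // blocks until every root finishes
    }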
Here is the call graph for this function:
Here is the caller graph for this function:

◆ local_wait_for_all()

virtual void tbb::internal::generic_scheduler::local_wait_for_all ( task & parent,
task * child 
)
pure virtual

Implemented in tbb::internal::custom_scheduler< SchedulerTraits >.

Referenced by cleanup_master(), free_task(), local_spawn_root_and_wait(), tbb::internal::arena::process(), tbb::task::spawn_and_wait_for_all(), and wait_until_empty().

Here is the caller graph for this function:

◆ lock_task_pool()

task ** tbb::internal::generic_scheduler::lock_task_pool ( arena_slot * victim_arena_slot ) const
inline

Locks victim's task pool, and returns pointer to it. The pointer can be NULL.

Garbles victim_arena_slot->task_pool for the duration of the lock.

ATTENTION: This method is mostly the same as generic_scheduler::acquire_task_pool(), with slightly different logic for the slot state checks (the slot can be empty, locked, or point to any task pool other than ours, and asynchronous transitions between all these states are possible). Thus, if either of them is changed, consider changing the counterpart as well.

Definition at line 535 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_Yield, tbb::internal::as_atomic(), EmptyTaskPool, GATHER_STATISTIC, ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_limit, sync_cancel, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().

535  {
536  task** victim_task_pool;
537  bool sync_prepare_done = false;
538  for( atomic_backoff backoff;; /*backoff pause embedded in the loop*/) {
539  victim_task_pool = victim_arena_slot->task_pool;
540  // NOTE: Do not use comparison of head and tail indices to check for
541  // the presence of work in the victim's task pool, as they may give
542  // incorrect indication because of task pool relocations and resizes.
543  if ( victim_task_pool == EmptyTaskPool ) {
544  // The victim thread emptied its task pool - nothing to lock
545  if( sync_prepare_done )
546  ITT_NOTIFY(sync_cancel, victim_arena_slot);
547  break;
548  }
549  if( victim_task_pool != LockedTaskPool &&
550  as_atomic(victim_arena_slot->task_pool).compare_and_swap(LockedTaskPool, victim_task_pool ) == victim_task_pool )
551  {
552  // We've locked victim's task pool
553  ITT_NOTIFY(sync_acquired, victim_arena_slot);
554  break;
555  }
556  else if( !sync_prepare_done ) {
557  // Start waiting
558  ITT_NOTIFY(sync_prepare, victim_arena_slot);
559  sync_prepare_done = true;
560  }
561  GATHER_STATISTIC( ++my_counters.thieves_conflicts );
562  // Someone else acquired a lock, so pause and do exponential backoff.
563 #if __TBB_STEALING_ABORT_ON_CONTENTION
564  if(!backoff.bounded_pause()) {
565  // the 16 was acquired empirically and a theory behind it supposes
566  // that number of threads becomes much bigger than number of
567  // tasks which can be spawned by one thread causing excessive contention.
568  // TODO: However even small arenas can benefit from the abort on contention
569  // if preemption of a thief is a problem
570  if(my_arena->my_limit >= 16)
571  return EmptyTaskPool;
572  __TBB_Yield();
573  }
574 #else
575  backoff.pause();
576 #endif
577  }
578  __TBB_ASSERT( victim_task_pool == EmptyTaskPool ||
579  (victim_arena_slot->task_pool == LockedTaskPool && victim_task_pool != LockedTaskPool),
580  "not really locked victim's task pool?" );
581  return victim_task_pool;
582 } // generic_scheduler::lock_task_pool
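A hypothetical standalone sketch of the same locking scheme, in which the slot's task pool pointer doubles as the lock word and contention is handled with backoff (std::atomic and std::this_thread instead of TBB's internal atomics and atomic_backoff):

    #include <atomic>
    #include <chrono>
    #include <cstdint>
    #include <thread>

    typedef void* pool_t;
    static pool_t const EMPTY  = nullptr;
    static pool_t const LOCKED = reinterpret_cast<pool_t>( ~std::uintptr_t(0) );

    // Returns the locked pool pointer, or EMPTY if there was nothing to lock.
    pool_t lock_slot( std::atomic<pool_t>& slot ) {
        int pause_us = 1;
        for (;;) {
            pool_t p = slot.load( std::memory_order_acquire );
            if ( p == EMPTY )
                return EMPTY;                                   // nothing to lock
            if ( p != LOCKED && slot.compare_exchange_strong( p, LOCKED ) )
                return p;                                       // we own the pool now
            // Contention: back off before retrying.
            std::this_thread::sleep_for( std::chrono::microseconds( pause_us ) );
            if ( pause_us < 64 ) pause_us *= 2;
        }
    }

    void unlock_slot( std::atomic<pool_t>& slot, pool_t p ) {
        slot.store( p, std::memory_order_release );             // restore the pool pointer
    }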
Here is the call graph for this function:
Here is the caller graph for this function:

◆ master_outermost_level()

bool tbb::internal::generic_scheduler::master_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a master thread.

Returns true when this scheduler instance is associated with an application thread, and is not executing any TBB task. This includes being in a TBB dispatch loop (one of wait_for_all methods) invoked directly from that thread.

Definition at line 653 of file scheduler.h.

Referenced by tbb::internal::allocate_root_proxy::free(), tbb::task_scheduler_init::initialize(), tbb::task_scheduler_init::internal_terminate(), tbb::task::note_affinity(), and wait_until_empty().

653  {
654  return !is_worker() && outermost_level();
655 }
Here is the caller graph for this function:

◆ max_threads_in_arena()

unsigned tbb::internal::generic_scheduler::max_threads_in_arena ( )
inline

Returns the concurrency limit of the current arena.

Definition at line 677 of file scheduler.h.

References __TBB_ASSERT.

Referenced by tbb::internal::get_initial_auto_partitioner_divisor(), and tbb::internal::affinity_partitioner_base_v3::resize().

677  {
678  __TBB_ASSERT(my_arena, NULL);
679  return my_arena->my_num_slots;
680 }
Here is the caller graph for this function:

◆ nested_arena_entry()

void tbb::internal::generic_scheduler::nested_arena_entry ( arena * a,
size_t  slot_index 
)

Definition at line 678 of file arena.cpp.

References __TBB_ASSERT, tbb::internal::market::adjust_demand(), tbb::internal::governor::assume_scheduler(), attach_arena(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_market, and tbb::internal::arena_base::my_num_reserved_slots.

Referenced by tbb::internal::nested_arena_context::nested_arena_context().

678  {
679  __TBB_ASSERT( is_alive(a->my_guard), NULL );
680  __TBB_ASSERT( a!=my_arena, NULL);
681 
682  // overwrite arena settings
683 #if __TBB_TASK_PRIORITY
684  if ( my_offloaded_tasks )
685  my_arena->orphan_offloaded_tasks( *this );
686  my_offloaded_tasks = NULL;
687 #endif /* __TBB_TASK_PRIORITY */
688  attach_arena( a, slot_index, /*is_master*/true );
689  __TBB_ASSERT( my_arena == a, NULL );
690  governor::assume_scheduler( this );
691  // TODO? ITT_NOTIFY(sync_acquired, a->my_slots + index);
692  // TODO: it requires market to have P workers (not P-1)
693  // TODO: a preempted worker should be excluded from assignment to other arenas e.g. my_slack--
694  if( !is_worker() && slot_index >= my_arena->my_num_reserved_slots )
695  my_arena->my_market->adjust_demand(*my_arena, -1);
696 #if __TBB_ARENA_OBSERVER
697  my_last_local_observer = 0; // TODO: try optimize number of calls
698  my_arena->my_observers.notify_entry_observers( my_last_local_observer, /*worker=*/false );
699 #endif
700 #if __TBB_PREVIEW_RESUMABLE_TASKS
701  my_wait_task = NULL;
702 #endif
703 }
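At the user level, this path is driven by tbb::task_arena::execute(), which temporarily attaches the calling master thread to another arena via nested_arena_context. A usage sketch assuming the public tbb/task_arena.h API:

    #include "tbb/task_arena.h"
    #include "tbb/parallel_for.h"

    void run_in_small_arena() {
        tbb::task_arena limited( 2 );        // arena with a concurrency limit of 2
        limited.execute( [] {
            // The calling thread temporarily joins 'limited' for this work.
            tbb::parallel_for( 0, 1000, []( int ) { /* work */ } );
        } );
    }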
Here is the call graph for this function:
Here is the caller graph for this function:

◆ nested_arena_exit()

void tbb::internal::generic_scheduler::nested_arena_exit ( )

Definition at line 705 of file arena.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), tbb::internal::market::adjust_demand(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::arena_base::my_exit_monitors, tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, and tbb::internal::concurrent_monitor::notify_one().

705  {
706 #if __TBB_ARENA_OBSERVER
707  my_arena->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
708 #endif /* __TBB_ARENA_OBSERVER */
709 #if __TBB_TASK_PRIORITY
710  if ( my_offloaded_tasks )
711  my_arena->orphan_offloaded_tasks( *this );
712 #endif
713  if( !is_worker() && my_arena_index >= my_arena->my_num_reserved_slots )
714  my_arena->my_market->adjust_demand(*my_arena, 1);
715  // Free the master slot.
716  __TBB_ASSERT(my_arena->my_slots[my_arena_index].my_scheduler, "A slot is already empty");
717  __TBB_store_with_release(my_arena->my_slots[my_arena_index].my_scheduler, (generic_scheduler*)NULL);
718  my_arena->my_exit_monitors.notify_one(); // do not relax!
719 }
Here is the call graph for this function:

◆ outermost_level()

bool tbb::internal::generic_scheduler::outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level.

Definition at line 649 of file scheduler.h.

Referenced by wait_until_empty().

649  {
650  return my_properties.outermost;
651 }
Here is the caller graph for this function:

◆ plugged_return_list()

static task* tbb::internal::generic_scheduler::plugged_return_list ( )
inline static

Special value used to mark my_return_list as not taking any more entries.

Definition at line 458 of file scheduler.h.

Referenced by cleanup_scheduler(), and free_nonlocal_small_task().

458 {return (task*)(intptr_t)(-1);}
Here is the caller graph for this function:

◆ prepare_for_spawning()

task * tbb::internal::generic_scheduler::prepare_for_spawning ( task * t )
inline

Checks if t is affinitized to another thread, and if so, bundles it as proxy.

Returns either t or a proxy containing t.

Definition at line 593 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_CONTEXT_ARG, __TBB_ISOLATION_EXPR, tbb::internal::task_prefix::affinity, allocate_task(), tbb::task::allocated, tbb::task::context(), tbb::internal::es_ref_count_active, tbb::internal::es_task_proxy, tbb::internal::is_critical(), is_version_3_task(), tbb::internal::task_prefix::isolation, ITT_NOTIFY, tbb::internal::task_proxy::location_mask, tbb::internal::arena::mailbox(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::task_proxy::outbox, parent, tbb::task::parent(), tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::push(), tbb::task::ready, tbb::internal::task_prefix::state, tbb::task::state(), sync_releasing, and tbb::internal::task_proxy::task_and_tag.

Referenced by local_spawn().

593  {
594  __TBB_ASSERT( t->state()==task::allocated, "attempt to spawn task that is not in 'allocated' state" );
595  t->prefix().state = task::ready;
596 #if TBB_USE_ASSERT
597  if( task* parent = t->parent() ) {
598  internal::reference_count ref_count = parent->prefix().ref_count;
599  __TBB_ASSERT( ref_count>=0, "attempt to spawn task whose parent has a ref_count<0" );
600  __TBB_ASSERT( ref_count!=0, "attempt to spawn task whose parent has a ref_count==0 (forgot to set_ref_count?)" );
601  parent->prefix().extra_state |= es_ref_count_active;
602  }
603 #endif /* TBB_USE_ASSERT */
604  affinity_id dst_thread = t->prefix().affinity;
605  __TBB_ASSERT( dst_thread == 0 || is_version_3_task(*t),
606  "backwards compatibility to TBB 2.0 tasks is broken" );
607 #if __TBB_TASK_ISOLATION
609  t->prefix().isolation = isolation;
610 #endif /* __TBB_TASK_ISOLATION */
611  if( dst_thread != 0 && dst_thread != my_affinity_id ) {
612  task_proxy& proxy = (task_proxy&)allocate_task( sizeof(task_proxy),
613  __TBB_CONTEXT_ARG(NULL, NULL) );
614  // Mark as a proxy
615  proxy.prefix().extra_state = es_task_proxy;
616  proxy.outbox = &my_arena->mailbox(dst_thread);
617  // Mark proxy as present in both locations (sender's task pool and destination mailbox)
618  proxy.task_and_tag = intptr_t(t) | task_proxy::location_mask;
619 #if __TBB_TASK_PRIORITY
620  poison_pointer( proxy.prefix().context );
621 #endif /* __TBB_TASK_PRIORITY */
622  __TBB_ISOLATION_EXPR( proxy.prefix().isolation = isolation );
623  ITT_NOTIFY( sync_releasing, proxy.outbox );
624  // Mail the proxy - after this point t may be destroyed by another thread at any moment.
625  proxy.outbox->push(&proxy);
626  return &proxy;
627  }
628  return t;
629 }
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
#define __TBB_ISOLATION_EXPR(isolation)
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:88
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:99
#define __TBB_CONTEXT_ARG(arg1, context)
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:85
task is in ready pool, or is going to be put there, or was just taken off.
Definition: task.h:630
static const intptr_t location_mask
Definition: mailbox.h:32
isolation_tag isolation
The tag used for task isolation.
Definition: task.h:209
mail_outbox & mailbox(affinity_id id)
Get reference to mailbox corresponding to given affinity_id.
Definition: arena.h:301
intptr_t isolation_tag
A tag for task isolation.
Definition: task.h:132
Set if ref_count might be changed by another thread. Used for debugging.
static bool is_version_3_task(task &t)
Definition: scheduler.h:146
intptr_t reference_count
A reference count.
Definition: task.h:120
unsigned char extra_state
Miscellaneous state that is not directly visible to users, stored as a byte for compactness.
Definition: task.h:281
unsigned short affinity_id
An id as used for specifying affinity.
Definition: task.h:128
task & allocate_task(size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
Allocate task object, either from the heap or a free list.
Definition: scheduler.cpp:335
Tag for v3 task_proxy.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:991
task object is freshly allocated or recycled.
Definition: task.h:632
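
From the user side, the proxy path is taken whenever a spawned task carries a non-zero affinity_id that differs from the spawning thread's. A hedged illustration using the low-level tbb::task API; the class and function names below are made up, while set_affinity(), note_affinity(), allocate_root() and spawn_root_and_wait() are real API:

    #include "tbb/task.h"
    #include <cstdio>

    // Records the mailbox id of the thread that ends up running it (note_affinity()
    // is invoked by the scheduler, e.g. when the task is stolen, see steal_task()),
    // so a later replay can request that thread explicitly.
    class affinitized_task : public tbb::task {
        tbb::task::affinity_id& my_recorded;
    public:
        explicit affinitized_task( tbb::task::affinity_id& rec ) : my_recorded(rec) {}
        tbb::task* execute() {
            std::printf( "affinitized_task ran\n" );
            return NULL;
        }
        void note_affinity( tbb::task::affinity_id id ) { my_recorded = id; }
    };

    void run_twice() {
        tbb::task::affinity_id rec = 0;
        // First run: no affinity requested; rec is updated only if the task migrates.
        tbb::task::spawn_root_and_wait( *new( tbb::task::allocate_root() ) affinitized_task(rec) );
        // Second run: a non-zero id that is not the spawning thread's makes
        // prepare_for_spawning() wrap the task into a task_proxy and mail it.
        affinitized_task& t = *new( tbb::task::allocate_root() ) affinitized_task(rec);
        t.set_affinity( rec );
        tbb::task::spawn_root_and_wait( t );
    }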

◆ prepare_task_pool()

size_t tbb::internal::generic_scheduler::prepare_task_pool ( size_t  n)
inline

Makes sure that the task pool can accommodate at least n more elements.

If necessary, relocates existing task pointers or grows the ready task deque. Returns the (possibly updated) tail index (not accounting for n). A simplified sketch of the relocation heuristic follows the source listing below.

Definition at line 437 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), acquire_task_pool(), tbb::internal::arena_slot::allocate_task_pool(), assert_task_pool_valid(), commit_relocated_tasks(), tbb::internal::arena_slot::fill_with_canary_pattern(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena_slot_line2::my_task_pool_size, new_size, tbb::internal::NFS_Free(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by enqueue(), and local_spawn().

437  {
438  size_t T = __TBB_load_relaxed(my_arena_slot->tail); // mirror
439  if ( T + num_tasks <= my_arena_slot->my_task_pool_size )
440  return T;
441 
442  size_t new_size = num_tasks;
443 
447  if ( num_tasks < min_task_pool_size ) new_size = min_task_pool_size;
448  my_arena_slot->allocate_task_pool( new_size );
449  return 0;
450  }
451 
453  size_t H = __TBB_load_relaxed( my_arena_slot->head ); // mirror
454  task** task_pool = my_arena_slot->task_pool_ptr;
456  // Count not skipped tasks. Consider using std::count_if.
457  for ( size_t i = H; i < T; ++i )
458  if ( task_pool[i] ) ++new_size;
459  // If the free space at the beginning of the task pool is too short, we
460  // are likely facing a pathological single-producer-multiple-consumers
461  // scenario, and thus it's better to expand the task pool
462  bool allocate = new_size > my_arena_slot->my_task_pool_size - min_task_pool_size/4;
463  if ( allocate ) {
464  // Grow task pool. As this operation is rare, and its cost is asymptotically
465  // amortizable, we can tolerate new task pool allocation done under the lock.
466  if ( new_size < 2 * my_arena_slot->my_task_pool_size )
467  new_size = 2 * my_arena_slot->my_task_pool_size;
468  my_arena_slot->allocate_task_pool( new_size ); // updates my_task_pool_size
469  }
470  // Filter out skipped tasks. Consider using std::copy_if.
471  size_t T1 = 0;
472  for ( size_t i = H; i < T; ++i )
473  if ( task_pool[i] )
474  my_arena_slot->task_pool_ptr[T1++] = task_pool[i];
475  // Deallocate the previous task pool if a new one has been allocated.
476  if ( allocate )
477  NFS_Free( task_pool );
478  else
480  // Publish the new state.
483  return T1;
484 }
void fill_with_canary_pattern(size_t, size_t)
void commit_relocated_tasks(size_t new_tail)
Makes relocated tasks visible to thieves and releases the local task pool.
Definition: scheduler.h:719
bool is_quiescent_local_task_pool_reset() const
Definition: scheduler.h:644
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
static const size_t min_task_pool_size
Definition: scheduler.h:369
void acquire_task_pool() const
Locks the local task pool.
Definition: scheduler.cpp:491
void __TBB_EXPORTED_FUNC NFS_Free(void *)
Free memory allocated by NFS_Allocate.
__TBB_atomic size_t head
Index of the first ready task in the deque.
size_t my_task_pool_size
Capacity of the primary task pool (number of elements - pointers to task).
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
task **__TBB_atomic task_pool_ptr
Task pool of the scheduler that owns this slot.
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
void allocate_task_pool(size_t n)
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:82
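
A simplified, self-contained sketch of the relocation heuristic (an interpretation under assumptions, not the TBB sources; simple_pool and make_room are invented names, and first-time allocation, locking, canary filling and the publication step are omitted):

    #include <cstddef>
    #include <cstdlib>

    struct simple_pool {
        void** slots;          // already allocated; capacity >= 64 is assumed
        std::size_t capacity, head, tail;

        // Ensure there is room for n more pushes; returns the new tail index.
        std::size_t make_room( std::size_t n, std::size_t min_size = 64 ) {
            if ( tail + n <= capacity )
                return tail;                                // enough headroom already
            // Count surviving (non-NULL) entries plus the tasks about to be added.
            std::size_t live = n;
            for ( std::size_t i = head; i < tail; ++i )
                if ( slots[i] ) ++live;
            // Grow when compaction alone would leave the pool nearly full, which
            // hints at a single-producer-multiple-consumers pattern.
            if ( live > capacity - min_size/4 ) {
                std::size_t new_cap = capacity * 2 > live ? capacity * 2 : live;
                void** bigger = static_cast<void**>( std::malloc( new_cap * sizeof(void*) ) );
                std::size_t t = 0;
                for ( std::size_t i = head; i < tail; ++i )
                    if ( slots[i] ) bigger[t++] = slots[i];
                std::free( slots );
                slots = bigger; capacity = new_cap; head = 0; tail = t;
            } else {
                // Just squeeze the holes out, reusing the existing storage.
                std::size_t t = 0;
                for ( std::size_t i = head; i < tail; ++i )
                    if ( slots[i] ) slots[t++] = slots[i];
                head = 0; tail = t;
            }
            return tail;
        }
    };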

◆ publish_task_pool()

void tbb::internal::generic_scheduler::publish_task_pool ( )
inline

Used by workers to enter the task pool.

Does not lock the task pool if the arena slot has been successfully grabbed.

Definition at line 1244 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), EmptyTaskPool, tbb::internal::arena_slot_line1::head, ITT_NOTIFY, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by enqueue(), get_task(), and local_spawn().

1244  {
1245  __TBB_ASSERT ( my_arena, "no arena: initialization not completed?" );
1246  __TBB_ASSERT ( my_arena_index < my_arena->my_num_slots, "arena slot index is out-of-bound" );
1248  __TBB_ASSERT ( my_arena_slot->task_pool == EmptyTaskPool, "someone else grabbed my arena slot?" );
1250  "entering arena without tasks to share" );
1251  // Release signal on behalf of previously spawned tasks (when this thread was not in arena yet)
1254 }
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
arena_slot my_slots[1]
Definition: arena.h:386
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:85
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:79
__TBB_atomic size_t head
Index of the first ready task in the deque.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
#define EmptyTaskPool
Definition: scheduler.h:46
task **__TBB_atomic task_pool_ptr
Task pool of the scheduler that owns this slot.
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:82
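
The essential memory-ordering idea, reduced to a standalone sketch (assumed shape, not the TBB sources; slot, publish and observe are invented names, and the real code also re-publishes head/tail and signals the arena):

    #include <atomic>

    struct slot {
        void** storage;                  // the thread's task_pool_ptr
        std::atomic<void**> task_pool;   // what thieves actually look at
    };

    // Owner: fill storage and the head/tail indices first, then publish with a
    // release store so a thief that sees the pointer also sees those writes.
    void publish( slot& s ) {
        s.task_pool.store( s.storage, std::memory_order_release );
    }

    // Thief: the matching acquire load (compare with lock_task_pool()).
    void** observe( slot& s ) {
        return s.task_pool.load( std::memory_order_acquire );
    }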

◆ receive_or_steal_task()

virtual task* tbb::internal::generic_scheduler::receive_or_steal_task ( __TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation)  )
pure virtual

Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption).

Returns obtained task or NULL if all attempts fail.

Implemented in tbb::internal::custom_scheduler< SchedulerTraits >.

Referenced by tbb::internal::arena::process().

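
As a rough outline of what an implementation has to do (purely illustrative; the real custom_scheduler loop also handles priorities, back-off and sleeping, and its probing order is not guaranteed to be the one shown here):

    #include <functional>

    // Each work source is abstracted as a callable returning a task pointer,
    // or nullptr when that source currently has nothing to offer.
    void* receive_or_steal_outline( const std::function<void*()>& check_mailbox,
                                    const std::function<void*()>& steal_from_random_victim,
                                    const std::function<void*()>& pop_fifo_queue ) {
        if ( void* t = check_mailbox() )            return t;   // affinitized tasks mailed to this thread
        if ( void* t = steal_from_random_victim() ) return t;   // see steal_task()
        if ( void* t = pop_fifo_queue() )           return t;   // tasks submitted via enqueue()
        return nullptr;                                         // nothing found; the caller backs off
    }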

◆ release_task_pool()

void tbb::internal::generic_scheduler::release_task_pool ( ) const
inline

Unlocks the local task pool.

Restores my_arena_slot->task_pool munged by acquire_task_pool. Requires correctly set my_arena_slot->task_pool_ptr.

Definition at line 520 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, sync_releasing, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), enqueue(), generic_scheduler(), and get_task().

520  {
521  if ( !is_task_pool_published() )
522  return; // we are not in arena - nothing to unlock
523  __TBB_ASSERT( my_arena_slot, "we are not in arena" );
524  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "arena slot is not locked" );
527 }
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
#define LockedTaskPool
Definition: scheduler.h:47
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
task **__TBB_atomic task_pool_ptr
Task pool of the scheduler that owns this slot.
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:82
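
Together with acquire_task_pool(), lock_task_pool() and unlock_task_pool(), this implements a small locking protocol in which the slot's task_pool pointer doubles as the lock word. A reduced sketch of the owner's side (invented names; the real code also handles the EmptyTaskPool state and emits ITT notifications):

    #include <atomic>
    #include <cstdint>

    typedef void** pool_t;
    static pool_t const LOCKED = reinterpret_cast<pool_t>( std::intptr_t(-1) );  // cf. LockedTaskPool

    struct slot {
        std::atomic<pool_t> task_pool;   // published pointer / lock word
        pool_t task_pool_ptr;            // the real storage, known to the owner
    };

    // Owner: spin until the published pointer can be swapped for the sentinel.
    void acquire( slot& s ) {
        pool_t expected;
        do {
            expected = s.task_pool_ptr;
        } while ( !s.task_pool.compare_exchange_strong( expected, LOCKED,
                      std::memory_order_acquire ) );
    }

    // Owner: restore the published pointer; the release store pairs with the
    // thieves' acquire operations in lock_task_pool().
    void release( slot& s ) {
        s.task_pool.store( s.task_pool_ptr, std::memory_order_release );
    }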

◆ reset_task_pool_and_leave()

void tbb::internal::generic_scheduler::reset_task_pool_and_leave ( )
inline

Resets head and tail indices to 0, and leaves task pool.

The task pool must be locked by the owner (via acquire_task_pool).

Definition at line 702 of file scheduler.h.

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), and LockedTaskPool.

Referenced by get_task().

702  {
703  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when resetting task pool" );
706  leave_task_pool();
707 }
void leave_task_pool()
Leave the task pool.
Definition: scheduler.cpp:1256
#define LockedTaskPool
Definition: scheduler.h:47
__TBB_atomic size_t head
Index of the first ready task in the deque.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:742
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:82

◆ spawn()

void tbb::internal::generic_scheduler::spawn ( task first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 737 of file scheduler.cpp.

References tbb::internal::governor::local_scheduler(), and local_spawn().

737  {
739 }
void local_spawn(task *first, task *&next)
Definition: scheduler.cpp:649
static generic_scheduler * local_scheduler()
Obtain the thread-local instance of the TBB scheduler.
Definition: governor.h:129

◆ spawn_root_and_wait()

void tbb::internal::generic_scheduler::spawn_root_and_wait ( task first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 741 of file scheduler.cpp.

References tbb::internal::governor::local_scheduler(), and local_spawn_root_and_wait().

741  {
743 }
static generic_scheduler * local_scheduler()
Obtain the thread-local instance of the TBB scheduler.
Definition: governor.h:129
void local_spawn_root_and_wait(task *first, task *&next)
Definition: scheduler.cpp:716

◆ steal_task()

task * tbb::internal::generic_scheduler::steal_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempts to steal a task from a randomly chosen thread/scheduler.

Definition at line 1105 of file scheduler.cpp.

References __TBB_ISOLATION_ARG, EmptyTaskPool, tbb::internal::es_task_is_stolen, tbb::internal::task_prefix::extra_state, tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, tbb::internal::FastRandom::get(), is_proxy(), is_version_3_task(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::arena_base::my_limit, my_random, tbb::internal::arena::my_slots, tbb::task::note_affinity(), tbb::internal::task_prefix::owner, tbb::internal::task_proxy::pool_bit, tbb::task::prefix(), steal_task_from(), and tbb::internal::arena_slot_line1::task_pool.

1105  {
1106  // Try to steal a task from a random victim.
1107  size_t k = my_random.get() % (my_arena->my_limit-1);
1108  arena_slot* victim = &my_arena->my_slots[k];
1109  // The following condition excludes the master that might have
1110  // already taken our previous place in the arena from the list
1111  // of potential victims. But since such a situation can take
1112  // place only in case of significant oversubscription, keeping
1113  // the checks simple seems to be preferable to complicating the code.
1114  if( k >= my_arena_index )
1115  ++victim; // Adjusts random distribution to exclude self
1116  task **pool = victim->task_pool;
1117  task *t = NULL;
1118  if( pool == EmptyTaskPool || !(t = steal_task_from( __TBB_ISOLATION_ARG(*victim, isolation) )) )
1119  return NULL;
1120  if( is_proxy(*t) ) {
1121  task_proxy &tp = *(task_proxy*)t;
1122  t = tp.extract_task<task_proxy::pool_bit>();
1123  if ( !t ) {
1124  // Proxy was empty, so it's our responsibility to free it
1125  free_task<no_cache_small_task>(tp);
1126  return NULL;
1127  }
1128  GATHER_STATISTIC( ++my_counters.proxies_stolen );
1129  }
1130  t->prefix().extra_state |= es_task_is_stolen;
1131  if( is_version_3_task(*t) ) {
1133  t->prefix().owner = this;
1134  t->note_affinity( my_affinity_id );
1135  }
1136  GATHER_STATISTIC( ++my_counters.steals_committed );
1137  return t;
1138 }
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:88
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:99
FastRandom my_random
Random number generator used for picking a random victim from which to steal.
Definition: scheduler.h:175
arena_slot my_slots[1]
Definition: arena.h:386
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:85
static bool is_proxy(const task &t)
True if t is a task_proxy.
Definition: scheduler.h:348
scheduler * owner
Obsolete. The scheduler that owns the task.
Definition: task.h:236
static bool is_version_3_task(task &t)
Definition: scheduler.h:146
task * steal_task_from(__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
Steal task from another scheduler&#39;s ready pool.
Definition: scheduler.cpp:1140
#define __TBB_ISOLATION_ARG(arg1, isolation)
static const intptr_t pool_bit
Definition: mailbox.h:30
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:79
Set if the task has been stolen.
atomic< unsigned > my_limit
The maximal number of currently busy slots.
Definition: arena.h:157
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:991
#define EmptyTaskPool
Definition: scheduler.h:46
#define GATHER_STATISTIC(x)
unsigned short get()
Get a random number.
Definition: tbb_misc.h:151
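
The victim-selection trick at the top of the listing deserves a note: drawing from one fewer candidate than there are slots and bumping every index at or above the thief's own excludes self while keeping the distribution uniform. A standalone illustration (pick_victim is an invented name; std::rand stands in for FastRandom):

    #include <cstdlib>

    // Preconditions: limit >= 2 busy slots and my_index < limit.
    std::size_t pick_victim( std::size_t my_index, std::size_t limit ) {
        std::size_t k = static_cast<std::size_t>( std::rand() ) % ( limit - 1 );
        if ( k >= my_index )
            ++k;              // skip over our own slot, cf. "++victim" above
        return k;             // uniformly distributed over all slots except ours
    }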

◆ steal_task_from()

task * tbb::internal::generic_scheduler::steal_task_from ( __TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation)  )

Steal task from another scheduler's ready pool.

Definition at line 1140 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_cl_evict, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::arena::advertise_new_work(), tbb::atomic_fence(), GATHER_STATISTIC, tbb::internal::arena_slot_line1::head, is_proxy(), tbb::internal::task_proxy::is_shared(), tbb::internal::task_prefix::isolation, ITT_NOTIFY, lock_task_pool(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_properties, tbb::internal::no_isolation, tbb::internal::task_proxy::outbox, tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::recipient_is_idle(), tbb::internal::arena_slot_line2::tail, tbb::internal::task_proxy::task_and_tag, unlock_task_pool(), and tbb::internal::arena::wakeup.

Referenced by steal_task().

1140  {
1141  task** victim_pool = lock_task_pool( &victim_slot );
1142  if ( !victim_pool )
1143  return NULL;
1144  task* result = NULL;
1145  size_t H = __TBB_load_relaxed(victim_slot.head); // mirror
1146  size_t H0 = H;
1147  bool tasks_omitted = false;
1148  do {
1149  __TBB_store_relaxed( victim_slot.head, ++H );
1150  atomic_fence();
1151  if ( (intptr_t)H > (intptr_t)__TBB_load_relaxed( victim_slot.tail ) ) {
1152  // Stealing attempt failed, deque contents has not been changed by us
1153  GATHER_STATISTIC( ++my_counters.thief_backoffs );
1154  __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1155  __TBB_ASSERT( !result, NULL );
1156  goto unlock;
1157  }
1158  __TBB_control_consistency_helper(); // on victim_slot.tail
1159  result = victim_pool[H-1];
1160  __TBB_ASSERT( !is_poisoned( result ), NULL );
1161 
1162  if ( result ) {
1163  __TBB_ISOLATION_EXPR( if ( isolation == no_isolation || isolation == result->prefix().isolation ) )
1164  {
1165  if ( !is_proxy( *result ) )
1166  break;
1167  task_proxy& tp = *static_cast<task_proxy*>(result);
1168  // If mailed task is likely to be grabbed by its destination thread, skip it.
1169  if ( !(task_proxy::is_shared( tp.task_and_tag ) && tp.outbox->recipient_is_idle()) )
1170  break;
1171  GATHER_STATISTIC( ++my_counters.proxies_bypassed );
1172  }
1173  // The task cannot be executed either due to isolation or proxy constraints.
1174  result = NULL;
1175  tasks_omitted = true;
1176  } else if ( !tasks_omitted ) {
1177  // Cleanup the task pool from holes until a task is skipped.
1178  __TBB_ASSERT( H0 == H-1, NULL );
1179  poison_pointer( victim_pool[H0] );
1180  H0 = H;
1181  }
1182  } while ( !result );
1183  __TBB_ASSERT( result, NULL );
1184 
1185  // emit "task was consumed" signal
1186  ITT_NOTIFY( sync_acquired, (void*)((uintptr_t)&victim_slot+sizeof( uintptr_t )) );
1187  poison_pointer( victim_pool[H-1] );
1188  if ( tasks_omitted ) {
1189  // Some proxies in the task pool have been omitted. Set the stolen task to NULL.
1190  victim_pool[H-1] = NULL;
1191  __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1192  }
1193 unlock:
1194  unlock_task_pool( &victim_slot, victim_pool );
1195 #if __TBB_PREFETCHING
1196  __TBB_cl_evict(&victim_slot.head);
1197  __TBB_cl_evict(&victim_slot.tail);
1198 #endif
1199  if ( tasks_omitted )
1200  // Synchronize with snapshot as the head and tail can be bumped which can falsely trigger EMPTY state
1202  return result;
1203 }
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
#define __TBB_ISOLATION_EXPR(isolation)
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
void advertise_new_work()
If necessary, raise a flag that there is new job in arena.
Definition: arena.h:480
void unlock_task_pool(arena_slot *victim_arena_slot, task **victim_task_pool) const
Unlocks victim&#39;s task pool.
Definition: scheduler.cpp:584
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:85
void atomic_fence()
Sequentially consistent full memory fence.
Definition: tbb_machine.h:342
static bool is_proxy(const task &t)
True if t is a task_proxy.
Definition: scheduler.h:348
task ** lock_task_pool(arena_slot *victim_arena_slot) const
Locks victim&#39;s task pool, and returns pointer to it. The pointer can be NULL.
Definition: scheduler.cpp:535
#define __TBB_control_consistency_helper()
Definition: gcc_generic.h:60
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define __TBB_cl_evict(p)
Definition: mic_common.h:34
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:742
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
static bool is_shared(intptr_t tat)
True if the proxy is stored both in its sender&#39;s pool and in the destination mailbox.
Definition: mailbox.h:46
const isolation_tag no_isolation
Definition: task.h:133
#define GATHER_STATISTIC(x)
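
The core of the loop is the race between the thief bumping head and the pool owner popping from tail. Reduced to its essentials (a sketch under assumptions, not the TBB code; locking, proxies and isolation are ignored):

    #include <atomic>
    #include <cstddef>

    struct deque_ends { std::atomic<std::ptrdiff_t> head, tail; };

    // Returns the index the thief may take, or -1 if the deque turned out empty.
    std::ptrdiff_t try_advance_head( deque_ends& d ) {
        std::ptrdiff_t h = d.head.load( std::memory_order_relaxed );
        d.head.store( h + 1, std::memory_order_relaxed );
        std::atomic_thread_fence( std::memory_order_seq_cst );  // make the head bump visible before reading tail
        if ( h + 1 > d.tail.load( std::memory_order_relaxed ) ) {
            d.head.store( h, std::memory_order_relaxed );       // roll back; the owner won the race
            return -1;
        }
        return h;                                               // caller takes victim_pool[h]
    }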

◆ unlock_task_pool()

void tbb::internal::generic_scheduler::unlock_task_pool ( arena_slot victim_arena_slot,
task **  victim_task_pool 
) const
inline

Unlocks victim's task pool.

Restores victim_arena_slot->task_pool munged by lock_task_pool.

Definition at line 584 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, LockedTaskPool, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().

585  {
586  __TBB_ASSERT( victim_arena_slot, "empty victim arena slot pointer" );
587  __TBB_ASSERT( victim_arena_slot->task_pool == LockedTaskPool, "victim arena slot is not locked" );
588  ITT_NOTIFY(sync_releasing, victim_arena_slot);
589  __TBB_store_with_release( victim_arena_slot->task_pool, victim_task_pool );
590 }
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
#define LockedTaskPool
Definition: scheduler.h:47
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116

◆ wait_until_empty()

void tbb::internal::generic_scheduler::wait_until_empty ( )

Definition at line 721 of file arena.cpp.

References __TBB_ASSERT, __TBB_CONTEXT_ARG, __TBB_CONTEXT_ARG1, tbb::internal::__TBB_load_with_acquire(), __TBB_override, tbb::internal::__TBB_store_with_release(), __TBB_Yield, tbb::internal::arena::advertise_new_work(), tbb::task::allocate_root(), allocate_task(), tbb::internal::as_atomic(), tbb::internal::governor::assume_scheduler(), tbb::internal::concurrent_monitor::cancel_wait(), tbb::internal::concurrent_monitor::commit_wait(), tbb::internal::task_prefix::context, tbb::task::context(), tbb::task_group_context::copy_fp_settings(), tbb::internal::market::create_arena(), tbb::internal::create_coroutine(), create_worker(), d, tbb::internal::governor::default_num_threads(), tbb::task_group_context::default_traits, tbb::internal::arena::enqueue_task(), tbb::task_group_context::exact_exception, tbb::task::executing, tbb::internal::market::global_market(), int, tbb::interface7::internal::task_arena_base::internal_attach(), tbb::interface7::internal::task_arena_base::internal_current_slot(), tbb::interface7::internal::task_arena_base::internal_enqueue(), tbb::interface7::internal::task_arena_base::internal_execute(), tbb::interface7::internal::task_arena_base::internal_initialize(), tbb::interface7::internal::task_arena_base::internal_max_concurrency(), tbb::interface7::internal::task_arena_base::internal_terminate(), tbb::interface7::internal::task_arena_base::internal_wait(), tbb::internal::arena::is_out_of_work(), is_worker(), tbb::interface7::internal::isolate_within_arena(), tbb::task_group_context::isolated, tbb::internal::task_prefix::isolation, tbb::internal::governor::local_scheduler(), tbb::internal::governor::local_scheduler_if_initialized(), tbb::internal::governor::local_scheduler_weak(), local_wait_for_all(), tbb::internal::make_critical(), tbb::internal::scheduler_properties::master, master_outermost_level(), tbb::internal::scheduler_state::my_arena, tbb::interface7::internal::task_arena_base::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, tbb::task_group_context::my_exception, tbb::internal::arena_base::my_exit_monitors, tbb::interface7::internal::delegated_function< F, R >::my_func, tbb::internal::scheduler_state::my_innermost_running_task, my_market, tbb::internal::arena_base::my_market, tbb::interface7::internal::task_arena_base::my_max_concurrency, tbb::internal::arena_base::my_max_num_workers, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_base::my_num_slots, tbb::internal::arena_base::my_pool_state, tbb::internal::scheduler_state::my_properties, my_random, tbb::internal::arena_base::my_references, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, tbb::internal::NFS_Allocate(), tbb::internal::concurrent_monitor::notify(), tbb::internal::concurrent_monitor::notify_one(), tbb::internal::arena::num_arena_slots(), tbb::internal::arena::num_workers_active(), tbb::internal::arena::occupy_free_slot(), tbb::internal::arena::on_thread_leaving(), tbb::internal::governor::one_time_init(), tbb::internal::arena::out_of_arena, outermost_level(), tbb::internal::binary_semaphore::P(), tbb::internal::auto_empty_task::prefix(), tbb::task::prefix(), tbb::internal::concurrent_monitor::prepare_wait(), tbb::internal::task_prefix::ref_count, tbb::task::ref_count(), tbb::internal::arena::ref_external, tbb::task_group_context::register_pending_exception(), tbb::internal::market::release(), tbb::internal::context_guard_helper< T >::restore_default(), s, scope, 
tbb::internal::context_guard_helper< T >::set_ctx(), tbb::internal::arena::SNAPSHOT_EMPTY, tbb::internal::spin_wait_while_eq(), tbb::internal::scheduler_properties::type, tbb::internal::binary_semaphore::V(), wait_until_empty(), tbb::internal::arena::work_spawned, and tbb::internal::scheduler_properties::worker.

Referenced by wait_until_empty().

721  {
722  my_dummy_task->prefix().ref_count++; // prevents exit from local_wait_for_all when local work is done, enforcing stealing
726 }
tbb::atomic< uintptr_t > my_pool_state
Current task pool state and estimate of available tasks amount.
Definition: arena.h:191
__TBB_atomic reference_count ref_count
Reference count used for synchronization.
Definition: task.h:263
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:85
task * my_dummy_task
Fake root task created by slave threads.
Definition: scheduler.h:186
static const pool_state_t SNAPSHOT_EMPTY
No tasks to steal since last snapshot was taken.
Definition: arena.h:314
virtual void local_wait_for_all(task &parent, task *child)=0
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:991

◆ worker_outermost_level()

bool tbb::internal::generic_scheduler::worker_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a worker thread.

Definition at line 657 of file scheduler.h.

References tbb::internal::task_prefix::context, and tbb::task::prefix().

Referenced by tbb::internal::arena::free_arena(), free_task(), and tbb::internal::arena::process().

657  {
658  return is_worker() && outermost_level();
659 }
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:673
bool outermost_level() const
True if the scheduler is on the outermost dispatch level.
Definition: scheduler.h:649

Friends And Related Function Documentation

◆ custom_scheduler

template<typename SchedulerTraits >
friend class custom_scheduler
friend

Definition at line 389 of file scheduler.h.

Member Data Documentation

◆ min_task_pool_size

const size_t tbb::internal::generic_scheduler::min_task_pool_size = 64
static

Initial size of the task deque, chosen to be sufficient to serve, without reallocation, four nested parallel_for calls with an iteration space of 65535 grains each.

Definition at line 369 of file scheduler.h.

Referenced by enqueue(), generic_scheduler(), local_spawn(), and prepare_task_pool().
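
One way to read the constant (an interpretation, not a statement from the TBB sources): depth-first splitting of a range with G grains leaves at most about ceil(log2 G) pending halves in the spawning thread's deque, so four nested loops over 65535 grains need roughly 4 * 16 = 64 slots:

    #include <cstddef>

    constexpr std::size_t ceil_log2( std::size_t n ) {
        return n <= 1 ? 0 : 1 + ceil_log2( (n + 1) / 2 );
    }
    static_assert( 4 * ceil_log2( 65535 ) == 64,
                   "four nested loops, ~16 pending splits each" );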

◆ my_auto_initialized

bool tbb::internal::generic_scheduler::my_auto_initialized

True if *this was created by automatic TBB initialization.

Definition at line 197 of file scheduler.h.

Referenced by tbb::internal::governor::auto_terminate(), tbb::internal::governor::init_scheduler(), and tbb::internal::governor::init_scheduler_weak().

◆ my_dummy_task

task* tbb::internal::generic_scheduler::my_dummy_task

Fake root task created by slave threads.

The task is used as the "parent" argument to method wait_for_all.

Definition at line 186 of file scheduler.h.

Referenced by attach_arena(), cleanup_master(), create_master(), create_worker(), free_task(), generic_scheduler(), tbb::internal::nested_arena_context::mimic_outermost_level(), tbb::internal::arena::process(), and wait_until_empty().

◆ my_free_list

task* tbb::internal::generic_scheduler::my_free_list

Free list of small tasks that can be reused.

Definition at line 178 of file scheduler.h.

Referenced by allocate_task(), and cleanup_scheduler().

◆ my_market

◆ my_random

FastRandom tbb::internal::generic_scheduler::my_random

Random number generator used for picking a random victim from which to steal.

Definition at line 175 of file scheduler.h.

Referenced by enqueue(), tbb::internal::arena::occupy_free_slot_in_range(), steal_task(), and wait_until_empty().

◆ my_ref_count

long tbb::internal::generic_scheduler::my_ref_count

Reference count for scheduler.

Number of task_scheduler_init objects that point to this scheduler.

Definition at line 190 of file scheduler.h.

Referenced by tbb::internal::governor::auto_terminate(), tbb::internal::governor::init_scheduler(), and tbb::internal::governor::terminate_scheduler().

◆ my_return_list

task* tbb::internal::generic_scheduler::my_return_list

List of small tasks that have been returned to this scheduler by other schedulers.

Definition at line 465 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and generic_scheduler().

◆ my_small_task_count

__TBB_atomic intptr_t tbb::internal::generic_scheduler::my_small_task_count

Number of small tasks that have been allocated by this scheduler.

Definition at line 461 of file scheduler.h.

Referenced by allocate_task(), cleanup_scheduler(), and destroy().

◆ my_stealing_threshold

uintptr_t tbb::internal::generic_scheduler::my_stealing_threshold

Position in the call stack marking the maximal stack filling at which stealing is still allowed.

Definition at line 155 of file scheduler.h.

Referenced by init_stack_info().
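
A hedged sketch of how such a threshold can gate stealing (can_steal_sketch is an invented name; the address of a local variable approximates the current stack position, and downward stack growth is an assumption here, matching the heuristics set up by init_stack_info()):

    #include <cstdint>

    bool can_steal_sketch( std::uintptr_t stealing_threshold ) {
        int anchor;                                               // &anchor ~ current stack top
        return reinterpret_cast<std::uintptr_t>( &anchor ) > stealing_threshold;
    }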

◆ null_arena_index

const size_t tbb::internal::generic_scheduler::null_arena_index = ~size_t(0)
static

Definition at line 161 of file scheduler.h.

◆ quick_task_size

const size_t tbb::internal::generic_scheduler::quick_task_size = 256-task_prefix_reservation_size
static

If sizeof(task) is <= quick_task_size, the task is handled via a free list instead of being malloc'd.

Definition at line 144 of file scheduler.h.

Referenced by allocate_task().
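
A hedged sketch of the resulting allocation policy (invented names; only the size test mirrors the documented rule, see allocate_task() for the real logic):

    #include <cstddef>
    #include <cstdlib>

    struct free_list_node { free_list_node* next; };

    // Small requests are served from a per-scheduler cache of equally sized
    // blocks; anything larger (or a cache miss) falls through to the heap.
    void* allocate_task_storage( std::size_t bytes, std::size_t quick_task_size,
                                 free_list_node*& free_list ) {
        if ( bytes <= quick_task_size && free_list ) {
            void* p = free_list;
            free_list = free_list->next;
            return p;
        }
        return std::malloc( bytes );
    }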


The documentation for this class was generated from the following files:

Copyright © 2005-2019 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.