Allocation

Functions

static __inline void * _tm_type_alloc_node_from_free_list (tm_type *t)
 Allocates a node from a tm_type's free list.
static __inline int _tm_type_memory_pressureQ (tm_type *t)
 Returns true if the type is under memory pressure.
static __inline int _tm_type_memory_pressureQ_2 (tm_type *t)
 Returns true if the type is under memory pressure and enough allocations have occurred since the last sweep.
void * _tm_alloc_type_inner (tm_type *t)
 Allocates a node of a given type.
void * _tm_alloc_inner (size_t size)
 Allocates a node of a particular size.
void * _tm_alloc_desc_inner (tm_adesc *desc)
 Allocates a node of a particular type.
void * _tm_realloc_inner (void *oldptr, size_t size)
 Reallocates a node to a particular size.
void * tm_alloc (size_t size)
 API: Allocate a node.
void * tm_alloc_desc (tm_adesc *desc)
 API: Allocate a node based on an allocation descriptor.
void * tm_realloc (void *ptr, size_t size)
 API: Reallocate a node of a given size.
void tm_free (void *ptr)
 API: Explicitly free a node.

Function Documentation

void* _tm_alloc_desc_inner ( tm_adesc * desc  ) 

Allocates a node of a particular type.

Definition at line 2109 of file tm.c.

References _tm_alloc_type_inner(), tm_data::alloc_request_size, tm_data::alloc_request_type, tm_adesc::hidden, tm_type::size, and tm.

Referenced by tm_alloc_desc().

02110 {
02111   tm_type *type = (tm_type*) desc->hidden;
02112   tm.alloc_request_size = type->size;
02113   tm.alloc_request_type = type;
02114   return _tm_alloc_type_inner(type);
02115 }

void* _tm_alloc_inner ( size_t  size  ) 

Allocates a node of a particular size.

Definition at line 2097 of file tm.c.

References _tm_alloc_type_inner(), tm_data::alloc_request_size, tm_data::alloc_request_type, tm, and tm_size_to_type().

Referenced by _tm_realloc_inner(), and tm_alloc().

02098 {
02099   tm_type *type = tm_size_to_type(size);
02100   tm.alloc_request_size = size;
02101   tm.alloc_request_type = type;
02102   return _tm_alloc_type_inner(type);  
02103 }

void* _tm_alloc_type_inner ( tm_type * t  ) 

Allocates a node of a given type.

Algorithm:

Depending on tm.phase, the allocator will do some collection work before returning a data pointer from a tm_node:

  • Unmark some allocated nodes (BLACK->ECRU).
  • Scan roots (stacks or globals) for pointers to nodes.
  • Scan marked nodes for pointers to other nodes (GREY->BLACK).
  • Sweep some unused nodes to their free lists (ECRU->WHITE).

A node is taken from the tm_type's free list, or a new block is allocated and parcelled into the tm_type's free list.

Increment the allocation id.

Reset during tm_alloc() stats: tm_data.alloc_n.

Keep track of how many allocs since last sweep.

Assume next allocation phase will be same as current phase.

If current allocation phase is:

  • tm_ALLOC: Allocate tm_ECRU nodes from tm_WHITE free lists.

If there are no free nodes for the type, or if there are none to parcel, the next phase is tm_UNMARK.

  • tm_UNMARK: Unmark some tm_BLACK nodes.

If all nodes are unmarked, next phase is tm_ROOT.

  • tm_ROOT: scan some roots.

If tm_root_scan_full,

Scan all roots for pointers, next phase is tm_SCAN.

Otherwise, scan some roots for pointers.

If there were no roots scanned, next phase is tm_SCAN.

If scanning is complete, scan all roots once more.

If nodes are still left to scan and memory pressure is high, scan all grey nodes.

If no grey nodes remain, the next phase is tm_SWEEP.

  • tm_SWEEP: reclaim any tm_ECRU nodes.

Do not remark nodes after root mutations because all newly allocated objects are marked "in use" during SWEEP. (See tm_phase_alloc).

Try to immediately sweep tm_ECRU nodes for the type request, otherwise try to immediately sweep nodes for any type.

If no tm_ECRU (unused) nodes are left to sweep,

Switch to allocating tm_ECRU nodes from free lists or from OS.

Switch to new phase.

Increment the number of tm_node allocs per phase.

If a node is not available on the tm_type free list,

Attempt to parcel or allocate some nodes:

  • from currently parceled block,
  • a free block or,
  • from new block from OS.

Return a new data ptr or 0.
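
The nominal phase progression described above can be sketched as a small state machine. This is a simplification: the real transitions in _tm_alloc_type_inner also depend on memory pressure and on how much scanning or sweeping remains, and the enum names here only shadow the real tm_ALLOC..tm_SWEEP phases:

```c
/* Simplified sketch of the nominal phase cycle (names are illustrative). */
enum phase { PH_ALLOC, PH_UNMARK, PH_ROOT, PH_SCAN, PH_SWEEP };

static enum phase next_phase(enum phase p) {
  switch (p) {
  case PH_ALLOC:  return PH_UNMARK; /* memory pressure: unmark BLACK nodes */
  case PH_UNMARK: return PH_ROOT;   /* everything unmarked: scan the roots */
  case PH_ROOT:   return PH_SCAN;   /* roots done: scan GREY nodes         */
  case PH_SCAN:   return PH_SWEEP;  /* no GREY left: sweep ECRU nodes      */
  case PH_SWEEP:  return PH_ALLOC;  /* no ECRU left: allocate from WHITE   */
  }
  return PH_ALLOC;
}
```

One full trip around the cycle corresponds to one collection; _tm_alloc_type_inner advances it incrementally, doing a bounded amount of work per allocation.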

Definition at line 1862 of file tm.c.

References _tm_node_parcel_or_alloc(), _tm_node_scan_all(), _tm_node_scan_some(), _tm_node_sweep_some(), _tm_node_sweep_some_for_type(), _tm_node_unmark_some(), _tm_phase_init(), _tm_root_scan_all(), _tm_root_scan_some(), _tm_type_alloc_node_from_free_list(), _tm_type_memory_pressureQ(), _tm_type_memory_pressureQ_2(), tm_data::alloc_by_phase, tm_data::alloc_id, tm_data::alloc_n, tm_data::alloc_pass, tm_data::alloc_since_sweep, tm_data::free_blocks_n, tm_type::n, tm_data::n, tm_data::next_phase, tm_type::size, tm, tm_abort(), tm_ALLOC, tm_assert, tm_B, tm_BLACK, tm_ECRU, tm_GREY, tm_msg(), tm_node_HDR_SIZE, tm_node_scan_some_size, tm_node_sweep_some_size, tm_node_unmark_some_size, tm_ptr_to_node(), tm_ROOT, tm_root_scan_full, tm_SCAN, tm_stop(), tm_SWEEP, tm_time_stat_begin(), tm_time_stat_end(), tm_TOTAL, tm_UNMARK, tm_WHITE, and tm_data::ts_phase.

Referenced by _tm_alloc_desc_inner(), and _tm_alloc_inner().

01863 {
01864   void *ptr;
01865   int again;
01866 #if tm_TIME_STAT
01867   tm_time_stat *ts;
01868 #endif
01869 
01870   /*! Increment the allocation id. */
01871   ++ tm.alloc_id;
01872 #if 0
01873   /* DEBUG DELETE ME! */
01874   if ( tm.alloc_id == 5040 )
01875     tm_stop();
01876 #endif
01877 
01878   /*! Reset during tm_alloc() stats: tm_data.alloc_n. */
01879   tm.alloc_pass = 0;
01880   memset(tm.alloc_n, 0, sizeof(tm.alloc_n));
01881 
01882   /*! Keep track of how many allocs since last sweep. */
01883   ++ tm.alloc_since_sweep;
01884 
01885   // tm_validate_lists();
01886 
01887   /* Try to allocate again? */
01888   ++ tm.alloc_pass;
01889 
01890   /* BEGIN CRITICAL SECTION */
01891 
01892 #if tm_TIME_STAT
01893   tm_time_stat_begin(ts = &tm.ts_phase[tm.phase]);
01894 #endif
01895 
01896  again:
01897   again = 0;
01898 
01899   /*! Assume next allocation phase will be same as current phase. */
01900   tm.next_phase = (enum tm_phase) -1;
01901   
01902   /*! If current allocation phase is: */
01903   switch ( tm.phase ) {
01904   case tm_ALLOC:
01905     /*! - tm_ALLOC: Allocate tm_ECRU nodes from tm_WHITE free lists. */
01906 
01907     _tm_node_unmark_some(tm_node_unmark_some_size);
01908 
01909     /**
01910      * If there are no free nodes for the type,
01911      * Or if there are none to parcel.
01912      */
01913     if ( _tm_type_memory_pressureQ_2(t) ) {
01914       tm.next_phase = tm_UNMARK;
01915     }
01916     break;
01917 
01918   case tm_UNMARK:
01919     /*! - tm_UNMARK: Unmark some tm_BLACK nodes. */
01920 
01921     /*! If all nodes are unmarked, next phase is tm_ROOT. */
01922     if ( ! _tm_node_unmark_some(tm_node_unmark_some_size) ) {
01923       tm.next_phase = tm_ROOT;
01924     }
01925 
01926 #if 0
01927     /**
01928      * Begin tm_ROOT phase immediately if memory pressure is still too high.
01929      */
01930     if ( _tm_type_memory_pressureQ(t) ) {
01931       tm.next_phase = tm_ROOT;
01932     }
01933 #endif
01934     break;
01935     
01936   case tm_ROOT:
01937     /*! - tm_ROOT: scan some roots. */
01938 
01939     _tm_node_scan_some(tm_node_scan_some_size);
01940 
01941     /*! If tm_root_scan_full, */
01942     if ( tm_root_scan_full ) {
01943       /*! Scan all roots for pointers, next phase is tm_SCAN. */
01944       _tm_root_scan_all();
01945       tm.next_phase = tm_SCAN;
01946     } else {
01947       /*! Otherwise, scan some roots for pointers. */
01948       if ( ! _tm_root_scan_some() ) {
01949         /*! If there were no roots scanned, next phase is tm_SCAN. */
01950         tm.next_phase = tm_SCAN;
01951       }
01952     }
01953     break;
01954     
01955   case tm_SCAN:
01956     /* - tm_SCAN: Scan some tm_GREY nodes for internal pointers. */
01957 
01958     /*! If scanning is complete, */
01959     if ( ! _tm_node_scan_some(tm_node_scan_some_size) ) {
01960       /* Scan all roots once more. */
01961       _tm_root_scan_all();
01962     }
01963 
01964     /**
01965      * If still left to scan and under pressure,
01966      * scan all grey nodes. 
01967      */
01968     if ( tm.n[tm_GREY] && _tm_type_memory_pressureQ_2(t) ) {
01969       _tm_node_scan_all();
01970     }
01971 
01972     if ( ! tm.n[tm_GREY] ) {
01973       tm.next_phase = tm_SWEEP;
01974       again = 1;
01975     }
01976     break;
01977       
01978   case tm_SWEEP:
01979     /**
01980      * - tm_SWEEP: reclaim any tm_ECRU nodes.
01981      *
01982      * Do not remark nodes after root mutations
01983      * because all newly allocated objects are marked "in use" during SWEEP.
01984      * (See tm_phase_alloc).
01985      */
01986 
01987     /**
01988      * Try to immediately sweep tm_ECRU nodes for the type request,
01989      * otherwise try to immediately sweep nodes for any type.
01990      */
01991     if ( t->n[tm_TOTAL] && ! _tm_node_sweep_some_for_type(t, tm_node_sweep_some_size) ) {
01992       _tm_node_sweep_some(tm_node_sweep_some_size);
01993     }
01994     
01995     /*! If no tm_ECRU (unused) nodes are left to sweep, */
01996     if ( ! tm.n[tm_ECRU] ) { 
01997       /*! Switch to allocating tm_ECRU nodes from free lists or from OS. */
01998       tm.next_phase = tm_ALLOC;
01999     }
02000     break;
02001 
02002   default:
02003     tm_abort();
02004     break;
02005   }
02006   
02007   /* END CRITICAL SECTION */
02008 
02009   /*! Switch to new phase. */
02010   if ( tm.next_phase != (enum tm_phase) -1 )
02011     _tm_phase_init(tm.next_phase);
02012 
02013   if ( again )
02014     goto again;
02015 
02016   /*! Increment the number tm_node allocs per phase. */
02017   ++ tm.alloc_by_phase[tm.phase];
02018 
02019   /*! If a node is not available on the tm_type free list, */
02020   if ( ! (ptr = _tm_type_alloc_node_from_free_list(t)) ) {
02021     /**
02022      * Attempt to parcel or allocate some nodes:
02023      * - from currently parceled block, 
02024      * - a free block or,
02025      * - from new block from OS. */
02026     _tm_node_parcel_or_alloc(t);
02027     ptr = _tm_type_alloc_node_from_free_list(t);
02028   }
02029 
02030   // tm_validate_lists();
02031 
02032 #if tm_ptr_to_node_TEST
02033   /* Validate tm_ptr_to_node() */
02034   if ( ptr ) {
02035     char *p = ptr;
02036     tm_node *n = (void*) (p - tm_node_HDR_SIZE);
02037     tm_assert(tm_ptr_to_node(n) == 0);
02038     tm_assert(tm_ptr_to_node(p) == n);
02039     tm_assert(tm_ptr_to_node(p + 1) == n);
02040     tm_assert(tm_ptr_to_node(p + t->size / 2) == n);
02041     tm_assert(tm_ptr_to_node(p + t->size - 1) == n);
02042 #if tm_ptr_AT_END_IS_VALID
02043     tm_assert(tm_ptr_to_node(p + t->size) == n);
02044 #else
02045     tm_assert(tm_ptr_to_node(p + t->size) == 0);
02046 #endif
02047   }
02048 #endif
02049 
02050 #if tm_TIME_STAT
02051   tm_time_stat_end(ts);
02052 #endif
02053 
02054   {
02055     static FILE *tm_alloc_log;
02056     static unsigned long log_id = 0;
02057     static unsigned long log_ratio = 1;
02058 
02059     if ( ! tm_alloc_log ) {
02060       tm_alloc_log = fopen("/tmp/tm_alloc.log", "w+");
02061       fprintf(tm_alloc_log, "#ID PTR WHITE ECRU GREY BLACK PHASE BLOCKS FREE_BLOCKS\n");
02062     }
02063     if ( tm_alloc_log ) {
02064       if ( log_id ++ % log_ratio == 0 ) {
02065         fprintf(tm_alloc_log, "%lu %lu %lu %lu %lu %lu %lu %lu %lu, %lu\n",  
02066                 (unsigned long) tm.alloc_id,
02067                 (unsigned long) ptr,
02068                 (unsigned long) tm.n[tm_WHITE],
02069                 (unsigned long) tm.n[tm_ECRU],
02070                 (unsigned long) tm.n[tm_GREY],
02071                 (unsigned long) tm.n[tm_BLACK],
02072                 (unsigned long) tm.n[tm_TOTAL],
02073                 (unsigned long) tm.phase,
02074                 (unsigned long) tm.n[tm_B],
02075                 (unsigned long) tm.free_blocks_n,
02076                 0
02077                 );
02078       }
02079       if ( log_ratio < 1000 && log_id / log_ratio > 100 ) {
02080         log_ratio *= 10;
02081       }
02082     }
02083   }
02084 
02085 #if 0
02086   tm_msg("a %p[%lu]\n", ptr, (unsigned long) t->size);
02087 #endif
02088 
02089   /*! Return a new data ptr or 0. */
02090   return (void*) ptr;
02091 }

void* _tm_realloc_inner ( void *  oldptr,
size_t  size 
)

Reallocates a node to a particular size.

Definition at line 2121 of file tm.c.

References _tm_alloc_inner(), tm_type::size, tm_node_to_type(), tm_pure_ptr_to_node(), and tm_size_to_type().

Referenced by tm_realloc().

02122 {
02123   char *ptr = 0;
02124   tm_type *t, *oldt;
02125 
02126   oldt = tm_node_to_type(tm_pure_ptr_to_node(oldptr));
02127   t = tm_size_to_type(size);
02128 
02129   if ( oldt == t ) {
02130     ptr = oldptr;
02131   } else {
02132     ptr = _tm_alloc_inner(size);
02133     memcpy(ptr, oldptr, size < oldt->size ? size : oldt->size);
02134   }
02135   
02136   return (void*) ptr;
02137 }
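
The oldt == t shortcut above works because tm_size_to_type() maps requests into size classes, so a realloc whose new size lands in the same class can return the old pointer unchanged. A hypothetical illustration follows; the rounding rule here (powers of two with a 16-byte minimum) is an assumption for the sketch, not TM's actual class scheme:

```c
#include <stddef.h>

/* Assumed size-class rounding; TM's real tm_size_to_type() may differ. */
static size_t size_class(size_t size) {
  size_t c = 16;             /* assumed minimum node size */
  while (c < size) c <<= 1;  /* round up to the next power of two */
  return c;
}
```

Under this assumed scheme, size_class(20) and size_class(30) are both 32, so reallocating a 20-byte node to 30 bytes would return the old pointer with no copy.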

static __inline void* _tm_type_alloc_node_from_free_list ( tm_type * t  )  [static]

Allocates a node from a tm_type's free list.

If a tm_node is not available on the tm_type's free (tm_WHITE) list, return 0.

Otherwise,

Assert it is located in a valid tm_block.

Assert that it is really tm_WHITE.

Get the pointer to the tm_node's data space.

Clear the data space.

Put the tm_node on the appropriate allocated list, depending on the tm.phase.

Mark the tm_node's page as used.

Keep track of allocation amounts.

Return the pointer to the data space.
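
The steps above amount to popping the head of the type's tm_WHITE list and zeroing its data space. A stripped-down sketch follows; the structures are illustrative, not the real tm_node/tm_type layout, and the color and page bookkeeping is omitted:

```c
#include <stdlib.h>
#include <string.h>

typedef struct node_hdr { struct node_hdr *next; } node_hdr;

typedef struct type_sketch {
  node_hdr *white;  /* free (tm_WHITE) list */
  size_t    size;   /* data size of each node */
} type_sketch;

static void *alloc_from_free_list(type_sketch *t) {
  node_hdr *n = t->white;
  if (!n) return 0;                         /* empty: caller parcels a block */
  t->white = n->next;                       /* unlink the head node */
  void *ptr = (char *)n + sizeof(node_hdr); /* data follows the header */
  memset(ptr, 0, t->size);                  /* nodes are handed out zeroed */
  return ptr;
}
```

The real version additionally recolors the node for the current phase (tm_phase_alloc[tm.phase]) and marks its page as used.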

Definition at line 1766 of file tm.c.

References _tm_page_mark_used(), tm_data::bytes_allocated_since_gc, tm_type::color_list, tm_data::n, tm_data::nodes_allocated_since_gc, tm_type::size, tm, tm_assert_test, tm_b, tm_list_first(), tm_msg(), tm_node_color, tm_node_set_color(), tm_node_to_block(), tm_node_to_ptr(), tm_phase_alloc, and tm_WHITE.

Referenced by _tm_alloc_type_inner().

01767 {
01768   tm_node *n;
01769   char *ptr;
01770 
01771   /*! If a tm_node is not available on the tm_type's free (tm_WHITE) list, return 0; */
01772   if ( ! (n = tm_list_first(&t->color_list[tm_WHITE])) ) {
01773     return 0;
01774   }
01775 
01776   /*! Otherwise, */
01777 
01778   /*! Assert its located in a valid tm_block. */
01779   tm_assert_test(tm_node_to_block(n)->type == t);
01780   
01781   /*! Assert that is really tm_WHITE. */
01782   tm_assert_test(tm_node_color(n) == tm_WHITE);
01783   
01784   /*! Get the ptr the tm_node's data space. */
01785   ptr = tm_node_to_ptr(n);
01786   
01787   /*! Clear the data space. */
01788   memset(ptr, 0, t->size);
01789   
01790   /*! Put the tm_node on the appropriate allocated list, depending on the tm.phase. */
01791   tm_node_set_color(n, tm_node_to_block(n), tm_phase_alloc[tm.phase]);
01792   
01793   /*! Mark the tm_node's page as used. */
01794   _tm_page_mark_used(ptr);
01795   
01796   /*! Keep track of allocation amounts. */
01797   ++ tm.nodes_allocated_since_gc;
01798   tm.bytes_allocated_since_gc += t->size;
01799   tm.n[tm_b] += t->size;
01800   
01801 #if 0
01802   tm_msg("N a n%p[%lu] t%p\n", 
01803          (void*) n, 
01804          (unsigned long) t->size,
01805          (void*) t);
01806 #endif
01807 
01808   /*! Return the pointer to the data space. */
01809   return ptr;
01810 }

static __inline int _tm_type_memory_pressureQ ( tm_type * t  )  [static]

Returns true when the type has no free (tm_WHITE) nodes, no block currently being parcelled, and no free blocks are available.

Definition at line 1820 of file tm.c.

References tm_data::free_blocks_n, tm_type::n, tm_type::parcel_from_block, tm, and tm_WHITE.

Referenced by _tm_alloc_type_inner(), and _tm_type_memory_pressureQ_2().

01821 {
01822   return 
01823     (! t->n[tm_WHITE] && ! t->parcel_from_block && ! tm.free_blocks_n)
01824     // || ((t->n[tm_TOTAL] - t->n[tm_FREE]) > t->n[tm_TOTAL] * tm_GC_THRESHOLD)
01825     // || ((tm.n[tm_TOTAL] - t->n[tm_FREE]) > tm.n[tm_TOTAL] * tm_GC_THRESHOLD)
01826     ;
01827 }

static __inline int _tm_type_memory_pressureQ_2 ( tm_type * t  )  [static]

Returns true when _tm_type_memory_pressureQ() holds and more than 100 allocations have occurred since the last sweep.

Definition at line 1836 of file tm.c.

References _tm_type_memory_pressureQ(), tm_data::alloc_since_sweep, and tm.

Referenced by _tm_alloc_type_inner().

01837 {
01838   return 
01839     _tm_type_memory_pressureQ(t) &&
01840     tm.alloc_since_sweep > 100
01841     ;
01842 }

void* tm_alloc ( size_t  size  ) 

API: Allocate a node.

  • Begin timing stats,
  • Clear some stack words,
  • Remember current stack pointer,
  • Save the registers and stack pointers,
  • Call "inner" routines,
  • End timing stats.

Definition at line 29 of file user.c.

References _tm_alloc_inner(), _tm_clear_some_stack_words, _tm_gc_full_inner(), _tm_set_stack_ptr(), tm_data::inited, tm, tm_init(), tm_time_stat_begin(), tm_time_stat_end(), tm_data::trigger_full_gc, and tm_data::ts_alloc.

Referenced by calloc(), malloc(), and tm_realloc().

00030 {
00031   void *ptr = 0;
00032 
00033   if ( size == 0 )
00034     return 0;
00035 
00036   if ( ! tm.inited ) {
00037     tm_init(0, (char***) ptr, 0);
00038   }
00039 
00040 #if tm_TIME_STAT
00041   tm_time_stat_begin(&tm.ts_alloc);
00042 #endif
00043 
00044   _tm_clear_some_stack_words();
00045   _tm_set_stack_ptr(&ptr);
00046 
00047   if ( tm.trigger_full_gc ) {
00048     tm.trigger_full_gc = 0;
00049     _tm_gc_full_inner();
00050   }
00051 
00052   ptr = _tm_alloc_inner(size);
00053 
00054 #if tm_TIME_STAT
00055   tm_time_stat_end(&tm.ts_alloc);
00056 #endif
00057 
00058   return ptr;
00059 }

void* tm_alloc_desc ( tm_adesc * desc  ) 

API: Allocate a node based on an allocation descriptor.

  • Begin timing stats,
  • Clear some stack words,
  • Remember current stack pointer,
  • Save the registers and stack pointers,
  • Call "inner" routines,
  • End timing stats.

Definition at line 72 of file user.c.

References _tm_alloc_desc_inner(), _tm_clear_some_stack_words, _tm_set_stack_ptr(), tm_adesc::size, tm, tm_time_stat_begin(), tm_time_stat_end(), and tm_data::ts_alloc.

00073 {
00074   void *ptr = 0;
00075 
00076   if ( desc == 0 || desc->size == 0 )
00077     return 0;
00078 
00079 #if tm_TIME_STAT
00080   tm_time_stat_begin(&tm.ts_alloc);
00081 #endif
00082 
00083   _tm_clear_some_stack_words();
00084   _tm_set_stack_ptr(&ptr);
00085   ptr = _tm_alloc_desc_inner(desc);
00086 
00087 #if tm_TIME_STAT
00088   tm_time_stat_end(&tm.ts_alloc);
00089 #endif
00090 
00091   return ptr;
00092 }

void tm_free ( void *  ptr  ) 

API: Explicitly free a node.

  • Begin timing stats,
  • Clear some stack words,
  • Remember current stack pointer,
  • Save the registers and stack pointers,
  • Call "inner" routines,
  • End timing stats.

Definition at line 151 of file user.c.

References _tm_clear_some_stack_words, _tm_free_inner(), _tm_set_stack_ptr(), tm_data::inited, tm, tm_init(), tm_time_stat_begin(), tm_time_stat_end(), and tm_data::ts_free.

Referenced by free(), and tm_realloc().

00152 {
00153   if ( ! tm.inited ) {
00154     tm_init(0, (char***) ptr, 0);
00155   }
00156 
00157 #if tm_TIME_STAT
00158   tm_time_stat_begin(&tm.ts_free);
00159 #endif
00160 
00161   _tm_clear_some_stack_words();
00162   _tm_set_stack_ptr(&ptr);
00163   _tm_free_inner(ptr);
00164 
00165 #if tm_TIME_STAT
00166   tm_time_stat_end(&tm.ts_free);
00167 #endif
00168 }

void* tm_realloc ( void *  oldptr,
size_t  size 
)

API: Reallocate a node of a given size.

May return the same pointer, a different pointer or 0.

  • Begin timing stats,
  • Clear some stack words,
  • Remember current stack pointer,
  • Save the registers and stack pointers,
  • Call "inner" routines,
  • End timing stats.

Definition at line 110 of file user.c.

References _tm_clear_some_stack_words, _tm_realloc_inner(), _tm_set_stack_ptr(), tm, tm_alloc(), tm_free(), tm_time_stat_begin(), tm_time_stat_end(), and tm_data::ts_alloc.

Referenced by realloc().

00111 {
00112   void *ptr = 0;
00113 
00114   if ( oldptr == 0 )
00115     return tm_alloc(size);
00116 
00117   if ( size == 0 ) {
00118     tm_free(oldptr);
00119     return 0;
00120   }
00121 
00122 #if tm_TIME_STAT
00123   tm_time_stat_begin(&tm.ts_alloc);
00124 #endif
00125 
00126   _tm_clear_some_stack_words();
00127   _tm_set_stack_ptr(&ptr);
00128   ptr = _tm_realloc_inner(oldptr, size);
00129 
00130 #if tm_TIME_STAT
00131   tm_time_stat_end(&tm.ts_alloc);
00132 #endif
00133 
00134   return ptr;
00135 }


Generated on Mon Jan 25 06:33:12 2010 for TM(tredmill) by  doxygen 1.6.1