Commits on Feb 4, 2018
  1. Move finalizeObject to GlobalAllocator

    ysbaddaden committed Feb 4, 2018
    - Global allocator is now responsible for the finalizers hash (the
      Collector merely delegated);
    - Use `Hash_deleteIf` to clear finalized objects from the hash.
  2. Register finalizers in hash table

    ysbaddaden committed Feb 4, 2018
    Finalizers aren't common enough to justify a `void*` of overhead on
    each and every object (4 to 8 bytes per individual allocation), nor
    iterating the whole small and large HEAP for unmarked objects with
    finalizers.
    
    This patch moves finalizers to a hash table (key=object,
    value=callback) attached to the global allocator, and registers
    finalizers for both small and large objects. The collector
    merely iterates the hash for unmarked objects to finalize.
  3. Add Hash data structure

    ysbaddaden committed Feb 4, 2018
    - adapted from musl-libc (MIT licensed) hsearch implementation:
      See https://git.musl-libc.org/cgit/musl/tree/src/search/hsearch.c
    - murmurhash2 hashing
    - open addressing with 2^n table size
    - quadratic probing in case of hash collision
    - lazy deletion with tombstones (insert will recycle), eventual clear
  4. Add -Wextra option to CFLAGS

    ysbaddaden committed Feb 4, 2018
    Obviously fixes some extra pedantic issues.
Commits on Jan 25, 2018
  1. Fix: only finalize allocated chunks

    ysbaddaden committed Jan 25, 2018
    The object.finalizer entry wasn't zeroed correctly, and the
    collector didn't check whether a chunk was actually allocated,
    which could lead to calling an invalid finalizer address and
    segfaulting.
Commits on Jan 24, 2018
  1. Fix: run finalizer when an object is manually freed.

    ysbaddaden committed Jan 24, 2018
    Only applies to large objects, since small objects are never freed
    manually (freeing a small object is a NOOP).
    
    This shouldn't happen, since references aren't expected to be freed
    manually, but it's still implemented to avoid bugs further down the
    line.
  2. Fix: move finalizer when reallocating object

    ysbaddaden committed Jan 24, 2018
    Reallocating an object should move any finalizer set on the object.
    Keeping the finalizer on the old object could lead to finalizing an
    old reference despite a newer reference being still alive.
    
    This shouldn't happen in Crystal, since references aren't expected
    to be reallocated, but it's still implemented to avoid future
    issues.
Commits on Jan 23, 2018
  1. Fix: reset global counters before running finalizers

    ysbaddaden committed Jan 23, 2018
    This will allow reentrant calls to GC_malloc in finalizers by
    avoiding nested GC_collect calls, but will probably force the HEAP
    to grow.
  2. Abstract DATA and BSS stack detection

    ysbaddaden committed Jan 23, 2018
    Introduces tentative support for various platforms, such as
    OpenBSD, FreeBSD and Darwin, but they haven't been tested. The
    Darwin support even relies on the deprecated interface from
    `mach-o/getsect.h`.
Commits on Jan 22, 2018
  1. Fix: replace recursive marking with a LIFO stack

    ysbaddaden committed Jan 22, 2018
    Programs that keep deeply nested trees of objects used to cause
    stack overflows, because the collector used recursive calls to the
    Collector_markRegion function. The Crystal compiler is subject to
    this issue when compiling large programs, such as itself.
    
    This patch fixes the issue by replacing the recursive markRegion
    function with a loop and a manual stack of roots to trace.
    
    The `GC_mark_region` function has been replaced with a
    `GC_add_roots` function with the same signature. The function now
    pushes the root onto the trace stack, to be marked later, instead
    of marking it immediately.
Commits on Jan 19, 2018
  1. Don't collect repeatedly for little (no) gain

    ysbaddaden committed Jan 19, 2018
    Programs with a rapidly growing HEAP of reachable allocations
    trigger useless collections again and again, slowing the program
    down to disastrous performance.
    
    This patch introduces a counter of allocated bytes since the last
    collection. Further collections will be skipped until the GC
    allocated at least 1/Nth of the HEAP memory, with N the
    configurable GC_FREE_SPACE_DIVISOR option which defaults to 3.
    
    A higher value for the free space divisor means more frequent
    collections, but less free memory overhead. A value as high as 16
    still has a contained performance impact, and may even improve
    performance, depending on usage. Lower values aren't recommended.
Commits on Jan 18, 2018
  1. Fix: clang printf warning

    ysbaddaden committed Jan 18, 2018
    Also consistently use `#ifndef NDEBUG` to enable block validation
    instead of mixing GC_DEBUG and NDEBUG sometimes.
Commits on Jan 16, 2018
  1. Use C refactoring of GC

    ysbaddaden committed Jan 16, 2018
Commits on Jan 2, 2018
  1. Overflow allocation, Configurable heap size, Stats

    ysbaddaden committed Jan 2, 2018
    Implements Overflow Allocation for when medium allocations don't
    fit within the current hole. This allocation always uses free
    blocks, and the global allocator will grow the HEAP instead of
    collecting first, which mitigates the Crystal compiler issue of
    rapid allocation exhausting the free HEAP, but doesn't solve it: we
    still lack a mechanism to sometimes prefer growing over collecting.
    
    Allows configuring the initial and maximum HEAP size at runtime,
    through the GC_INITIAL_HEAP_SIZE and GC_MAXIMUM_HEAP_SIZE
    environment variables, which may be suffixed with a multiplier (k,
    m or g).
    
    Adds `GC_get_memory_use()` to know how big the HEAP is, and
    `GC_get_heap_usage()` to know how many bytes are currently
    allocated.
Commits on Dec 31, 2017
  1. Updated README

    ysbaddaden committed Dec 31, 2017
Commits on Dec 11, 2017
  1. C: keep a cursor to avoid iterating all the time

    ysbaddaden committed Dec 11, 2017
    Instead of iterating the whole linked list on each and every
    allocation, which degrades the allocator's performance over time,
    keep a cursor to the next chunk after the current allocation. This
    lets the allocator maintain a steady performance, with a full loop
    only once in a while.
Commits on Dec 7, 2017
  1. Fix: mark from DATA and BSS sections

    ysbaddaden committed Dec 7, 2017
    Constants are initialized into the DATA and BSS sections, so the
    collector must mark from these roots, in addition to fiber stacks.
    
    Since we can't access the address of an extern, we rely on a small
    C helper to access some linker symbols (`__data_start`,
    `__bss_start` and `_end`).
  2. Fix: Block#line_index must return an UInt32

    ysbaddaden committed Dec 7, 2017
    Among other fixes and corrections.
Commits on Dec 5, 2017
  1. Update README

    ysbaddaden committed Dec 5, 2017