Consider the following example:
```
[export]
...
```
It causes daScript runtime panic.
```
unhandled exception
```
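The original listing is truncated, but a minimal daScript sketch in the same spirit (mutating an array while iterating over it; the names and values here are purely illustrative) would be:

```
[export]
def main
    var a <- [{int 1; 2; 3}]
    for x in a
        // mutating `a` while it is locked by the loop iterator
        a |> push(x)    // runtime panic
```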
The underlying reason for the panic is that the array data is allocated dynamically. `push`, along with other operations like `resize`, `reserve`, `emplace`, and `erase`, can cause the array data to be reallocated and moved to a new memory location. That, in turn, would leave loop variables pointing into a no-longer-used region of the heap, which is obviously unsafe.
To prevent this from happening, locking is implemented.
An array iterator increases the lock count during its initialization, and decreases it during the iterator’s finalization.
To access the lock counter during the finalization, the array iterator caches the pointer to the original array.
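As a consequence, mutation is only forbidden while an iterator is alive: once the loop finishes, the iterator is finalized, the lock count drops back to zero, and the array may be modified again. A minimal sketch (illustrative values):

```
[export]
def main
    var a <- [{int 1; 2; 3}]
    for x in a
        print("{x}\n")  // `a` stays locked for the duration of the loop
    a |> push(4)        // fine: the iterator has been finalized, lock released
```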
```
[export]
...
```
The result is exactly as expected
```
before the first loop: 0
```
A similar counter is implemented for tables, which also store their data dynamically.
On the surface, this looks like a fool-proof mechanism that prevents dangling references to array data; however, this is not the case. Consider another example:
```
[export]
...
```
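The original listing is truncated; a sketch of the shape of code being discussed (iterating over the nested arrays while resizing the outer one; names and values are illustrative) might be:

```
[export]
def main
    var a: array<array<int>>
    a |> emplace([{int 1; 2}])
    a |> emplace([{int 3; 4}])
    for x in a[0]
        for y in a[1]
            // `a` itself is not locked, but a[0] and a[1] are
            a |> resize(100)    // may relocate a[0] and a[1]
```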
The lock count on `a` before the resize is 0, yet the code above is unsafe: resizing `a` may cause relocation of both `a[0]` and `a[1]`, which are being iterated over, and the cached pointers to those arrays inside the iterators would then expire. Fortunately, the example above still causes a daScript runtime panic during the `resize`:
```
unhandled exception
```
If we check the `options log` output, we can see what actually happens:
```
def private builtin`resize ( var Arr:array<array<int> aka numT> explicit; newSize:int const )
```
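This log can be obtained by enabling the logging option at the top of the module (assuming the standard `options` syntax for compiler flags):

```
options log = true
```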
daScript generates a custom `resize` function for `array<array<int>>`, which calls the lock verification function `_builtin_verify_locks`. Internally, that function walks over the provided data and checks the lock counters of all nested arrays (and tables); a panic occurs if any of them is non-zero.
```
[export]
...
```
It’s important to note that lock verification could have a significant performance overhead.
```
"with lock check", 1.10712, 1
```
The best way to avoid this overhead is to avoid creating data structures that require internal locking. `set_verify_array_locks` and `set_verify_table_locks` provide a more practical, albeit unsafe, solution.
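For example, lock verification could be disabled per container along these lines (the exact signature is an assumption based on the function names above; check the daScript builtin documentation before relying on it):

```
[export]
def main
    var a: array<array<int>>
    a |> emplace([{int 1; 2}])
    // assumed signature: set_verify_array_locks(arr, enabled)
    a |> set_verify_array_locks(false)
    for x in a[0]
        a |> resize(100)    // no lock verification; unsafe if a[0] relocates
```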