Consider the following example:
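A minimal hypothetical sketch of the kind of code in question (not necessarily the original snippet) is an array being pushed to while it is iterated:

```
// hypothetical sketch: mutating an array while iterating over it
def main
    var a : array<int>
    push(a, 1)
    push(a, 2)
    for x in a
        push(a, x)      // push may reallocate `a` mid-iteration
```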
Running it causes a daScript runtime panic.
The underlying reason for the panic is that the array data is allocated dynamically. `push`, among other operations such as `erase`, can cause the array data to be reallocated and moved to a new memory location. This in turn would cause loop variables to expire while still pointing to the no-longer-used region of the heap, which is obviously unsafe.
To prevent this from happening, locking is implemented.
An array iterator increases the lock count during its initialization, and decreases it during the iterator’s finalization.
To access the lock counter during the finalization, the array iterator caches the pointer to the original array.
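A sketch of an instrumented example that observes this behavior might look as follows; `lock_count(a)` here is a hypothetical introspection helper standing in for whatever the original example printed, not an actual daScript builtin:

```
def main
    var a : array<int>
    push(a, 1)
    push(a, 2)
    print("before the first loop: {lock_count(a)}\n")   // 0: no active iterators
    for x in a                                          // iterator init: count -> 1
        for y in a                                      // nested iterator: count -> 2
            print("{x} {y}\n")
    print("after the loops: {lock_count(a)}\n")         // 0 again: iterators finalized
```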
The result is exactly as expected:

```
before the first loop: 0
```
A similar counter is also implemented for tables, which also happen to store their data dynamically.
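As a hypothetical sketch, the same hazard exists for tables, since insertion can trigger a rehash and relocate the table data:

```
// hypothetical sketch: mutating a table while iterating over its keys
def main
    var t : table<string; int>
    t["one"] = 1
    for k in keys(t)        // the table iterator bumps the lock count on `t`
        t[k + "x"] = 2      // insertion may rehash `t`; the lock check panics
```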
On the surface, this provides a fool-proof mechanism that prevents dangling references to the array data; however, this is not the case. Consider another example:
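A hypothetical reconstruction of such an example: the outer array's own lock count stays at 0, while only its inner arrays are being iterated:

```
// hypothetical sketch: resizing the outer array while iterating its elements
def main
    var a : array<array<int>>
    resize(a, 2)
    push(a[0], 1)
    push(a[1], 2)
    for x, y in a[0], a[1]  // locks a[0] and a[1], but not `a` itself
        resize(a, 100)      // relocating `a` would move both inner arrays
```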
The lock count on `a` before the `resize` is 0. However, the code above is unsafe: resizing `a` may cause relocation of both inner arrays, which are being iterated over. That way, the cached pointers to the arrays inside the iterators expire.
However, the example above does cause a daScript runtime panic, during the `resize`. If we check the `options log` output, we can see what actually happens:
```
def private builtin`resize ( var Arr:array<array<int> aka numT> explicit; newSize:int const )
```
daScript generates a custom resize function for `array<array<int>>`, which calls a lock verification function. Internally, it walks over the provided data and checks the lock counters on all arrays (and tables). A panic occurs if any of them are non-zero.
It’s important to note that lock verification can have a significant performance overhead:

```
"with lock check", 1.10712, 1
```
The best way to avoid such overhead is to not create data structures with internal locking.
`set_verify_table_locks` provides a more practical, albeit unsafe, solution.