Lua Lanes - multithreading in Lua
Description
Lua Lanes is a Lua extension library that makes it possible to run multiple Lua states in parallel. It is intended for optimizing performance on multicore CPUs, and for studying ways to make Lua programs naturally parallel to begin with.
Lanes is included in your software via the regular require "lanes" mechanism. No C-side programming is needed; all APIs are Lua-side, and most existing extension modules should work seamlessly alongside multiple lanes.
Starting with version 3.1.6, Lanes should build and run identically with either Lua 5.1 or Lua 5.2. Version 3.10.0 supports Lua 5.3.
See comparison of Lua Lanes with other Lua multithreading solutions.
Features:
- Lanes have separate data by default. Shared data is possible with Linda objects.
- Communication is separate from the threads themselves, and goes through Linda objects.
- Data passing uses fast inter-state copies (no serialization required).
- "Deep userdata" concept, for sharing userdata over multiple lanes.
- Millisecond level timers, integrated with the Linda system.
- Threads can be given priorities.
- Lanes are cancellable, with proper cleanup.
- No Lua-side application level locking - ever!
- Several totally independent Lanes universes may coexist in an application, one per "master" Lua state.
Limitations:
- Coroutines are not passed between states.
- Sharing full userdata between states needs special C side preparations (-> deep userdata and -> clonable userdata).
- Network level parallelism not included.
- Multi-CPU parallelism is achieved with OS threads, not processes. A lane is a Lua full userdata, so it exists only as long as the Lua state that created it. Consequently, a lane won't continue execution after the main program terminates.
- Just like independent Lua states, separate Lanes universes cannot communicate with each other.
Supported systems
Lua Lanes supports the following operating systems:
- Mac OS X PowerPC / Intel (10.4 and later)
- Linux x86
- Openwrt (15.05 and later)
- Windows 2000/XP and later (MinGW or Visual C++ 2005/2008)
The underlying threading code can be compiled either against the Win32 API or Pthreads. Unfortunately, thread prioritization under Pthreads is a JOKE, requiring OS-specific tweaks and guessing at undocumented behaviour. Other features should be portable to any modern platform.
Building and Installing
Lua Lanes is built simply by make on the supported platforms (make-vc for Visual C++). See README for system specific details and limitations.
To install Lanes, all you need is for the lanes.lua and lanes/core.so|dll files to be reachable by Lua (see LUA_PATH, LUA_CPATH). Alternatively, use the LuaRocks package manager.
> luarocks search lanes
... output listing Lua Lanes is there ...
> luarocks install lanes
... output ...
Embedding
When Lanes is embedded, it is possible to statically initialize with
extern void LANES_API luaopen_lanes_embedded( lua_State* L, lua_CFunction _luaopen_lanes);
luaopen_lanes_embedded leaves the module table on the stack. lanes.configure() must still be called in order to use Lanes.
If _luaopen_lanes is NULL, a default loader will simply attempt the equivalent of luaL_dofile( L, "lanes.lua").
To embed Lanes, compile its source files into your application. In any Lua state where you want to use Lanes, initialize it as follows:
#include "lanes.h"
int load_lanes_lua( lua_State* L)
{
// retrieve lanes.lua from wherever it is stored and return the result of its execution
// trivial example 1:
luaL_dofile( L, "lanes.lua");
// trivial example 2:
luaL_dostring( L, bin2c_lanes_lua);
}
void embed_lanes( lua_State* L)
{
// we need the base libraries for Lanes to work
luaL_openlibs( L);
...
// will attempt luaL_dofile( L, "lanes.lua");
luaopen_lanes_embedded( L, NULL);
lua_pop( L, 1);
// another example with a custom loader
luaopen_lanes_embedded( L, load_lanes_lua);
lua_pop( L, 1);
// a little test to make sure things work as expected
luaL_dostring( L, "local lanes = require 'lanes'.configure{with_timers = false}; local l = lanes.linda()");
}
Initialization
The following sample shows how to initialize the Lanes module.
local lanes = require "lanes".configure()
Starting with version 3.0-beta, requiring the module follows Lua 5.2 rules: the module is not available under the global name "lanes", but has to be accessed through require's return value.
After lanes is required, it is necessary to call lanes.configure(), which is the only function exposed by the module at this point. Calling configure() will perform one-time initializations and make the rest of the API available.
At the same time, configure() itself will be replaced by another function that raises an error if called again with differing arguments, if any.
Also, once Lanes is initialized, require() is replaced by another function that wraps it inside a mutex, both in the main state and in all created lanes. This prevents multiple thread-unsafe module initializations from occurring simultaneously in several lanes. It remains to be seen whether this is actually useful or not: if a module is already threadsafe, protecting its initialization isn't useful; and if it is not, any parallel operation may crash without Lanes being able to do anything about it.
IMPORTANT NOTE: Starting with version 3.7.0, only the first occurrence of require "lanes" must be followed by a call to .configure(). From that point on, a simple require "lanes" will do wherever you need to require Lanes again.
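As an illustration, here is a minimal sketch of that pattern (file names and configure options are hypothetical):
-- main.lua (hypothetical): the first require is followed by configure()
local lanes = require "lanes".configure{ with_timers = false }

-- any other file loaded afterwards: a plain require returns the already-configured module
local lanes = require "lanes"
local linda = lanes.linda()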
lanes.configure( [opt_tbl])
lanes.configure accepts an optional options table as sole argument.
name | value | definition |
---|---|---|
.nb_keepers | integer >= 1 | Controls the number of keeper states used internally by lindas to transfer data between lanes. (see below). Default is 1. |
.with_timers | nil/false/true | If equal to false or nil, Lanes doesn't start the timer service, and the associated API will be absent from the interface (see below). Default is true. |
.verbose_errors | nil/false/true | (Since v3.6.3) If equal to true, Lanes will collect more information when transferring data across Lua states to help identify errors (at a cost). Default is false. |
.protect_allocator | nil/false/true | REPLACED BY allocator="protected" AS OF VERSION v3.13.0. (Since v3.5.2) If equal to true, Lanes wraps all calls to the state's allocator function inside a mutex. Since v3.6.3, when left unset, Lanes attempts to autodetect this value for LuaJIT (the guess might be wrong if "ffi" isn't loaded though). Default is true when Lanes detects it is run by LuaJIT, else nil. |
.allocator | nil/"protected"/function | (Since v3.13.0) If nil, Lua states are created with lua_newstate() and reuse the allocator from the master state. If "protected", the default allocator obtained from lua_getallocf() in the master state is wrapped inside a critical section and used in all newly created states. If a function, this function is called prior to creating the state. It should return a full userdata containing the following structure: struct { lua_Alloc allocF; void* allocUD; }. The contents will be used to create the state with lua_newstate( allocF, allocUD). This option is mostly useful for embedders that want to provide a different allocator to each lane, for example to have each one work in a separate memory pool, thus removing the need for the allocator itself to be threadsafe. Note however that linda deep proxies are allocated with the allocator from the master state, because they are not tied to a particular state. |
.internal_allocator | "libc"/"allocator" | (Since v3.16.1) Controls which allocator is used for Lanes' internal allocations (for keeper and deep userdata management). If "libc", Lanes uses realloc and free. If "allocator", Lanes uses whatever was obtained from the "allocator" setting. This option is mostly useful for embedders that want to control all memory allocations, but have issues when Lanes tries to use the Lua state allocator for internal purposes (especially with LuaJIT). |
.demote_full_userdata | nil/false/true | (Since v3.7.5) If equal to false or nil, Lanes raises an error when attempting to transfer a non-deep full userdata, else it will be demoted to a light userdata in the destination. Default is false (set to true to get the legacy behaviour). |
.track_lanes | nil/false/anything | Any non-nil|false value instructs Lanes to keep track of all lanes, so that lanes.threads() can list them. If false, lanes.threads() will raise an error when called. Default is false. |
.on_state_create | function/nil | If provided, will be called in every created Lua state right after initializing the base libraries. Keeper states will call it as well, but only if it is a C function (keeper states are not able to execute any user Lua code). Typical usage is twofold: tweak package.loaders, or load some additional C functions into the global space (of course, only a C function will be able to do this). That way, all changes in the state can be properly taken into account when building the function lookup database. Default is nil. (Since version 3.7.6) If on_state_create() is a Lua function, it will be transferred normally before the call. If it is a C function, a C closure will be reconstructed in the created state from the C pointer. Lanes will raise an error if the function has upvalues. |
.shutdown_timeout | number >= 0 | (Since v3.3.0) Sets the duration in seconds Lanes will wait for graceful termination of running lanes at application shutdown. Irrelevant for builds using pthreads. Default is 0.25. |
(Since v3.5.0) Once Lanes is configured, one should register with Lanes the modules exporting functions that will be transferred either during lane generation or through lindas.
Use lanes.require() for this purpose. This will call the original require(), then add the result to the lookup databases.
(Since version 3.11) It is also possible to register a given module with lanes.register(). This function will raise an error if the registered module is not a function or table.
local m = lanes.require "modname"
lanes.register( "modname", module)
Creation
The following sample shows preparing a function for parallel calling, and calling it with varying arguments. Each of the two results is calculated in a separate OS thread, parallel to the calling one. Reading the results joins the threads, waiting for any results not already there.
local lanes = require "lanes".configure()
f = lanes.gen( function( n) return 2 * n end)
a = f( 1)
b = f( 2)
print( a[1], b[1] ) -- 2 4
func = lanes.gen( [libs_str | opt_tbl [, ...],] lane_func)
lane_h = func( ...)
The function returned by lanes.gen() is a "generator" for launching any number of lanes. They will share code, options, initial globals, but the particular arguments may vary. Only calling the generator function actually launches a lane, and provides a handle for controlling it.
Alternatively, lane_func may be a string, in which case it will be compiled in the lane. This makes it possible to launch lanes with older versions of LuaJIT, which did not support lua_dump, used internally to transfer functions to the lane.
Lanes automatically copies upvalues over to the new lanes, so you don't need to wrap all the required elements into one 'wrapper' function. If lane_func uses local values or local functions, they will also be available in the new lanes.
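For example, the following sketch relies on that upvalue copying: the lane body uses a local function, which itself uses a local value, and both are copied into the lane automatically (values chosen for illustration only):
local lanes = require "lanes".configure()

local factor = 42                                   -- plain local value
local function scale( n) return n * factor end      -- local function using it

local gen = lanes.gen( function( n)
-- 'scale' (and, transitively, 'factor') are copied into the lane as upvalues
return scale( n)
end)

local h = gen( 10)
print( h[1]) -- 420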
libs_str
defines the standard libraries made available to the new Lua state:
- (nothing): no standard libraries (default)
- "base" or "": the base library
- "bit", "bit32", "coroutine", "debug", "ffi", "io", "jit", "math", "os", "package", "string", "table", "utf8": the corresponding standard or extension library (availability depends on the Lua version; "bit", "ffi" and "jit" require LuaJIT)
- "*": all the libraries known to the Lua flavor in use
Initializing the standard libs takes a bit of time at each lane invocation. This is the main reason why "no libraries" is the default.
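The sketch below shows a few possible library specifications, with multiple libraries listed in a single comma-separated string (a hedged example, not an exhaustive reference):
local lanes = require "lanes".configure()

-- no libraries at all (the default)
local f0 = lanes.gen( function( a, b) return a + b end)

-- a selection of libraries, or "*" for all of them
local f1 = lanes.gen( "math,string", function( x) return string.format( "%.3f", math.sqrt( x)) end)
local f2 = lanes.gen( "*", function() return os.time() end)

print( f0( 1, 2)[1], f1( 2)[1], f2()[1])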
opt_tbl
is a collection of named options to control the way lanes are run:
name | value | definition |
---|---|---|
.globals | table | Sets the globals table for the launched threads. This can be used for giving them constants. The key/value pairs of table are transferred into the lane's globals after the libraries have been loaded and the modules required. The global values of different lanes are in no way connected; modifying one will only affect that particular lane. |
.required | table | Lists modules that have to be required in order to be able to transfer functions they expose. Starting with Lanes 3.0-beta, non-Lua functions are no longer copied by recreating a C closure from a C pointer, but are looked up in lookup tables. These tables are built from the modules listed here. required must be a list of strings, each one being the name of a module to be required. Each module is required with require() before the lane's function is invoked. So, from the required module's point of view, requiring it manually from inside the lane body or having it required this way doesn't change anything. From the lane body's point of view, the only difference is that a module not creating a global won't be accessible. Therefore, a lane body will also have to require such a module manually, but this won't do anything more (see Lua's require documentation). ATTEMPTING TO TRANSFER A FUNCTION REGISTERED BY A MODULE NOT LISTED HERE WILL RAISE AN ERROR. |
.gc_cb | function | (Since version 3.8.2) Callback that gets invoked when the lane is garbage collected. The function receives two arguments (the lane name and a string, either "closed" or "selfdestruct"). |
.priority | integer | The priority of generated lanes, in the range -3..+3 (default is 0). These values map onto the actual priority range of the underlying implementation. Implementation and dependability of priorities varies by platform. In particular, Linux kernel 2.6 does not support priorities in user mode. A lane can also change its own thread priority dynamically with lanes.set_thread_priority(). |
.package | table | Introduced in version 3.0. Specifying it when libs_str doesn't cause the package library to be loaded will generate an error. If not specified, the created lane will receive the current values of package. Only path, cpath, preload and loaders (Lua 5.1)/searchers (Lua 5.2) are transferred. |
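Putting a few of these options together, here is a hedged sketch (option values are chosen arbitrarily for illustration):
local lanes = require "lanes".configure()

local gen = lanes.gen( "*",
{
globals = { VERBOSE = true },   -- becomes a global inside the lane
priority = 1,                   -- slightly above normal
gc_cb = function( name, status) print( "lane collected:", name, status) end
},
function( n)
if VERBOSE then print( "processing", n) end
return n * 2
end)

print( gen( 21)[1]) -- 42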
Each lane also gets a global function set_debug_threadname() that it can use anytime to do as the name says. Supported debuggers are Microsoft Visual Studio (for the C side) and Decoda (for the Lua side).
Starting with version 3.8.1, the lane has a new method lane:get_debug_threadname() that gives access to that name from the caller side (returns "" if unset, "" if the internal Lua state is closed).
If a lane body pulls a C function imported by a module required before Lanes itself (thus not through a hooked require), the lane generator creation will raise an error. The function name it shows is a path where it was found by scanning _G. As a utility, the name guessing functionality is exposed as such:
"type", "name" = lanes.nameof( o)
Starting with version 3.8.3, lanes.nameof() searches the registry as well.
Free running lanes
Lane handles are allowed to be 'let loose'; in other words, you may execute a lane simply by:
lanes.gen( function( params) ... end ) ( ...)
Normally, such lanes sit in an eternal loop handling messages. Since the lane handle is gone, there is no way to control the lane from the outside, nor to read its potential return values. Then again, such a lane does not normally return anyway.
Priority
lanes.set_thread_priority( prio)
Besides setting a default priority in the generator settings, each thread can change its own priority at will. This is also true for the main Lua state.
The priority must be in the range [-3,+3].
Affinity
lanes.set_thread_affinity( affinity)
Each thread can change its own affinity at will. This is also true for the main Lua state.
Status
The current execution state of a lane can be read via its status member, providing one of these values:
- "pending": not started yet (shouldn't stay very long in that state)
- "running": currently executing
- "waiting": waiting on a Linda send/receive or a timer
- "done": finished executing; results can be read
- "error": met an error; reading the results will propagate it
- "cancelled": was cancelled
- "killed": was forcefully killed with lane_h:cancel()
This is similar to coroutine.status, which has: "running" / "suspended" / "normal" / "dead". Not using the exact same names is intentional.
Tracking
{{name = "name", status = "status", ...}|nil = lanes.threads()
Only available if lane tracking feature is compiled (see HAVE_LANE_TRACKING in lanes.c) and track_lanes is set.
Returns an array table where each entry is a table containing a lane's name and status. Returns nil if no lane is running.
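A hedged sketch of lane tracking (it assumes a build with HAVE_LANE_TRACKING and the track_lanes setting enabled; the lane body and slot names are arbitrary):
local lanes = require "lanes".configure{ track_lanes = true }

local linda = lanes.linda()
local h = lanes.gen( "base", function() linda:receive( 10, "quit") end)()

for _, entry in ipairs( lanes.threads() or {}) do
print( entry.name, entry.status)
end

linda:send( "quit", true)
h:join()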
Results and errors
set_error_reporting( "basic"|"extended")
Sets the error reporting mode. "basic" is selected by default.
A lane can be waited upon simply by reading its results. This can be done in two ways.
Indexing the lane handle (e.g. lane_h[1], as in the creation example above) makes sure the lane has finished, and gives its first (maybe only) return value. Other return values are available at the other lane_h indices.
If the lane ended in an error, it is propagated to the master state at this point.
[...]|[nil,err,stack_tbl]= lane_h:join( [timeout_secs] )
Waits until the lane finishes, or timeout seconds have passed. Returns nil, "timeout" on timeout (since v3.13), nil,err,stack_tbl if the lane hit an error, nil, "killed" if forcefully killed (starting with v3.3.0), or the return values of the lane. Unlike when reading the results in table fashion, errors are not propagated.
stack_tbl is a table describing where the error was thrown.
In "extended" mode, stack_tbl is an array of tables containing info gathered with lua_getinfo() ("source","currentline","name","namewhat","what").
In "basic mode", stack_tbl is an array of ":" strings. Use table.concat() to format it to your liking (or just ignore it).
If you use :join, make sure your lane's main function returns a non-nil value, so you can tell timeout and error cases apart from a successful return (using the .status property may be risky, since it might change between a timed-out join and the moment you read it).
require "lanes".configure()
f = lanes.gen( function() error "!!!" end)
a = f( 1)
--print( a[1]) -- propagates error
v, err = a:join() -- no propagation
if v == nil then
error( "'a' faced error"..tostring(err)) -- manual propagation
end
If you want to wait for multiple lanes to finish (any of a set of lanes), use a Linda object. Give each lane a specific id, and send that id over a Linda once that thread is done (as the last thing you do).
require "lanes".configure()
local sync_linda = lanes.linda()
f = lanes.gen( function() dostuff() sync_linda:send( "done", true) end)
a = f()
b = f()
c = f()
sync_linda:receive( nil, sync_linda.batched, "done", 3) -- wait for 3 lanes to write something in "done" slot of sync_linda
Cancelling
bool[,reason] = lane_h:cancel( "soft" [, timeout] [, wake_bool])
bool[,reason] = lane_h:cancel( "hard" [, timeout] [, force [, forcekill_timeout]])
bool[,reason] = lane_h:cancel( [mode, hookcount] [, timeout] [, force [, forcekill_timeout]])
cancel() sends a cancellation request to the lane.
The first argument is the mode; it can be one of "hard", "soft", "count", "line", "call", "ret". If mode is not specified, it defaults to "hard".
If mode is "soft", cancellation will only cause cancel_test() to return true, so that the lane can cleanup manually.
If wake_bool is true, the lane is also signalled so that execution returns from any pending linda operation. Linda operations detecting the cancellation request return lanes.cancel_error.
If mode is "hard", waits for the request to be processed, or a timeout to occur. Linda operations detecting the cancellation request will raise a special cancellation error (meaning they won't return in that case).
timeout defaults to 0 if not specified.
Other values of mode will asynchronously install the corresponding hook, then behave as "hard".
If force is true, forcekill_timeout can be set to tell how long Lanes will wait for the OS thread to terminate before raising an error. Windows threads always terminate immediately, but that might not always be the case with some pthread implementations.
Returns true, lane_h.status if the lane was already done (in "done", "error" or "cancelled" status), or if the cancellation was fruitful within the timeout period.
Returns false, "timeout" otherwise.
If the lane is still running after the timeout expired and force is true, the OS thread running the lane is forcefully killed. This means no GC, and probable OS resource leaks (thread stack, locks, DLL notifications). It should generally be used only as a last resort.
Cancellation is tested before going to sleep in receive() or send() calls, and after executing cancelstep Lua statements. Starting with version 3.0-beta, a pending receive() or send() call is awakened.
This means the execution of the lane will resume although the operation has not completed, to give the lane a chance to detect cancellation (even in the case the code waits on a linda with infinite timeout).
The code should be able to handle this situation appropriately if required (in other words, it should gracefully handle the fact that it didn't receive the expected values).
It is also possible to manually test for cancel requests with cancel_test().
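The following sketch illustrates soft cancellation with a wake-up, under the assumption that the lane polls cancel_test() after each linda operation (slot names and timeouts are arbitrary):
local lanes = require "lanes".configure()
local linda = lanes.linda()

local worker = lanes.gen( "base", function()
while true do
linda:receive( 0.5, "job")   -- returns early if the lane is soft-cancelled with wake_bool
if cancel_test() then
-- soft cancellation detected: clean up manually and return
return "stopped"
end
end
end)()

print( worker:cancel( "soft", 1, true))   -- true (plus the lane's status) if it stopped within 1 second
print( worker[1])                         -- "stopped"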
Finalizers
set_finalizer( finalizer_func)
void = finalizer_func( [err, stack_tbl])
The error call is used for throwing exceptions in Lua. What Lua does not offer, however, is scoped finalizers that would get called when a certain block of instructions gets exited, whether through peaceful return or abrupt error.
Since 2.0.3, Lanes registers a function set_finalizer in the lane's Lua state for doing this. Any functions given to it will be called in the lane Lua state, just prior to closing it. It is possible to set more than one finalizer. They are not called in any particular order.
An error in a finalizer itself overrides the state of the regular chunk (in practice, it is highly preferable not to have errors in finalizers). If one finalizer errors, the others may not get called. If a finalizer error occurs after an error in the lane body, then this new error replaces the previous one (including the full stack trace).
local lane_body = function()
set_finalizer( function( err, stk)
if err and type( err) ~= "userdata" then
-- no special error: true error
print( " error: "..tostring(err))
elseif type( err) == "userdata" then
-- lane [cancellation](#cancelling) is performed by throwing a special userdata as error
print( "after cancel")
else
-- no error: we just got finalized
print( "finalized")
end
end)
end
Lindas
Communication between lanes is completely detached from the lane handles themselves. By itself, a lane can only provide return values once it has finished, or throw an error. Runtime communication needs are handled by Linda objects, which are deep userdata instances. They can be provided to a lane as startup parameters, as upvalues, or in another Linda's message.
Access to a Linda object means a lane can read or write to any of its data slots. Multiple lanes can be accessing the same Linda in parallel. No application level locking is required; each Linda operation is atomic.
require "lanes".configure()
local linda = lanes.linda()
local function loop( max)
for i = 1, max do
print( "sending: " .. i)
linda:send( "x", i) -- linda as upvalue
end
end
a = lanes.gen( "", loop)( 10000)
while true do
local key, val = linda:receive( 3.0, "x") -- timeout in seconds
if val == nil then
print( "timed out")
break
end
print( tostring( linda) .. " received: " .. val)
end
Characteristics of the Lanes implementation of Lindas are:
- Keys can be of boolean, number, string or light userdata type. Tables and functions can't be keys, because their identity isn't preserved when transferred from one Lua state to another.
- Values can be any type supported by inter-state copying (same limits as for function parameters and upvalues).
- The consuming method is :receive (not in).
- The non-consuming method is :get (not rd).
- There are two producer-side methods: :send and :set (not out).
- send allows sending multiple values -atomically- to a given key.
- receive can wait for multiple keys at once.
- receive has a batched mode to consume more than one value from a single key, as in linda:receive( 1.0, linda.batched, "key", 3, 6).
- Individual keys' queue length can be limited, balancing speed differences in a producer/consumer scenario (making :send wait).
- tostring( linda) returns a string of the form "Linda: <opt_name>".
- Several lindas may share the same keeper state. Since version 3.9.1, state assignment can be controlled with the linda's group (an integer). All lindas belonging to the same group share the same keeper state. One keeper state may be shared by several groups.
h = lanes.linda( [opt_name, [opt_group]])
[true|lanes.cancel_error] = h:send( [timeout_secs,] [h.null,] key, ...)
[key, val]|[lanes.cancel_error] = h:receive( [timeout_secs,] key [, ...])
[key, val [, ...]]|[lanes.cancel_error] = h:receive( timeout, h.batched, key, n_uint_min[, n_uint_max])
[true|lanes.cancel_error] = h:limit( key, n_uint)
The send() and receive() methods use Linda keys as FIFO queues (first in, first out). Timeouts are given in seconds (millisecond accuracy). If using numbers as the first Linda key, one must explicitly give nil as the timeout parameter to avoid ambiguities.
By default, stack sizes are unlimited but limits can be enforced using the limit() method. This can be useful to balance execution speeds in a producer/consumer scenario. Any negative value removes the limit.
A limit of 0 is allowed to block everything.
(Since version 3.7.7) if the key was full but the limit change added some room, limit() returns true and the linda is signalled so that send()-blocked threads are awakened.
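A small sketch of limit() behaviour (the slot name is arbitrary, and the exact return values assume the post-3.7.7 semantics described above):
local lanes = require "lanes".configure()
local linda = lanes.linda()

linda:limit( "x", 2)               -- at most 2 values may be queued under key "x"

print( linda:send( 0, "x", "a"))   -- true
print( linda:send( 0, "x", "b"))   -- true
print( linda:send( 0, "x", "c"))   -- false: the queue is full and nothing was consumed within the 0-second timeout

linda:limit( "x", -1)              -- any negative value removes the limit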
Note that any number of lanes can be reading or writing a Linda. There can be many producers, and many consumers. It's up to you.
Hard cancellation will cause pending linda operations to abort execution of the lane through a cancellation error. This means that you have to install a finalizer in your lane if you want to run some code in that situation.
send() returns true if the sending succeeded, and false if the queue limit was met, and the queue did not empty enough during the given timeout.
(Since version 3.7.8) send() returns lanes.cancel_error if interrupted by a soft cancel request.
If no data is provided after the key, send() raises an error. Since version 3.9.3, if provided with linda.null before the actual key and there is no data to send, send() sends a single nil.
Also, if linda.null is sent as data in a linda, it will be read as a nil.
Equally, receive() returns a key and the value extracted from it, or nothing for timeout. Note that nils can be sent and received; the key value will tell it apart from a timeout.
Version 3.4.0 introduces an API change in the returned values: receive() returns the key followed by the value(s), in that order, and not the other way around.
(Since version 3.7.8) receive() returns lanes.cancel_error if interrupted by a soft cancel request.
Multiple values can be sent to a given key at once, atomically (the send will fail unless all the values fit within the queue limit). This can be useful for multiple-producer scenarios, if the protocols used deliver data in streams of multiple units. Atomicity prevents producers from garbling each other's messages, which could happen if the units were sent individually.
When receiving from multiple slots, the keys are checked in order, which can be used for making priority queues.
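For instance, a two-level priority queue can be sketched by listing the higher-priority key first (slot names are arbitrary):
local lanes = require "lanes".configure()
local linda = lanes.linda()

linda:send( "low", "routine job")
linda:send( "high", "urgent job")

-- keys are checked in the order given: "high" is drained before "low"
local key, val = linda:receive( 0, "high", "low")
print( key, val) -- high   urgent job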
bool|lanes.cancel_error = linda_h:set( key [, val [, ...]])
[[val [, ...]]|lanes.cancel_error] = linda_h:get( key [, count = 1])
The table access methods are for accessing a slot without queuing or consuming. They can be used for making shared tables of storage among the lanes.
Writing to a slot never blocks, because it ignores the limit. It overwrites the existing value and clears any possible queued entries.
Reading doesn't block either because get() returns whatever is available (which can be nothing), up to the specified count.
Table access and send()/receive() can be used together; reading a slot essentially peeks the next outcoming value of a queue.
set() signals the linda for write if a value is stored. If nothing special happens, set() returns nothing.
Since version 3.7.7, if the key was full but the new data count of the key after set() is below its limit, set() returns true and the linda is also signaled for read so that send()-blocked threads are awakened.
Since version 3.8.0, set() can write several values at the specified key, writing nil values is now possible, and clearing the contents at the specified key is done by not providing any value.
Also, get() can read several values at once. If the key contains no data, get() returns no value. This can be used to separate the case when reading stored nil values.
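A short sketch of set()/get() usage (the multi-value behaviour assumes version 3.8.0 or later, as described above; the slot name is arbitrary):
local lanes = require "lanes".configure()
local linda = lanes.linda()

linda:set( "config", "red", "blue")   -- store two values, clearing anything queued under "config"
print( linda:get( "config", 2))       -- red   blue

linda:set( "config")                  -- providing no value clears the slot
print( linda:get( "config"))          -- (nothing)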
Since version 3.8.4, trying to send or receive data through a cancelled linda does nothing and returns lanes.cancel_error.
[val] = linda_h:count( [key[,...]])
Returns some information about the contents of the linda.
If no key is specified, and the linda is empty, returns nothing.
If no key is specified, and the linda is not empty, returns a table of key/count pairs giving the number of items in each of the existing keys of the linda. This count can be 0 if the key has been used but is empty.
If a single key is specified, returns the number of pending items, or nothing if the key is unknown.
If more than one key is specified, return a table of key/count pairs for the known keys.
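A small sketch of the possible count() results (slot names are arbitrary):
local lanes = require "lanes".configure()
local linda = lanes.linda()

linda:send( "a", 1)
linda:send( "a", 2)
linda:send( "b", 3)

print( linda:count( "a"))         -- 2
print( linda:count( "unknown"))   -- (nothing: unknown key)

for k, n in pairs( linda:count()) do print( k, n) end -- a 2, b 1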
linda_h:dump() returns a table describing the full contents of a linda, or nil if the linda wasn't used yet.
void = linda_h:cancel("read"|"write"|"both"|"none")
(Starting with version 3.8.4) Signals the linda so that lanes waiting for read, write, or both, wake up. All linda operations (including get() and set()) will return lanes.cancel_error as when the calling lane is soft-cancelled as long as the linda is marked as cancelled.
"none" reset the linda's cancel status, but doesn't signal it.
If not void, the lane's cancel status overrides the linda's cancel status.
Granularity of using Lindas
A linda is a gateway to read and write data inside some hidden Lua states, called keeper states. Lindas are hashed to a fixed number of keeper states, which are a locking entity.
The data sent through a linda is stored inside the associated keeper state in a Lua table where each linda slot is the key to another table containing a FIFO for that slot.
Each keeper state is associated with an OS mutex, to prevent concurrent access to the keeper state. The linda itself uses two signals to be made aware of operations occurring on it.
Whenever Lua code reads from or writes to a linda, the mutex is acquired. If linda limits don't block the operation, it is fulfilled, then the mutex is released.
If the linda has to block, the mutex is released and the OS thread sleeps, waiting for a linda operation to be signalled. When an operation occurs on the same linda, possibly fulfilling the condition, or a timeout expires, the thread wakes up.
If the thread is woken but the condition is not yet fulfilled, it goes back to sleep, until the timeout expires.
When a lane is cancelled, the signal it is waiting on (if any) is signalled. In that case, the linda operation will return lanes.cancel_error.
A single Linda object provides an infinite number of slots, so why would you want to use several?
There are some important reasons:
- Access control. If you don't trust certain code completely, or just to modularize your design, use one Linda for one usage and another one for the other. This keeps your code clear and readable. You can pass multiple Linda handles to a lane with practically no added cost.
- Namespace control. Linda keys have a "flat" namespace, so collisions are possible if you try to use the same Linda for too many separate uses.
- Performance. Changing any slot in a Linda causes all pending threads for that Linda to be momentarily awakened (at least at the C level). This can degrade performance due to unnecessary OS-level context switches. The more keeper states you declare with lanes.configure(), the less of a problem this should be. On the other hand, you need a common Linda for waiting on multiple keys; you cannot wait for keys from two separate Linda objects at the same time.
Actually, you can. Make separate lanes to wait each, and then multiplex those events to a common Linda, but... :).
Timers
void = lanes.timer( linda_h, key, date_tbl|first_secs [,period_secs])
Timers are implemented as a lane. They can be disabled by setting "with_timers" to nil or false in lanes.configure().
Timers can be run once, or in a recurring fashion (period_secs > 0). The first occurrence can be given either as a date or as a relative delay in seconds. The date table is like what os.date("*t") returns, in the local time zone.
Once a timer expires, the key is set with the current time (in seconds, same offset as os.time() but with millisecond accuracy). The key can be waited upon using the regular Linda :receive() method.
A timer can be stopped simply with first_secs=0|nil and no period.
local lanes = require "lanes"
lanes.configure()
local linda = lanes.linda()
-- First timer once a second, not synchronized to wall clock
--
lanes.timer( linda, "sec", 1, 1)
-- Timer to a future event (next even minute); wall clock synchronized
--
local t = os.date( "*t", os.time() + 60) -- now + 1min
t.sec = 0
lanes.timer( linda, "min", t, 60) -- reoccur every minute (sharp)
while true do
local key, v = linda:receive( "sec", "min")
print( "Timer "..key..": "..v)
end
NOTE: Timer keys are set, not queued, so missing a beat is possible especially if the timer cycle is extremely small. The key value can be used to know the actual time passed.
Design note: Having the API as lanes.timer() is intentional. An alternative would be linda_h:timer(), but timers are not traditionally seen as part of Lindas. Also, it would mean any lane getting a Linda handle would be able to modify timers on it. A third choice could be abstracting the timers out of the Linda realm altogether (timer_h = lanes.timer( date|first_secs, period_secs)), but that would mean separate waiting functions for timers and lindas. Even if a linda object and key were returned, that key couldn't be waited upon simultaneously with one's general linda events. The current system gives maximum capability with minimum API, and any smoothing can easily be crafted in Lua at the application level.
{[{linda, slot, when, period}[,...]]} = lanes.timers()
The full list of active timers can be obtained. Obviously, this is a snapshot, and non-repeating timers might no longer exist by the time the results are inspected.
void = lanes.sleep( [seconds|false])
(Since version 3.9.7) A very simple way of sleeping when nothing else is available. It is implemented by attempting to read data from an unused channel of the internal linda used for timers (this linda exists even when timers aren't enabled). The default duration is 0, which should only cause a thread context switch.
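A trivial usage sketch:
local lanes = require "lanes".configure()

lanes.sleep( 0.2)   -- block the calling thread for roughly 200 ms
lanes.sleep()       -- no argument: near-zero wait, essentially a thread yield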
Locks etc.
Lanes does not generally require locks or critical sections to be used, at all. If necessary, a limited queue can be used to emulate them. lanes.lua offers some sugar to make it easy:
lock_func|lanes.cancel_error = lanes.genlock( linda_h, key [,N_uint=1])
bool|lanes.cancel_error = lock_func( M_uint [, "try"] ) -- acquire
..
bool|lanes.cancel_error = lock_func( -M_uint) -- release
The generated function acquires M tokens from the N available, or releases them if the value is negative. The acquiring call will suspend the lane, if necessary. Use M=N=1 for a critical section lock (only one lane allowed to enter).
When passsing "try" as second argument when acquiring, then lock_func operates on the linda with a timeout of 0 to emulate a TryLock() operation. If locking fails, lock_func returns false. "try" is ignored when releasing (as it it not expected to ever have to wait unless the acquisition/release pairs are not properly matched).
Upon successful lock/unlock, lock_func returns true (always the case when block-waiting for completion).
Note: The generated locks are not recursive (A single lane locking several times will consume tokens at each call, and can therefore deadlock itself). That would need another kind of generator, which is currently not implemented.
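A hedged sketch of a critical section built with genlock (the key name is arbitrary):
local lanes = require "lanes".configure()
local linda = lanes.linda()

-- N = 1 token, each user acquires M = 1: a plain critical section
local lock = lanes.genlock( linda, "my_lock", 1)

lock( 1)    -- acquire (blocks until the token is available)
-- ... code that must not run concurrently ...
lock( -1)   -- release

-- non-blocking attempt
if lock( 1, "try") then
-- got the lock
lock( -1)
end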
Similar sugar exists for atomic counters:
atomic_func|lanes.cancel_error = lanes.genatomic( linda_h, key [,initial_num=0.0])
new_num|lanes.cancel_error = atomic_func( [diff_num=+1.0])
Each time it is called, the generated function will change linda[key] atomically, without other lanes being able to interfere. The new value is returned. Use a diff of 0.0 to just read the current value.
Note that the generated functions can be passed on to other lanes.
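And a corresponding sketch for an atomic counter (the slot name is arbitrary; outputs assume the default initial value of 0.0):
local lanes = require "lanes".configure()
local linda = lanes.linda()

local counter = lanes.genatomic( linda, "hits")

print( counter())      -- 1: the default diff is +1.0
print( counter( 10))   -- 11
print( counter( 0))    -- 11: a diff of 0.0 just reads the current value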
Other issues
Limitations on data passing
Data passed between lanes (either as starting parameters, return values, upvalues or via Lindas) must conform to the following:
- Booleans, numbers, strings, light userdata, Lua functions and tables of such can always be passed.
- Versions 3.4.1 and earlier had an undocumented limitation: Lua functions with an indirectly recursive Lua function upvalue raised an error when transferred. This limitation disappeared with version 3.4.2.
- Cyclic tables and/or duplicate references are allowed and reproduced appropriately, but only within the same transmission.
- Using the same source table in multiple Linda messages keeps no ties between the tables (this is the same reason why tables can't be used as keys).
- Objects (tables with a metatable) are copyable between lanes.
- Metatables are assumed to be immutable; they are internally indexed and only copied once per each type of objects per lane.
- C functions (lua_CFunction) referring to LUA_ENVIRONINDEX or LUA_REGISTRYINDEX might not do what you expect in the target, since they will actually use a different environment.
- Lua 5.2 functions may have a special _ENV upvalue if they perform 'global namespace' lookups. Unless special care is taken, this upvalue defaults to the table found at LUA_RIDX_GLOBALS. Obviously, we don't want to transfer the whole global table along with each Lua function. Therefore, any upvalue equal to the global table is not transferred by value, but simply bound to the global table in the destination state. Note that this also applies when Lanes is built for Lua 5.1, as it doesn't hurt.
- Full userdata can be passed only if it is prepared using the deep userdata system, which handles its lifespan management.
- In particular, lane handles cannot be passed between lanes.
- For other full userdata, Lanes either throws an error or attempts a light userdata demotion (see the demote_full_userdata option).
- Coroutines cannot be passed. A coroutine's Lua state is tied to the Lua state that created it, and there is no way the mixed C/Lua stack of a coroutine can be transferred from one Lua state to another.
- Starting with version 3.10.1, if the metatable contains __lanesignore, the object is skipped and nil is transferred instead.
Notes about passing C functions
Originally, a C function was copied from one Lua state to another as follows:
// expects a C function on top of the source Lua stack
void copy_func( lua_State* dest, lua_State* source)
{
// extract the C function pointer from the source
lua_CFunction func = lua_tocfunction( source, -1);
// transfer upvalues
int nup = transfer_upvalues( dest, source);
// dest Lua stack now contains a copy of all upvalues
lua_pushcclosure( dest, func, nup);
}
This has the main drawback of not being LuaJIT-compatible, because some functions registered by LuaJIT are not regular C functions, but specially optimized implementations. As a result, lua_tocfunction() returns NULL for them.
Therefore, Lanes no longer transfers functions that way. Instead, functions are transfered as follows (more or less):
// expects a C function on top of the source Lua stack
void copy_func( lua_State* dest, lua_State* source)
{
// fetch function 'name' from source lookup database
char const* funcname = lookup_func_name( source, -1);
// lookup a function bound to this name in the destination state, and push it on the stack
push_resolved_func( dest, funcname);
}
The devil lies in the details: what does "function lookup" mean?
Since functions are first class values, they don't have a name. All we know for sure is that when a C module registers some functions, they are accessible to the script that required the module through some exposed variables.
For example, loading the string base library creates a table accessible when indexing the global environment with key "string". Indexing this table with "match", "gsub", etc. will give us a function.
When a lane generator creates a lane and performs initializations described by the list of base libraries and the list of required modules, it recursively scans the table created by the initialisation of the module, looking for all values that are C functions.
Each time a function is encountered, the sequence of keys that reached that function is concatenated into a (hopefully) unique name. The [name, function] and [function, name] pairs are both stored in a lookup table in all involved Lua states (the main Lua state and the lanes' states).
Then when a function is transfered from one state to another, all we have to do is retrieve the name associated to a function in the source Lua state, then with that name retrieve the equivalent function that already exists in the destination state.
Note that there is no need to transfer upvalues, as they are already bound to the function registered in the destination state. (And in any event, it is not possible to create a closure from a C function pushed on the stack, it can only be created with a lua_CFunction pointer).
There are several issues here:
- Some base libraries register some C functions in the global environment. Because of that, Lanes must scan the global namespace to find all C functions (such as error, print, etc.).
- Nothing prevents a script from creating other references to a C function. For example, one could write string2 = string. When iterating over the keys of the global table, Lanes has no guarantee that it will hit "string" before or after "string2". However, the values associated with string.match and string2.match are the same C function. Lanes doesn't normally expect a C function value to be encountered more than once; when it happens, the shortest name that was computed is retained. If Lanes processed "string2" first, then when the Lua state containing the "string2" global sends function string.match, lookup_func_name returns the name "string2.match", with the obvious effect that push_resolved_func won't find "string2.match" in the destination lookup database, thus failing the transfer (even though the function exists there, referenced under the name "string.match").
- Lua 5.2 introduced a hash randomizer seed, which causes table iteration to yield a different key order on different VMs, even when the tables are populated exactly the same way. When Lua is built with compatibility options (such as LUA_COMPAT_ALL), this causes several base libraries to register functions under multiple names. Combined with the randomizer, this can cause the first encountered name of a function to differ between VMs, which breaks function transfer. Even under Lua 5.1, this may cause trouble if a module registers a function under several keys. To circumvent this, Lanes has to select one name among all candidates, and the rule is to keep the 'smallest' one: first in byte count, then in lexical order.
- Another, more immediate reason for a failed transfer is when the destination state doesn't know about the C function that has to be transferred. This occurs if a function is transferred into a lane before the lane had a chance to scan the module. If the C function is sent through a linda, it is enough for the destination lane body to have required the module before the function is sent. But if the lane body provided to the generator has a C function as an upvalue, the transfer itself must succeed, therefore the module that exports that C function must be required in the destination lane before the lane body starts executing. This is where the .required option plays its role.
Required of module makers
Most Lua extension modules should work unaltered with Lanes. If the module simply ties C side features to Lua, everything is fine without alterations. The luaopen_...() entry point will be called separately for each lane, where the module is require'd from.
If, however, it also does one-time C-side initializations, these should be wrapped in a one-time-only construct such as the one below.
int luaopen_module( lua_State *L )
{
static char been_here; /* 0 by ANSI C */
// Calls to 'require' serialized by Lanes; this is safe.
if (!been_here)
{
been_here= 1;
... one time initializations ...
}
... binding to Lua ...
}
Clonable full userdata in your own apps
Starting with version 3.13.0, a new way of passing full userdata across lanes uses a new __lanesclone metamethod. When such a userdata is cloned, Lanes calls __lanesclone once, in the context of the source lane.
The call receives the clone and the original as light userdata, plus the actual userdata size, as in clone:__lanesclone(original, size), and should perform the actual cloning.
A typical implementation would look like (BEWARE, THIS CHANGED WITH VERSION 3.16.0):
static int clonable_lanesclone( lua_State* L)
{
switch( lua_gettop( L))
{
case 3:
{
struct s_MyClonableUserdata* self = lua_touserdata( L, 1);
struct s_MyClonableUserdata* from = lua_touserdata( L, 2);
size_t len = lua_tointeger( L, 3);
assert( len == sizeof(struct s_MyClonableUserdata));
*self = *from;
}
return 0;
default:
(void) luaL_error( L, "Lanes called clonable_lanesclone with unexpected parameters");
}
return 0;
}
NOTE: In the event the source userdata has uservalues, it is not necessary to create them for the clone, Lanes will handle their cloning.
Of course, more complex objects may require smarter cloning behavior than a simple memcpy. Also, the module initialisation code should make each metatable accessible from the module table itself as in:
int luaopen_deep_test( lua_State* L)
{
luaL_newlib( L, deep_module);
// preregister the metatables for the types we can instantiate so that Lanes can know about them
if( luaL_newmetatable( L, "clonable"))
{
luaL_setfuncs( L, clonable_mt, 0);
lua_pushvalue(L, -1);
lua_setfield(L, -2, "__index");
}
lua_setfield(L, -2, "__clonableMT"); // actual name is not important
if( luaL_newmetatable( L, "deep"))
{
luaL_setfuncs( L, deep_mt, 0);
lua_pushvalue(L, -1);
lua_setfield(L, -2, "__index");
}
lua_setfield(L, -2, "__deepMT"); // actual name is not important
return 1;
}
Then a new clonable userdata instance can be created just like any non-Lanes-aware userdata, as long as its metatable contains the aforementioned __lanesclone method.
int luaD_new_clonable( lua_State* L)
{
lua_newuserdata( L, sizeof( struct s_MyClonableUserdata));
luaL_setmetatable( L, "clonable");
return 1;
}
Deep userdata in your own apps
The mechanism Lanes uses for sharing Linda handles between separate Lua states can be used for custom userdata as well. Here's what to do.
- Provide an identity function for your userdata, in C. This function is used for creation and deletion of your deep userdata (the shared resource), and for making metatables for the state-specific proxies for accessing it. The prototype is
void* idfunc( lua_State* L, DeepOp op_);
op_ can be one of:
- eDO_new: requests the creation of a new object, whose pointer is returned. Starting with version 3.13.0, the object should embed a DeepPrelude structure as header and initialize its magic member with the current DEEP_VERSION.
- eDO_delete: receives this same pointer on the stack as a light userdata, and should cleanup the object.
- eDO_metatable: should build a metatable for the object. Don't cache the metatable yourself, Lanes takes care of it (eDO_metatable should only be invoked once per state). Just push the metatable on the stack.
- eDO_module: requests the name of the module that exports the idfunc, to be returned. It is necessary so that Lanes can require it in any lane state that receives a userdata. This is to prevent crashes in situations where the module could be unloaded while the idfunc pointer is still held.
Take a look at linda_id in lanes.c or deep_test_id in deep_test.c.
- Include "deep.h" and either link against Lanes or statically compile compat.c deep.c tools.c universe.c into your module if you want to avoid a runtime dependency for users that will use your module without Lanes.
- Instantiate your userdata using luaG_newdeepuserdata(), instead of the regular lua_newuserdata(). Given an idfunc, it sets up the support structures and returns a state-specific proxy userdata for accessing your data. This proxy can also be copied over to other lanes.
- To access the deep userdata from your C code, use luaG_todeep() instead of the regular lua_touserdata().
Deep userdata management takes care of hooking into __gc methods, and of reference counting to see how many proxies still exist for accessing the data. Once there are none, the data will be freed through a call to the idfunc you provided.
Deep userdata in transit inside keeper states (sent in a linda but not yet consumed) don't call idfunc(eDO_delete) and aren't considered by reference counting. The rationale is the following:
If some non-keeper state holds a deep userdata for some deep object, then even if the keeper collects its own deep userdata, it shouldn't be cleaned up since the refcount is not 0.
OTOH, if a keeper state holds the last deep userdata for some deep object, then no lane can do actual work with it. Deep userdata's idfunc() is never called from a keeper state.
Therefore, Lanes can just call idfunc(eDO_delete) when the last non-keeper-held deep userdata is collected, as long as it doesn't do the same in a keeper state after that, since any remaining deep userdata in keeper states now hold stale pointers.
NOTE: The lifespan of deep userdata may exceed that of the Lua state that created it. The allocation of the data storage should not be tied to the Lua state used. In other words, use malloc()/free() or similar memory handling mechanism.
Lane handles don't travel
Lane handles are not implemented as deep userdata, and thus cannot be copied across lanes. This is intentional; problems would occur at least when multiple lanes wait upon one to get ready. It is also a matter of design simplicity.
The same benefits can be achieved by having a single worker lane spawn all the sublanes, and keep track of them. Communications to and from this lane can be handled via a Linda.
Beware with print and file output
In multithreaded scenarios, giving multiple parameters to print() or file:write() may cause them to be overlapped in the output, something like this:
A: print( 1, 2, 3, 4 )
B: print( 'a', 'b', 'c', 'd' )
1 a b 2 3 c d 4
Lanes does not protect you from this behaviour. The thing to do is either to concentrate your output to a certain lane per stream, or to concatenate output into a single string before you call the output function.
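For instance, a minimal sketch of the concatenate-first approach:
-- build the whole line first, then emit it with a single call
local line = table.concat( { 1, 2, 3, 4 }, "\t")
print( line)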
Performance considerations
Lanes is about making multithreading easy, and natural in the Lua state of mind. Expect performance not to be an issue, if your program is logically built. Here are some things one should consider, if best performance is vital:
- Data passing (parameters, upvalues, Linda messages) is generally fast, doing two binary state-to-state copies (from source state to hidden state, hidden state to target state). Remember that not only the function you specify but also its upvalues, their upvalues, etc. etc. will get copied.
- Lane startup is fast (thousands of lanes per second), depending on the number of standard libraries initialized. Initializing all standard libraries is about 3-4 times slower than having no standard libraries at all. If you launch a lot of lanes per second, make sure you give them the minimal necessary set of libraries.
- Waiting Lindas are woken up (and execute some hidden Lua code) each time any key in the Lindas they are waiting for is changed. This may cause a noticeable slow-down (not measured, just a gut feeling) if a lot of Linda keys are used. Using separate Linda objects for logically separate issues will help (which is good practice anyhow).
- Linda objects are light. The memory footprint is two OS-level signalling objects (HANDLE or pthread_cond_t) for each, plus one C pointer for the proxy per each Lua state using the Linda. Next to nothing.
- Timers are light. You can probably expect timers up to 0.01 second resolution to be useful, but that is very system specific. All timers are merged into one main timer state (see timer.lua); no OS side timers are utilized.
- If you are using a lot of Linda objects, it may be useful to try having more of these keeper states. By default, only one is used (see lanes.configure()).
Cancelling cancel
Cancellation of lanes uses the Lua error mechanism with a special lightuserdata error sentinel. If you use pcall in code that needs to be cancellable from the outside, the special error might not get through to Lanes, preventing the lane from being cleanly cancelled. You should re-throw any lightuserdata error you catch.
This mechanism can actually be used by the application to detect cancellation, perform its own cleanup duties, and pass the error on so Lanes will receive it. If Lanes does not get a clean cancellation from a lane in due time, it may forcefully kill the lane.
The sentinel is exposed as lanes.cancel_error, if you wish to use its actual value.
Change log
See CHANGES.
For feedback, questions and suggestions: