RScheme ( http://www.rscheme.org ) was originally chosen
because a code review showed quite clean code.
Its performance was not the best, but reasonable
and sometimes surprising (PerformanceConsiderations);
it has a usable foreign function interface to C code
and, most of all, the Texas persistent store.
Meanwhile the persistent store is still worth its salt,
but RScheme development is slow (a.k.a. stable).
Askemos tries to get along with a minimum
of RScheme-specific features.
Problems with RScheme: the compiler is not especially smart,
the executable is heavy, and sometimes the signal handling gets mixed
up. Otherwise it's great.
TODO: a long-standing issue was the broken loop optimisation.
This has been fixed since build 14 (5th Nov. 2003); now we can
roll back the rewritten match code and integrate it with the
From Kakkad's work on the Texas persistent store it should be possible to
interface to C++ code. "With a little help" from Manuel Serrano, bigloo could
be an alternative. But bigloo doesn't have threads yet...
+ reachability-based persistent memory (the best I know of)
+ medium-size code base
- own foreign function interface is somewhat awkward
- setup could be easier
> RScheme global and module environments are implemented with hash
> tables. I don't know how local environments are compiled.
The hash tables map names to binding objects, but at runtime a direct
pointer to the binding object is present in the "literal frame" (also
called the <template>), so lookups and side-effects are constant-time.
At compile time, local environments are essentially nested lists (list of
depths, and, per depth, a list of variable -> compile-time-binding
objects). During compilation, references are transformed into "lexical
addresses" which are essentially a < frame # , slot # > tuple, so lookups
(and side-effects) are constant-time in the slot # and linear time in the
frame #.
> But this
> knowledge could effect performance. Will differences as shown in the
> following examples effect performance or will the compiler do the
> right thing?
> a) localizing global variable bindings
> (define (foo) ...)
> (define (bar) ... (foo) ...) ; global reference
> (define (foo) ...)
> (define bar (let ((foo foo))
> (lambda () ...(foo)...))) ; local reference
The first is probably faster because the `foo' lookup is closer to the
> b) "lifting" constant definitions (forgot the right term)
> (define (foo)
> (define bar (cons 1 2)) ; simple implementation: one cons per call of foo
> ... bar ...)
> (define foo
> (let ((bar (cons 1 2))) ; one cons in initialization
> ... bar ...))
The semantics are different, but if you only want structural equivalence,
then certainly the latter should be faster because there is no run-time
allocation.
If you want to see some details, try turning on AML (Abstract Machine
Language) printing and compiling some definitions:
top=>(define (foo x) (cons x 1))
===== wrapping aml =====
(set! (reg 0 #f) (<obj> ref (reg 0 #f)))
(set! (reg 0 #f) (<obj> ref (reg 0 x)))
(set! (reg 1 #f) (<fixnum> primop #[primop raw-int->fixnum]
(<raw-int> int 1)))
(applyf 2 (<function> ref (tl-var/b 0 cons)))
value := foo
You can see pretty much exactly what is going on here, when things get
register-allocated, when you have a top-level-var lookup, etc.
I see... I didn't realize you were using RScheme in this pthreads
context.
In that case, the signalling into RScheme would be via
rscheme_intr_call*() (there are three versions, corresponding to zero,
one, or two arguments to a procedure):
void rscheme_intr_call0( obj thunk );
void rscheme_intr_call1( obj thunk, obj a );
void rscheme_intr_call2( obj thunk, obj a, obj b );
As you mention, you have to make sure that the values that the pthread
side is holding on to (e.g., the procedure or any non-immob values) will
not get GC'd.
If the basic model is that the RScheme side "calls out" to get some work
done on the pthread side, and when it's done the answer comes back, then I
don't think you'd need a pthread-level sync structure. The pthread side
would just use rscheme_intr_call*() to return the answer.
You said you have a pool of pthreads; they're sitting around waiting to
handle requests from the scheme side? I guess you would need a
pthread-level sync structure to push stuff into the pthread side. It's a
bit more complicated than one-thread-per-request, but more efficient if
you've got a bunch of them and they only do a little bit of work per
request.
What kind of data objects are flying between the scheme and pthread side?
Note that the pthread side can't perform GC-sensitive operations like
allocation on the GC'd heap or storing pointers, etc.
On Wed, 2 Jun 2004 Joerg.Wittenberger@Extern.Sparkassen-Informatik.de wrote:
> I'm now sitting in front of two days work: a pool of pthreads basically
> patterned after rscheme thread pools ready to do some work.
> This could make an excellent start. The idea is to have one of them run the
> rscheme side of things. Just how would I signal into
> the rscheme mechanism?
> Thanks for the rscheme_intr_call*, that's way easier than I expected.
No problem; note that I haven't tested it much, and certainly not in
a true multithreaded environment (in fact, it is known that the osglue
needs additional synchronization to support multithreaded interaction;
currently, it just blocks signals, which is insufficient).