Tuesday, February 17, 2009

Ch8, Ex2

I. Have a look at the kernel transform of the given program:
local X in
local Y in
Y = X+1
{Exchange C X Y}
end
end

Now it's clear that X is unbound, so the program above will keep waiting at line Y=X+1 for X to be bound (which will never happen) and will never execute the Exchange. Here is the program with a simple fix:
L = {NewLock}
lock L then X Y in
{Exchange C X Y}
Y = X+1
end

II. The simple fix given above will not work in a language that doesn't have dataflow variables, because it *relies on* the dataflow behavior of variable Y.

III. Another fix that doesn't use dataflow variables and is therefore possible in a language without them:
L = {NewLock}
lock L then
C:=@C+1
end
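The same lock-protected increment (fix III) can be sketched in a mainstream language; here is a minimal Python illustration of the idea, with threading.Lock playing the role of {NewLock} (names are mine):

```python
import threading

lock = threading.Lock()   # plays the role of {NewLock}
counter = 0               # plays the role of the cell C

def increment():
    global counter
    with lock:            # lock L then ... end
        counter += 1      # C := @C + 1, now atomic w.r.t. other threads

threads = [threading.Thread(target=increment) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)            # 100: no increments are lost
```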

Ch8, Ex1

Possible number of interleavings for n threads with k operations each:

(nk)! / (k!)^n

which, by Stirling's approximation, is roughly

n^(nk + 1/2) / (2*k*PI)^((n-1)/2)
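As a sanity check, the exact count and the Stirling-based estimate can be compared numerically. This is just my own Python sketch, not part of the exercise:

```python
import math

def interleavings(n, k):
    # exact count: (nk)! / (k!)^n
    return math.factorial(n * k) // math.factorial(k) ** n

def stirling_estimate(n, k):
    # approximation: n^(nk + 1/2) / (2*k*pi)^((n-1)/2)
    return n ** (n * k + 0.5) / (2.0 * k * math.pi) ** ((n - 1) / 2.0)

# e.g. 2 threads with 2 operations each: 4!/(2!*2!) = 6 interleavings
print(interleavings(2, 2))
print(stirling_estimate(2, 2))   # close to the exact value 6
```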

Ch7, Ex8 (todo)

todo

Ch7, Ex7 (tbd)

todo

Ch7, Ex6

Sequential stateful without short-circuiting:
An array of N elements is created, all initialized to true. We iterate over the array starting from index one, setting entries to false whenever (Count mod K) is 0. We also keep track of the number of survivors, and as soon as it reaches 1 we return the closest array index that is still true (i.e. alive). We never remove elements from the array (no short-circuiting).
declare
fun {Josephus N K}
A = {NewArray 1 N true}
fun {NextI I} (I mod N)+1 end
fun {Iter N C I}
if N==1 andthen {Array.get A I}==true then I
elseif N==1 then {Iter N C {NextI I}}
elseif {Array.get A I}==true andthen (C mod K)==0 then
{Array.put A I false}
{Iter N-1 C+1 {NextI I}}
elseif {Array.get A I}==true then
{Iter N C+1 {NextI I}}
else {Iter N C {NextI I}}
end
end
in
{Iter N 1 1}
end
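The same array-based simulation, transliterated to Python as a sanity check (my own sketch; the blog's counting scheme is the classic Josephus rule, so known instances can be checked against it):

```python
def josephus(n, k):
    # walk the circle, counting only live persons; kill every k-th counted one
    alive = [True] * (n + 1)        # 1-based, like the Oz array
    remaining, count, i = n, 1, 1
    while remaining > 1:
        if alive[i]:
            if count % k == 0:      # (Count mod K) == 0: this person dies
                alive[i] = False
                remaining -= 1
            count += 1              # count advances only over live persons
        i = i % n + 1               # same as {NextI I}
    return next(j for j in range(1, n + 1) if alive[j])

# classic instance: 41 people, every 3rd killed -- the survivor is position 31
print(josephus(41, 3))
```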

Sequential stateful with short-circuiting:
The only difference here is that we use a dictionary instead of an array, and remove entries (rather than setting them to false) whenever (Count mod K) is 0. Structurally both programs are the same.
declare
fun {Josephus N K}
D = {NewDictionary}
for I in 1..N do D.I:=t end
fun {NextI I} (I mod N)+1 end
fun {Iter N C I}
if N==1 andthen {Dictionary.condGet D I f}==t then I
elseif N==1 then {Iter N C {NextI I}}
elseif {Dictionary.condGet D I f}==t
andthen (C mod K)==0 then
{Dictionary.remove D I}
{Iter N-1 C+1 {NextI I}}
elseif {Dictionary.condGet D I f}==t then
{Iter N C+1 {NextI I}}
else {Iter N C {NextI I}}
end
end
in
{Iter N 1 1}
end

Structure of the active-object-based version from the book:
Here we create a doubly linked list of Victim active objects and pass the kill(I) token around. Every Victim on the list passes the token to the next object in the list (and is also removed if I mod K is 0). This way the token keeps circulating until there is just one item (the survivor) remaining in the list.

Apart from the models used, the main differences between the versions above and those in the book are in code size and iteration. In the versions above, iteration is done explicitly, while in the book's versions it is implicit in the message-passing technique.

b) I'm skipping the formal analysis of finding the (n,k) region for which short-circuiting is helpful, because it's easy to see by reasoning alone. The larger N and the smaller K (but greater than 1, or there won't be a second iteration and hence no use for short-circuiting), the greater the advantage of short-circuiting, as fewer items have to be traversed in each subsequent iteration. Without short-circuiting we have to traverse all N items in every iteration, no matter how many of them are alive.
However, of the two versions above, the one with short-circuiting is actually slower; that is because Dictionary operations are much more expensive than Array operations, which overshadows the advantage brought by short-circuiting.

I used the following code to measure the timings of the three versions:
for N in 1000..1050 do
for K in 100..110 do T1 T2 R in
T1 = {Property.get 'time.total'}
R = {Josephus N K}
{Wait R}
T2 = {Property.get 'time.total'}
{Show josephus(n:N k:K result:R timeTaken:(T2-T1))}
end
end

The version that uses the OO model was hovering around 130ms, the declarative concurrent version (fastest) around 70ms, and the declarative sequential model (slowest) around 210ms. The declarative sequential model is slow because of the expensive Dictionary operations.

Asymptotically all three versions have the same time complexity, as they're effectively doing the same thing: keep following the list circularly (killing the unfortunates on the way and removing them from the list) until only one survivor is left. Note: the word list is used in a very loose sense here; it's not Oz's list type.

=========================================================================
A declarative sequential solution to the Josephus problem. I wrote this one while reading the book (nothing fancy). We start with the list [1..N], keep iterating over it in circular fashion, and keep removing every Kth item until there is just one element left; that last element is the survivor.
declare Josephus
local
%Generates list [1 2 3..N]
%Usage: {GenerateList N nil}
fun {GenerateList N A}
if N>0 then {GenerateList N-1 N|A}
else A end
end
in
fun {Josephus N K}
% N: number of survivors
% C: count
% Xs: Active list of survivors
% Ys: ongoing list of survivors from Xs
fun {Iter N C Xs Ys}
if N==1 then Xs.1
else
case Xs of X|nil andthen (C mod K)==0 then
{Iter N-1 C+1 {Reverse Ys} nil}
[] X|nil then
{Iter N C+1 {Reverse X|Ys} nil}
[] X|Xr andthen (C mod K)==0 then
{Iter N-1 C+1 Xr Ys}
[] X|Xr then
{Iter N C+1 Xr X|Ys}
end
end
end
in
{Iter N 1 {GenerateList N nil} nil}
end
end

Ch7, Ex5

I- Synchronous RMI
declare
class ServerProc
meth init
skip
end
meth send(Msg)
case Msg of calc(X Y) then
Y=X*X+2.0*X+2.0
end
end
end
Server = {NewActive ServerProc init}

declare
class ClientProc
meth init
skip
end
meth send(Msg)
case Msg
of work(Y) then
Y1 Y2 in
{Server send(calc(10.0 Y1))}
{Wait Y1}
{Server send(calc(20.0 Y2))}
{Wait Y2}
Y=Y1+Y2
end
end
end
Client = {NewActive ClientProc init}

{Browse {Client send(work($))}}
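The synchronous RMI pattern can also be sketched outside Oz. Here is a rough Python analogue of my own construction: a thread with a mailbox stands in for the active object, and a reply queue stands in for the dataflow variable Y:

```python
import queue
import threading

def new_active(handler):
    # active object: a thread that serves messages from its mailbox one by one
    mailbox = queue.Queue()
    def serve():
        while True:
            handler(mailbox.get())
    threading.Thread(target=serve, daemon=True).start()
    return mailbox.put            # sending = putting a message in the mailbox

def server(msg):
    x, reply = msg                # like calc(X Y): compute and bind the reply
    reply.put(x * x + 2.0 * x + 2.0)

send_to_server = new_active(server)

def work():
    # synchronous RMI: wait for each reply before issuing the next call
    r1 = queue.Queue()
    send_to_server((10.0, r1))
    y1 = r1.get()                 # like {Wait Y1}
    r2 = queue.Queue()
    send_to_server((20.0, r2))
    y2 = r2.get()
    return y1 + y2

print(work())                     # 122.0 + 442.0 = 564.0
```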

II- Asynchronous RMI
Only ClientProc is changed; everything else remains the same as in the synchronous RMI.
declare
class ClientProc
meth init
skip
end
meth send(Msg)
case Msg
of work(Y) then
Y1 Y2 in
{Server send(calc(10.0 Y1))}
{Server send(calc(20.0 Y2))}
Y=Y1+Y2
end
end
end

III- RMI with callback(with thread)
declare
class ServerProc
meth init
skip
end
meth send(Msg)
%{Browse server#Msg}
case Msg
of calc(X Y Client) then
X1 D in
{Client send(delta(D))}
X1=X+D
Y=X1*X1+2.0*X1+2.0
end
end
end
Server = {NewActive ServerProc init}

declare
class ClientProc
meth init
skip
end
meth send(Msg)
%{Browse client#Msg}
case Msg
of work(Z) then
Y in
{Server send(calc(10.0 Y self))}
%here we don't need to create another
%thread as we're passing self and not
%Client
Z=Y+100.0
[] delta(D) then
D=1.0
end
end
end
Client = {NewActive ClientProc init}

{Browse {Client send(work($))}}

IV- RMI with callback (using record continuation)
declare
class ServerProc
meth init
skip
end
meth send(Msg)
%{Browse server#Msg}
case Msg
of calc(X Client Cont) then
X1 D Y in
{Client send(delta(D))}
X1=X+D
Y=X1*X1+2.0*X1+2.0
{Client send(Cont#Y)}
end
end
end
Server = {NewActive ServerProc init}

declare
class ClientProc
meth init
skip
end
meth send(Msg)
%{Browse client#Msg}
case Msg
of work(?Z) then
{Server send(calc(10.0 self cont(Z)))}
[] cont(Z)#Y then
Z=Y+100.0
[] delta(?D) then
D=1.0
end
end
end
Client = {NewActive ClientProc init}

{Browse {Client send(work($))}}

V- RMI with callback (using procedure continuation)
Only ClientProc is changed; everything else remains the same as in the record-continuation technique.
class ClientProc
meth init
skip
end
meth send(Msg)
%{Browse client#Msg}
case Msg
of work(?Z) then
C = proc {$ Y} Z=Y+100.0 end
in
{Server send(calc(10.0 self cont(C)))}
[] cont(C)#Y then
{C Y}
[] delta(?D) then
D=1.0
end
end
end

VI- Error Reporting
class ServerProc
meth init
skip
end
meth send(Msg)
case Msg
of sqrt(X Y E) then
try
Y={Sqrt X}
E=normal
catch Exc then
E=exception(Exc)
end
end
end
end
Server = {NewActive ServerProc init}

%It can be used as
declare X Y E
{Server send(sqrt(X Y E))}
case E of exception(Exc) then raise Exc end
else skip end

VII- Asynchronous RMI with callback
declare
class ServerProc
meth init
skip
end
meth send(Msg)
case Msg
of calc(X ?Y Client) then
X1 D in
{Client send(delta(D))}
thread
X1=X+D
Y=X1*X1+2.0*X1+2.0
end
end
end
end
Server = {NewActive ServerProc init}

declare
class ClientProc
meth init
skip
end
meth send(Msg)
case Msg
of work(?Y) then
Y1 Y2 in
{Server send(calc(10.0 Y1 self))}
{Server send(calc(20.0 Y2 self))}
thread Y=Y1+Y2 end
[] delta(?D) then
D=1.0
end
end
end
Client = {NewActive ClientProc init}

{Browse {Client send(work($))}}

VIII-Double Callbacks
declare
class ServerProc
meth init
skip
end
meth send(Msg)
case Msg
of calc(X ?Y Client) then
X1 D in
{Client send(delta(D))}
thread
X1=X+D
Y=X1*X1+2.0*X1+2.0
end
[] serverdelta(?S) then
S=0.01
end
end
end
Server = {NewActive ServerProc init}

declare
class ClientProc
meth init
skip
end
meth send(Msg)
case Msg
of work(Z) then
Y in
{Server send(calc(10.0 Y self))}
thread Z=Y+100.0 end
[] delta(?D) then S in
{Server send(serverdelta(S))}
thread D=1.0+S end
end
end
end
Client = {NewActive ClientProc init}

{Browse {Client send(work($))}}

Ch7, Ex4

To allow multiple inheritance (from 1 or more classes), I have just generalized the "From" function from the book. Instead of taking a fixed number of classes to inherit from, it now takes a list of them.
I'm redoing the Account/LoggedAccount example from section 7.3.2 to show how to achieve static binding with the object system of section 7.6. However, it doesn't look like a very clean solution... I'll change it once I can think of something else.
For a better understanding, please read the code below along with the embedded comments.

%Helpers-----------------------------------------------------
declare NewWrapper Wrap Unwrap Union Inter Minus LogObj

LogObj = {New class $
attr entries
meth init
entries:=nil
end
meth addEntry(X)
entries:=X|(@entries)
end
meth getEntries(?X)
X={List.reverse @entries}
end
meth clear
entries:=nil
end
end
init}

proc {NewWrapper ?W ?U}
Key={NewName}
in
fun {W X}
fun {$ K} if K==Key then X end end
end
fun {U Y}
{Y Key}
end
end

{NewWrapper Wrap Unwrap}
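The wrapper pair relies only on an unforgeable key shared by two closures. A minimal Python sketch of the same idea (names are mine; object() plays the role of {NewName}):

```python
def new_wrapper():
    key = object()                     # unforgeable key, like {NewName}
    def wrap(x):
        # the wrapped value only reveals x to a caller holding the key
        return lambda k: x if k is key else None
    def unwrap(w):
        return w(key)                  # only this closure knows the key
    return wrap, unwrap

wrap, unwrap = new_wrapper()
secret = wrap(42)
print(unwrap(secret))                  # 42
_, other_unwrap = new_wrapper()
print(other_unwrap(secret))            # None: a different key cannot unwrap
```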

%Download Set.oz from the book's supplement site, then
%create the ozf file by running the following command
%$ ozc -c Set.oz
%CHANGE PATH
declare
[Set]= {Module.link ['/path/Set.ozf']}
Union=Set.union
Inter=Set.inter
Minus=Set.minus

declare New2
fun {New2 WClass InitialMethod}
State Obj Class
in
Class={Unwrap WClass}
State = {MakeRecord s Class.attrs}
{Record.forAll State proc {$ A} {NewCell _ A} end}
proc {Obj M}
{Class.methods.{Label M} M State Obj}
end
{Obj InitialMethod}
Obj
end
%Helpers finished-------------------------------------------------

%New From to allow multiple inheritance,
%Returns new class record whose base defn
%is BaseC and inherits all classes in
%list Classes
declare
fun {From BaseC Classes}
c(methods:M attrs:A) = {Unwrap BaseC}
ParentMeths = {Map Classes
fun {$ X}
c(methods:M attrs:_)={Unwrap X}
in M
end}
ParentMethArities = {Map ParentMeths
fun {$ M} {Arity M} end}
ParentAttrs = {Map Classes
fun {$ X}
c(methods:_ attrs:A)={Unwrap X}
in A
end}
ConfMeth=if {List.length Classes}>1 then
{Minus {FoldL ParentMethArities
fun {$ MA1 MA2} {Inter MA1 MA2} end
ParentMeths.1} {Arity M}}
else
nil end
ConfAttr=if {List.length Classes}>1 then
{Minus {FoldL ParentAttrs
fun {$ A1 A2} {Inter A1 A2} end
ParentAttrs.1} A}
else
nil end
in
if ConfMeth\=nil then
raise illegalInheritance(methConf:ConfMeth) end
end
if ConfAttr\=nil then
raise illegalInheritance(attrConf:ConfAttr) end
end
{Wrap c(methods:{Adjoin {FoldL ParentMeths
fun {$ M1 M2}
{Adjoin M1 M2}end
nil} M}
attrs:{Union {FoldL ParentAttrs
fun {$ A1 A2}
{Union A1 A2} end
nil} A})}
end

%Account,LoggedAccount example
declare Account
local
Attrs = [balance]
MethodTable = m(init:Init
transfer:Transfer
getBal:GetBal
batchTransfer:BatchTransfer)
proc {Init M S Self}
init(Bal) = M
in
(S.balance):=Bal
end

proc {Transfer M S Self}
transfer(Amt) = M
in
(S.balance):=Amt+@(S.balance)
end

proc {GetBal M S Self}
getBal(Bal) = M
in
Bal = @(S.balance)
end

proc {BatchTransfer M S Self}
batchTransfer(AmtList) = M
in
for A in AmtList do
{Self transfer(A)} end
end
in
Account = {Wrap c(methods:MethodTable attrs:Attrs)}
end

% LoggedAccount inherits Account
declare LoggedAccountBase
local
Attrs = nil
MethodTable = m(transfer:Transfer)
proc {Transfer M S Self}
{LogObj addEntry(M)}
%----------Static Binding----------
%its fair to know about the superclasses at
%the time of defining the subclass
{{Unwrap Account}.methods.transfer M S Self}
end
in
LoggedAccountBase = {Wrap c(methods:MethodTable
attrs:Attrs)}
end

%Test
local
LoggedAccClass
LoggedAccObj
in
{LogObj clear}
LoggedAccClass = {From LoggedAccountBase [Account]}
LoggedAccObj = {New2 LoggedAccClass init(0)}
{LoggedAccObj transfer(100)}
{LoggedAccObj transfer(200)}
{LoggedAccObj transfer(300)}
%This prints the above transactions, which means the
%Transfer code from LoggedAccount was executed
{Browse {LogObj getEntries($)}}
%This prints balance=600 (100+200+300), which means
%static binding worked and the Transfer code from
%Account was executed
{Browse {LoggedAccObj getBal($)}}
end

Ch7, Ex3

declare
fun {TraceNew2 Class Init}
TInit={NewName}
class Tracer
attr obj:{New Class Init}
meth !TInit skip end
meth otherwise(M)
{Browse entering({Label M})}
{@obj M}
{Browse exiting({Label M})}
end
end
in {New Tracer TInit} end

Ch7, Ex2

Make a functor to represent the package as follows:
functor
export
%export all the defined classes
a:A
%......
define
%create names for all the protected methods
P1 = {NewName}
%......

%all the names created above are available to *any*
%class inside this functor (i.e. package)

class A
%put all protected method as a record
%inside an attribute
attr setOfAllProtectedMethods:p(p1:P1)

meth init
skip
end

meth !P1(?Msg)
Msg='I am protected'
end
end

%......
end
This is how a class inheriting A can access protected methods:
%Create an instance of the above functor and bind it to an
%identifier, let's call it TestPackage. Then this is how protected
%method P1 in class A can be used from a class inheriting it.
declare
class B from TestPackage.a
meth callP1(Msg)
P1 = (@setOfAllProtectedMethods).p1 in
{self P1(Msg)}
end
end

%Test
local O in
O = {New B init}
{Browse {O callP1($)}}
end

Ch7, Ex1

declare
fun {New2 Class}
K = {NewName}
class Mixin from Class
meth !K
skip
end
end
in
{New Mixin K}
end

Thursday, February 5, 2009

Ch6, Ex18

functor
import
File %the custom Dict module is no longer needed
QTk at 'x-oz://system/wp/QTk.ozf'
define
fun {WordChar C}
(&a=<C andthen C=<&z) orelse
(&A=<C andthen C=<&Z) orelse (&0=<C andthen C=<&9) end
fun {WordToAtom PW} {StringToAtom {Reverse PW}} end
Put=Dictionary.put
CondGet=Dictionary.condGet
proc {IncWord D W}
{Put D W {CondGet D W 0}+1}
end
fun {CharsToWords PW Cs}
case Cs
of nil andthen PW==nil then
nil
[] nil then
[{WordToAtom PW}]
[] C|Cr andthen {WordChar C} then
{CharsToWords {Char.toLower C}|PW Cr}
[] _|Cr andthen PW==nil then
{CharsToWords nil Cr}
[] _|Cr then
{WordToAtom PW}|{CharsToWords nil Cr}
end
end
proc {CountWords D Ws}
case Ws
of W|Wr then
{IncWord D W}
{CountWords D Wr}
[] nil then skip
end
end
fun {WordFreq Cs}
D={NewDictionary}
in
{CountWords D {CharsToWords nil Cs}}
D
end
fun {Entries D}
Ws = {Arity {Dictionary.toRecord label D}}
Ns = {Record.toList {Dictionary.toRecord label D}}
proc {Iter Xs Ys ?A}
case Xs#Ys of (X|Xr)#(Y|Yr) then Z in
A = (X#Y)|Z
{Iter Xr Yr Z}
[] nil#nil then A=nil
end
end
in
{Iter Ws Ns $}
end
L={File.readList stdin}
E={Entries {WordFreq L}}
S={Sort E fun {$ A B} A.2>B.2 end}
H Des=td(title:'Word frequency count'
text(handle:H tdscrollbar:true glue:nswe))
W={QTk.build Des} {W show}
for X#Y in S do {H insert('end' X#': '#Y#' times\n')} end
end
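The core of the word-frequency logic (split on non-alphanumeric characters, lowercase, count in a dictionary) fits in a few lines of Python, which is handy for cross-checking. This sketch of mine ignores the stdin reading and the QTk display part:

```python
import re
from collections import Counter

def word_freq(text):
    # words are maximal runs of [a-z0-9] after lowercasing,
    # counted in a dictionary, just like the Oz version
    return Counter(re.findall(r'[a-z0-9]+', text.lower()))

freqs = word_freq('To be, or not to be')
print(freqs['to'], freqs['be'], freqs['or'])   # 2 2 1
```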

Ch6, Ex17

Following are the changes:
%Add a variable for the threshold number of users on one site
declare N=10000 M=500000 T=200 Threshold=100

%Procedures for incrementing/decrementing the hit count
%when one user joins/leaves a site
proc {IncrementHits S}
C = 1.0
in
Sites.S.hits := Sites.S.hits + 1
if Sites.S.hits>Threshold then
Sites.S.performance := Sites.S.performance - C
end
end
proc {DecrementHits S}
C = 1.0
in
if Sites.S.hits>Threshold then
Sites.S.performance := Sites.S.performance + C
end
Sites.S.hits := Sites.S.hits - 1
end

%Never increment/decrement hits manually; always use the above procedures
declare
Users={MakeTuple users M}
for I in 1..M do
S={UniformI 1 N}
in
Users.I={Record.toDictionary o(currentSite:S)}
{IncrementHits S}
end
proc {UserStep I}
U = Users.I
% Ask three users for their performance information
L = {List.map [{UniformI 1 M} {UniformI 1 M} {UniformI 1 M}]
fun{$ X}
(Users.X.currentSite) #
Sites.(Users.X.currentSite).performance
+ {Gauss}*{IntToFloat N}
end}
% Calculate the best site
MS#MP = {List.foldL L
fun {$ X1 X2} if X2.2>X1.2 then X2 else X1 end end
U.currentSite #
Sites.(U.currentSite).performance
+ {Abs {Gauss}*{IntToFloat N}}
}
in
if MS\=U.currentSite then
{DecrementHits U.currentSite}
U.currentSite := MS
{IncrementHits MS}
end
end

Ch6, Ex16

We can divide the M users into K groups of equal size, and in UserStep user I asks for performance information only from 3 users belonging to his own group.

These are the changes:
%We add another variable K that denotes the number of
%groups. K should be chosen so that (M mod K) is 0,
%i.e. it should divide the M users into K groups of
%equal size
declare
N=10000 M=500000 T=200 K=100 %K is the number of groups

%Sets A and B to the extremes of the group in
%which I fall into
proc {GroupEdges I ?A ?B}
X1 = (K*I) div M
X2 = (K*I) mod M
in
if X2==0 then
A = (X1-1)*(M div K)+1
B = X1*(M div K)
else
A = X1*(M div K) + 1
B = (X1+1)*(M div K)
end
end
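The boundary arithmetic is easy to get wrong, so here is a Python cross-check of the blog's formula against a simpler equivalent one (my sketch; both assume K divides M):

```python
def group_edges(i, m, k):
    # transliteration of the blog's GroupEdges formula
    size = m // k
    x1, x2 = (k * i) // m, (k * i) % m
    if x2 == 0:
        return (x1 - 1) * size + 1, x1 * size
    return x1 * size + 1, (x1 + 1) * size

def group_edges_simple(i, m, k):
    # equivalent: 0-based group index of user i, then its first/last member
    size = m // k
    g = (i - 1) // size
    return g * size + 1, (g + 1) * size

m, k = 500000, 100
assert all(group_edges(i, m, k) == group_edges_simple(i, m, k)
           for i in [1, 4999, 5000, 5001, m])
print(group_edges(5001, m, k))    # (5001, 10000): second group of 5000 users
```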

%Choose users from the same group only
proc {UserStep I}
U = Users.I
local A B in
{GroupEdges I A B}
% Ask three users(belonging to I's group only) for
% their performance information
L = {List.map [{UniformI A B} {UniformI A B}
{UniformI A B}]
fun{$ X}
(Users.X.currentSite) #
Sites.(Users.X.currentSite).performance
+ {Gauss}*{IntToFloat N}
end}
end
% Calculate the best site
MS#MP = {List.foldL L
fun {$ X1 X2} if X2.2>X1.2 then X2 else X1 end end
U.currentSite #
Sites.(U.currentSite).performance
+ {Abs {Gauss}*{IntToFloat N}}
}
in
if MS\=U.currentSite then
Sites.(U.currentSite).hits :=
Sites.(U.currentSite).hits - 1
U.currentSite := MS
Sites.MS.hits := Sites.MS.hits + 1
end
end

==================================================================
Complete code to run the word-of-mouth simulation:
%Download File.oz from the book's supplement site, then
%create the ozf file by running the following command
%$ ozc -c File.oz

%CHANGE THESE PATHS
declare FileOzrc SimResults
FileOzrc=
'/path/File.ozf'
SimResults=
'/path/wordofmouth.txt'
declare
[File]= {Module.link [FileOzrc]}

declare NewRand
local
A=333667
B=213453321
M=1000000000
in
proc {NewRand ?Rand ?Init ?Max}
X={NewCell 0}
in
proc {Init Seed} X:=Seed end
fun {Rand} X:=(A*@X+B) mod M in @X end
Max=M
end
end
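NewRand is a plain linear congruential generator; the same recurrence in Python, for reference (a sketch with the blog's constants):

```python
A, B, M = 333667, 213453321, 1000000000

def new_rand(seed=0):
    # linear congruential generator: x := (A*x + B) mod M
    x = seed
    def rand():
        nonlocal x
        x = (A * x + B) % M
        return x
    return rand

r = new_rand(0)
print(r())   # first value from seed 0: (A*0 + B) mod M = 213453321
```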

declare Rand Init Max Uniform UniformI Gauss
{NewRand Rand Init Max}
local
FMax={IntToFloat Max}
in
fun {Uniform}
{IntToFloat {Rand}}/FMax
end
fun {UniformI A B}
A+{FloatToInt {Floor {Uniform}*{IntToFloat B-A+1}}}
end
end
local
TwoPi=4.0*{Float.acos 0.0}
GaussCell={NewCell nil} in
fun {Gauss}
Prev={Exchange GaussCell $ nil}
in
if Prev\=nil then Prev
else R Phi in
R={Sqrt ~2.0*{Log {Uniform}}}
Phi=TwoPi*{Uniform}
GaussCell:=R*{Cos Phi}
R*{Sin Phi}
end
end
end
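The Gauss generator above is the Box-Muller transform, with one of the two produced values cached in GaussCell between calls. The core transform, sketched in Python (my names; a fixed "uniform" source makes the check deterministic):

```python
import math

def gauss_pair(uniform):
    # Box-Muller: two independent uniforms -> two independent standard normals
    r = math.sqrt(-2.0 * math.log(uniform()))
    phi = 2.0 * math.pi * uniform()
    return r * math.cos(phi), r * math.sin(phi)

# deterministic check with a constant "uniform" source of 0.5
z1, z2 = gauss_pair(lambda: 0.5)
print(z1, z2)   # z1 = -sqrt(2*ln 2) since cos(pi) = -1; z2 is ~0
```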

declare
N=10000 M=500000 T=200
{Init 0}
{File.writeOpen SimResults}
proc {Out S}
{File.write {Value.toVirtualString S 10 10}#"\n"}
end


declare
Sites={MakeTuple sites N}
for I in 1..N do
Sites.I={Record.toDictionary
o(hits:0 performance:{IntToFloat
{UniformI 1 80000}})}
end

declare
Users={MakeTuple users M}
for I in 1..M do
S={UniformI 1 N}
in
Users.I={Record.toDictionary o(currentSite:S)}
Sites.S.hits := Sites.S.hits + 1
end

proc {UserStep I}
U = Users.I
% Ask three users for their performance information
L = {List.map [{UniformI 1 M} {UniformI 1 M} {UniformI 1 M}]
fun{$ X}
(Users.X.currentSite) #
Sites.(Users.X.currentSite).performance
+ {Gauss}*{IntToFloat N}
end}
% Calculate the best site
MS#MP = {List.foldL L
fun {$ X1 X2} if X2.2>X1.2 then X2 else X1 end end
U.currentSite #
Sites.(U.currentSite).performance
+ {Abs {Gauss}*{IntToFloat N}}
}
in
if MS\=U.currentSite then
Sites.(U.currentSite).hits :=
Sites.(U.currentSite).hits - 1
U.currentSite := MS
Sites.MS.hits := Sites.MS.hits + 1
end
end

for J in 1..N do
{Out {Record.adjoinAt {Dictionary.toRecord site Sites.J}
name J}}
end
{Out endOfRound(time:0 nonZeroSites:N)}
for I in 1..T do
X = {NewCell 0}
in
for U in 1..M do {UserStep U} end
for J in 1..N do
H=Sites.J.hits in
if H\=0 then
{Out {Record.adjoinAt
{Dictionary.toRecord site Sites.J} name J}}
X:=1+@X
end
end
{Out endOfRound(time:I nonZeroSites:@X)}
end
{Out finished}
{File.writeClose}

Ch6, Ex15

declare
proc {Break}
raise breakException end
end
proc {Block P}
try {P Break}
catch X then
if X==breakException then skip
else raise X end end
end
end
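The same exception-based block/break idea in Python (my sketch). I use a fresh exception class per block, so that, unlike the single breakException atom above, an outer Break raised inside an inner block still escapes the inner one:

```python
def block(body):
    class _Break(Exception):
        pass                    # fresh, unforgeable exception type per block
    def brk():
        raise _Break()          # like calling {Break}
    try:
        body(brk)               # the block's statements, with brk in scope
    except _Break:
        pass                    # this block's own break: swallow and continue

log = []
block(lambda brk: (log.append('one'), brk(), log.append('two')))
print(log)   # ['one']: brk() aborted the block before 'two' was appended
```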

%usage

%no break in statements, so prints one, two
{Block proc{$ Break} {Browse one} {Browse two} end}

%prints one only, as {Break} is called right after it
{Block proc{$ Break} {Browse one} {Break} {Browse two} end}

%nested blocks: prints one, two and four... notice that
%calling Break2 in the inner block breaks only that block
{Block proc{$ Break1}
{Browse one}
{Block proc{$ Break2}
{Browse two} {Break2} {Browse three} end}
{Browse four} end}

Ch6, Ex14 (tbd)

tbd

Ch6, Ex13

declare
fun {NewDictionary}
L={NewCell nil}
proc {Put K V}
{Remove K}
L:=(K#V)|@L
end
fun {Get K}
fun {Iter Xs}
case Xs of
(Key#Val)|Xr andthen K==Key then Val
[] (Key#Val)|Xr then {Iter Xr}
[] nil then raise keyNotFound end
end
end
in
{Iter @L}
end
fun {CondGet K A}
try {Get K}
catch X then A
end
end
fun {Member K}
fun {Iter Xs}
case Xs of
(Key#Val)|Xr andthen K==Key then true
[] (Key#Val)|Xr then {Iter Xr}
[] nil then false
end
end
in
{Iter @L}
end
proc {Remove K}
fun {Iter Xs}
case Xs of
(Key#Val)|Xr andthen K==Key then {Iter Xr}
[] (Key#Val)|Xr then (Key#Val)|{Iter Xr}
[] nil then nil
end
end
in
L:={Iter @L}
end
fun {Keys}
proc {Iter Xs ?R}
case Xs of
(Key#Val)|Xr then Y in
R=Key|Y
{Iter Xr Y}
[] nil then R=nil
end
end
in
{Iter @L $}
end
%for debugging only
fun {ListContent}
@L
end
in
dict(put:Put get:Get condGet:CondGet member:Member
remove:Remove list:ListContent keys:Keys)
end

Note: We cannot support the Dictionary.toRecord operation for this dictionary, because a record feature can only be an atom, bool, or int, while in this dictionary keys can be any value.
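The same association-list design in Python, as a record of closures sharing one hidden list (a sketch of mine; KeyError plays the role of the keyNotFound exception):

```python
def new_dictionary():
    items = []                              # hidden state: list of (key, value)
    def remove(k):
        items[:] = [(key, v) for key, v in items if key != k]
    def put(k, v):
        remove(k)                           # keep at most one entry per key
        items.append((k, v))
    def get(k):
        for key, v in items:
            if key == k:
                return v
        raise KeyError(k)                   # like raising keyNotFound
    def cond_get(k, default):
        try:
            return get(k)
        except KeyError:
            return default
    def member(k):
        return any(key == k for key, _ in items)
    def keys():
        return [key for key, _ in items]
    return dict(put=put, get=get, condGet=cond_get,
                member=member, remove=remove, keys=keys)

d = new_dictionary()
d['put']('a', 1)
d['put']('b', 2)
print(d['get']('a'), d['condGet']('c', 0))   # 1 0
```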

Ch6, Ex12

declare
fun {NewExtensibleArray L H Init}
A={NewCell {NewArray L H Init}}#Init
proc {CheckOverOrUnderflow I}
Arr=@(A.1)
Low={Array.low Arr}
High={Array.high Arr}
in
if I>High then
High2=Low+{Max I 2*(High-Low)}
Arr2={NewArray Low High2 A.2}
in
for K in Low..High do Arr2.K:=Arr.K end
(A.1):=Arr2
elseif I<Low then
Low2 = Low - {Max (High-Low) (Low-I)}
Arr2={NewArray Low2 High A.2}
in
for K in Low..High do Arr2.K:=Arr.K end
(A.1):=Arr2
end
end
proc {Put I X}
{CheckOverOrUnderflow I}
@(A.1).I:=X
end
fun {Get I}
{CheckOverOrUnderflow I}
@(A.1).I
end
in extArray(get:Get put:Put)
end
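The growth rule above (extend by at least the current span, so the array at least doubles) gives amortized O(1) puts. A Python sketch of the same scheme (names are mine; I grow by max(I-Low, 2*(High-Low)) so it also behaves when Low is not 1):

```python
def new_extensible_array(low, high, init):
    # bounds grow by at least the current span whenever an index falls outside
    state = {'low': low, 'arr': [init] * (high - low + 1)}
    def ensure(i):
        lo = state['low']
        hi = lo + len(state['arr']) - 1
        if i > hi:                                   # overflow: extend upward
            new_hi = lo + max(i - lo, 2 * (hi - lo))
            state['arr'].extend([init] * (new_hi - hi))
        elif i < lo:                                 # underflow: extend downward
            new_lo = lo - max(hi - lo, lo - i)
            state['arr'][:0] = [init] * (lo - new_lo)
            state['low'] = new_lo
    def put(i, x):
        ensure(i)
        state['arr'][i - state['low']] = x
    def get(i):
        ensure(i)
        return state['arr'][i - state['low']]
    return put, get

put, get = new_extensible_array(1, 3, 0)
put(10, 'x')             # grows past the old high bound of 3
put(-5, 7)               # grows past the old low bound of 1
print(get(10), get(-5), get(1))   # x 7 0
```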

Ch6, Ex11 (tbd)

tbd

Ch6, Ex10

Swap code implemented with call by need:
declare
proc {Swap X Y}
Xarg = {X}
Yarg = {Y}
T = {NewCell @Xarg}
in
Xarg:=@Yarg
Yarg:=@T
end

Counterintuitive behavior does not occur with this version.

Sqr code, call by need implemented using laziness:
declare
proc {Sqr A}
B={A}
in
B:=@B*@B
end

local C={NewCell 0} in
C:=25
%made the argument function lazy
{Sqr fun lazy {$} C end}
{Browse @C}
end
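Call by need can be modeled in Python with a memoizing thunk: the argument expression is evaluated at most once, on first use, so Swap sees stable locations. This is my own sketch, using one-element lists as cells:

```python
def by_need(f):
    # memoizing thunk: evaluate f() once, on first use, then cache the result
    cache = []
    def thunk():
        if not cache:
            cache.append(f())
        return cache[0]
    return thunk

def swap(x, y):
    xc, yc = x(), y()          # locations fixed before any mutation happens
    xc[0], yc[0] = yc[0], xc[0]

a = {1: [2], 2: [1]}           # a[1]=2, a[2]=1, cells as one-element lists
i = [1]
swap(by_need(lambda: i), by_need(lambda: a[i[0]]))
print(i[0], a[1][0], a[2][0])  # 2 1 1: i and a[1] swapped, as intended
```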

Ch6, Ex9

With call by name, the argument is evaluated every time it's needed. It doesn't work here because the code refers to a[i] twice: the first time i is 1, so we refer to a[1]; the second time i is 2, so we refer to a[2], even though we intended to refer to a[1].
It'll be clear from the code and explanation below.
Code in the stateful computation model:
declare
proc {Swap X Y}
T = {NewCell @{X}}
in
{X}:=@{Y}
{Y}:=@T
end
local A I in
A={MakeTuple array 10}
for J in 1..10 do A.J={NewCell 0} end
I = {NewCell 1}
(A.1):=2
(A.2):=1
{Swap fun {$} I end fun {$} A.(@I) end}
{Browse @I}
{Browse @(A.1)}
{Browse @(A.2)}
end

Explanation:
STEP1:
T = {NewCell @{X}} , here @{X} returns @I i.e. 1

STEP2:
{X}:=@{Y} , here {X} returns I and {Y} returns A.(@I) i.e. A.1
so this step effectively does I:=2

STEP3:
{Y}:=@T , here {Y} returns A.(@I) i.e. A.2
so this step effectively does A.2:=1 (since @T is 1)

It's clear that, when STEP3 intended to change A.1, we unknowingly changed A.2 instead.
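The same trap, modeled in Python with thunks that are re-evaluated at every use (my sketch; cells are one-element lists):

```python
def swap(x, y):
    # call by name: each use of an argument re-evaluates its expression
    t = x()[0]                 # STEP1: T = @{X} = 1
    x()[0] = y()[0]            # STEP2: I := @(A.1) = 2
    y()[0] = t                 # STEP3: y() is now A.(@I) = A.2, so A.2 := 1

a = {1: [2], 2: [1]}           # A.1 = 2, A.2 = 1
i = [1]
swap(lambda: i, lambda: a[i[0]])
print(i[0], a[1][0], a[2][0])  # 2 2 1: A.1 was never touched
```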

Ch6, Ex8

%First representation
declare
fun {NewCollector}
H in
{NewCell H|H}
end
proc {Collect C X}
H T in
{Exchange C H|(X|T) H|T}
end
fun {EndCollect C}
X|nil = @C
in
X
end

%Second representation
declare
fun {NewCollector}
H in
H|{NewCell H}
end
proc {Collect C X}
T in
{Exchange C.2 X|T T}
end
fun {EndCollect C}
X|Y = C
in
@Y = nil
X
end

%Test
local C in
C = {NewCollector}
{Collect C 1}
{Collect C 2}
{Collect C 3}
{Browse {EndCollect C}} %[1 2 3]
end

Notice that for both implementations a collector C becomes useless once EndCollect has been called on it; I don't know whether this is supposed to happen in the actual solution.

Memory Usage: To find out the memory usage of both versions of Collect, we have to transform their bodies into the kernel language.
%First Version
declare
proc {Collect C X}
local H in
local T in
local Tmp1 in
Tmp1 = H|(X|T)
local Tmp2 in
Tmp2 = H|T
{Exchange C Tmp1 Tmp2}
end
end
end
end
end

Memory Usage-
For Tmp1 = H|(X|T) it is (1+2)+(1+2)+m(X)= 6+m(X)
For Tmp2 = H|T it is 1+2 = 3
Assume the Exchange operation uses E0 words of memory.
Total memory used = 1+1+1+6+m(X)+1+3+E0 = 13+E0+m(X)
13+E0 words become inactive after the call.
%Second Version
declare
proc {Collect C X}
local T in
local Tmp1 in
Tmp1 = C.2 %Let this causes m0 memory
local Tmp2 in
Tmp2 = X|T
{Exchange Tmp1 Tmp2 T}
end
end
end
end


Memory Usage-
Total: 1+1+m0+1+(1+2)+m(X)+E0 = 6+m0+E0+m(X)
and 6+m0+E0 words become inactive after the call. If m0=2 (I don't know that for a fact, though), then 8+E0 words become inactive.

It's clear that the second version leaves less work for the garbage collector and takes less time allocating memory, and is hence better from a performance perspective.

Ch6, Ex7 (tbd)

tbd

Ch6, Ex6

I don't think it's possible to maintain the same identity for an object across state changes using the declarative model, because of the very definition of declarativeness: the result of an operation should always be the same, i.e. it can't depend on hidden state.

Ch6, Ex5

It seems to be true, as is evident from the variations of the stack implementation in the book. In the implementation of ports in Ex4 below, explicit state is needed to keep track of the end of the stream, not for security.

Ch6, Ex4

An open unbundled stateful version
declare
proc {NewPort S P}
P = {NewCell S}
end
proc {Send P X}
S=@P S1 in
S = X|S1
P:=S1
end

A Secure unbundled stateful version
declare
proc {NewWrapper ?Wrap ?Unwrap}
Key={NewName}
in
fun {Wrap X}
fun {$ K} if K==Key then X end end
end
fun {Unwrap W}
{W Key}
end
end

declare NewPort Send
local Wrap Unwrap in
{NewWrapper Wrap Unwrap}
proc {NewPort S P}
P = {Wrap {NewCell S}}
end
proc {Send P X}
C={Unwrap P} S=@C S1 in
S = X|S1
C:=S1
end
end

A Secure Bundled stateful version
declare
proc {NewPort S P}
C = {NewCell S}
proc {Send X}
S = @C S1 in
S = X|S1
C := S1
end
in
P = port(send:Send)
end

%Usage
declare P S
{NewPort S P}
{Browse S}
{P.send a}
{P.send b}

Note: In chapter 5 it is said that the end of a port's stream is a read-only variable. I don't completely understand what that means, and hence the versions above don't support it. Someday I'll give it more thought.

Ch6, Ex3

I could not find a way to have an encapsulated sum count without using explicit state, and I also had to change the arguments of SumList. The difficulty is that the count variable needs to change with each call to SumList, but in the declarative model we cannot change a variable's value once it's bound, so we have to keep track of changes by creating new variables and binding them to new values.
This is my new SumList:
declare
fun {SumList Xs S CountS CountSnext}
CurrentCount Cs
in
CountS = access(CurrentCount)|assign(CurrentCount+1)|Cs
%{Browse currentcount} {Browse CurrentCount}
case Xs
of nil then
CountSnext = Cs
S
[] X|Xr then
{SumList Xr X+S Cs CountSnext}
end
end
fun {MakeState Init}
proc {Loop S V}
case S of access(X)|S2 then
X=V {Loop S2 V}
[] assign(X)|S2 then
{Loop S2 X}
else skip end
end
S
in
thread {Loop S Init} end
S
end

%usage; notice how the state is threaded through C0, C1, ...
%calling SumList the first time
declare C0 S={MakeState 0}
{Browse {SumList [1 2 3 4] 0 S C0}}

%Accessing current count
declare C1 in
local X in
C0=access(X)|C1
{Browse X} %5
end

%Calling SumList again
declare C2 in
{Browse {SumList [1 2] 0 C1 C2}}

%Accessing current count
declare C3 in
local X in
C2=access(X)|C3
{Browse X} %8
end

Clearly this version is not readable and not easy to use (because the counting functionality is not encapsulated), as the caller needs to keep track of the variables. Hence the declarative model alone is not expressive enough to give this functionality.

Ch6, Ex2

declare
fun {SumList Xs}
C={NewCell 0}
fun {Recurse Xs}
case Xs of X|Xr then
C := @C + X
{Recurse Xr}
else @C
end
end
in
{Recurse Xs}
end
%Test
{Browse {SumList [1 2 3 4]}} %10

Another version, slightly more readable, that works only in the sequential computation model; it breaks if multiple threads use it. (Why? Hint: all threads change the contents of the same cell.)
declare SumList
local C={NewCell 0}
in
fun {SumList Xs}
case Xs of X|Xr then
C := @C + X
{SumList Xr}
else Y=@C in
C := 0
Y
end
end
end

%Code that shows that this fails in the concurrent model
%MakeList creates a list of 1s of length N
declare
fun {MakeList N}
fun {Iter X N}
if N>0 then {Iter 1|X N-1}
else X
end
end
in
{Iter nil N}
end

%Test
%should print 100000
{Browse {SumList {MakeList 100000}}}
%Both of these should also print 100000
%but they mostly don't
thread {Browse {SumList {MakeList 100000}}} end
thread {Browse {SumList {MakeList 100000}}} end

Ch6, Ex1

In the case of state we're only interested in the final result of the sequence and not the intermediate values, whereas in the case of comics we're interested in all the values. A state sequence exists in time, not in space, whereas a comics sequence exists in space only. The transition between sequence elements is not important for state, but it is for comics.

Note: This exercise is not there in the pdf version of the book.

Ch5, Ex8

Well, it works, but it's not practical because it doesn't seem easy to use. Moreover, as noted in the footnote, it's not possible to name the NameServer object, so it has to be passed around.

(I bet there is more that can be added, please drop your thoughts..)

Ch5, Ex7 (tbd)

Didn't follow the section on Erlang, as it felt really hard to get into another language at this point; will come back to it later.

Ch5, Ex6 (tbd)

Didn't follow the section on Erlang, as it felt really hard to get into another language at this point; will come back to it later.

Tuesday, February 3, 2009

Ch5, Ex5

Code that can be used to run the various code snippets in the problem:
declare
proc {Barrier Ps}
fun {BarrierLoop Ps L}
case Ps of P|Pr then M in
thread {P} M=L end
{BarrierLoop Pr M}
[] nil then L
end
end
S={BarrierLoop Ps unit}
in
{Wait S}
end
proc {NewPortClose ?S ?Send ?Close}
PC={NewCell S}
in
proc {Send M}
S in
{Exchange PC M|S S}
end
proc {Close}
nil=@PC
end
end
proc {ConcFilter L F ?L2}
Send Close
in
{NewPortClose L2 Send Close}
{Barrier
{Map L
fun {$ X}
proc {$}
if {F X} then {Send X} end
end
end}}
{Close}
end
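A Python analogue of ConcFilter (my own sketch): one worker thread per element, a lock-protected output list in place of the port, and join as the barrier. The order of the output depends on scheduling, which is exactly the observable nondeterminism the exercise is about:

```python
import threading

def conc_filter(xs, f):
    out = []
    lock = threading.Lock()
    def worker(x):
        if f(x):
            with lock:
                out.append(x)        # like {Send X} on the port
    threads = [threading.Thread(target=worker, args=(x,)) for x in xs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                     # the barrier: wait for every worker
    return out                       # like {Close}: no more elements

result = conc_filter([5, 1, 2, 4, 0], lambda x: x > 2)
print(sorted(result))                # [4, 5] -- the raw order may vary
```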

a) It shows [5 4], but [4 5] is also possible, depending on the order in which the thread scheduler runs the threads created by Barrier. Clearly there is observable nondeterminism.
BTW, here is a snippet that *almost* makes sure [4 5] is printed:
declare Out A
thread {Delay 500} A = 5 end
%ConcFilter doesn't wait for A and carries on with the other
%elements; A is processed later, once it is bound. Regular
%Filter in this case will wait for A to be bound and still
%print [5 4]
{ConcFilter [A 1 2 4 0] fun {$ X} X>2 end Out}
{Show Out}

b:)Same as a.

c:) Nothing is displayed, as Barrier blocks because one of the child threads is blocked on A.
Once A is bound to 3, Out can have two values, [5 4 3] and [4 5 3]. 3 always comes last because all the threads except the one needing A finish their execution first.

d:) The complexity of both is O(n), but the execution time depends on the following cases:
I. If all the elements of the input list are bound, then Filter is much faster, because ConcFilter does exactly the same work as Filter but with the overhead of creating a thread per element.
II. If, say, the first element of the list is not bound, then Filter cannot process the other elements until the first one is bound, while ConcFilter can carry on processing the other elements even though the first is not bound. So if the list is sufficiently long and the predicate function is time-consuming, ConcFilter will do a better job.
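The behavior in (a)-(d) can be mimicked with a small Python sketch (a hypothetical analogue, not the book's Oz code): each element is tested in its own thread, survivors are appended in whatever order the threads finish, and joining all threads plays the role of Barrier.

```python
import threading

def conc_filter(xs, pred):
    """Hypothetical analogue of ConcFilter: test every element in its
    own thread and append survivors in whatever order the threads
    happen to finish, so the output ORDER is nondeterministic."""
    out = []
    lock = threading.Lock()          # plays the role of the port
    def worker(x):
        if pred(x):
            with lock:
                out.append(x)
    threads = [threading.Thread(target=worker, args=(x,)) for x in xs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                     # the Barrier: wait for all workers
    return out

res = conc_filter([5, 1, 2, 4, 0], lambda x: x > 2)
# The *set* of survivors is deterministic even though their order is not.
assert sorted(res) == [4, 5]
```

Just as in the Oz version, the set of survivors is always the same; only the order in which they appear is scheduler-dependent.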

Ch5, Ex4

PartI- Well, one way to understand it is to see that the assertion stated to prove the correctness of the NewThread semantics, "The sum of elements on the port's stream is greater than or equal to the number of active threads.", is no longer always true with this new definition of SubThread. Let us look carefully at the two definitions of SubThread; the embedded comments provide the explanation.
% Correct Version
proc {SubThread P}
%Notice that +1 is sent to the Port's stream before
%starting the new thread, this makes sure that sum of
%elements is *incremented before* any new thread is started
{Send Pt 1}
thread
{P} {Send Pt ~1}
end
end
% Incorrect Version
proc {SubThread P}
%The thread is started before +1 is sent to the port's stream,
%so if the scheduler doesn't execute the thread right after
%its creation, a thread has been created but the sum of
%elements has not been incremented, which means the assertion
%no longer holds.
thread
{Send Pt 1}
{P} {Send Pt ~1}
end
end
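The broken assertion can be replayed deterministically with a small Python sketch (hypothetical; the event lists encode one particular schedule by hand, no real threads are involved):

```python
def premature_zero(events):
    """events: the integers sent on the port's stream, in schedule
    order. Return True if the running sum hits 0 before the last
    event, i.e. ZeroExit would report termination too early."""
    total = 0
    for i, e in enumerate(events):
        total += e
        if total == 0 and i < len(events) - 1:
            return True
    return False

# Correct SubThread: T2's +1 reaches the stream *before* T1's -1,
# because it is sent synchronously, before the new thread even exists.
correct = [1, 1, -1, -1]      # T1 start, T2 start, T1 done, T2 done
# Incorrect SubThread under a bad schedule: T2 is created but not yet
# run when T1 finishes, so T1's -1 reaches the stream first.
incorrect = [1, -1, 1, -1]    # T1 start, T1 done, T2 start, T2 done

assert not premature_zero(correct)
assert premature_zero(incorrect)
```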


PartII- Required Example
declare NewThread
local
proc {ZeroExit N Is}
case Is of I|Ir then
if N+I\=0 then {ZeroExit N+I Ir}
else
% 'ZeroExit Finished' is displayed when call to
% ZeroExit is over.
{Browse 'ZeroExit Finished'}
end
end
end
in
proc {NewThread P ?SubThread}
Is Pt={NewPort Is}
in
proc {SubThread P}
thread
{Send Pt 1}
{P} {Send Pt ~1}
end
end
{SubThread P}
{ZeroExit 0 Is}
end
end

local SubThread in
{NewThread
proc {$}
{Browse 'T1 started'}
{SubThread
proc {$}
{Browse 'T2 started'} {Delay 2000}
{Browse 'T2 finished'}
end} %T2
{Browse 'T1 finished'}
end
SubThread}
{Browse 'NewThread Finished'}
end


Notice that "ZeroExit Finished" is printed before "T2 finished", that is, the sum of elements on the port stream became zero even though T2 had not finished.

Ch5, Ex3 (TBD)

----WORK IN PROGRESS-----------

We can add one more state to Floor, called "keepopen": when Floor receives a call msg while the doors are open, Floor starts another timer and goes to state "keepopen"; all call msgs in the keepopen state are ignored, and receiving stoptimer puts the Floor back in the doorsopen state. Add a state "down" to Lift; when it receives any msg in this state it simply replies with down(id). When a Floor receives a down msg, it simply tries to call another lift. When a lift sends an arrived msg to a Floor, it also sets a timer; on receipt of stoptimer, if it is still waiting for the Floor to close the doors, it goes into the down state.

Ch5, Ex2

a.
Well, in the current implementation, one controller is associated with one lift only. If we had a single controller and every lift sent it step(N), the controller would keep moving only one lift. Obviously it's a pretty bad idea.

b.
fun {Controller Init}
Tid={Timer}
Cid={NewPortObject Init
fun {$ Msg state(Motor F Lid)}
case Motor
of running then
case Msg
of stoptimer then
{Send Lid 'at'(F)}
state(stopped F Lid)
end
[] stopped then
case Msg
of step(Dest) then
if F==Dest then
state(stopped F Lid)
elseif F < Dest then X=Dest-F in %CHANGED HERE
{Send Tid starttimer(5000*X Cid)}
state(running F+X Lid)
else X=F-Dest in % F>Dest
{Send Tid starttimer(5000*X Cid)}
state(running F-X Lid)
end
end
end
end}
in Cid end

fun {Lift Num Init Cid Floors}
{NewPortObject Init
fun {$ Msg state(Pos Sched Moving)}
case Msg
of call(N) then
{Browse 'Lift '#Num#' needed at floor '#N}
if N==Pos andthen {Not Moving} then
{Wait {Send Floors.Pos arrive($)}}
state(Pos Sched false)
else Sched2 in
Sched2={ScheduleLast Sched N}
if {Not Moving} then
{Send Cid step(N)} end
state(Pos Sched2 true)
end
[] 'at'(NewPos) then
{Browse 'Lift '#Num#' at floor '#NewPos}
case Sched
of S|Sched2 then
if NewPos==S then
{Wait {Send Floors.S arrive($)}}
if Sched2==nil then
state(NewPos nil false)
else
{Send Cid step(Sched2.1)}
state(NewPos Sched2 true)
end
else %NOT CHANGED REALLY
{Browse 'ERROR: this should not happen'}
{Send Cid step(S)}
state(NewPos Sched Moving)
end
end
end
end}
end

Ch5, Ex1

Interleaving is in lockstep.

Message sequence sent is
ping(0) pong(100000) pong(1) ping(100001) ping(2) pong(100002)...

because we first send ping(0) and then immediately pong(100000),
processing of first msg sends pong(1) and processing of second msg
sends ping(100001) and this goes on....
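Since all messages go through the single port inside NewPortObjects, the schedule is one global FIFO, and the sequence above can be replayed deterministically with a small Python sketch (a hypothetical analogue, not the Oz code):

```python
from collections import deque

def pingpong(limit=6):
    """Replay the two port objects: one global FIFO of (object, msg)
    pairs, logging every message as it is *sent*."""
    q = deque()
    other = {'pingobj': 'pongobj', 'pongobj': 'pingobj'}
    log = []
    def call(obj, tag, n):
        log.append('%s(%d)' % (tag, n))
        q.append((obj, tag, n))
    call('pingobj', 'ping', 0)           # {Call pingobj ping(0)}
    call('pongobj', 'pong', 100000)      # {Call pongobj pong(100000)}
    while len(log) < limit:
        obj, tag, n = q.popleft()
        # a ping(N) handler replies pong(N+1) to the other object, etc.
        reply = 'pong' if tag == 'ping' else 'ping'
        call(other[obj], reply, n + 1)
    return log

print(pingpong(6))
# → ['ping(0)', 'pong(100000)', 'pong(1)', 'ping(100001)',
#    'ping(2)', 'pong(100002)']
```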

It's possible to see this by running the following code:
declare [QTk]={Module.link [ 'x-oz://system/wp/QTk.ozf']}
declare
proc {NewPortObjects ?AddPortObject ?Call}
Sin P={NewPort Sin}
proc {MsgLoop S1 Procs}
case S1
of msg(I M)|S2 then
try {Procs.I M} catch _ then skip end
{MsgLoop S2 Procs}
[] add(I Proc Sync)|S2 then Procs2 in
Procs2={AdjoinAt Procs I Proc}
Sync=unit
{MsgLoop S2 Procs2}
[] nil then skip end
end
in
proc {AddPortObject I Proc}
Sync in
{Send P add(I Proc Sync)}
{Wait Sync}
end
proc {Call I M}
{Send P msg(I M)}
end
thread {MsgLoop Sin procs} end
end
fun {NewProgWindow CheckMsg}
InfoHdl See={NewCell true}
H D=td(label(text:nil handle:InfoHdl)
checkbutton(
text:CheckMsg handle:H init:true
action:proc {$} See:={H get($)} end))
in
{{QTk.build D} show}
proc {$ Msg}
if @See then {Delay 50} {InfoHdl set(text:Msg)} end
end
end

declare AddPortObject Call
{NewPortObjects AddPortObject Call}
InfoMsg={NewProgWindow "See ping-pong"}
fun {PingPongProc Other}
proc {$ Msg}
case Msg
of ping(N) then
{InfoMsg "ping("#N#")"}
{Delay 1000}
{Call Other pong(N+1)}
[] pong(N) then
{InfoMsg "pong("#N#")"}
{Delay 1000}
{Call Other ping(N+1)}
end
end
end
{AddPortObject pingobj {PingPongProc pongobj}}
{AddPortObject pongobj {PingPongProc pingobj}}
{Call pingobj ping(0)}
{Call pongobj pong(100000)}

Sunday, February 1, 2009

Ch4, Ex22 (todo)

tbd

Ch4, Ex21 (todo)

tbd

Ch4, Ex20 (tbd)

tbd

Ch4, Ex19

In the case of Merge, the precise order of reading is clear, so the result is always deterministic, while in the client/server example we can have observable nondeterminism because there are no restrictions on the order in which the various client streams are read.

Ch4, Ex18

Results Possible:
1.
bing
bong
2.
bong
bing

Multiple executions are possible.

Ch4, Ex17

% First generate primes then solve hamming problem
declare
proc {Touch L N}
if N > 0 then
{Touch L.2 N-1}
else skip
end
end
fun lazy {Filter Xs F}
case Xs of nil then nil
[] X|Xr andthen {F X} then X|{Filter Xr F}
[] X|Xr then {Filter Xr F}
end
end
fun lazy {Integers N}
N|{Integers N+1}
end
fun lazy {Sieve Xs}
case Xs
of nil then nil
[] X|Xr then Ys in
thread Ys={Filter Xr fun {$ Y} Y mod X \= 0 end} end
X|{Sieve Ys}
end
end
% Generate list of first k primes; k >=1
fun {GeneratePrimes K}
Ps = {Sieve {Integers 2}}
fun {Iter K Ps}
if K==0 then nil
else
case Ps of P|Pr then P|{Iter K-1 Pr} end
end
end
in
{Iter K Ps}
end

% Test, Generating first 10 primes
{Browse {GeneratePrimes 10}}

% Generalized Hamming Problem
declare
fun lazy {Times N H}
case H of X|H2 then N*X|{Times N H2} end
end
fun lazy {Merge Xs Ys}
case Xs#Ys of (X|Xr)#(Y|Yr) then
if X<Y then X|{Merge Xr Ys}
elseif X>Y then Y|{Merge Xs Yr}
else X|{Merge Xr Yr}
end
end
end
fun {Hamming K}
fun {Recur Ps H}
case Ps of P|nil then {Times P H}
[] P|Pr then
{Merge {Times P H} {Recur Pr H}}
end
end
H
in
H = 1|{Recur {GeneratePrimes K} H}
H
end

% Test, Compare results from both, they should be equal
local X in
X = {Hamming 3}
{Touch X 10}
{Browse X}
end

local H in
H=1|{Merge {Times 2 H}
{Merge {Times 3 H}
{Times 5 H}}}
{Browse H}
{Touch H 10}
end
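As a cross-check, here is an eager Python sketch of the generalized Hamming sequence (hypothetical, not the lazy Oz version: it uses a min-heap instead of lazy streams):

```python
import heapq

def hamming(primes, count):
    """Return the first `count` numbers whose prime factors all come
    from `primes`, smallest first."""
    heap, seen, out = [1], {1}, []
    while len(out) < count:
        x = heapq.heappop(heap)          # smallest not yet emitted
        out.append(x)
        for p in primes:                 # schedule its successors
            if p * x not in seen:
                seen.add(p * x)
                heapq.heappush(heap, p * x)
    return out

print(hamming([2, 3, 5], 10))   # → [1, 2, 3, 4, 5, 6, 8, 9, 10, 12]
```

The output matches what {Hamming 3} with {Touch X 10} produces above.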

Ch4, Ex16

The value of a variable is "needed" when some thread has to wait for it to be determined, or when the variable is determined. So clearly in the second case the thread won't wait and X will be computed.

Required Example:
X = {ALazyFunc}
X = 3

Ch4, Ex15

I'm putting down a regular and a completely lazy version of the iterative factorial.
declare Fact LazyFact
%Regular factorial
fun {Fact N}
fun {Iter X A}
if X>0 then
{Iter X-1 X*A}
else A end
end
in
{Iter N 1}
end
%Lazy Factorial
local
fun lazy {LazyMul X Y}
X*Y
end
fun lazy {LazySubstract X Y}
X-Y
end
in
fun lazy {LazyFact N}
fun lazy {Iter X A}
if X>0 then
{Iter {LazySubstract X 1} {LazyMul X A}}
else A end
end
in
{Iter N 1}
end
end

%Use the following code to find the elapsed time
for I in 1..5 do
N = 30000
R1 T1 T2 in
T1 = {Property.get 'time.total'}
% R1 = {Fact N} %COMMENT FOR LAZY FACT
% doing +/-1 to trigger the lazy fact, assuming it
% takes negligible time
R1 = {LazyFact N}+1-1 %COMMENT FOR REGULAR FACT
T2 = {Property.get 'time.total'}
{Show fact(n:N timeInMs:(T2-T1))}
end


Results with N=20000, 25000, 30000:

Regular Fact:
fact(n:20000 timeInMs:2038)
fact(n:20000 timeInMs:763)
fact(n:20000 timeInMs:1914)
fact(n:20000 timeInMs:919)
fact(n:20000 timeInMs:1976)
fact(n:25000 timeInMs:1976)
fact(n:25000 timeInMs:2553)
fact(n:25000 timeInMs:2459)
fact(n:25000 timeInMs:2505)
fact(n:25000 timeInMs:2522)
fact(n:30000 timeInMs:4373)
fact(n:30000 timeInMs:3160)
fact(n:30000 timeInMs:3066)
fact(n:30000 timeInMs:3922)
fact(n:30000 timeInMs:3034)

Lazy Fact:
fact(n:20000 timeInMs:2879)
fact(n:20000 timeInMs:1105)
fact(n:20000 timeInMs:2148)
fact(n:20000 timeInMs:2521)
fact(n:20000 timeInMs:2692)
fact(n:25000 timeInMs:3455)
fact(n:25000 timeInMs:2801)
fact(n:25000 timeInMs:3986)
fact(n:25000 timeInMs:3334)
fact(n:25000 timeInMs:3380)
fact(n:30000 timeInMs:5577)
fact(n:30000 timeInMs:5468)
fact(n:30000 timeInMs:5343)
fact(n:30000 timeInMs:5592)
fact(n:30000 timeInMs:5001)

Conclusion:
The lazy version is usually slower due to the thread overhead incurred by the implementation of laziness.
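The cost of laziness can be illustrated with a thunk-based Python sketch (a hypothetical analogue of the lazy version; Oz's by-need triggers are simulated with zero-argument closures): building the "result" is cheap, but forcing it walks a chain of N deferred closures, and that per-step indirection is the kind of overhead behind the slower lazy timings above.

```python
def lazy_fact(n):
    """Lazy iterative factorial: each multiplication is wrapped in a
    zero-argument closure, so nothing is computed until forced."""
    acc = lambda: 1                       # lazy initial accumulator
    while n > 0:
        # capture the current n and acc; nothing is multiplied yet
        acc = (lambda x, a: (lambda: x * a()))(n, acc)
        n -= 1
    return acc                            # an unevaluated thunk

f = lazy_fact(10)        # cheap: just builds the closure chain
assert f() == 3628800    # forcing the thunk does all the work
```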

Ch4, Ex14

This is iterative even without dataflow variables, due to the lazy nature and hence incremental computation of successive elements of the append result.
fun lazy {LAppend Xs Ys}
case Xs of nil then Ys
[] X|Xr then X|{LAppend Xr Ys}
end
end

When the first element is needed it returns X|<promised next elems>, and it calculates the next elements on a by-need basis.
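A Python generator gives a close analogue of this incremental behavior (hypothetical sketch, not the Oz code): it yields the head immediately and computes each later element only on demand, and it is iterative because each step suspends instead of building up a call stack.

```python
from itertools import islice

def lappend(xs, ys):
    """Lazy append: elements are produced one at a time, on demand."""
    for x in xs:
        yield x
    for y in ys:
        yield y

out = lappend([1, 2, 3], [4, 5])
print(list(islice(out, 2)))     # → [1, 2]  (only two elements computed)
```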

Ch4, Ex13

Execute Following and notice both are monolithic
declare
X = {Reverse1 [a b c]}
Y = {Reverse2 [a b c]}
{Browse X}
{Browse Y}
%First execute till here only
% then execute this
{Browse X.1}
% and then this
{Browse Y.1}

%Notice that whole [c b a] is displayed as soon as
%you display X.1 or Y.1

There is no difference in the results produced. Both are monolithic; that is, as soon as the first element of the reversed list is needed, the reverse of the whole input list is calculated.

The second version is not good in terms of execution efficiency, because making Rev lazy doesn't change anything but incurs the thread overhead of the laziness implementation.

Ch4, Ex12

- In the concurrent model there is practically no flow control: the producer can freely generate the stream far ahead of what Sum actually needs at any point in time. The lazy model is the other extreme, where the producer produces the next element only when Sum explicitly asks for it. In terms of bounded-buffer flow control, the first case has an infinite buffer length whereas the second has length 0.

- It wouldn't have any effect. It would behave the same as the lazy version.
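The bounded-buffer view can be sketched in Python (hypothetical analogue) with queue.Queue, whose maxsize plays the role of the buffer length; maxsize=0 means "unbounded" in Python, i.e. the eager concurrent extreme, while a small positive maxsize gives the in-between flow control.

```python
import threading, queue

def run(maxsize, n=100):
    """Producer/consumer through a bounded buffer; returns the sum."""
    buf = queue.Queue(maxsize)
    def producer():
        for i in range(n):
            buf.put(i)        # blocks when the buffer is full
        buf.put(None)         # end-of-stream marker
    threading.Thread(target=producer).start()
    total = 0
    while True:
        x = buf.get()         # blocks when the buffer is empty
        if x is None:
            return total
        total += x

# Same result whatever the buffer size; only the flow control differs.
assert run(0) == run(4) == sum(range(100))
```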

Ch4, Ex11

It happens because of the way the expression (X+Y)+Z is evaluated by the Oz implementation. It actually computes (X+Y)+Z by doing the following:
local T R in
T=X+Y
R=T+Z
{Browse R}
end

Now T=X+Y immediately needs X and Y, so {MakeX} and {MakeY} are called immediately, which in turn print x and y. The need for Z arises only when execution reaches R=T+Z (which is after ~6 secs, due to the explicit delay in MakeY), and hence z is printed only after ~6 secs. MakeZ then puts in an explicit delay of 9 secs, so R is only evaluated after 9+6=15 seconds.

The time taken can be measured with the following:
declare
fun lazy {MakeX} {Browse x} {Delay 3000} 1 end
fun lazy {MakeY} {Browse y} {Delay 6000} 2 end
fun lazy {MakeZ} {Browse z} {Delay 9000} 3 end
X={MakeX}
Y={MakeY}
Z={MakeZ}

local T1 T2 in
T1 = {Property.get 'time.total'}
%CHANGE HERE
%{Browse thread X+Y end +Z}
{Browse (X+Y)+Z}
T2 = {Property.get 'time.total'}
{Browse 'timeTaken:'#(T2-T1)}
end

For the non-concurrent version it takes about 15 secs, while the concurrent version takes around 9 secs (because the max of all the explicit delays is the 9 secs in MakeZ).

From the above results it may seem that we should always use concurrency for adding integers, but that is not true. Here the evaluation of X, Y and Z takes some time due to the explicit delays, but for simple integer values the evaluation time is practically 0; using the concurrent version would then unnecessarily burden the program with thread-management overhead and also spoil the program structure.

Ch4, Ex10

Well, this is implementation dependent. The current behavior is completely in compliance with the semantics given in section 4.5.1: a lazy function is translated to ByNeed. Each call to {Three} creates a new trigger, which is activated upon need; in turn a thread is created that executes the statements defined in the function Three. But, in principle, it is indeed possible to implement laziness so that the value of a lazy function is calculated only once, upon the first need, and stored for later use... as far as I know this is called "memoization".
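A memoizing by-need trigger can be sketched in a few lines of Python (hypothetical analogue of what such an implementation could do, not how Mozart actually works):

```python
calls = 0

def by_need(f):
    """Return a by-need trigger for f that caches its value, so the
    body runs only on the first force."""
    cache = []
    def force():
        if not cache:
            cache.append(f())     # first force: run the body once
        return cache[0]           # later forces: reuse the stored value
    return force

def three_body():
    global calls
    calls += 1
    return 3

three = by_need(three_body)
assert three() + three() == 6
assert calls == 1                 # the body ran only once
```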

Ch4, Ex9

declare GateMaker
fun {GateMaker F}
fun {$ Xs Ys}
fun {GateLoop Xs Ys}
case Xs#Ys of (X|Xr)#(Y|Yr) then
{F X Y}|{GateLoop Xr Yr}
end
end
in
thread {GateLoop Xs Ys} end
end
end

declare AndG OrG NandG NorG XorG
AndG ={GateMaker fun {$ X Y} X*Y end}
OrG ={GateMaker fun {$ X Y} X+Y-X*Y end}
NandG={GateMaker fun {$ X Y} 1-X*Y end}
NorG ={GateMaker fun {$ X Y} 1-X-Y+X*Y end}
XorG ={GateMaker fun {$ X Y} X+Y-2*X*Y end}

declare FullAdder
proc {FullAdder X Y Z ?C ?S}
K L M
in
K={AndG X Y}
L={AndG Y Z}
M={AndG X Z}
C={OrG K {OrG L M}}
S={XorG Z {XorG X Y}}
end

% A number is represented by x0|x1|x2|...xn|_
% where x0 is lowest order bit while xn is
% that of highest order
declare BAdder %Binary Adder
local
proc {SingleElemFullAdder X Y Z ?C ?S}
L M in
{FullAdder X|_ Y|_ Z|_ L M}
C = L.1
S = M.1
end
X Y
in
fun {BAdder Xs Ys}
fun {Adder Xs Ys C}
case Xs#Ys of (X|Xr)#(Y|Yr) then Cnext S in
{SingleElemFullAdder X Y C Cnext S}
S|{Adder Xr Yr Cnext}
end
end
in
thread {Adder Xs Ys 0} end
end
end

local
X=1|1|0|_ % 3
Y=0|0|1|_ % 4
in
{Browse 'BAdder Test: 3+4: 1|1|1|_'}
{Browse {BAdder X Y}}
end

local
X=1|1|1|0|_ % 7
Y=1|1|1|0|_ % 7
in
{Browse 'BAdder Test: 7+7: 0|1|1|1|_'}
{Browse {BAdder X Y}}
end
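The same ripple-carry adder can be cross-checked with an eager Python sketch (hypothetical analogue of BAdder, not the stream-based Oz code; numbers are little-endian bit lists, lowest-order bit first):

```python
def full_adder(x, y, z):
    """Return (carry, sum bit) for three input bits."""
    s = x + y + z
    return s // 2, s % 2

def badder(xs, ys):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    c, out = 0, []
    for x, y in zip(xs, ys):
        c, s = full_adder(x, y, c)
        out.append(s)
    return out

assert badder([1, 1, 0], [0, 0, 1]) == [1, 1, 1]           # 3 + 4 = 7
assert badder([1, 1, 1, 0], [1, 1, 1, 0]) == [0, 1, 1, 1]  # 7 + 7 = 14
```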

Ch4, Ex8

a) It'll block, as {F A} can't be evaluated unless A is bound; hence the call to {F A} blocks, which in turn blocks the if statement. Since Filter is blocked, Show is never executed, and hence nothing appears on the display.

b) On my computer, running it always shows "Out<optimized>". But it can actually also show 5|_<optimized>, depending upon how busy the computer is, which introduces a delay between the thread starting and the {Show Out} call. Filter is deterministic because any consumer using the stream produced by it will *always* get the *same* stream. The reason we see various values coming out of Show is probably not Filter but Show itself: Show is not deterministic.

c)This always shows 5|_<optimized> because of the explicit delay introduced.

d) This shows [5 6 4] because both threads finish running before {Show Out} is called, thanks to the explicit delay introduced.

Note: {Show arg} shows arg on buffer *Oz Emulator* and not in Oz Browser.

Ch4, Ex7

declare DGenerate DSum
fun {DGenerate N}
fun {$}
N|{DGenerate N+1}
end
end

fun {DSum F A Limit}
if Limit>0 then
X|Xr={F}
in
{DSum Xr A+X Limit-1}
else A end
end
local S in
S={DSum {DGenerate 0} 0 15}
{Browse S}
end

Ch4, Ex6

fun {Sum Xs A}
case {Skip1 Xs}
of X|Xr then {Sum Xr A+X}
[] nil then A
end
end
local Xs S in
thread Xs={Generate 0 150000} end %T1
thread S={Sum Xs 0} end %T2
{Browse S}
end

Let us say the thread scheduler does the following schedule:

T1 is executed for some time, and Xs=0|1|2|3|4|_ at the time of its preemption.
T2 is executed, and Skip1 skips 0,1,2,3..., which effectively means they are lost and not included in the sum. This goes on: every time T2 is executed again after T1's preemption, we lose everything but the last generated element of Xs, and that results in a calculated value smaller than the actual one.
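The undershoot can be sketched in Python (hypothetical; this assumes Skip1 keeps only the newest buffered element, as described above, and models each run of T1 as a burst of buffered elements):

```python
def sum_with_skip1(chunks):
    """chunks: list of bursts; each burst is the list of elements the
    producer managed to emit before the consumer ran again. Skip1
    discards everything in a burst except its newest element."""
    total = 0
    for buffered in chunks:
        total += buffered[-1]     # only the last element survives
    return total

# Producer emitted 0..5 in two bursts; only 2 and 5 reach the sum.
assert sum_with_skip1([[0, 1, 2], [3, 4, 5]]) == 7
assert sum(range(6)) == 15        # the sequential sum would be 15
```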

Ch4, Ex5

This works because "==", i.e. the entailment algorithm, will block until X is bound.

Ch4, Ex4

In the first code snippet, threads are created in the same order as their textual order in the source program.
The result (4) is always the same for both code fragments, because in the concurrent version the runtime execution order of the threads is implicitly restricted by the dataflow behavior of the variables and is the same as that of the sequential version.

Ch4, Ex3

The following code fragment can be used to measure the elapsed time:
for N in 1..40 do 
local T1 T2 R in
T1 = {Property.get 'time.total'}
R = {Fib N}
T2 = {Property.get 'time.total'}
{Show fib(num:N result:R timeTaken:(T2-T1))}
end
end


Results with Sequential definition of Fib:
fib(num:10 result:55 timeTaken:0)
fib(num:11 result:89 timeTaken:0)
fib(num:12 result:144 timeTaken:0)
fib(num:13 result:233 timeTaken:0)
fib(num:14 result:377 timeTaken:0)
fib(num:15 result:610 timeTaken:0)
fib(num:16 result:987 timeTaken:0)
fib(num:17 result:1597 timeTaken:0)
fib(num:18 result:2584 timeTaken:0)
fib(num:19 result:4181 timeTaken:16)
fib(num:20 result:6765 timeTaken:0)
fib(num:21 result:10946 timeTaken:0)
fib(num:22 result:17711 timeTaken:15)
fib(num:23 result:28657 timeTaken:16)
fib(num:24 result:46368 timeTaken:15)
fib(num:25 result:75025 timeTaken:63)
fib(num:26 result:121393 timeTaken:31)
fib(num:27 result:196418 timeTaken:31)
fib(num:28 result:317811 timeTaken:93)
fib(num:29 result:514229 timeTaken:1123)
fib(num:30 result:832040 timeTaken:233)
fib(num:31 result:1346269 timeTaken:296)
fib(num:32 result:2178309 timeTaken:515)
fib(num:33 result:3524578 timeTaken:1963)
fib(num:34 result:5702887 timeTaken:2447)
fib(num:35 result:9227465 timeTaken:3554)
fib(num:36 result:14930352 timeTaken:5579)
fib(num:37 result:24157817 timeTaken:9398)
fib(num:38 result:39088169 timeTaken:15413)
fib(num:39 result:63245986 timeTaken:26198)
fib(num:40 result:102334155 timeTaken:42329)

Results with Concurrent definition of Fib:
fib(num:10 result:55 timeTaken:0)
fib(num:11 result:89 timeTaken:0)
fib(num:12 result:144 timeTaken:16)
fib(num:13 result:233 timeTaken:0)
fib(num:14 result:377 timeTaken:0)
fib(num:15 result:610 timeTaken:0)
fib(num:16 result:987 timeTaken:0)
fib(num:17 result:1597 timeTaken:0)
fib(num:18 result:2584 timeTaken:0)
fib(num:19 result:4181 timeTaken:138)
fib(num:20 result:6765 timeTaken:0)
fib(num:21 result:10946 timeTaken:16)
fib(num:22 result:17711 timeTaken:30)
fib(num:23 result:28657 timeTaken:47)
fib(num:24 result:46368 timeTaken:76)
fib(num:25 result:75025 timeTaken:1139)
fib(num:26 result:121393 timeTaken:200)
fib(num:27 result:196418 timeTaken:324)
fib(num:28 result:317811 timeTaken:538)
fib(num:29 result:514229 timeTaken:1662)
fib(num:30 result:832040 timeTaken:2847)
fib(num:31 result:1346269 timeTaken:4417)
fib(num:32 result:2178309 timeTaken:25378)
fib(num:33 result:3524578 timeTaken:10957)
fib(num:34 result:5702887 timeTaken:17760)
*** Warning: Mozart: virtual memory exhausted.
.......It couldn't even go beyond 34.

Conclusion:
It is evident from the above results that as N grows, the concurrent version becomes slower and slower. The reason is the thread overhead; moreover, significantly more active memory is needed to run the concurrent version, and the GC needs to do a lot of work.

Number of Threads created by concurrent Fib:
Number of threads created by {Fib N}
= 1 + threads created by {Fib N-1} + threads created by {Fib N-2}

N = 3, threads = 1
N = 4, threads = 1 + created by {Fib 3} + created by {Fib 2}
= 1 + 1 + 0 = 2
similarly
N = 5, threads = 1 + 2 + 1 = 4
N = 6, threads = 1 + 4 + 2 = 7
N = 7, threads = 12
N = 8, threads = 20
.....
....
Number of threads created by {Fib N} = {Fib N} - 1
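The claim that {Fib N} creates {Fib N}-1 threads can be checked with a small Python sketch (hypothetical; it assumes the book's concurrent Fib, where every call with N >= 3 creates exactly one thread plus those of its two recursive calls):

```python
def fib(n):
    """The book's Fib: 1 for N <= 2, else Fib(N-1) + Fib(N-2)."""
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

def threads(n):
    """Threads created by the concurrent Fib, per the recurrence above."""
    return 0 if n <= 2 else 1 + threads(n - 1) + threads(n - 2)

for n in range(3, 15):
    assert threads(n) == fib(n) - 1   # matches the closed form
```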

Note: {Show arg} shows arg on buffer *Oz Emulator* and not in Oz Browser.

Ch4, Ex2(tbd)

tbd

Ch4, Ex1

    local B in
<S1> thread B=true end
<S2> thread B=false end
<S3> if B then {Browse yes} end
end

a) <S3> can complete only after either <S1> or <S2> has executed (otherwise it'll keep waiting for B to be bound).
Possible orders:

Order1: <S1><S3><S2>
Order2: <S2><S3><S1>
In the end it'll fail, as the last statement to run tries to rebind B; this can be checked by executing the following:
 local B in
thread B=true end
thread {Delay 500} B=false end
if B then {Browse yes} end
end

Order3: <S1><S2><S3>
Order4: <S2><S1><S3>
In both of these the program fails after the second statement executes, as it tries to bind B a second time; in fact <S3> is not even executed, as can be checked by running the following:
local B in
thread B=true end
thread B=false end
thread {Delay 500} if B then {Browse yes}
else {Browse No} end end
end

b) I don't know what the right answer is, but this one will at least never fail:
local B in
thread B=true end
if B then {Browse yes} end
end