*Post by Jon Harrop**Post by Richard Fateman**Post by Jon Harrop**Post by vippstar*Except that fons is useless while I can see use in cons - I can't imagine lisp without it. Perhaps you can do that though, so please explain, or link me to an article of yours if you've already done so: how do you imagine lisp without cons cells?

Easy: just treat cons as the special case of a 2-element array. That is essentially what Mathematica does and it works just fine.

1. Mathematica, last I looked, uses the name List for something that most of us would call a vector. Prepending a value to the front of a List L -- the O(1) operation in Lisp called cons -- takes O(n) time in Mathematica, where n is the length of L. This is not "fine" to many people. Another operation Mathematica performs when creating such a new data structure is to go through all the elements to update them -- to see if they depend on some value that has changed recently. This is not "fine" to many people either.

That is neither relevant nor accurate. We are talking about 2-element arrays, which are obviously O(1).

Well, 4-element arrays would also be O(1).
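The distinction being argued can be sketched in Python (an illustration only, not anyone's actual implementation): a cons-style pair makes prepend O(1) because only one new cell is allocated and the tail is shared, while prepending to a flat array copies every element.

```python
# A cons cell modeled as a 2-element structure: prepend allocates one cell.
def cons(car, cdr):
    return (car, cdr)  # the new list shares all of cdr's cells: O(1)

# An array-backed "List" in the Mathematica style: prepend copies.
def array_prepend(x, arr):
    return [x] + arr   # builds a fresh array of length n+1: O(n)

xs = None
for i in range(3):
    xs = cons(i, xs)           # O(1) each time
print(xs)                      # (2, (1, (0, None)))

ys = []
for i in range(3):
    ys = array_prepend(i, ys)  # O(len(ys)) each time
print(ys)                      # [2, 1, 0]
```

Both loops build the same logical sequence; the difference is only in how much existing structure gets copied per step.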

As for the timing of Prepend, consider this, in Mathematica:

b1 = Table[a[i], {i, 1, 10}];
b2 = Table[a[i], {i, 1, 100000}];
b3 = Table[a[i], {i, 1, 1000000}];

Timing[Prepend[b1, zero];] --> {0., Null}
(* that's the time in seconds; the answer is suppressed [not printed] because of the ";" *)
Timing[Prepend[b2, zero];] --> {0.032, Null}
Timing[Prepend[b3, zero];] --> {0.203, Null}
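The same scaling can be seen without a stopwatch by counting element copies (a Python sketch of the array model, not Mathematica's internals): prepending to an array of length n writes n+1 elements, so the cost grows linearly with n, just as the timings above suggest.

```python
def prepend_count(x, arr):
    """Prepend by copying; return (new_array, elements_written)."""
    new = [x] + arr        # every element of the result is freshly written
    return new, len(new)

for n in (10, 100000, 1000000):
    _, copied = prepend_count(0, list(range(n)))
    print(n, copied)  # 11, 100001, 1000001 -- work grows with n
```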

The relevance is that Lists in Mathematica are arrays, and arrays are, generally speaking, unsuitable for making conses, because every time you construct a CONS-like object Mathematica checks to see if it has to re-evaluate the CAR and CDR, so to speak. Recursively.

So this is not, as I said, just fine. Making a CONS cell in Mathematica out of this structure would be a very bad idea.

*Post by Jon Harrop*If you want to build sequences efficiently in Mathematica, use Sow and Reap as the documentation describes.

Sow and Reap were introduced in Mathematica version 5.0. I see no reason for you to think that this is efficient compared to some other method (unspecified!) of producing the same sequence.

*Post by Jon Harrop*The AppendTo and PrependTo functions do not rewrite their sequence inputs as you claim.

I'm not sure what you mean by "rewrite". They construct entirely new sequences.

*Post by Jon Harrop*If Mathematica were "not fine to many people" it would not have orders of magnitude more users than Lisp or anything you have ever written (given away for free, even).

Hm, 1 order of magnitude is, in common usage, a factor of ten. "Orders of magnitude" would, I suppose, be a factor of 100 or 1000 or more.

According to sourceforge (http://sourceforge.net/projects/maxima/files/), the latest version (April 2009) of Windows Maxima (a program written in Lisp, a descendant of Macsyma, and of which I wrote parts) has been downloaded 27,691 times directly. Who knows how many times it has been transferred. Add another 2,500 or so for Linux and Mac OS X. The previous version (Dec. 2008) was downloaded 36,000 times or so.

Now I don't know how one counts "users" -- Mathematica could count "paid-up licenses", but do you think Mathematica has 100 x 30,000 paid-up licenses? That's 3 million. If each licensee pays, on average, say $500, that means Mathematica earns $1.5 billion/year. Does this jibe with your understanding of that company?

That's just Maxima compared to Mathematica.

While I have nothing to do with the implementation of Common Lisp called CLISP, I note that sourceforge says it has been downloaded 317,000 times. This could, of course, be one user downloading it 317,000 times, but I doubt it. Then there are also Scheme implementations, and programs that link Lisp to graphics, Java, LAPACK, ...

And those are just a few of the free programs; add to that the commercial sales, AutoCAD, and the possible claim that anyone who uses Emacs is a Lisp user.

Jon, when you make statements that are so easily disproven, you give trolling a bad name.

*Post by Jon Harrop**Post by Richard Fateman*As for storing a cons cell as a 2-element array, this is a case of economy of storage and operation (for the cons) vs. redundant (larger) storage and slower general operation (for the array).

No, it is the case of sacrificing the bit twiddling of cons cells for the power of a production-quality VM like the JVM or CLR.

Are you selling toothpaste? All new, brighter teeth, organic? My guess is that you are somehow objecting to the use of tagged pointers, which is often a really good implementation idea.

*Post by Jon Harrop*The benefits those VMs offer in the context of parallelism alone far outweigh the benefits of bit-twiddled cons cells in any of today's Lisp implementations.

I can't imagine why you think that the overhead of a virtual machine is somehow better (what, faster?) than, or incompatible with, tagged pointers. Or why tagged pointers are incompatible with parallelism.

Perhaps you would turn your keen eye to comparing the benefits of real memory management and a quality garbage collector (as in some Lisps) to the state of the art in JVM or similar memory management?

*Post by Jon Harrop**Post by Richard Fateman**Post by Jon Harrop*(JH) I believe the original motivation for separating cons out was performance but, as we know now, that just led to slightly less cripplingly-bad performance.

(RJF) You are of course free to believe anything at all, but your attestation as to the "original motivation" is not particularly credible. Are you talking about the roots of Lisp, e.g. Lisp 1.5 and earlier? Are you claiming that you have a better instruction sequence for accessing elements 0 and 1 of an array than CAR and CDR on the IBM 709X?

In other words, it was done for performance exactly as I said.

Uh, I guess you are free to claim that you agree with me.

It is presumptuous of you to claim that I agree with you. The point you seem determined to miss is that CAR and CDR were not somehow second-choice abstractions compared to (say) arrays, some sacrifice in the name of efficiency. Indeed, the concept of "ordered pair" is in some sense an extraordinarily powerful tool for building anything, and this was realized fairly early. AND, as it happens, it is easy/fast to implement on most machines.
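The claim that the ordered pair suffices to build anything can be illustrated with a small sketch (Python standing in for any language; this is not historical Lisp code): given only cons, car, and cdr, one can assemble proper lists, trees, and so on.

```python
# The three primitives: an ordered pair and its two accessors.
def cons(a, d): return (a, d)
def car(p): return p[0]
def cdr(p): return p[1]

# A proper list is a chain of pairs ending in None (the empty list).
def from_list(items):
    out = None
    for x in reversed(items):
        out = cons(x, out)
    return out

# A binary tree is just pairs of pairs.
tree = cons(cons(1, 2), cons(3, 4))
print(car(car(tree)))        # 1
print(cdr(cdr(tree)))        # 4
print(from_list([1, 2, 3]))  # (1, (2, (3, None)))
```

Nothing beyond the pair itself is needed to represent these structures, which is the sense in which the ordered pair is a universal building block.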

RJF