modified 2003-12-20.

Musings on the geometry of thought-space: Consider *all* knowledge (known and unknown). What sort of framework does it have ? How many dimensions are needed ?

Pardon the construction. This is an extremely rough draft. Suggestions and links to relevant pages are welcome.

This includes:

related pages:

news and discussion forum


disciplines which study everything

Being a Buckminster Fuller fan 3d_design.html#synergetics , I (DAV) like to consider ``everything'' a lot. When I look at all the fields of study available at a university, I wonder ``What is the complete list of all fields of study ? How do I learn about the interactions between each field of study ?''.

Sometimes I see a tree diagram, with extremely specialized classes at the leaves like ``VLSI design'' or ``electrical machines'' where we learn details of how people currently use some bit of knowledge. Moving towards the root, both of these classes have the same prerequisite ``Intro to Electrical Science''. In turn, that requires ``General Physics''.

          |  |   |  |
          |  |    \  \
         /   |     \  ...
      ...   ...     |
                    Intro to
                    Electrical Science
                    | | | |
                    | | | |
                   /  | |  \
                  /   |  \  \     
                 /     \  \  ...
                /       \  \
               /         \  ...
              /           \
             |            |
             electrical   VLSI     
             machines     design

As we move towards the root of the tree (``meta''), we get to classes which cover more general-purpose sorts of topics, and (at least in areas like VLSI design which is changing rapidly), even when leaf classes change rapidly from year to year (because new tools and ideas are being developed), the ``meta'' classes tend to change more and more slowly. (VLSI didn't even exist until the late 1900s, yet a lot of the material in physics and calculus has not changed much since its development by Sir Isaac Newton (1642-1727).)

So what is at the root ?

I think I was reading Douglas Hofstadter when I had the epiphany that what sits at the root is subjective.

There are a surprisingly large number of fields of study that can be considered the root:

-- David Cary

related web pages:

natural languages I want to learn

Here are the top N natural languages I would really like to learn. [FIXME: just list their names and a pointer to where I moved the rest of the information to ?]

natural languages I want to learn:

related links:

"constructed languages"

The Sapir-Whorf-Korzybski Hypothesis, and related links.


(archive version; current version at ) Somewhere I thought I read about a language designed to be spoken very rapidly. Maybe it was called "speedtalk"; I can't remember. Elsewhere I read about machines designed to "speed up" any spoken language, while avoiding the "chipmunk effect". [FIXME: I've lost the links -- please help me find more information]. If teachers could communicate 4 times as fast, then 4 years of college classes could be compressed into 1 year. (Even if it took the same amount of time to *understand* something, so it still required 4 years, perhaps you could squeeze a week's worth of classes into 2 days, and then spend the rest of the week *doing* useful but routine stuff that left your mind free to ponder what you heard.) (see data_compression.html#source_compression for basically the same idea applied to human-to-computer communication rather than human-to-human communication ). Other minor benefits: books require less paper to print (see , )

alphabetic languages

constructed languages similar to German, English, Hebrew, etc. in that they can be written down with a small alphabet, communicated verbally and sequentially, etc.

See also modified_english for smaller modifications to English.

sign languages

see also pictorial languages

pictorial languages

[FIXME: this is old. Moving to ]

ways of communicating that involve 2D relationships (non-sequential).


``post-literate'' visual languages that do not depend on an alphabet. A few of these require animation, which is simply not possible with a static printed book.

[FIXME: there's one item here that claims to be a 3-D language ... perhaps I should rename this "nonlinear language" ? If I did that, would I have to include hypertext ?]

These are not merely a way of transcribing some spoken (or even sequential audible) language. ``Symbols are meaning referenced (can be interpreted without reference to sounds or words.)''

Minor modifications to English

Ideas for making minor modifications to English. Perhaps English could be improved incrementally.

Reasons to keep English just the way it is.

Ideas for making minor changes to English to "improve" it.

simplified English


There are 2 very different kinds of translation: translating from one "natural language" to another (difficult), and translating from one "artificial language" to another (easier, especially when the language was designed to be translated).

There are a few translation-related items that don't fit in either category -- translation from natural language to machine language (difficult) and vice versa (almost trivial), and tools that can be used for both kinds of translation.

Help me find more general-purpose translation tools.

Related local pages:

general translation:

natural language translation


(see also



Since I am fluent in the C programming language, but not-so-fluent in the Fortran programming language, sometimes I use Fortran-to-C translation tools.


More and more people are using C as a kind of ``assembly language'' that other languages compile into. [mention this on c_programming ?]

[FIXME: crosslink with #compiler ?] [FIXME: ... perhaps I should break this section up and disperse the fragments to the appropriate places. ]


how to make HP48, Windows, Atari, etc. software run under Linux or other boxes.

See also video_game.html#emulators [FIXME: merge ?] hardware_david_uses.html#hp48

Deutsch sites

I don't know much German, so I try to absorb more by reading German (".de") web pages.

See also international mailing addresses .

See also online natural language translation tools #natural_translation .


a few links about the smallest pieces of a written language, letters (listed in the alphabet).

At the letter level, there's a big split between typography computer_graphics_tools.html#fonts which can be (more or less) easily read and written by someone trained to read standard Roman letters, and the alphabets I list here, which need a bit more effort and in some cases specialized tools.

Why on Earth would you go to this extra effort when you already are familiar with the Roman alphabet ? Why hassle with anything other than traditional Roman letters unless there's some real benefit ? There are some very good reasons:

... [FIXME: where to put these links ?]

(I originally started out here talking about simple letters, but then I rambled on until I started describing the general design principle of ``levels'' ... I don't think I can really talk about such an abstract idea without some sort of concrete example. [FIXME: should I separate out the idea of "levels" anyway, then just point to it here where I discuss letters ?] )

Once I thought that the more words and ideas one had to express, the more letters/symbols one needed to express them with. I was surprised to learn in grade school that all possible words in English can be represented by a fixed number of symbols strung in long sequences (the 26 uppercase and 26 lowercase letters ... English text also uses a few other symbols ). But I still thought that I'd have to learn more symbols every time I learned a new language (Spanish, German, Mathematics, etc.). (: Multilingual Downhill :)

I was surprised to learn that all possible words in all possible languages can be represented by a fixed number of symbols strung in long sequences.

How many symbols do I need ?

I was even more surprised to learn I only need 2 symbols. ( Bits are Bits )

In calligraphy I got my first clue that the exact shape of the letters was not very important. Simple substitution ciphers (an entertaining way to start learning about the ASCII code and Morse Code and data compression) showed the particular shape of the letters is completely unimportant. If a completely different shape were substituted for any particular letter of the alphabet in all the books of the world and in the letter-recognition part of the brain, no one would notice (with a couple of exceptions: deeply intertwined calligraphy, and expressions like ``U shaped'' and ``T shaped'' and ``O shaped''; we'd have to find some sort of replacement like ``circular'' or ``zigzag'' something ... any other exceptions ?).

This was my first glimpse of the concept of ``levels'' or ``layers'' in the sense of the OSI seven-layer model. The idea is that one thing (in this case, English words) builds on top of another thing (the Roman alphabet). In some sense, the Roman alphabet is like a tool that can be used to build many different things -- an English word, a French word, ein deutsches Wort, etc. But on the other hand, any of these other alphabets can be substituted.

While the Roman alphabet could be considered ``analogous'' to the Chinese alphabet, the alphabets I list here have a much closer relationship. It goes beyond putting them in the same category. Various fruits have many things in common, and while I might have roughly the same reaction (happy happy joy joy) to an apple pie, a peach pie, a cherry pie, or a pumpkin pie, substituting other fruits -- cucumbers, eggplants, tomatoes, etc. -- is just not going to cut it for me.

But these alphabets go beyond being in the same category -- if they're used as an internal representation and translated at both ends, no one on the outside ever notices when one is swapped for another.


I think the cool thing about ``levels'' is that instead of having one huge monolithic structure that you have to change all-or-nothing, you have small pieces that you can pop out and replace -- hopefully making an improvement, and when it's not an improvement you can pop the original back in without much hassle and with a little more respect and understanding. If you get an entire new set of silverware (-: or in my case plasticware and stainless-steel-ware :-), you don't need to upgrade your dishes, your table, or your food to work with it. You can try out the new set for a while and see how you like it. If you decide you liked the old set better after all, you can switch back easily. In fact you can try out just 1 new knife or fork or spoon at a time. But there are limits -- you can't try out just a new fork *prong* by itself; the entire fork prongs + handle is a monolithic all-or-nothing structure.

If you truly want to understand something, try to change it. -- Kurt Lewin

In architecture and in computer programming we have the concept of ``scaffolding'' or ``framework'' -- you build something that you really don't want (which at first seems weird), but it works just enough that you can start swapping out levels and get incremental improvement. (related to , )

Often you have several levels. The next higher level, ``words'' ... we can swap out different words (especially nouns) ... different ways of spelling the same words ... we're still using the same alphabet at the lower level, and the sentence (at the higher level) means the same thing (with a few exceptions -- puns and onomatopoeia and anagrams spin_dictionary.html#anagrams ). (exceptions are called )

It seems that this concept of ``levels'' occurs in many fields of study:

From a coding perspective, all of these alphabets are ``simple substitution ciphers'', a 1 to 1 mapping of the letters of the alphabet onto different shapes (sounds, bumps, etc). While they may seem mysterious and ``impossible to read'' at first glance, they're all easily decoded by any competent amateur cryptologist, given enough source text.
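To make the ``simple substitution cipher'' claim concrete, here is a minimal sketch (the shuffled alphabet and message are made up for illustration): a 1 to 1 letter mapping is perfectly invertible, which is why swapping the shapes changes nothing at the word level.

```python
import random
import string

# A simple substitution cipher: a 1-to-1 mapping of each letter onto a
# different symbol (here just a shuffled alphabet; the seed is arbitrary).
random.seed(42)
shuffled = random.sample(string.ascii_lowercase, 26)
encode_map = dict(zip(string.ascii_lowercase, shuffled))
decode_map = {v: k for k, v in encode_map.items()}

def encipher(text):
    return "".join(encode_map.get(c, c) for c in text)

def decipher(text):
    return "".join(decode_map.get(c, c) for c in text)

message = "the particular shape of the letters is completely unimportant"
assert decipher(encipher(message)) == message  # perfectly invertible
```

(Actually *cracking* such a cipher without the map is the frequency-analysis exercise mentioned above; this sketch only shows why no information is lost in the swap.)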


"Why are the letters of the alphabet in their current order (alphabetical order, rather than, say, Futhark order) ?" For most purposes (reading English text) the ordering of the letters makes absolutely no difference; the rest of the time (sorting names, words, etc.) the ordering is arbitrary as long as we include all the letters, so we might as well use the standard alphabetic order. DAV: is there a better order ? ... consonants, then vowels ...

I briefly touch on the idea of adding or subtracting letters from the alphabet here ...

phonetic alphabet

A phonetic alphabet is useful when you are talking over a noisy phone / radio and the exact spelling of a word ( name, call sign, etc.) is important.

Expressing letters in terms of word-sounds. Names people give individual letters (and other written symbols).

Too many people give the letters ``n'' and ``m'' and ``p'' and ``b'' and ``z'' and ``c'' names that I can't tell apart. (Read the previous sentence aloud to someone over the phone...).

Greek alphabet

[FIXME: move to greek_alphabet.html ] [FIXME: delete ../mirror/greece/Greek_Alphabet.htm once I'm sure all the information there is redundant. ]

See also greece.html

improved alphabets

What I'm calling ``improved alphabets'' (is there a better term I could use ?) are superior to the Roman alphabet in some quantifiable area, although the Roman alphabet could be substituted.

technical improvements -- given the characteristics of the tools you use to write letters on paper, these other alphabets let you write text much more quickly or much more compactly. compact representation -- packing more words onto a sheet of paper. [FIXME: Mark Twain quote ... ``typewriter ... an awful pile of words ...'']

alphabets for languages other than English

[FIXME: should I move this over to typography computer_graphics_tools.html#fonts ? ]

See si_metric_faq.html#iso8859 for a little info on ISO-8859-1, how to encode all the letters I use into that encoding, and some of my related rants and raves.


someone, somewhere, thought this alphabet ``looked cool'', but (in DAV's opinion) it is not objectively superior to the Roman alphabet.

Here I list alphabets with typography that's not directly readable by someone used to the Roman alphabet. It's obviously artificial and seems to be intelligent, but the mystery about what it means is part of the emotion the artist wants to convey.

Some of them were designed to handle standard English text (26 letters), others were designed for fictional characters (science fiction and fantasy) speaking fictional languages.

See typography computer_graphics_tools.html#fonts for fonts that are directly readable by humans used to the Roman alphabet, and also standard fonts for human languages.

geometry of thought-space: dimensionality as a learning aid, a framework for *all* knowledge (known and unknown).


From: Anders Sandberg 
Subject: Re: >H geometry of thought-space

Transhuman Mailing List

On Thu, 18 Apr 1996, Anton Sherwood wrote:

> I often wonder what sort of shape an information space has.  How many
> dimensions?  Flat or curved?  For the whole of human knowledge there
> may be no answers to these questions, but for some restricted areas it
> ought to be not-too-hard.
> Example: musical styles.  Badfinger is near the Beatles; [etc]

In this case you could try to form a distance metric, letting artists be
nodes linked by labelled arcs. Using some kind of graph displaying
software (there are some rather impressive systems out there) you could
then try to display this graph with minimum distortion or some other
optimization. However, I'm not sure there is a low-dimensional embedding
of music.

Another possibility would be to plot ideas in a high-dimensional space
and use principal component analysis to project it into a low number of dimensions.

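Anders's PCA suggestion can be sketched in a few lines (the data here is random, purely for illustration): center the points, take the singular value decomposition, and keep the coordinates along the top few axes.

```python
import numpy as np

# Hypothetical sketch: 50 "ideas" as points in a 10-dimensional feature
# space, projected onto their top 3 principal components.
rng = np.random.default_rng(0)
ideas = rng.normal(size=(50, 10))

centered = ideas - ideas.mean(axis=0)      # PCA requires centered data
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ Vt[:3].T           # coordinates on the top 3 axes

print(projection.shape)  # (50, 3)
```

The singular values in S come out in descending order, so the first rows of Vt are the directions of greatest variance -- exactly the "most important features for the domain" discussed below.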
In general I conjecture that information spaces can be represented as
(statistically) fractal graphs. Hmm, opens up interesting possibilities:
"Abstract: We have found that the field of social psychology has a
semantic embedding dimension of 33.3+-2.2, which is significantly lower
than psychology in general (65+-4.2 [1]). This implies that epistemic
lacunarity measure theory is not applicable."

Anders Sandberg                                      Towards Ascension!
GCS/M/S/O d++ -p+ c++++ !l u+ e++ m++ s+/+ n--- h+/* f+ g+ w++ t+ r+ !y
From:  Anton Sherwood
Subject: >H geometry of thought-space

Transhuman Mailing List

: Take lysergic acid diethylamide instead, it's much cheaper.

Are you kidding?  I can't do math _at all_ when I'm polluted,
let alone visualize curved spaces in higher dimensions.

A video-game in curved 3-space would, I imagine, help physicists
and mathematicians gain useful intuitions about such spaces.

But this strays from what I was talking about ...

If the "natural" metric space for some concept-field (my example
was musical styles) is too high-dimensional to represent visually,
it can still be useful.  If nothing else, you can project it into
3-space.  (Choose the remaining axes wisely.  I have an idea about
that, too.)

Anton Sherwood   *\\*   +1 415 267 0685   *\\*
From:  Alexander Chislenko
Subject: Re:  >H geometry of thought-space

Transhuman Mailing List

 Anton Sherwood <> wrote:

>If the "natural" metric space for some concept-field (my example
>was musical styles) is too high-dimensional to represent visually,
>it can still be useful.  If nothing else, you can project it into
>3-space.  (Choose the remaining axes wisely.  I have an idea about
>that, too.)

  Please share more of your ideas!

  I would assume that in the ideal case you find orthogonal (unrelated)
features, and build [groups of] axes starting from the most important
features for the domain.  Then projection of your models on sub-spaces
represent generalized abstractions (i.e., you ignore the [particular]
features represented by the other axes).

  An interesting thing is to draw domains in the idea-space and watch
their dynamics.  A domain would usually look like a balloon or a curved
worm.  Domains, such as music styles in music-space, or communication
methods in the space represented by (Speed_of_Delivery * Number_of_Recipients)
axes, may come close to each other, and even overlap (usually in projections).
The domains come into competition with each other where they meet, and may
freely develop into yet unoccupied areas of space.
They often squeeze and push each other.    If you make a series of animations
out of your observations of domain positions, you will see a little movie
with a bunch of little worms crawling and struggling on some [concept] terrain.
My father used to do this for biological species, though without any computers
the results were just sequences of paper drawings.


  Physical space gave birth to biological systems, but its structure is
hardly optimal for functional development.   Throughout history, we
see intensifying efforts to rebuild the effective system architectures
to more closely correspond to the functional space.
The key to success here is to [effectively] bring related systems closer
to each other.   The first attempts in this direction started in early
ecosystems, with related objects physically getting closer to each other.
Then, more flexible connections were discovered, and things (animals) started
moving around.  Humans enhanced these abilities by both clustering themselves
better and speeding up physical relocations with transportation.  Shortly
after this, humans started actually altering the effective metrics of
physical space by building transportation infrastructure - bridges, roads,
etc. to provide faster connections between functionally related domains.
With the advent of communications (from making faces at each other to
sending letters, to e-mail) it became possible to move away from the
necessity to drag objects to each other for every interaction.  Now, the
underlying structure of physical space may often be ignored, and
you may concentrate your interactions with - and representations of -
reality  increasingly in the terms of functional space.

  Ultimately, I think, this process will come to a situation where the
high-level processes effectively transcend to the idea-space, and the
physical space and matter - the hardware of the Universe - will be dealt
with by the low-level layers whose sole purpose would be to provide the
desired functional connections at the lowest possible cost.

| Alexander Chislenko | | Cambridge, MA |
| Home page:     |
Date: Wed, 15 May 1996 00:00:07 -0400 (EDT)
Subject: >H Digest

From: Mitchell Porter 
Subject: Re: >H Implications of geometric memespace

A space of "signifieds", on the other hand, is, I think, what people
have in mind when they talk about spaces of concepts, memes, and so
forth. (You could call a signifier space a namespace, and a space of
"signifieds" a memespace.) This is an ontological concept: such a space
is a set of possible entities or possible states of affairs. It would
have to include, say, the notion of "a world where elephants invented
cars" or "a planned meeting on the Moon in 2006 AD", but not
"grzzxyllph", because that doesn't mean anything in any language. (Yet.)



automatically organizing by interest


self-organizing geometry of Usenet

[FIXME: summarize]

original idea by Anton Sherwood:


I like the idea Anton Sherwood proposes.

A few minor flaws:

I suspect there *will* be lots of asymmetry. Everyone will want to read what Emperor Dogbert and the Usenet Oracle have to say, but Emperor Dogbert won't read what *everyone* else has written (not enough hours in the day).

Already some people have different "modes" -- I hear that some people really enjoy the fiction of Carl Sagan, but really hate his non-fiction. I guess you could handle this ad-hoc by giving Sagan 2 points in N-space, which will quickly diverge. Perhaps you can think of a less ad-hoc mechanism.

I personally think it would be a Bad Thing for a person to read lots of stuff from group A but never send A any feedback, and then post lots of stuff to group C but ignore anything C ever says. Of course, this happens all the time -- Bernoulli *never* sent Pythagoras any mail, and (since he is dead) Bernoulli ignores everything I say. I like the way your mechanism gives us some sort of feedback on the sort of people who read our messages, not in terms of the readers themselves, but in terms of other writers who are "attracted" to the same readers.

Perhaps you could have 2 separate universes. First, the "universe of writers", with a star for each poster, but their positions determined by the readers -- if a reader really likes A, B, and C, then those stars feel an attractive force towards each other (or towards their center of mass, which is equivalent); if that same reader really hates X, Y, and Z, then those authors will be repelled -- but in which direction ? Not away from each other, since it may be that they have a single thing in common that that reader finds repulsive, which should bind them together. Second, the "universe of readers", with a star for each reader...
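A crude simulation of the attraction rule (all positions and "likes" here are invented, purely for illustration): each reader pulls the authors they like toward that group's center of mass, so liked groups clump together.

```python
import numpy as np

# Toy version of the "universe of writers": authors are points in N-space;
# each reader pulls the authors they like toward that group's center of mass.
rng = np.random.default_rng(1)
authors = rng.normal(size=(5, 3))       # 5 authors in 3-space
likes = [(0, 1, 2), (2, 3, 4)]          # two readers' liked-author sets

step = 0.1
for _ in range(100):
    for group in likes:
        center = authors[list(group)].mean(axis=0)
        for a in group:
            authors[a] += step * (center - authors[a])  # attraction only

# liked groups have clumped: authors 0 and 1 end up very close together
print(np.linalg.norm(authors[0] - authors[1]))
```

This sketch leaves out repulsion entirely, for the reason noted above: it is not obvious which direction "repelled" should point.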

On a positive note:

Perhaps the attraction/repulsion mechanism will cause spammers to be pushed out towards the outer edges of N-space, giving new meaning to the phrase "push them over the edge". Yes, let's bring back sea monsters swimming the oceans near the Waterfall at the Edge of the World.

Date: Mon, 22 Apr 1996 00:00:06 -0400 (EDT)
From: transhuman at
Subject: >H Digest


From:  (Anton Sherwood)
Subject: >H geometry of thought-space

Transhuman Mailing List

Anton Sherwood  wrote:
: >If the "natural" metric space for some concept-field (my example
: >was musical styles) is too high-dimensional to represent visually,
: >it can still be useful.  If nothing else, you can project it into
: >3-space.  (Choose the remaining axes wisely.  I have an idea about
: >that, too.)

Sasha wrote:
:   Please share more of your ideas!
:   I would assume that in the ideal case you find orthogonal (unrelated)
: features, and build [groups of] axes starting from the most important
: features for the domain.  Then projection of your models on sub-spaces
: represent generalized abstractions (i.e., you ignore the [particular]
: features represented by the other axes).

Yeah, that's what I was thinking, except that I am not sure what you
mean by "features".  The coordinate axes are arbitrary, because a priori
we don't know what features exist, let alone which are significant.

One approach is to find the 2- or 3-plane of least squares.  But we
can't use the least-squares technique we learned in our youth, because
there is no distinction between dependent and independent variables!
Or rather, part of the task is to find the dependent variable(s), if any.

Another useful space is that of people with shared interests.  Sasha and
I have discussed this before, as a way to filter Usenet.  Each user is
assigned a point in N-space (initially random) which attracts or repels
others according to how interesting they find that user's postings.
(Requires a new header; what shall we call it?  X-Sasher?)  The resulting
vector functions as a semi-automatic killfile: your newsreader computes
the dot product of your vector and the author's vector, and shows you the
article only if it exceeds the threshold you set.  (You may want your
threshold to include randomness.)  With time, assuming that for active
posters "interest" is roughly symmetric (i.e. few pairs exist such that
X loves Y's posts but Y finds X tedious), I expect the users will settle
into a moderately clumped distribution isotropic on a hypersphere of no
more than eight dimensions.
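The threshold test Anton describes is just a dot product comparison; here is a toy sketch (the vectors and threshold are invented, purely for illustration):

```python
import numpy as np

# Sketch of the semi-automatic killfile: show an article only if the dot
# product of the reader's vector and the author's exceeds a threshold.
reader = np.array([0.9, -0.2, 0.4])
authors = {
    "alice": np.array([0.8, 0.1, 0.3]),   # points roughly the same way
    "bob":   np.array([-0.7, 0.9, -0.5]), # points roughly the opposite way
}
threshold = 0.5

def show_article(author):
    return float(reader @ authors[author]) > threshold

print(show_article("alice"))  # True  (dot product 0.82)
print(show_article("bob"))    # False (dot product -1.01)
```

Adding randomness to the threshold, as suggested above, keeps a reader from settling permanently into one neighborhood.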

The beta version of this scheme, I suggest, would use 16 dimensions.
After a year, we look at the results, derive a new coordinate system
and publish a conversion formula.  The new axes will be listed in order
of their significance (variance).  At revision time, we may decide to
change the number of dimensions.  The user's vector should be encoded
as a string of no more than ~64 characters; the tradeoff is between
resolution and number of dimensions.  There are 94 usable characters
(`!' through `~'), which I suppose is plenty for one dimension.

An article about net.personalities can then be illustrated by a projection
(in the first two or three dimensions) of this universe.  It'll look like
the night sky, where the brightest stars represent the most voluminous
posters.  (If you see a Milky Way, you did not choose the most useful

(Spammers and other killfile-bait appear as quasars surrounded by blackness.
Unfortunately, a new noise-source, appearing at a random position, causes
a sudden shift in the map as neighbors flee.  Can this effect be avoided?)

My 3-projection of this N-space may be very different from yours, because
I want one of the axes to be "interest to me" - i.e. dot-product with my
own vector.  The two next most significant dimensions would then show
the diversity of my interests.

: An interesting thing is to draw domains in the idea-space and watch their
: dynamics.  A domain would usually look like a balloon or a curved worm.
: Domains, such as music styles in music-space, or communication methods
: in the space represented by (Speed_of_Delivery * Number_of_Recipients)
: axes, may come close to each other, and even overlap (usually in
: projections).  The domains come into competition with each other where
: they meet, and may freely develop into yet unoccupied areas of space.
: They often squeeze and push each other.    If you make a series of
: animations out of your observations of domain positions, you will see
: a little movie with a bunch of little worms crawling and struggling on
: some [concept] terrain.
: My father used to do this for biological species, though without any
: computers the results were just sequences of paper drawings.

I can imagine this for species, where the axes may be length, mass,
litter size, length of beak, size of eyes and so on, different aspects
of the changing niche a species occupies.  What data did your father use?

Anton Sherwood   *\\*   +1 415 267 0685   *\\*

Date: Wed, 24 Apr 1996 00:00:04 -0400 (EDT)
Subject: >H Digest


From:  Anton Sherwood
Subject: >H geometry of thought-space

Transhuman Mailing List

Ruminating on my scheme for a self-organizing geometry of Usenet, I wrote:

: (Spammers and other killfile-bait appear as quasars surrounded by blackness.
: Unfortunately, a new noise-source, appearing at a random position, causes
: a sudden shift in the map as neighbors flee.  Can this effect be avoided?)

Yes!  And without the central choreographer that someone suggested.

Each user's X-Interest-Vector begins as +-epsilon in each dimension.
That gives the universe a "seed" asymmetry; if everyone started at zero,
which way would they move?  As users selectively step away from each
other, eventually they approach the limiting hypersphere; at that
distance, the core's asymmetry is negligible.  Since spammers don't
read, they never stray from the core, and their effect on the Sphere
of Interests is therefore symmetric.  (Same goes for flamers without
well-defined interests, who wander about aimlessly.)

I observed that X-Interest-Vector could be encoded as a string
of characters between `!' and `~' inclusive.  That range can be
interpreted as representing odd integers from -93 to +93, ensuring
no zeroes!  A new user's header, then, may show
        X-Interest-Vector: OOOOPPOPPPOOPPOP
and an old hand's header may show
        X-Interest-Vector: SYA>QA[MA>[PGRA>

Definition: the "principal dimension" is a unit vector P which
maximizes the variance of dot products of P with posters' Interest
Vectors; informally, the axis along which posters tend most to sit.
If the N-dimensional Sphere of Interests is projected along its
principal dimension, we can then look for the principal dimension
of that (N-1)-plane, and so on; if I mention the k principal
dimensions, I mean the first k mutually perpendicular vectors
found in this way.  (Statisticians, is there another way to choose
a principal k-plane?)

Conjecture: three or four principal dimensions are well-defined;
further principal dimensions are well-defined for limited sectors of
the Sphere of Interests (regions with diameter less than half that of
the whole Sphere), but no better than noise for the Sphere as a whole.
This because once the broad groups have staked out their turf - a
spontaneous order as ever was - subgroups divide on axes relevant only
to their nearest neighbors.  The division of guitars into acoustic
and electric does not show up in the fishing groups.

Which means that if the rule of attraction/repulsion in the Sphere
is symmetric with respect to the axes, every axis will be distinctive
in some sectors; the "natural" dimensionality of the Sphere will seem
to be whatever it's allowed.  Can that be avoided?  Can the users be
constrained to do most of their varying in the first few dimensions,
spreading out in the last dimensions only as necessary?  Yes, I think
so, if the migration rule has a slight bias for movement in the first
dimensions; I'm still thinking about such rules.

(Expect a long post to AltInst.)

Anton Sherwood   *\\*   +1 415 267 0685   *\\*


Anders wrote:
>>One could for example create a high-dimensional vector space where each
>>coordinate represents occurrences of a certain keyword or group of keywords
>>("upload-", "cryo-", "nano-" etc), and then reduce its dimensionality to
>>three using principal component analysis. And there is a kind of diagram
>>where the user specifies some search terms and places them in space, and the
>>database creates a 3D model where nodes organize themselves depending on
>>their distances.

Weren't we just discussing something like this over on the Transhuman Mailing List ?
(I *still* haven't gotten anything from that list in a month now
-- apparently I missed the singularity.)
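Anders's keyword-vector scheme can be sketched in a few lines of Python with NumPy; the keyword groups and counts below are invented for illustration, and the principal component analysis is done via SVD of the centered count matrix:

```python
import numpy as np

# Rows are posts, columns count occurrences of a keyword group:
# "upload-", "cryo-", "nano-", "space-", "AI-" (invented data).
X = np.array([
    [5, 1, 0, 0, 2],
    [4, 2, 1, 0, 1],
    [0, 6, 1, 1, 0],
    [0, 1, 7, 0, 3],
    [1, 0, 0, 8, 0],
], dtype=float)

# PCA: center the data, then project each post onto the top three
# right singular vectors (the principal axes).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
coords3d = Xc @ Vt[:3].T   # each post becomes a point in 3-space
```

Distances between rows of `coords3d` could then drive the kind of self-organizing 3D layout Anders describes.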

Remi wrote:
>my great problem...
>cryonics is close to uploading, and close to space migration. But uploading
>is not close to space migration (I don't know if this is true. Just an example).
>So, I must make uploading close to cryonics, but far from space migration. I am
>not at all a mathematician , but I suspect a "4th dimension trick" I cannot
>resolve. I thought of using "teleports" or LOD Nodes (changing the appearance of
>the object as you approach it ,so new connections appear), but it  would be very
>difficult to materialize, and above all, a reader would not obtain a good idea
>of the relationships with just a glance to the world. And I would love to
>preserve legibility. I don't want to transform Omega into an Escherian Doom Game !
>If you have some ideas which can help me get out of this pit, I'd like to
>listen to you (but remember I'm very dumb in mathematics !)

Unfortunately, adding normal Euclidean dimensions will not help this problem
(contrary to most explanations of "hyperspace" you may hear).
If uploading is close to cryonics, and cryonics is close to space migration,
then uploading will be close to space migration no matter how many dimensions you add.
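This is just the triangle inequality, which extra Euclidean dimensions never remove; a quick numerical check (all coordinates invented):

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two points of equal dimension."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

rng = random.Random(0)
worst = 0.0
for dims in (2, 3, 10, 100):
    cryo = [rng.random() for _ in range(dims)]
    # place "uploading" and "space migration" near "cryonics" ...
    upload = [c + rng.uniform(-0.5, 0.5) for c in cryo]
    space = [c + rng.uniform(-0.5, 0.5) for c in cryo]
    # ... their separation can never exceed the sum of their
    # distances to cryonics, no matter how many dimensions:
    ratio = dist(upload, space) / (dist(upload, cryo) + dist(cryo, space))
    worst = max(worst, ratio)
# worst stays <= 1.0 in every dimensionality tried
```

So the only escape routes really are non-metric tricks like Remi's teleports and LOD nodes, or simply accepting the constraint.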

It looks like the Omega database will turn into a tightly interwoven, self-referential document.
Lots and lots of links to other parts of itself.

I've seen some Web sites where (as Remi says) it is difficult for a reader
to understand what is available on the site and how its pages relate to each other.
I just click on links and get randomly teleported to other pages, quickly getting lost.

It looks like Moss is doing a good job organizing the Omega to avoid this problem.

On the other hand, an Escherian Doom Game sounds kinda cool ...


A good question is never answered.  It is not a bolt to be tightened
into place but a seed to be planted and to bear more seed toward the
hope of greening the landscape of idea.
                -- John Ciardi
Date: Tue, 16 Apr 1996 00:00:08 -0400 (EDT)
From: transhuman at
Subject: >H Digest


From: Anders Sandberg 
Subject: >H Forwarded Intro

Transhuman Mailing List

---------- Forwarded message ----------
Date: Mon, 15 Apr 1996 18:18:18 +0200 (MET DST)
From: Eugene Leitl 

      _( )_ ascension                      (are there any
        ^               \ ^ /               official transhumanism
        |               <-*->               glyphs, btw?)
        _               / v \  expansion

Hi co-netizens and aspiring transhuman larvae,

I am a new subscriber -- please have some patience, as I haven't
had enough time to wade through the FAQ and, since our machines
had to be transiently shut down, have browsed through only a
few posts from the archive. Even from the scant few poster names
I had time to glimpse, however, it is obvious that right from
the start this list has acted as a powerful attractor for some
of the most articulate and shrewd thinkers on the net I ever had
the pleasure to encounter: the incredible ubiquitous Mr. Anders
Sandberg (all hail to the lampreys ;), then one of the very few
anchors of sanity on nanotech, Dr. Rich Artym, and the not
entirely unknown Sasha Chislenko (privet seml'akam). I am
looking forward to meeting other, similarly distinguished
netizens which doubtlessly should be swarming this list in

Preliminaries done, I'd like to proceed to the beating of
diverse, doubtlessly extremely dead horses (it's probably all in
the FAQ anyway).

1. so why this list?
2. can we do something?
3. should we do something?
4. how exactly shall we proceed ?

As to 1), I am actually amazed to find any list members at all,
since the transhumanism meme appears to be even more obscure
than the cryonics one. It seems we constitute an infinitesimally
tiny fraction of the, alas, already quite exquisite netizen
community. While this is very good for ego stroking, it
obviously foretells a grim future for implementation attempts.

So why this list -- to spread the transhumanism meme? Look at
CryoNet: in all the years it has existed, it has hardly managed
to make cryonics a mass movement. In fact, after some past brief
flowering, the major players have all but ceased contributing
beyond token posts -- in their partially explicitly voiced
opinion, everything of value has been said already -- and thus
posting has become a waste of time. Sci.cryonics, the cesspool
it at times is, has doubtlessly contributed much more to the
cryonics pandemic memefection than CryoNet ever could. (However,
the utterly moronic mass media have, in the nick of time,
accomplished much more than even that particularly notorious

If, on the other hand, this list is meant as an instrument to
grind out the implementation roadmap and (hear, hear) to effect
a task force assignment, it can potentially become extremely
valuable. I fervently hope it is meant as the latter. Even a
mere virtual think tank can be constructive enough. (Several
guys can sample a larger fraction of idea space than one guy
alone, no?)

We have arrived at point 2). Even if this obviously sounds like
terrible hubris, I think we, scant as our numbers are, actually
_can_ do something. While the nanotech list, with its
predominant "gee whiz, wouldn't it be great, if ..." attitude
vividly shows us how things should _not_ be done, think about
such space flight pioneers as Ziolkovsky/GIRD members and that
Goddard guy. However obscure initially, philosophies/schools of
thought can with time surely draw enough followers to become
sufficiently mainstream, and thus can be regarded as a
potentially transmutational force.

But what are the outline specs of the Omega project?

I think I can speak for a substantial fraction of
transhumanists (flame me if I'm wrong) when I say that we have
set out to construct the Ultimate Intelligence, UI, or, God. At
the very least we want to travel towards the Omega singularity
in persona,
converging towards it in Far Future -- in fact _to make Omega
happen_. Personally, I don't care too much whether it is the
infinite-by-definition Tiplerian Omega or just something
actually limited but still very, very puissant. From this side
of the long Ascension path, the mundanely exponential and the
holy hyperbolic look all about the same. Imo, even a single planetary
or a Dyson sphere intelligence ecology is surely sufficiently
complex to be far beyond our wildest collective dreams.

3). But _should_ we do it? Obviously, the proverbial
five-digit-IQ Superhuman Intelligence, whether space- or
earth-based, can plan rings around us, and, should it ever find
our feeble self-protection attempts threatening to its
well-being, boojum us out of existence while we are still in
bewildered disarray as to what is actually happening. At very
best, we will first watch the SI making its escape (probably
with a noticeable blue shift ;), then spend some few quiet
decades (or a century) marvelling at those really spectacular
things happening in the skies, then, more likely with a whimper
than with a bang, cease to exist, as the Earth is disassembled
to become Dyson sphere raw material. Or into Something Utterly
Unimaginable (but, to us, equally terminal). Or we just remain
there unscathed, to puzzle over enigmatic Singularity artefact
relics peppered all over the system. Or whatever.

If, on the other hand, we can build an earth-tied,
only-modestly-superhuman intelligence, with a
dead-man-switch-triggered nuke pointed at its head and a solemn
(fingers in plain view) promise to let it go once it has given
us the means to upload and ascend (certainly a trifle to an SI),
I think we might have a chance.

No risk, no fun. So, hell, why not go fer it?

4.) How should we proceed?

"- Oh, Mr. Risk Capital Vendor, Mrs. General Public - we just
intend to build God and, by the way, if we happen to succeed in
this humble venture of ours it will almost certainly destroy the
world as you know it, quite en passant" -- while this is pretty
decent eschatology it makes probably totally inadequate
marketing. The average human default perception of transhumanism
is already either a) that it's pure SciFi at best or b) that it's
dire, dangerous lunacy at worst. Neither of these is likely to provoke
constructive response (or $$s) but is sure to draw funny stares
or even outright flames.

But enough of that. Before we set out, there is another thing
I'd like to know in advance:

Why is there this ominous Silence in the Skies?

- either intelligence is very, very rare and we're
  alone in the visible region of the universe
  (but why? Physics, and thus secondary/tertiary planetary
  nebula chemistry, is quite isotropic; according to recent
  insights, life nucleation events are now thought to be
  pretty common; and co-evolution, while certainly a nonlinear
  process, produces a cerebralization-index growth trend as
  evident from the fossil record -- so then why rare, dammit?)

- for some reason, intelligence must screw it up in the end
  and doesn't survive up to the space leap (either Nemesi keep
  continuously zebraeing the iridium content of sediment layers or
  the alien idiots manage either to starve or to nuke themselves
  out of existence -- (hmm, that one seems to be a really
  substantial point if one takes a look at what has been
  happening in the last decades))

- if one has a weakness for paranoia, then - hush - we're
  not alone, alien SI's stand all around our cage in the
  intergalactic zoo (if one cares for really _heavy_
  paranoia then we/the visible part of the universe
  are the experiment in progress, likely to be terminated
  at any time ("...and the first thing you see in heaven
  is a score list?")).

- alien SI's are out there but we mistake their
  metabolitic signatures for natural phenomena
  (brown dwarfs with non-blackbody spectra? now and then
  funny gamma flashes coming from everywhere? stellar jets?)

- SI's are all there but keep mum for some reason/are
  not expansive/don't tread on early life forms (choose
  the one you like most)

- Vinge was right after all, the Singularity is just around
  the corner, the civilization zooms through final development
  stages real fast and transcends/ceases to interact
  with this universe in the end phase, in the aftermath
  being a very short-duration process which leaves very
  few if any long-range detectable large-scale artefacts.

- any major alternative left out? ideas?

Anyways, we can probably glimpse what we are in for by
scrutinizing the skies long and hard enough. It is a really sad
fact that SETI has been discontinued due to lack of finance --
but at least Hubble is still operational. It may turn out yet
that Dyson stellar spheres constitute a major fraction of Dark
Matter mass ;). Since the shells probably won't be entirely
optically dense, their spectra should look funny enough to raise

So, again, how should we proceed?

What are our doorways into summer?

The worthy proponents of nanotechnology, holding banners with
Drexler's counterfeit on them, will surely claim that the
all-purpose nanoassembler is just round the corner, and, once we
manage to build it, everything else becomes utterly trivial. I
don't think it is that easy. The most trivial counterargument
would be that we still don't know whether strong (''Drexlerian
diamondoid mechanosynthesis'') nanotechnology is feasible. Oh,
doubtlessly one could do computation by diamondoid rod logic,
that much one can already see even from the preliminary
numerical simulations. But what if, e.g., for some weird reason
physics forbids the mechanosynthesis reaction set from being
sufficiently all-purpose? (We have virtually no experimental data
on that, and diamondoid-tip quantum chemistry calculations can be
described as unreliable (and that's a euphemism)). Or what if the
tip positioning precision is insufficient to construct a perfect
diamondoid lattice and leads to progressively deteriorating
lattice quality? A single nanoassembler one somehow manages
to bootstrap but which can't autoreplicate is obviously not
exactly awe-inspiring. If the strong nanotech Omega route is
open to us, however, matters _do_ become much easier. Since
nanoassemblers, as envisioned by Drexler, prefer to frolic in the
hard-vacuum low-luminance habitats, they are virtually
tailor-made for the seeding of the planetoid belt and Oort
cloud. As their replication cycles are much shorter than those of
larger systems, their advantages are obvious.

Thus not to exhaustively sample the Drexlerian Omega route is
obviously the greatest folly of them all.

But to manipulate matter is not the same as to manipulate
information. Nanoassemblers offer us the chance of building
extremely cheap maspar hardware but do not tell us what a truly
intelligent system (I think we all agree that the brittle
logic-inference-engine type of AI has been dead for at least a
decade now, and good riddance to it) should look like, exactly.
True enough, an array of nanodisassemblers hooked to an imager
is a good approximation of the perfect microscope archetype and
offers an excellent empowerment tool for finding out exactly how
a mammalian brain performs its magic. But we still have
to unravel the quite nontrivial complexity essentially
unassisted, to improve upon it, and then to give orders to the
nanocritters to fashion the better mousetrap we have eventually
envisioned, which, given time, can build an iterative sequence
of even better ones, all the way up to Omega.

What I think you all can agree with is the fact that SI is
essentially all about computation. Once you have/are an SI, you
can have all the other goodies (like nanotech and uploading and
and and) virtually for free. And that the noble quest towards SI
probably must lead us through the narrow paths of neuroscience,
which literally _bristle_ with complexity.

Then steps forth the grim-faced phalanx of strong AI proponents,
chanting "Hans, Hans!" and "Marvin, Maarvin!". Their insignia
are the firm belief that, at heart, the connectionist AI guys
are, essentially, a bunch of really incompetent blokes, and,
that the biological neurons keep maintaining these highly
unusual, extremely elongated shapes and very high connectivity
at such great energetical and
time-spent-in-evolutionary-optimization expense merely to show
off what weak nanotech is capable of. On the face side of their
shields in gaudy colors the log plot of computational
performance against time ("see? see? it's straight!") is

That on the _other_ side of their shields the memory access
latency trend plot is asymptotic is, of course, of absolutely no
practical relevance at all. Of course _all_ applications,
_particularly_ the connectionist ones, _must_ all have high code
and data locality so that cache hierarchy can go on pulling its
parlor trick of faking high memory bandwidth. Of course it's
common knowledge that evolution is really lousy at optimization
and hence all that neuronal circuitry tangle is collapsible to a
few lines of code. Of course the low connectivity of retina,
which is heavily optimized to do the first stages of visual
processing only, is absolutely typical and hence one has the
right to make such really grand sweeping generalizations which
hold all the way up to the neocortex. Of course the super-GHz
switching rate of transistors can run rings around the synapse's
piffling sub-kHz rate. Of course.

If, with good reason (both theoretical and practical),
one starts to suspect that all that spectacular connectivity is
really necessary, and then estimates which architecture and how
much silicon real estate an integer automaton network die
implementation might need, one might encounter some very,
very unpopular truths -- but truths nevertheless. That, among
other things, silicon lithography can only produce flat
structures, whose average path length is sadly lacking in
comparison to the neurons' noisy 3d grid interconnects. That random
defect hit probability, the essential good die yield
determinant, allows only a very limited achievable die
complexity, which WSI constrains even further. That fanout is
_very_ limited and the 10k average connectivity of the cortex
makes any chip designer gasp from envy. That the synapse is a
submicron structure which is a _working_ instance of soft
nanotechnology and operates with a hitherto unsuspected
precision. That the time multiplexing we are forced to use in
silicon can easily absorb three orders of magnitude (and then
some more) switching latency advantage, thus rendering
significantly superrealtime SI what it ought to be, namely a
dream. (This "feature" list is far from being complete, btw).

Because then one might arrive at a human equivalent estimate
which takes about 2 WSI MWafers, needs the volume of a small
house if packed _really_ dense and will dissipate some 100
MWatts of power. ("..oh yessir, we take this river here for
coolant and put that nuclear power plant on the opposite side of
thar them hill yonder."). Needless to say, this is not current
off-the-shelf technology. One cannot help but grudgingly admit
that the human brain is a pretty awesome piece of neatly
engineered circuitry, particularly if it comes to its
volume/power specs.
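The memory-latency argument above pairs with a back-of-envelope traffic count; the figures below are the usual order-of-magnitude textbook values, not taken from the post:

```python
neurons = 1e11   # human brain neuron count, order of magnitude
synapses = 1e4   # the average connectivity cited above
rate_hz = 1e2    # generous effective firing rate (sub-kHz)

# Traffic a brain simulator must route through its memory hierarchy:
events_per_second = neurons * synapses * rate_hz   # 1e17 events/s
```

At that scale it is the memory traffic, not the transistor switching rate, that sets the budget, which is exactly why the asymptotic latency trend matters more than the straight performance plot.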

Do you still think current, or future silicon photolithography
is a viable Omega implementation platform? For some weird reason
I can't recall I don't, anymore.

Of course we could always launch a von-Neumann probe to the Moon
to turn a large fraction of its surface into a sea of solar
cells and silicon chips in what is probably a relatively short
period. While one might argue whether the 80's 100 t estimate
was realistic, with our superior IT technology we probably could
do it today. In fact, I don't have a clue as to why such a
project was ever abandoned. (Fidipur has absorbed the means,
probably. :( ) Should, due to some cosmic joke, strong nanotech
prove nonviable, turning the Moon into a large computer might
become an economically sane short-term option.

Since this has obviously degenerated into a substantial rant
sans detectable structure I might as well go on with reviewing
the more conservative transhuman technologies anyway. Again,
comments and flames alike are welcome.

As there was some discussion concerning whether the SIs will
occupy the same ecological niche as we do and hence must compete
for energy/matter resources -- I don't think this is a very
substantial point. The SI's true habitat is space -- because
there is plenty of elbow room and energy both, no messy
atmosphere to foul up industrial processes, no tiny planetary
surfaces which have _weather_ (uck) and _night_ and the steep
gravity gradient _plus_ atmosphere (sometimes I wonder how we
can live down here at all ;). Surely, after the entire planetoid
belt and the small atmosphereless satellites plus Mercury have
been metabolized, if there is still demand for the matter
resource they will come to disassemble the inner planets. If the
Singularity hypothesis is true, however, they should have
switched substrates by then. If there is no built-in escape exit
from this spacetime, we might as well grow fatalistic enough not
to yell and kick when our cannibal children come home to roast.

But back to conservative transhuman technologies. I trust
anybody here is aware that solar sailing is the best way to
travel in high-luminance areas of the inner solar system (in
fact, NASA quite recently had to abandon a planned
solar-sail-driven comet exploration mission due to the much
lamented financial cuts). The elegance of solar sailing is that
it needs no onboard reaction mass; the momentum is supplied
entirely by Sol. By tilting the sail to and fro one can either climb to a
higher or descend to a lower orbit -- with not a single gram of
fuel spent -- and pretty fast, too. I haven't seen/made any
calculations, but it should amount to truly impressive values (I
think I heard somebody say the sail pays for its own mass
during the first day of travel). A graphite cable fractal mesh,
protected from hard-UV photolysis by a thin metal coat, can be
a good backbone for the very flimsy, almost transparent
sail reflector fashioned from nickel-iron or aluminum.
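The fuel-free orbit climbing can be quantified with the textbook radiation-pressure formula; a sketch in Python (the 10 g/m^2 sail areal density is my assumption, not from the post):

```python
SOLAR_CONSTANT = 1361.0     # W/m^2 at 1 AU
C = 299_792_458.0           # speed of light, m/s

# Pressure on a perfectly reflecting sail facing the Sun:
pressure = 2 * SOLAR_CONSTANT / C        # ~9.1e-6 N/m^2

areal_density = 0.010                    # kg/m^2, assumed sail mass
accel = pressure / areal_density         # ~9.1e-4 m/s^2

# Tiny, but continuous and free -- integrated over one day:
delta_v_per_day = accel * 86_400         # roughly 78 m/s per day
```

Tens of m/s of delta-v per day, with not a single gram of fuel spent, is indeed enough to walk an orbit up or down in weeks.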

How, and where to manufacture these, however?

We are truly lucky to have the Moon at our doorstep. It has what
we hereabouts call a very hard vacuum for atmosphere and a
pretty low escape velocity. While lacking in volatiles, its
surface is made from aluminosilicates (in patches, with a pretty
high titanium content), probably has large nickel-iron deposits
from past meteoritic impacts (at least mascon data seem to
indicate this), and should also have some few carbonaceous
chondrite-containing areas. I believe a high-resolution radar
map (which ought to be able to penetrate about a meter into the
topsoil and hence give valuable remote-sense prospection data)
has recently been made (not a NASA mission, I think. Anybody
knows whether the data is online?). It has low gravity which
allows very flimsy support structures. Vacuum has no wind drag,
no oxidation, no erosion but by micrometeorite impact, so one
can use very thin foils for mirrors and solar cells. Moon has a
very long day (and, alas, an equally long night) with a 1.3
kW/m^2 luminance (the solar constant). By using relatively large
but extremely lightweight parabolic/spherical mirrors made from
micrometer-thin metal foils we can have up to 5.7 k deg C
process heat; this can be used to produce glass from soil (for
matrix and support structure) or to melt nickel-iron (for foils
and support structure, also for electric conductors), or,
simply, for fractional distillation of soil. Thus process heat
is very cheap up there. Electrical power is equally cheap if one
uses locally produced silicon photovoltaic cells. A highly
straightforward way to separate pure elements from oxides is to
use molten-oxide (see process heat) electrolysis and to allow
oxygen to escape into vacuum. An even more elegant, if
noticeably less-throughput approach would be to use preparative
mass spectroscopy, e.g. the minimalistic quadrupole MS
technique, which should scale up very well to quantitative
separation (there was, I think, an MS isotope separation group
at the Manhattan project, but they had settled for
diffusional/centrifuge isotope enrichment instead).
Semiconductor-grade silicon should be pretty easy to prepare,
since both time and energy are plentiful. Circuitry structures
and even macroscopic structures can be deposited by ion and
atomic beam writers. Welding and vaporisation for thin-layer
deposition can be done with electron beams. Etc. etc. etc. An
autoreplicating factory can grow pretty fast during the day but
has to shut down for the long night. Due to vacuum and low
escape velocity using electromagnetically operated mass drivers
one can easily hurl small packets of processed material or
machinery (e.g. autarkic, solar-sail-equipped autoreplicator
probes) into space. All this can be done by extremely stupid,
autonomous or semiautonomous systems, requiring no more external
gardening effort than a patch of jungle (but better watch it
pretty closely and hold a nuke or two ready should a runaway
autoreplicator emerge).
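The 5.7 k deg process-heat figure is consistent with the Stefan-Boltzmann limit for a solar concentrator; a sketch (the 40,000x concentration factor is my assumption, chosen near the thermodynamic ceiling of roughly the solar surface temperature):

```python
SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR_CONSTANT = 1361.0  # W/m^2 at 1 AU

def focal_temperature(concentration):
    """Equilibrium temperature of a black absorber at the focus of
    a mirror concentrating sunlight by the given factor, radiating
    its losses away into the lunar vacuum."""
    return (concentration * SOLAR_CONSTANT / SIGMA) ** 0.25

# ~40,000x concentration approaches the cited ~5.7 k deg; the hard
# ceiling is the temperature of the solar surface itself (~5800 K).
t = focal_temperature(40_000)
```

Even far below that ceiling, a few hundredfold concentration from a foil mirror is ample for melting glass and nickel-iron.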

So the Moon is obviously an excellent, instrumental base camp.

Given this extremely impressive, while still highly incomplete,
list, one wonders why government funding for Moon
exploration has been virtually discontinued, so that we have to
wait for privately funded space programmes. But then, sanity has
never been a typical trait of upper-echelon politics. I can only
hope that this lamentable lack of foresight has not closed one
essential Omega route time window, be it due to growing social
instabilities, a nuclear winter, a stray asteroid impact, or
countless other thinkable events. Species mass extinctions have
occurred regularly in the past; even now we are the
precipitators of the current one. Given our already
well-documented liability to countless stupidities, our
rudimentary technology does by no means make us immune from mass
extinctions. In fact, as a species we seem to be too stupid
even for this low level of technology -- as good a reason for
launching the SI project as any one can think of.

I think we can all safely agree that even with this crude
technology (just solar cells and processing elements,
line-of-sight or fibre laser links) the planetary intelligence
the size of the Moon can be a true SI. If we can build it (and
we obviously potentially can, the necessary effort threshold
being substantially lowered with each passing decade), then we
have succeeded in bootstrapping the first step towards Omega.

Now I'd like to begin a long detour into maspar connectionist IT
techniques & other goodies. Imo, if applied synergistically,
they present a sufficiently strong method for Omega first-stage
bootstrap implementation. Drop me a line if you feel this is

(As one very true proverb has it: if the sole content of one's
toolbox is a hammer, then everything in the world looks very
like a nail. In contrast to Ron ''opponent processing'' Blue I
think I can provide at least several hammers of diverse weights
and shapes.)

Modern organic synthesis often utilizes a technique known as
retrosynthesis. In it, the target molecule is considered as a
graph, to be constructed from feedstock graphs (which are
readily available) by a finite series of discrete synthetic
reaction steps. Obviously, though chemists often cloud it in
arcane acronyms and circumspect imagery, this is a standard
multidimensional optimization problem. Using an empirically
available ''cost'' metric we must choose one of the countless
discrete-jump trajectories leading through structure space. (Of
course, the fact that most transitions are forbidden due to the
limitations of the synthetic art does not exactly contribute to
the simplicity.) However, in a neat trick, the problem is reversed
like a glove: instead of the synthetic reactions, we use the
formally exact opposite, the inverse transform, which maps
target to its simpler precursor(s). So instead of sampling the
large subspace of available feedstocks, we have only
to examine the paths leading to the nearest neighbours of the
structure space lattice. Since the total path length is limited
to low two-digit numbers due to end product yield considerations
and the atomic step cost varies, but not very ruggedly so, we
can prune a lot of obviously fruitless paths quite early.
Diverse enhancements upon this scheme exist, e.g. by GA
optimization, but you surely got the salient point.
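The backward search can be sketched as a toy program; all molecule names, step costs, and inverse transforms below are invented, and the real cost metric and pruning are of course far richer:

```python
import heapq

# Purchasable feedstocks and invented inverse transforms:
FEEDSTOCKS = {"A", "B", "C"}
INVERSE_TRANSFORMS = {          # product -> [(step cost, precursors)]
    "T": [(2, ("X", "Y")), (5, ("Z",))],
    "X": [(1, ("A", "B"))],
    "Y": [(1, ("C",))],
    "Z": [(4, ("A",))],
}

def cheapest_route(target, max_steps=20):
    """Dijkstra over sets of unfinished intermediates, pruning any
    path longer than max_steps (the yield consideration above)."""
    todo0 = frozenset({target}) - FEEDSTOCKS
    heap = [(0, 0, todo0)]       # (total cost, steps, unfinished set)
    best = {}
    while heap:
        cost, steps, todo = heapq.heappop(heap)
        if not todo:
            return cost          # everything reduced to feedstocks
        if steps >= max_steps or best.get(todo, float("inf")) < cost:
            continue             # prune long or dominated paths
        best[todo] = cost
        mol = next(iter(todo))   # expand one unfinished intermediate
        for step_cost, precursors in INVERSE_TRANSFORMS.get(mol, []):
            nxt = frozenset((todo - {mol}) | (set(precursors) - FEEDSTOCKS))
            heapq.heappush(heap, (cost + step_cost, steps + 1, nxt))
    return None

# T <- X + Y (cost 2); X <- A + B (1); Y <- C (1): total cost 4,
# beating the alternative route via Z (5 + 4 = 9).
```

Working backward keeps the branching factor at the handful of inverse transforms applicable to each intermediate, instead of the entire feedstock subspace.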

We can consider the first bootstrap stage of Omega our target.
In fact our task is even easier, since a lot of targets meet the
member-of-SI-categorization-bucket criterion. So if you can
choose a simpler target, the more power to you. Once an SI has
significantly progressed beyond the HI threshold, it can easily
do all the rest on its own.

Let's choose the primitive planetary SI as synthetic target.
Since the location, large-scale geometry and available energy
flow as well as the first-stage implementation technology are
all known in advance, we obviously have a lot of boundary
condition constraints.

Let's sum up the things we can know in advance. Whether a
planetary, or a Dyson SI, the fundamental symmetry is a
spherical shell or segments of it. The former is imposed by
stress load forces arising from gravitation and available power,
the latter by orbit geometry and available power (shading off by
other modules). The power is provided by the entropy gradient
(''hot spot in the cool sky'') photovoltaics. The planetary SI
consists of elementary Christaller-flavoured hexagonal cells,
populated by identical modules, which communicate by direct
line-of-sight diode laser and optical fibre links. The stellar
SI, which suffers much more from communication latency and
concurrently has to rule out the fragmentation avalanche
catastrophe, thus has a lot of possible orbits and orbit
population kinetics excluded. The computation paradigm is
connectionism, particularly edge-of-chaos-rule integer automaton
network connectionism. Since we obviously have to utilize an
optical packet-switched high-connectivity network of
orthogonally aligned nodes, an instance of grassrouting (a
finest-grain local-knowledge routing), specifically hypergrid,
instantly comes to mind. Mapping of a high-dimensional hypergrid
to a 2d spherical shell is trivial, since a deformed orthogonal
lattice maps to a sphere with only four small ''blind
spot'' addressing singularities, where the mesh is too heavily
distorted to be of any value.
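Grassrouting, i.e. finest-grain local-knowledge routing, can be sketched on a plain k-dimensional torus in Python; the toy below corrects one coordinate per hop, and everything beyond the local-knowledge idea (the torus wrap, the names) is my own simplification:

```python
def next_hop(here, dest, sizes):
    """Local-knowledge routing on a k-dimensional torus: a node
    knows only its own coordinates and the packet's destination,
    and corrects the first mismatched coordinate, going the short
    way around the ring."""
    for axis, (h, d, n) in enumerate(zip(here, dest, sizes)):
        if h != d:
            step = 1 if (d - h) % n <= n // 2 else -1
            hop = list(here)
            hop[axis] = (h + step) % n
            return tuple(hop)
    return here  # already at the destination

def route(src, dest, sizes):
    path = [src]
    while path[-1] != dest:
        path.append(next_hop(path[-1], dest, sizes))
    return path

# On a 4x4x4 torus, (0,0,0) -> (3,2,1) takes 1 + 2 + 1 = 4 hops;
# the first axis wraps around in the -1 direction.
hops = len(route((0, 0, 0), (3, 2, 1), (4, 4, 4))) - 1
```

No node ever needs a global routing table, which is the property that lets the scheme scale to a planet-sized mesh.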

What does the individual module look like? It has a photovoltaics
area to provide power for both computation and structure
anabolics. It needs a mass separation unit, e.g. by quadrupole
MS and a construction unit, which can be a parallel beam writer.
It must have a rudimentary local-sensing and local matter
transport infrastructure (bucket brigade if thru-cell), which
can be based on emergent behaviour autonomous agents for
simplicity's sake, which gives us a nice ''ant queen''/''ant
nest'' metaphor for each hexagonal cell ''nest''. While the ant
queen structure is pretty large and complex (mostly bits,
though), the individual ant agent can be a spindly multipod
which sucks juice off photocell power grid by means of motive
''antennae''. Since it needs headroom to crawl about (or should
we let it crawl over the cell pavement?), the photocell and the
associated power grid infrastructure have to be suspended by a
spun-glass tensile structure or Fuller polygonal spacefill
structures, with rods made from nickel-iron or soil-glass.

I could have gone on endlessly, but it is clear enough that one
can arrive at arbitrary level of refinement. (Should the concept
draw enough interest, I can provide detail specs ad nauseam.)

Since, once seeded and after having spread over the Moon
surface, the system has a high (both local and global) matter
throughput and a generic macro and microscopic assembler
capability, we (or the infant SI itself) can upgrade it
overnight (uh, better make it a Moon day ;). Hence, if we can
implement this particular synthetic target, we have virtually
achieved our omegalomonomaniac goal.


-- Eugene

P.S. coming next (if not too off-topic): system statespace
kinetics, mapping integer networks to molecular cellular
automata machines, hypergrid routing details & Co.


see computer_architecture.html#replication

Date: 18 Oct 96 04:41:14 EDT
From: Remi Sussan 
To: David Cary 
Subject: Re: geometry of thought-space and cyberspace protocol


Here is an article by Mark Pesce you might find interesting for your research
on thought space. Although it deals principally with VRML (this article started
the whole trip, as far as I know), it presents a theory of a "cyberspace protocol"
which intends to visualize the contents of the whole Web in a spatial
coordinate system. The "cyberspace protocol" has been abandoned in
subsequent implementations of VRML, probably because it demanded a complete
reorganisation of the Web and of the URL address system. (And, as far as I know,
"Labyrinth", the only browser which could use it, is not downloadable anywhere.)
But I suppose the cyberspace protocol could easily be simulated inside one
server. (Of course I think of the Omega database.) Your comments are welcome. As
you know already, most of the mathematical stuff in this article is flying much
higher than my little brain can manage.



Mark D. Pesce, Peter Kennard and Anthony S. Parisi
Labyrinth Group
45 Henry Street #2
San Francisco, CA 94114


This work describes a visualization tool for WWW, "Labyrinth", which uses WWW and a newly defined protocol, Cyberspace Protocol (CP), to visualize and maintain a uniform definition of objects, scene arrangement, and spatio-location which is consistent across all of Internet. Several technologies have been invented to handle the scaling problems associated with widely-shared spaces, including a distributed server methodology for resolving spatial requests. A new language, Virtual Reality Markup Language (VRML), is introduced as a beginning proposal for WWW visualization.


The emergence, in 1991, of the World Wide Web, added a new dimension of accessibility and functionality to Internet. For the first time, both users and programmers of Internet could access all of the various types of Internet services (FTP, Gopher, Telnet, etc.) through a consistent and abstract mechanism. In addition, WWW added two new services: HTTP, the Hypertext Transfer Protocol, which provides a rapid file-transfer mechanism; and the Uniform Resource Locator, or URL, which defines a universal locator mechanism for a data set resident anywhere within Internet's domain.

The first major consequence of the presence of WWW on Internet has manifested itself in an explosion in the usability of data sets within it. This is directly correlatable to the navigability of these data sets: in other words, Internet is useful (and will be used) to the degree it is capable of conforming to requests made of it. WWW has made Internet navigable, where it was not before, except in the most occult and hermetic manner. Furthermore, it added a universal organization to the data within it; through WWW, all four million Internet hosts can be treated as a single, unified data source, and all of the data can be treated as a single, albeit complexly structured, document.

It would appear that WWW, as a phenomenon, has induced two other processes to begin. The first is an upswing in the amount of traffic on Internet (1993 WWW traffic was 3000x greater than in 1992!); the second is a process of organization: the data available on Internet is being restructured, tailored to fit within WWW. (This is a clear example of "the medium is the message", as the presence of a new medium, WWW, forces a reconfiguration of all pre-existing media into it.) This organization is occurring at right angles to the previous form of organization; that is to say that, previously, Internet appeared as a linear source, a unidimensional stream, while now, an arbitrary linkage of documents, in at least two dimensions (generally defined as "pages"), is possible. As befits the organizational habits most common in Western Civilization, this structure is often hierarchical, with occasional exceptions. (Most rare are anti-hierarchical documents which are not intrinsically confusing.)

Navigability in a purely symbolic domain has limits. The amount of "depth" present in a subject before it exceeds human capacity for comprehension (and hence, navigation) is finite and relatively limited. Humans, however, are superb visualizers, holding within their craniums the most powerful visualization tool known. Human beings navigate in three dimensions; we are born to it, and, except in the case of severe organic damage, have a comprehensive ability to spatio-locate and spatio-organize.

It seems reasonable to propose that WWW should be extended, bringing its conceptual model from two dimensions, out, at a right angle, into three. To do this, two things are required; extensions to HTML to describe both geometry and space; and a unified representation of "space" across Internet. This work proposes solutions to both of these issues, and describes a WWW client built upon them, called "Labyrinth", which visualizes WWW as a space.

Visualization and VRML

As of this writing, HTML is capable of expressing both textual and pictorial data, and can provide some limited formatting features for each of them; beyond this it provides a linkage mechanism to express the connection between data sets. HTML's roots are in text; its parent, SGML, specifies a format for printed media, an expression which is intrinsically two-dimensional. For this reason, we have stepped "outside" of HTML in our language specifications for geometry and place, defining a simple, easily parsed scripting language for the generation of objects and spaces.

The basic functionality for any three-dimensional language interface to WWW can be broken into three parts; object definitions, which include the definitions of the geometric representations for these objects; scene definitions, which define "placement" of these objects inside of a larger context; and a mechanism which "binds" a URL to an object within a scene. The current revision of Labyrinth's Virtual Reality Markup Language (VRML), while unsophisticated, does fulfill all of these requirements, and therefore provides all of the basic functionality required in a fully visualized WWW client.

As currently defined, Labyrinth's VRML files are a unique data type, like MPEG or AIFF, and must be integrated with MIME in order to launch a companion "viewer". This is not an optimal solution; rather, it should be possible to extend HTML to encapsulate "spatial" data types; these, then, could be visualized or ignored given the capabilities of the WWW client. The OpenGL, OpenInventor, or HOOPS specifications could form a basis, insofar as object definitions are concerned, for HTML extensions, and should be examined as a possible (and well-supported) solution to this issue. Our scripting language should serve as a starting example, rather than a proposal for an all-inclusive solution.

Any conceptualization of space contains within it, implicitly, the quality of number; i.e., "how much" or "how far" is contained within the simple expression of existence. Space, in its electronic representation, is numbered, and, if it is to be shared by billions of simultaneous participants, it must be consistent, unique, and very large and dense. Despite this, it is rarely necessary for a WWW client to deal with the totality of space; operations occur local to the position of the WWW viewer, and this local description of space is nearly always a great deal more constrained than the entire spatial representation.

It is necessary for VRML to define a numbering system for visualization which conforms to the three principles outlined above. Another section of this work describes such a system.


For the purposes of continuity in navigation, it is necessary to create a unified conceptualization of space spanning the entire Internet, a spatial equivalent of WWW. This has been called "Cyberspace", in the sense that it has at least three dimensions, but exists only as a "consensual hallucination" on the part of the hosts and users which participate within it. There is only one cyberspace, just as there is only one WWW; to imply multiplicity is to defeat the objective of unity.

At its fundamental level, cyberspace is a map that is maintained between a regular spatial topology and an irregular network topology. The continuity of cyberspace implies nothing about the internetwork upon which it exists. Cyberspace is complete abstraction, divorced at every point from concrete representation.

All of the examples used in the following explanation of the algorithmic nature of cyberspace are derived from our implementation of a system that conforms to this basic principle, a system developed for TCP/IP and Internet.

Metrics in Cyberspace

Internet defines an address "space" for its hosts, specifying these addresses as 32-bit numbers, expressed in dotted octet notation, where the general form is {s.t.u.v}. Into this unidimensional address space, cyberspace places a map of N dimensions (N = 3 in the canonical, "Gibsonian" cyberspace under discussion here), so that any "place" can be uniquely identified by the tuple {x.y.z}.

In order to ensure sufficient volume and density within cyberspace, it is necessary to use a numbering system which has a truly vast dynamic range. We have developed a system of "address elements" where each element contains a specific portion of the entire expressible dynamic range in the form:


where p is the place value, and x, y, and z are the metrics for each dimension. The address element is currently implemented as a 32-bit construct, so the range of p is ±127, and x, y, and z are unsigned octets. Address elements may be concatenated to any level of resolution desired; as most operations in cyberspace occur within a constrained context, 32, or at most, 64 bits is sufficient to express the vast majority of interactions. This gives the numbering system the twin benefits of wide dynamic range and compactness; compactness is an essential quality in a networked environment.
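The address-element layout described above can be sketched in a few lines of Python. The exact bit layout is not given in the text, so the field order (p first, then x, y, z) and the use of a signed byte for p are assumptions made purely for illustration:

```python
import struct

def pack_element(p, x, y, z):
    """Pack one 32-bit address element: a signed place value p (range
    about +/-127, per the text) followed by unsigned octets x, y, z."""
    return struct.pack(">bBBB", p, x, y, z)

def unpack_element(data):
    """Inverse of pack_element: the first 4 bytes -> (p, x, y, z)."""
    return struct.unpack(">bBBB", data[:4])

def pack_address(*elements):
    """Concatenate elements for higher resolution; per the text, two
    elements (64 bits) suffice for the vast majority of interactions."""
    return b"".join(pack_element(*e) for e in elements)
```

For example, `pack_address((3, 10, 200, 42), (2, 7, 7, 7))` yields a compact 8-byte address whose leading element unpacks back to `(3, 10, 200, 42)`.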

This is only one possible numbering scheme; others may be developed which conform to the principles as given, perhaps more effectively.

Cyberspace has now been given a universal, unique, dense numbering system; it is now possible to quantify it. The first quantification is that of existence (metrics); the second quantification is that of content. Content is not provided by cyberspace itself, but rather by the participants within it. The only service cyberspace needs to provide is a binding between a spatial descriptor and a host address. This can be described by the function:

f(s) => a

where s is a spatial identifier, and a is an internetwork address.

This is the essential mathematical construction of cyberspace.
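A minimal sketch of the binding f(s) => a follows, with spatial identifiers and host addresses reduced to plain strings; a real implementation would use the address-element numbering and actual IP addresses, and the sample entries here are invented:

```python
# The only service cyberspace itself must provide: a binding between a
# spatial descriptor and a host address, the f(s) => a of the text.
bindings = {
    "3.10.200": "",   # hypothetical populated region
}

def f(s):
    """Resolve spatial identifier s to an internetwork address,
    or None where cyberspace is unpopulated."""
    return bindings.get(s)
```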

Implementation of Cyberspace Protocol

If cyberspace is reducible to a simple function, it can be expressed through a transaction-based protocol, where every request yields a reply, even if that reply is NULL. In the implementation under examination, cyberspace protocol (CP) is implemented through a straightforward client-server mechanism, in which there are very few basic operations: registration, investigation, and deletion.

In the registration process, a cyberspace client announces to a server that it has populated a volume of space; in this sense, cyberspace does not exist until it is populated: this is a corollary to Benedikt's Principle of Indifference, which states: "absence from cyberspace will have a cost."

The investigation process will be discussed in detail later in this work. The basic transaction is simple: given a circumscribed volume of space, return a set of all hosts which contribute to it. The reply to such a transaction could be NULL or practically infinite (consider the case where the request specifies a volume which describes the entirety of cyberspace); this implies that level-of-detail must be implemented within the transaction (and hence, within registration), in order to optimize the process of investigation. Often, it is enough to know cyberspace is populated, nothing more, and many other times, it is enough to know only the gross features of the landscape, not the particularities of it. In this sense, level of detail is a quality intrinsic to cyberspace.

Registration contains within it the investigation process; before a volume can be registered successfully, "permission" must be received from cyberspace itself, and this must include an active collaboration and authentication process with whatever other hosts help to define the volume. This is an enforcement of the rule which forbids interpenetration of objects within the physical world; it need not be enforced, but unless it is observed in most situations, cyberspace will tend toward being intrinsically disorienting.

Finally, the deletion process is the logical inverse of the registration process, where a volume defined by a client is removed from cyberspace. These three basic transactions form the core of cyberspace protocol, as implemented between the client and the server.
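The three transactions can be sketched as a tiny in-memory server. Volumes are modeled as axis-aligned boxes ((x0,y0,z0),(x1,y1,z1)) and hosts as strings; none of these function names or representations come from a published CP specification, and level-of-detail and authentication are omitted:

```python
registry = []  # (volume, host) pairs known to this server

def overlaps(a, b):
    """True if two axis-aligned boxes intersect."""
    (ax0, ay0, az0), (ax1, ay1, az1) = a
    (bx0, by0, bz0), (bx1, by1, bz1) = b
    return (ax0 <= bx1 and bx0 <= ax1 and
            ay0 <= by1 and by0 <= ay1 and
            az0 <= bz1 and bz0 <= az1)

def register(volume, host):
    """Registration: claim a volume of space.  Refused if another host
    already populates any part of it (the no-interpenetration rule)."""
    if any(overlaps(volume, v) for v, h in registry if h != host):
        return False
    registry.append((volume, host))
    return True

def investigate(volume):
    """Investigation: the set of hosts contributing to a given volume."""
    return {h for v, h in registry if overlaps(v, volume)}

def delete(volume, host):
    """Deletion: the logical inverse of registration."""
    registry[:] = [(v, h) for v, h in registry
                   if not (v == volume and h == host)]
```

Here registration embeds an investigation, as the text requires: a claim succeeds only if no other host already contributes to the requested volume.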

Cyberspace Servers

Cyberspace is a unified whole; therefore, from a transaction-oriented point of view, every server must behave exactly like any other server (specifically with respect to investigation requests). The same requests should evoke the same responses. This would appear to imply that every server must comprehend the "totality" of cyberspace, a requirement which is functionally beyond any computer yet conceived of, or it places a severe restriction on the total content of cyberspace. Both of these constraints are unacceptable, and a methodology to surmount these constraints must be incorporated into the cyberspace server implementation.

The cyberspace server is implemented as a three-dimensional database with at least three implemented operations; insertion, deletion, and search. These correspond to the registration, deletion, and investigation transactions. Each element within the database is composed of at least three items of data; the volumetric identifier of the space; the IP address of the host which "manifests" within that space; and the IP address of the cyberspace server through which it is registered.

The investigation transaction is the core of the server implementation. Cyberspace servers use a repeated, refined query mechanism, which iteratively narrows the possible range of servers which are capable of affirmatively answering an investigation request until the set exactly conforms to the volumetric parameters of the request. This set of servers contains the entire possible list of hosts which collaborate in creating some volume of cyberspace, and will return a non-null reply to an investigation request for a given volume of space. The complete details of the investigation algorithm are beyond the scope of the current work and will be explained in greater detail in a subsequent publication.

An assumption implicit in the investigation algorithm is that investigative searches have "depth", that investigation is not performed to its exhaustive limit, but to some limit determined by both client and server, based upon the "importance" of the request. Registrations, on the other hand, must be performed exhaustively, but can (and should) occur asynchronously.

The primary side-effect of this methodology is that cyberspace is not instantaneous, but is bounded by bandwidth, processor capacity, and level of detail, in the form:

where c is a constant, the "speed limit" of cyberspace (as c is the speed of light in physical space), l is the level of detail, b is bandwidth of the internetwork, p is processor capacity, D is the number of dimensions of the cyberspace, and r is the position within the space. The function rho defines the "density" of a volume of cyberspace under examination.

This expression is intended to describe the primary relationships between the elements which create cyberspace, and is not mathematically rigorous, but can be deduced from Benedikt's Law.

Finally, because cyberspace servers do not attempt to contain the entirety of cyberspace, but rather, search through it, based upon client transaction requests, it can be seen that the content of a cyberspace server is entirely determined by the requests made to it by its clients.

One way to visualize the operation of cyberspace servers is with the metaphor of Indra's Net, from Vedanta Hinduism: finely woven of glittering jewels, each jewel reflecting every other.

Cyberspace and the World Wide Web

Having defined, specified, and implemented an architecture which provides a binding between spatio-location and data set location, this architecture needs to be integrated with the existing WWW libraries so that their functionality can be similarly extended. As "location" is being augmented by the addition of CP to WWW, it is the Universal Resource Locator which must be extended to incorporate these new capabilities.

The URL, in its present definition, has three parts: an access identifier (type of service), a host name (specified either as an IP address or DNS-resolvable name), and a "filename", which is really more of a message passed along to the host at the point of service. Cyberspace Protocol fits well into this model, with two exceptions; multiple hosts which collaborate on a space, and the identification of a "filename" associated with a registered volume of space.

We propose a new URL of the following form:


where {pn...} is a set of CP address elements.

Resolution of this URL into a data set is a two-stage process: first the client CP mechanism must be used to translate the given spatio-location into a host address, then the request must be sent to the host address. Two issues arise here: multiple host addresses, as mentioned previously, and a default access mechanism for CP. If a set of host addresses is returned by CP, a request must be sent to each specified host; otherwise, the description of the space will be incomplete. Ideally, all visualized WWW clients will implement a threaded execution mechanism (with re-entrant WWW libraries) so that these requests can occur simultaneously and asynchronously.

A default access mechanism for CP within WWW must be selected. The authors have chosen HTTP, for two reasons; it is efficient, and it is available at all WWW servers. Nonetheless, this is not a closed issue; it may make sense to allow for some variety of access mechanisms, or perhaps a fallback mechanism; if one service is not present at a host, another attempt, on another service, could be made.


It is now possible, from the previous discussion, to describe the architecture and operation of a fully visualized WWW client. It is composed of several pieces: WWW libraries with an integrated CP client interface; an interpreter for a VRML-derived language which describes object geometry, placement, and linkage; and a user interface which presents a navigable "window on the web".

The operation of the client is very straightforward, as is the case with other WWW clients. After launching, the client queries the "space" at "home", and loads the world as the axis mundi of the client's view of the web. As a user moves through cyberspace, the client makes requests, through CP, to determine the content of all spaces passed through or looked upon. A great deal of design effort needs to be put into the development of look-ahead caching algorithms for cyberspace viewers; without them, the user will experience a discontinuous, "jerky" trip through cyberspace. The optimal design of these algorithms will be the subject of a subsequent work.

At this time, visualized objects in WWW have only two possible behaviors; no behavior at all, or linkage, through an attached URL, to another data set. This linkage could be to another "world" (actually another place in cyberspace), which is called a "portal", or it could link to another data type, in which case the client must launch the appropriate viewer. Labyrinth is designed to augment the functionality of existing WWW viewers, such as NCSA Mosaic, rather than to supplant them, and therefore does not need a well-integrated facility for viewing other types of HTML documents.

Data Abstraction Protocols

Cyberspace Protocol is a specific implementation of a general theory, which has implications well beyond WWW. CP is the solution, in three dimensions, of an N-dimensional practice for data set location abstraction. Data abstraction places a referent between the "name" of a data set locator and the physical location, allowing physical data set location to become mutable.

If an implementation were to be developed for the case where N = 1, it would be an effective replacement for Internet's Domain Name Service (DNS), which maintains a static mapping of "names" to IP addresses. Any network which used a dynamic abstraction mechanism could mirror or reassign hosts on a continuous basis (assuming that all write-through mirroring could be maintained by the hosts themselves), so that the selection of a host for a transaction could be made based upon criteria that would tend to optimize the performance of the network from the perspective of the transaction. It would also be easy to create a data set which could "follow" its user(s), adjusting its location dynamically in response to changes in the physical location or connectivity of the user. In an age of wireless, worldwide networking, this could be a very powerful methodology.
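The N = 1 case might look like the following: a name-to-address table that, unlike DNS's static mapping, can be rebound at any moment, so a data set can be mirrored, reassigned, or made to "follow" its user. Names and host addresses here are invented.

```python
table = {}

def bind(name, address):
    """Bind (or re-bind) a name to a physical location.  Rebinding at
    any time is what makes the mapping dynamic rather than static."""
    table[name] = address

def resolve(name):
    """Look up the current physical location of a named data set."""
    return table.get(name)
```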


This work attempts to outline the requirements for architectures which can fully visualize WWW, and proposes solutions to the issues raised by these requirements. While much further study needs to be done, this work is meant to serve as a starting point for an understanding of the subtleties of wide-area, distributed, visualized data sets.

Labyrinth and Cyberspace Protocol are logical extensions to the World Wide Web and Internet. Indeed, without the existence of WWW, neither would be very useful immediately; they would operate, but lack content, and individuals would hardly be compelled either to use them or adapt their existing data sets to realize their new potentials. Used together, they work to make both WWW and Internet inherently more navigable, because they help to make Internet more human-centered, adapting data sets to human capabilities rather than vice versa. This, thus far, is the single largest contribution that "virtual reality" research has offered to the field of computing; a human-centered design approach that lowers or erases the barriers to usage by creating user-interface paradigms which serve humans to the full of their potential.

Finally, network visualization marks the end of the "first age" of networking, where protocols, services, and infrastructure dominated the discourse within the field. In the "second age" of networking, questions like data architecture and the inherent navigability of a well- designed data set become infinitely more important than "first age" questions; where "how do I find what I'm looking for?" becomes more relevant than "where did it come from?"


The authors would like to thank the following individuals, who contributed their own thoughts to the formation and development of the ideas expressed in this work: Michael J. Donahue, Owen Rowley, Dr. Stuart D. Brorson, Clayton Graham, Christopher Morin, Neil Redding, James Curnow, Marina Berlin, Casey Caston and the Fugawi tribe.

Date: Fri, 19 Apr 1996 00:00:08 -0400 (EDT)
Subject: >H Digest


From: "Alexander 'Sasha' Chislenko" 
Subject: Re: >H geometry of thought-space

Transhuman Mailing List

>On Thu, 18 Apr 1996, Anton Sherwood wrote:
> I often wonder what sort of shape an information space has.  How many
> dimensions?  Flat or curved?  For the whole of human knowledge there
> may be no answers to these questions, but for some restricted areas it
> ought to be not-too-hard.
At 08:31 PM 4/18/96 +0200, Eugene Leitl wrote:

>You ought always to follow the practicability criteria. Personally, I
>find it extremely hard to visualize even a 4-hypercube, aside from the
>other 4-lattices. One should always choose the best approach, the one
>which allows the maximum productivity. Obviously, this is 3-space.

  After studying topology and functional analysis for a few years,
I personally have no problem understanding spaces with any number of
dimensions, including infinite, metric spaces without dimensions, as
well as non-metric spaces.  I still have problems with many others
though, and in general would choose the simplest representative space
I can use for a model.  My favorite representation for an object is
still a 0-dim. space (a dot).  In many cases though, the intrinsic
structure of an object requires a complex space, and so for better
understanding of its nature it may be easier to learn a more complex
space, than to squeeze the object into an overly simple, familiar space.

   A simple suggestion would be to think of the general realm of ideas
as a metric space without dimensions, and draw some rough local
projection of neighbor ideas in 2 or 3-D.
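One concrete way to carry out that suggestion, given only a table of pairwise "distances" between a few neighbouring ideas (the numbers below are made up), is classical multidimensional scaling, which produces a rough local 2-D projection of the metric space:

```python
import numpy as np

# Pairwise distances between three neighbouring "ideas" (invented values).
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
top = np.argsort(w)[::-1][:2]            # keep the two largest
coords = V[:, top] * np.sqrt(np.maximum(w[top], 0.0))  # 2-D coordinates
```

When the distances happen to be Euclidean, as in this small example, the projection reproduces them exactly; for a genuinely non-Euclidean "idea space" it is only a local approximation.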

   There are some indications, though (based on the remarkable fact,
which I can't explain, that the number of parts in a whole [geometrically]
averages to pi), that the "idea space" may actually *be* a locally
Euclidean manifold - at least in some respects.

   My article on the topic (written at the time when my English was
significantly worse than now) is available at

Alexander Chislenko 
Date: Sun, 26 May 1996 00:00:05 -0400 (EDT)
Subject: >H Digest


From: "Dr. Rich Artym" 
Subject: Re: >H Flexibility of Namespace

Transhuman Mailing List
Corwyn J. Alambar writes:

> ... this brings up an interesting point about names and labels in a >H
> society - Namespace is already becoming more fluid (with people adopting and
> shedding names with marriages, stage names, pen names, pseudonyms, etc.), and
> a person is often referred to by multiple names.  The 'Net has made this even
> more prevalent - How many people on the 'Net use a name different from their
> birth name, compared to those that do so in "real life"?
> Show of hands - how many people here have already "drifted" in namespace?

"Drifting in namespace" ... a wonderful concept, Corwyn! (:-)

There are some interesting questions that go with the territory here.
This "individual namespace" that we're talking about, what kind of
namespace is it, does it imply a mapping of name to individual, or
is it very intentionally free of such a deterministic mapping?  [E.g.
most of us make use of an implied deterministic mapping during most
of our lives, and we'd be most annoyed if an amount credited to our
name were to go to the account of a namesake.]  On the other hand, the
"I am not a number" view (if it is at all meaningful, and it may not
be) would tend to militate against a deterministic mapping, just like
it does in practice when we book into a hotel as "John Smith" or when
we adopt stage names, or pseudonyms as authors.

It seems to me that there are uses for both types of mapping, i.e. one-to-many
as well as one-to-one, and that many-to-one might even become
relevant if mentalities are developed that can carry multiple threads
of consciousness localized in independent individuals.

If deterministic mapping is perceived to be useful then we'll have to
develop unique identifiers to implement it.  "Rich Artym" has a good
probability of being a unique person-tag in the world at present, but
not "John Clark", and probably not even "John K Clark".  Any suggestions
for a unique identification system?  Perhaps a URL-type scheme like

        name:/Rich Artym/

would be suitable, since (i) its implementation is distributed, so
that no central registry is required, DNS being a distributed system,
and (ii) in any given hostspace it would be easy to ensure deterministic
mapping of a given name to a given person identifier.  Note that such
a scheme does not actually provide the personal identifier itself, by
design, since that may not be desired by the individual concerned.
What it does do is to provide a deterministic mapping for public
references to a given person by name, by specifying a domain within
which that mapping IS unique.

As to actual personal identifiers, DNA fingerprints are hard to beat.
One wouldn't want them as part of the public names though, since that
would remove at a stroke the freedom of the individual concerned to
control access to her unique identifier.

###########  Dr. Rich Artym  ================  PGP public key available
# galacta #  Internet: <rich at>     DNS
# ->demon #            rich@mail.g7exm[.uk]   DNS
# ->ampr  #  NTS/BBS :
# ->nexus #  Fun     : Unix, X, TCP/IP, OSI, kernel, O-O, C++, Soft/Eng
# ->NTS   #  More fun: Regional IP Coordinator Hertfordshire + N.London
###########  Q'Quote : "Object type is a detail of its implementation."


Date: Sat, 25 Jan 1997 10:32:47 -0500
From: Remi Sussan <>
Subject: Re: >H new world
To: transhuman mailing list <>
Precedence: list

Transhuman Mailing List

Very interesting post, Anders. Thanks for these thoughts.

It is time now to ask The Forbidden Question: what is the correlation
between so-called "New Age movements" and transhumanism? Of course, the
beliefs are obviously not the same. But the mythic movies projected on the
screen of the brain are quite close. It would be easy, for anybody
attracted by this imagery, to waver between New Age and transhumanism,
especially if devoid of a good epistemological background (I don't say
"scientific background", because, according to some statistics, New Age
attracts a good proportion of people with some scientific knowledge,
especially in engineering).
Somebody who was a complete stranger to both Transhumanism and New Age
would have difficulty seeing the difference between the two. I once read a
(very funny) publication of the Catholic church which happily mixed New
Age, cybernetics, virtual reality, psychedelism, Chaos theory and Rock
culture. And if the church said it, who am I to contradict? ;-)
Now let's look at the case of Gregory Bateson. He was a rationalist, a
co-founder, with Norbert Wiener and Heinz von Foerster, of the cybernetic
method (he introduced it into anthropology). But he chose to finish his days
at Esalen Institute, sanctuary of the New Age, and died in an American Zen
temple. He never renounced the rationalist method, and never believed in
reincarnation, channelling, etc. But when asked about his way of life, he
simply answered that he was disgusted by the destructive attitude of most
of his scientific colleagues, and preferred to live among the Esalen people
and their strange creeds.
To summarize, New Agers and transhumanists are definitely in the same
boat - even if it sometimes seems too small.

> I don't think visions like
>this or theosophy in general should be interpreted literally, rather as
>myths and ideas showing interesting possibilities and affecting us in
>various ways.

Agreed again. "Visions" play a great role in an imaginary space. They must
not corrupt the "real space" of facts, just as the "real space" must not corrupt
the "imaginary space". To avoid such confusion, it is better to encapsulate
imaginary space in a virtual spatio-temporal place: the ritual. To be
"secure", the ritual should begin with the assertion of a belief system used
only for emotional purposes, an incantation like this:
(define Hale-Bopp :a-wonderful-space-city-populated-by-great-masters)
And the ritual should finish with an inverse banishing, leading us back toward
the "real world":
(define (Hale-Bopp Lemurians Great-Masters Channeling) : Illusion)
Brenda, see no offense in that vision of Theosophy: H. P. Blavatsky really
created a powerful mythology which influenced people like H. P. Lovecraft, who
in turn influenced numerous science-fiction writers, who in turn created
most of the mythological framework of transhumanism. We all owe a great
debt to her. Saying theosophy is a metaphor is not saying it is silly, quite
the contrary. We only want to give it a place where it can be really useful.

>Personally I think ideas like theosophy might have some merit, but they
>far too often tend to place the locus of control outside the individual.

But with small adjustments, we can make them really useful. I have always
been impressed by Roger Zelazny's "Lord of Light", with its wonderful mix
of Indian mythology and high technology. Somewhere in the story, the hero,
Sam, is practising meditation on a stone. Another character worries about
this, saying Sam is renouncing active life and is only interested in
spiritual attainment. But another says:
"No. Look at his eyes. He doesn't want to forget reality by uniting the
subject and the object in a state of ecstasy. He is STUDYING the stone..."


mental dimensions

3DXML: Authoring, Publishing and Viewing Structured Information by Dan Ancona.

3DXML is intended to pave the way for the eventual replacement of HTML. It is meant to be a general front end for XML. The web has many problems, and lacks much of the functionality imagined by its conceptual but unimplemented predecessors, like Vannevar Bush's Memex and Ted Nelson's Xanadu project. 3DXML addresses a subset of this missing functionality and brokenness.

Date: Thu, 27 Aug 1998 12:35:51 -0700 (PDT)
From: Joe Jenkins 
Subject: >H Re: Uploading

Transhuman Mailing List

---John Clark  wrote:
> Joe Jenkins Wrote:
>     >My first shot at defining identity is as follows:
>     >Identity - A slightly dynamic but mostly stable fuzzy area
>     >within the design space of all possible information
>     >processors
>     >as defined by the ego.
> I don't know what you mean by "design space", as far as I know
> we were not designed. Perhaps you mean the set of all
> conceivable
> experiences, but some experiences I can never have, such as the
> experience of reading every Chinese book ever written.

No, I don't mean all conceivable experiences.  Your mind is an
information processor.  Therefore, a universal Turing machine can
emulate your mind.  If you were to make a graphical representation
where every point in its space represented one and only one possible
state of that universal Turing machine, you would have what I call the
"design space of all possible information processors".  This is
regardless of whether that information processor was designed,
evolved, or just happened to drop out of the sky from a disordered blob
and accidentally self-assemble into an SI.  I prefer to think of this
graph in terms of 3D.  In reality the graph would have to be assembled
such that the smallest of changes in the human mind would always be
associated with adjacent point plots.  I'm sure this would cause it to
end up being N-dimensional with N being very large.  This concept
is not new to you. You recently wrote about Tipler's Omega creating a
copy of all possible minds.  Here it's assumed that "all possible minds"
means those in the "design space of all possible information
processors" that are compatible with the biological configuration of
the human mind.
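The adjacency idea in the paragraph above can be sketched as a toy model. This is purely illustrative and not from the original post: here each "mind" is a machine state encoded as a bit string, and two states are adjacent when they differ in a single bit, so the smallest change maps to a neighboring point. The function names (`hamming`, `neighbors`) are invented for the sketch.

```python
# Toy model of a "design space": states are bit strings; points that
# differ in exactly one bit are adjacent.

def hamming(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length states differ."""
    return sum(x != y for x, y in zip(a, b))

def neighbors(state: str) -> list:
    """All states one bit-flip away, i.e. the adjacent points."""
    return [state[:i] + ('1' if c == '0' else '0') + state[i + 1:]
            for i, c in enumerate(state)]

mind = "0110"
print(hamming("0110", "0111"))  # 1: a minimal change, hence an adjacent point
print(neighbors(mind))          # the 4 states adjacent to a 4-bit state
```

Note that an n-bit state has n neighbors, so arranging the space faithfully forces one neighbor dimension per bit, which is consistent with the post's remark that N would have to be very large.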

All of your arguments so far seem to be exploiting the fact that
there is a fuzzy boundary around identity.  However, it doesn't follow
that you can just throw up your hands and give up the clear
difference between being inside and outside that boundary.  Yes, the
transition between night and day is fuzzy, but I can say for sure that
between 11:00 am and 2:00 pm in my time zone it is always daytime.  To
be consistent with your method of debating in this thread, I expect you
to come back with something about an eclipse.  From my perspective you
seem to be skirting the issues here.  My debate with you hinged on
your answer to the question posed in my modified thought experiment.
In your response to that post, that part was conveniently
omitted.  So I'll ask it again:

My original thought experiment:
You've just completed 10 hours of mundane work at the office today in
your normal biological state.  You then documented your work more
thoroughly than ever before.  A doctor shows up and completely
convinces you beyond any shadow of a doubt that he has invented a
flashlight-type device that can induce a perfect 10-hour amnesia with
no side effects [or pain] whatsoever.  You are 100% convinced this is
safe and effective.  He offers you $10,000 to let him use the device
on you.

John Clark's response was:

> As you say it's a judgment call but I'd tell him no way.

I can accept this answer as honest, especially if you are a
multi-millionaire.  So I modified the thought experiment to more
accurately reflect the situation in the original spawning of 1000
copies. I wrote:

My modified thought experiment:
Okay, then how about if he said he found out that your wealth had been
divided by 1000 because of some unfortunate events in the U.S. economy
today.  And that your family and friends will welcome your
interactions with them 1000 times less.  This includes your friends on
the Extropian and Transhuman lists.  And then he continued to list
many more unfortunate things that happened to you today.  Then he
says, "but I can fix it all if you just let me use the device on you".
Would you do it if all this was true?

This question was conveniently omitted from your reply.  Your answer
is key to this debate.

John Clark wrote:

> Another problem, I'll bet any definition of "ego" will have the
> concept of identity  in there someplace, it's the sort of thing
> that gives circular definitions a bad name. I'm sure the ego
> exists, consciousness too, but the chances that anybody
> will ever come up with a definition of either that's worth a
> damn
> is almost zero. This doesn't make me despair however because
> definitions are vastly overrated, most of our knowledge and all
> of the really important stuff is in the form of examples not
> definitions.

I agree with your comments above.  I was pressured into inventing a
definition by John Novak, so I did the best I could with a first cut at
it.  This definition, however, is not key to my debate with you.  It's
for the reasons you mention above that I have been sticking with
examples and thought experiments whenever possible.  If I were so
inclined, I'm sure my best definition of identity would be no smaller
than a book, complete with a multitude of examples and thought
experiments.

> [Me]
>   >In order to self preserve and keep my ego intact 999 of my
>   >point plots would not hesitate to commit voluntary amnesia.
>   >With the above definition it is clear to me that we are in
>   >fact
>   >talking about 1000 minds with one ego because the ego defines
>   >identity as the fuzzy area where all 1000 point plots reside.
[John Clark]
> Then why not look on me as an imperfect copy of you, after all
> from a Martian's point of view all humans are pretty much alike.
> If I could prove that I'm happier and more successful than you
> would it then be logical to kill yourself?

The point of view that matters here is not that of a Martian but that
of your ego.  So obviously, from my ego's point of view, you are not an
imperfect copy of me.

Joe Jenkins

"the mind is capable of at least seven dimensional thought" not exactly hard science, but he's getting there.


[FIXME: gather quine-related stuff I have scattered elsewhere]

access: security vs. privacy


DAV: Many people have a simple understanding of security: either someone has access, or that person does not have access.

This leads to unnecessary fear, confusion, panic, and anxiety when people are presented with "transparent society", "open source", and some other useful ideas. It's obvious that we don't want certain people having full access to certain things. Too many people jump to the conclusion that it's wrong to allow those people to have *any* access to those things, because all-or-nothing access is the only alternative they know about.

Computer system administrators have long used a more detailed security model: "read access" and "write access".
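A minimal sketch of that read/write model, in the spirit of Unix permission bits. The names here (`READ`, `WRITE`, `can`, the example users) are invented for illustration, not taken from any particular system:

```python
# Separate "read access" and "write access" as independent permission
# bits, so a person can see something without being able to change it.

READ, WRITE = 0b01, 0b10  # independent permission bits

acl = {
    "alice": READ | WRITE,  # full access
    "bob":   READ,          # can read the file but not modify it
    # "carol" is absent: no access at all
}

def can(user, perm):
    """True if the user's permission bits include the requested one."""
    return bool(acl.get(user, 0) & perm)

print(can("bob", READ))   # True
print(can("bob", WRITE))  # False: some access, but not full access
```

The point of the sketch is that "bob" demonstrates a middle ground: he has *some* access without having *full* access.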

There are other ways of giving people "some" access without giving them "full" access.

For example, when you want someone to know about a particular fact, you could

"Secrecy vs. Integrity: What are you trying to protect?"


The SSN FAQ by Chris Hibbert draws the distinction between a "representation of identity ... identification" vs. "a secure password ... authentication", and notes that far too many people mistakenly believe that SSNs are a secure password. "Even assuming you want someone to be able to find out some things about you, there's no reason to believe that you want to make all records concerning yourself available."
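Hibbert's distinction can be sketched in code. This is an illustrative toy (the record store, function names, and fixed salt are all invented for the demo): an identifier merely names a record and may be public, while an authenticator proves ownership and must stay secret, so an SSN used as an identifier cannot double as a password.

```python
# Identification vs. authentication: the identifier is public; only a
# separate secret can authenticate.
import hashlib
import hmac

records = {}  # identifier -> (password digest, data)
SALT = b"demo-salt"  # fixed for this demo; real systems use a random per-user salt

def enroll(identifier, password, data):
    """Store a record under a public identifier, keyed to a secret password."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 10_000)
    records[identifier] = (digest, data)

def authenticate(identifier, password):
    """True only if the secret matches the one enrolled for this identifier."""
    if identifier not in records:
        return False
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 10_000)
    return hmac.compare_digest(digest, records[identifier][0])

enroll("123-45-6789", "correct horse", "medical file")
print(authenticate("123-45-6789", "correct horse"))  # True: the secret authenticates
print(authenticate("123-45-6789", "123-45-6789"))    # False: knowing the
# identifier (the SSN) proves nothing, which is Hibbert's point
```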

Yes, I'm a Brinist.

related links:


Started 1996 Oct 17

Original Author: David Cary.



Send comments, suggestions, errors, bug reports to

David Cary feedback.html

Use CritSuite to comment on this page and to see others' comments.

Return to index // end