Nock, Hoon, etc. for Non-Vulcans (Why Urbit Matters)

Effluvia

Grand Rearchitectures, Interlocking Plans

I have come to identify a pattern that crops up in proposals for business models, social engineering, computer architectures, etc.

It is this: instead of paring things down to the minimum (Antoine de Saint-Exupéry's "nothing left to take away" / Steve Blank's "minimum viable product"), people propose large steaming piles of things which are (a) incompatible with what came before, and (b) dependent on every component working flawlessly.

This is, in general, a doomed strategy – which you will note if you have ever had the misfortune to like your healthcare plan and choose to keep it.

Some crazy proposed business models operate this way ("we teach the natives to harvest rain forest fruit in a sustainable way, then float the goods down the river to market on carbon-fiber-and-help catamarans with help rigging built with micro-loans from our new website…").

Some crazy proposed social revolutions operate this way ("after we cut off the heads of anyone with royal blood, we cut off the heads of anyone who objects to cutting off heads, then we turn the Cathedrals into Temples of Rationality!").

etc.

So, when I saw a software architecture that proposed much the same thing ("we build a new virtual machine called the Nock VM that's entirely incompatible with the existing standards, then we create a new language to run in it (also called Nock), then we build a higher-level language on top (called Hoon), then on top of that we layer an operating system (called Urbit), encryption, namespaces, and delegation of privileges…based on neo-reactionary politics! Oh, and also, we have a customizable UI that not only gives error messages in phrases you like, but lets you turn political enemies into unpersons. And, wait, wait, I'm almost done: also, I've got a new way that you've got to pronounce combinations of characters…so the characters '|:' are pronounced 'bardeg'."), I was fairly dubious.

I note that more Frenchmen vacation in July than do in Thermidor.

…and Yet

After 10 minutes of reading the Urbit documents I was sure that it was technically plausible but practically idiotic.

Why would anyone want to throw away the current technology stack (x86 CPUs running Linux running either C++ that compiles into native code or Java that runs inside a Java Virtual Machine that is itself implemented in C, all communicating with each other over reliable TCP/IP) in favor of a pile of not just unproven but as-yet unwritten technologies (x86 CPUs running a new virtual machine that interprets a beyond-cryptic tree-based programming language called Nock, which is used to implement an also-beyond-cryptic language called Hoon, which is used to implement a new operating system called Urbit, all of which talks to other instances using the unreliable UDP protocol)?

It's madness.

If the author of this monstrosity, Curtis Yarvin, had any justification for this insane project, he was silent about it.

And that sentence right there explains why I spent more than ten minutes on this danger-Will-Robinson-attractive-nuisance thing.

Curtis (aka Mencius Moldbug), is brilliant. Sadly, like a few brilliant people I know personally, he is also elliptical. I don't know if this is because his surplus brainpower finds word-games and allusions to be an irresistible attraction, or because he honestly doesn't realize that most folks can't see what he's getting at.

So, anyway, having read Yarvin's political stuff and knowing that he has a habit of throwing away conventional thought in service of reaching deep truths, I gave Urbit more time. I read the documents, thought deeply about them, and let it percolate.

I Think I Know What He's Up To

In general, there are two reasons why people want to throw away everything that has gone before.

The first – as covered in the examples above of redesigning the health care system, the calendar, our conception of who is and is not human, and our transportation systems – is egoism, contempt for local knowledge, and a desire to have the sheer fun of redoing everything from scratch.

In my experience, this explains the human motivation behind the majority of all ab initio rewrites.

There is a second explanation though.

If you face dozens, if not hundreds, of different measures of length, volume, and area – not just across national and cultural boundaries, but even as you move from village to village within France – then a slow convergence of Parisian ells and Avignon ells – all while keeping "the king's foot" constant – does not solve the problem so much as keep the old problem and add a whole class of new problems.

No, there are times when a man must spit on his hands, hoist the black flag, and draw the line of non-compatibility that forever demarcates the Ancien Régime from the new.

Yarvin may or may not be an egoist (I've never met him), but he's not an idiot. If he's proposing throwing away 50 years of software stack, he's got a good reason…or at least a very, very interesting one.

As the perhaps-apocryphal army officer review goes: "His men would follow him anywhere…if only to see where he was going."

So what problems is Yarvin trying to solve? And how does he want to solve them? I'll follow him, at the very least for long enough to find out.

He doesn't crisply say, but I've got my thoughts.

Out of the crooked timber of legacy computation, no straight thing was ever made

(with apologies to Kant)

Subsumption architecture is a concept in robotics which – and I'm being sloppy here – designs systems out of logical building blocks. One block solves problem X, another solves problem Y, etc. Instead of making a hyper-intelligent robot software stack which thinks deep thoughts about chess and about climbing stairs, you build a lizard hind-brain that knows how to climb stairs, and then you build a chess-playing robot which wants to walk from here to the chessboard and lets its lizard hind-brain deal with any stairs in the way.
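To make that concrete, here's a toy sketch in Python (all the names are invented for illustration; real subsumption architectures, à la Rodney Brooks, are rather more involved):

# A toy subsumption stack: each layer handles what it knows
# and defers everything else to the layer below it.

class StairClimber:                        # the lizard hind-brain
    def act(self, world):
        if world.get("stairs_ahead"):
            return "climb stairs"
        return "roll forward"

class ChessPlayer:                         # the deep-thoughts layer
    def __init__(self, lower_layer):
        self.lower = lower_layer

    def act(self, world):
        if world.get("at_chessboard"):
            return "ponder best move"
        return self.lower.act(world)       # let the hind-brain drive

robot = ChessPlayer(StairClimber())
print(robot.act({"stairs_ahead": True}))   # -> climb stairs
print(robot.act({"at_chessboard": True}))  # -> ponder best move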

This is a clever approach. There is a problem, though: leaky abstractions, to borrow Joel Spolsky's term. (By the way, note that the example Joel chooses to use is…TCP. Just like I've mentioned here twice so far. This is not a random coincidence.)

In theory, we can take a database, slap an API on top, and declare that it is a device for storing data, just as we can take a refrigerator, slap an API on it, and declare that it is a device for making things cold.

In practice, though, there's a difference between theory and practice. A refrigerator behaves markedly differently when you put a single gallon of already cold milk in it than it does when you load it with 50 pounds of steaming freshly-butchered venison. Despite what the API claims, in the latter case, it ceases to be a device that quickly makes things cold.

To deal with leaky abstractions one has to either (a) expand the abstraction with all sorts of implementation-relevant details, thus exposing all the messy crap that you hoped to hide (suddenly a chef can no longer think of their refrigerator as "a cold making device" and has to think of it as a huge pile of engineering specifications), or (b) come up with a better device that truly does have an implementation that matches its specification.

For half a century we have used the first approach: we've connected computers with inherently flaky communications channels, and we've tried to paper that over by adding redundancy and retry algorithms on top (turning UDP-grade unreliability into TCP-grade reliability), designing programming languages that blow up in certain situations and then pretending that they don't, etc.
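Here's the flavor of that papering-over, as a sketch (everything here is invented for illustration; real TCP is enormously more sophisticated): a retry loop bolted on top of a channel that silently drops messages.

import random

def unreliable_send(msg):
    """A channel that drops half of everything (a cartoon of UDP)."""
    return random.random() < 0.5        # True means an ack came back

def reliable_send(msg, max_tries=10):
    """The papering-over: retry until acknowledged (a cartoon of TCP)."""
    for attempt in range(1, max_tries + 1):
        if unreliable_send(msg):
            return attempt              # delivered after this many tries
    raise ConnectionError("gave up after %d tries" % max_tries)

print("delivered after", reliable_send("hello"), "attempt(s)")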

You can lay a shoddy foundation and build a one-bedroom house on top of it, and no one will get hurt.

You can maybe build a second story on top of that first house.

You probably shouldn't build a third.

The further you build, the more time you're going to spend fixing, fixing, fixing the foundation.

This is not an acceptable solution in the ancient cities of Mars or in Danny Hillis's own personal Clock of the Long Now.

At some point you really need to rip the entire legacy of 1950s hackery down, sink a hole down to bedrock, and pour a foundation of blast resistant cement.

So, seriously, what problems is Yarvin trying to solve?

I've not had time to read the Urbit message boards, so this is speculation – but I'd wager a federal reserve note or two that I'm right.

As we move from single computers sitting alone in machine rooms to lots of PCs on desks, up to the current emergent mesh of billions of machines of all types in everything from light switches to coffee makers to gaming consoles, the scale changes.

We are no longer trying to make one machine solve one problem so much as make billions of machines (and trillions of ad hoc, short-duration pairings and groupings) work well.

When you log into your bank account in a web browser you're engaging a dozen or more machines and asking them to cooperate to do a job.

When you go to Google to ego-surf your own name you're doing the same.

When you have a LAN party (as we old folks used to call them) or play an MMORPG, you're doing the same.

When you check a bitcoin chain – well, you get the point.

The point is: we know that in the future even the most trivial act of computation will be spread across multiple machines. We know this because it is already true.

Spreading the act of computation across multiple machines is tricky, because of (a) the assumption of network connectivity even in the face of the reality of network downtime, (b) the less-than-fully-specified state of an ongoing computation on any given machine, and (c) the inability to tolerate failures and pick up partially complete work.

How Does Urbit Address These Three Issues?

First, a digression into a taxonomy of programming languages.

For the lawyer, the plumber, the physics PhD, all programming languages are alike: they're a sequence of letters in a text buffer somewhere that tell the computer what to do, and details beyond that are boring.

Well, no – the details aren't boring, they're fascinating. (I say that not because I have some special affinity for programming languages, but because almost any complex system is fascinating once you understand the problems, constraints, and solutions that drive it. I could watch "How It's Made" videos for hours. Ahem. Correction: I do.)

There are dozens of different types of programming languages. By "type" I mean something akin to "schools of thought". Keynesianism vs Free Market. East Coast Rap vs. West Coast. Etc.

One of the most common language types is the imperative language. "Imperative" from the Latin imperativus, of course: to dictate, to give a peremptory command. See also: emperor. Imperative languages micro-manage the CPU. They don't tell it that health care should be reorganized, they tell it – ahem. Excuse me. Anyway, imperative languages tend to look like this:

 
monthly_interest_rate = 1.001   # ambient state: set once, visible everywhere

def calculate_interest(loan_amount, months):
    running_total = loan_amount
    for _ in range(months):     # micro-management: do this, this many times
        running_total = running_total * monthly_interest_rate
    print("total due: %f" % running_total)

There are other common types. SQL is a database query language that is declarative (such languages "express the logic of a computation without describing its control flow").

In a declarative language one might write:

 
loan_amount * (monthly_interest_rate ^ months)

and let the language implementation figure out the best way to actually do the math.

In a sense, all of these languages are equivalent: the concept of Turing completeness means "any problem you can solve in language X I can solve in language Y".

Of course, in a sense, spoons are the same as shovels are the same as bulldozers. They can all accomplish the same job, just some will get the job done before the heat death of the universe / the workers rise up / the pile of hack built upon hack built upon hack collapses under its own weight.

It is the last that we are concerned with: what type of programming language is conducive to solving huge distributed problems without collapsing under its own hacks?

We want a language which:

  • will operate the same way whether it's doing its fraction of the task on node #1 or node #16,180,339,887
  • will operate the same way if it is run today or in five years
  • will operate the same way if you hand it inputs of 1, 2, and 3 while you're also playing an mp3 of Mozart as it will if you hand it inputs of 1, 2, and 3 while you're also playing an mp3 of death metal
  • can either do a sub computation on the same node or delegate that sub computation to some geographically foreign node
  • will operate the same way if it pauses mid-computation to get a piece of data from a companion machine that delivers the datum in seconds or after three weeks of retrying over a crappy network

In short, to touch again the conclusion of the previous section, we want a programming language which lets us split computation up across multiple machines, run those sub computations in repeatable and fully specified ways, and deal with the fact that networks go down.
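In code, the property that whole wish-list boils down to is just this (a Python sketch, with invented names): a function whose output depends on its arguments and on nothing else – not the clock, not the hostname, not the mp3 that happens to be playing.

import socket
import time

def pure_total(loan_amount, months, rate):
    # Depends only on its three arguments: same answer on node #1
    # or node #16,180,339,887, today or in five years.
    return loan_amount * rate ** months

def impure_total(loan_amount, months, rate):
    # Depends on ambient state: the answer varies by machine and moment.
    if socket.gethostname().startswith("node16"):
        rate = rate + 0.001                              # machine-dependent
    return loan_amount * rate ** months + time.time() % 1  # time-dependent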

Q: Is there a class of languages that makes this possible? Because if there is, not only can we build space elevators (metaphorically, certainly, but perhaps also literally), we can do it without spending any time worrying about all the crap of the current computational infrastructure, which always lets us down.

A: Yes, Functional Languages

Functional languages are weird. Mandarin-level weird. (A quick aside: one would almost expect a language named "Mandarin" to be imperative, but that's crossing a stream too far.)

Functional languages are closer to imperative languages (slogan: "do this…this particular way") than they are to declarative languages (slogan: "get a result that does X"), but their true weirdness is the way in which they handcuff the CPU.

If languages were people, your typical imperative language is a ditzy teenager with a sloppy desk: there are lots of post-it notes around, lots of half-remembered facts, a cork-board at the front of the room with some notes, etc. Ask a program in a typical imperative language to do a very specific task (e.g. compute the total payment for a loan of principal X with interest rate Y after Z months) and the internal monologue would be something like:

"I know that we give special discounts to our best customers. Who is this customer? Ah, there is it, pinned to the corkboard: this is for a partner of the firm. OK, so he gets the discount rate. And the discount rate is…what? Let me look it up in this file cabinet. OK, now I need some coffee, so let me write the discount rate down on this post-it note, get coffee, and hope that no one changes my post-it note before I return…"

There is a lot of looseness and flexibility in a typical imperative language that makes a programmer's job pretty easy. If the function to compute loan payoffs accepts just three variables (initial amount, rate, months), and someone later comes along and asks for the special friend-of-the-firm discount to be added, there is no need to rewrite the function and everything that uses it to pass in a fourth piece of data.
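As a sketch (invented names again), here's the sloppy-desk version in Python – note how much of what the function uses never appears in its argument list:

# The corkboard and the post-it notes: ambient state the function
# quietly reads, and which anyone can change under its feet.
customer_status = "partner_of_firm"        # pinned to the corkboard
discount_rate = 0.9                        # scribbled on a post-it

def calculate_payoff(loan_amount, rate, months):
    total = loan_amount * rate ** months
    if customer_status == "partner_of_firm":   # never passed in!
        total *= discount_rate                 # also never passed in!
    return total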

Looseness and flexibility is absolutely wonderful when you're building a hobbit hole: "this branch curves oddly…let's make a porch out of it!"

Looseness and flexibility is not called for, or even tolerated, when we're building space elevators. For that task we need a language that is brutal, uncompromising, and hard as concrete. Like Le Corbusier's plan to change human nature via modernist architecture. Or the rhubarb pie I attempted to make that one time Mrs. Clark let me into the kitchen.

Or math.

If the flexibility and ease of use of typical imperative languages come from sloppy data passing, then what does rock-hard Vulcan data passing look like?

Like this:

 
def calculate_interest(loan_amount, months, monthly_interest_rate):
    # Q: what if it's a leap year?
    # A: too bad; we only get to use the three pieces of data passed in

    # Q: what if it's for a partner of the firm?
    # A: too bad; we only get to use the three pieces of data passed in

    # Q: what if the underlying asset burned down?
    # A: too bad; we only get to use the three pieces of data passed in

    # Q: what if the customer has a discount coupon?
    # A: too bad; we only get to use the three pieces of data passed in

    # Q: what if the customer declared bankruptcy?
    # A: too bad; we only get to use the three pieces of data passed in

    # Q: what if the bank is being sued for racial discrimination in loans?
    # A: too bad; we only get to use the three pieces of data passed in

    # Q: what if the bank clerk hits control-C to override?
    # A: too bad; we only get to use the three pieces of data passed in

    total = loan_amount * (monthly_interest_rate ** months)

    # Q: what if we want to store this total in a database for later use?
    # A: too bad; we only get to return one piece of data and zero side effects

    return total

Once we add all those comments in, we start to realize that there are VAST numbers of things that this function probably wants to deal with but cannot, because it's constrained by some choices we made earlier.

We have two choices: we can either decide to eliminate those complications from our business / computation model, or – more likely – we can write code that simultaneously solves the actual business problem and crisply acknowledges all of its inputs.

If we rewrite the code to deal with leap years, partners in the firm, bankruptcy law, etc., the code is going to be much more complicated.

Bad, right?

No! That's a good thing. The complexity was there all along, but we were hiding it from ourselves.

Why, though, is it a good thing to make the complexity explicit?

The answer is that when complexity is implicit, we might be unaware of the tight couplings between different parts of a system. To get back to the analogy of the worker with the sloppy desk: if we're using not just information that was explicitly passed to us, but information that is pinned to a corkboard at the front of a room and information that we have written on post-it notes, all sorts of problems develop. We run into problems if the worker freezes up for an hour: the post-it note might change, the corkboard might change. We run into problems if we give up on worker #1 and hand the task off to worker #2, who sits in a different room with a different corkboard and a different stack of post-it notes. We run into problems if we want to double-check the computation three years later and the room, the corkboard, and the post-it note no longer exist.

If we very crisply define input and output, and remove side effects and data leakage from our systems, we get all sorts of very nice higher order tools that we can use in a world of billions or trillions of computing nodes coupled together by crappy wires.
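One of those higher-order tools, sketched in Python: because a pure function's output is fully determined by its inputs, any node can cache a result forever – or hand the cached answer to any other node – without fear of staleness.

from functools import lru_cache

@lru_cache(maxsize=None)   # safe ONLY because the function is pure
def interest(loan_amount, months, rate):
    # Same inputs, same output: always, everywhere.
    return loan_amount * rate ** months

interest(1000.0, 12, 1.01)   # computed once...
interest(1000.0, 12, 1.01)   # ...answered from cache; any node could have
                             # answered, and the answer can never go stale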

Yarvin has created Nock, a very ugly, very weird language that runs on absolutely no hardware at all…except inside his own interpreter / virtual machine. Nock is not nearly as human-readable as the examples I've put above; an actual simple Nock statement (from the docs!) reads

*[a 10 [b c] d] *[a 8 c 7 [0 3] d]

Hideous stuff.

Too Hideous?

Nock code is weird, no doubt.

Yet the code at the bottom of our modern software stack is hideous too. At the lowest level, the machine you're sitting in front of runs machine code whose human-readable form, assembly language, looks much like this (6502 assembly, in this case):

IRQ      LDA #$00
         STA CLICK
         LDA $C5
         CMP #$01
         BNE CONCHECK
         STA CLICK
CONCHECK CMP #$0C

Software engineers do not actually deal with ugly code like this – they use much higher-level code like the examples you've seen above. If we can cover over machine code with assembly, and assembly with C, and C with Python or Ruby or Go, then surely we can cover up the ugliness of Nock with something cleaner…and, in fact, Urbit proposes to do exactly that, with Hoon (although my own aesthetic sense does not yet actually see Hoon as being all that beautiful…but that's a different point).

So sheer ugliness alone doesn't make us reject Nock. We can slap it with a coat of whitewash – or better yet, like Huck Finn, get someone else to whitewash it for us, right?

…but Nock isn't just ugly. It's weird. So weird that the whitewash may or may not cover it.

I put a bit on the screen above in a linear format, but Nock is not really a sentence-like flow of words like C, COBOL, Russian, Linear-B, or even the boustrophedonic Rongorongo. No, Nock programs are tree structures.

This is not unprecedented – Lisp ("The greatest single programming language ever designed.") does too.

And here – suddenly – the conceptual Legos start clicking together.

Because a Nock program is functional, it operates without caring what machine it's on, what time it is, or what the phase of the moon is.

Every Nock program is a tree, or a pyramid. Every subsection of the tree is also a tree. Meaning that each subsection of a Nock program is a smaller Nock program that can operate on any machine in the world, at any time, without caring what the phase of the moon is. Meaning that a Nock program can be sliced up with a high-carbon steel blade, tossed to the winds, and the partial results reassembled when they arrive back, wafted on the wings of unreliable data transport.
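A toy sketch of that slicing (ordinary Python expression trees, not actual Nock): every subtree is itself a complete program, so each one can be evaluated anywhere and the answers reassembled.

# A toy expression tree: a bare number, or ("+" or "*", left, right).
# Each subtree is self-contained: no globals, no clock, no corkboard.
def evaluate(tree):
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    # These two calls need not happen here: each subtree could be
    # shipped to a different machine and the answer mailed back.
    l, r = evaluate(left), evaluate(right)
    return l + r if op == "+" else l * r

program = ("+", ("*", 3, 4), ("+", 1, ("*", 2, 5)))
print(evaluate(program))   # 23, on any machine, in any decade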

Nock programs – and parts of programs – operate without side effects. You can calculate something a thousand times without changing the state of the world. Meaning that if you're unsure if you've got good network connectivity, you can delegate this chunk of your program not just to one other machine, but to a thousand other machines and wait for any one of them to succeed.
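Sketched with Python's standard library (local threads standing in for those thousand machines): fire the same pure computation at many workers and take whichever answer lands first – harmless precisely because recomputation changes nothing.

from concurrent.futures import ThreadPoolExecutor, as_completed

def pure_task(x):
    return x * x    # no side effects: running it 1,000 times is harmless

with ThreadPoolExecutor(max_workers=8) as pool:
    # Delegate the SAME task redundantly; whoever answers first wins.
    futures = [pool.submit(pure_task, 12345) for _ in range(8)]
    first = next(as_completed(futures)).result()
    print(first)    # 152399025, from whichever "node" finished first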

Nock supports and assumes full encryption of data channels, so not only can you spread computation across the three machines in your home office, you can spread it across three thousand machines across the world.

The list goes on and on.

Envisioning and defining Nock took a stroke of genius. Implementing it, and Hoon, and Urbit, will be a long road.

But once it's all done, it will function like an amazingly solid, square, and robust foundation. All sorts of things that are hard now, because we have built our modern computational civilization on a foundation of sand, will become easy. We have vast industries based around doing the really hard work of fixing problems that modern computing has but a Nock infrastructure would not – Akamai, for example, pulls in $1.6 billion per year solving the problem that modern URLs don't work like BitTorrent / Urbit identifiers.

When an idea, properly implemented, can destroy multiple different ten-billion-dollar-a-year industries as a side effect, it is, I assert, worth thinking about.

Switching from Dominos to Legos

Back in the early days of the internet, when Usenet was cutting edge, there was a gent by the name of Timothy C. May who formed the cypherpunk mailing list.

His signature block at the time read

Timothy C. May, Crypto Anarchy: encryption, digital money, anonymous networks, digital pseudonyms, zero knowledge, reputations, information markets, black markets, collapse of government.

I bring up his sig block because in list form it functions like an avalanche. The first few nouns are obvious and unimportant – a few grains of snow sliding. The next few are derived from the first in a strict syllogism-like fashion, and then the train / avalanche / whatever gains speed, and finally we've got black markets, and soon after that we've got the collapse of government. And it all started with a single snowflake landing at the beginning of the sig block.

Timothy C. May saw Bitcoin. He saw Tor. He didn't know the name that Anonymous would take, and he didn't know that the Dread Pirate Roberts would run Silk Road, and he didn't know that Chelsea Manning would release those documents. …but he knew that something like that would happen. And, make no mistake, we're still only seeing small patches of hillside snow give way. Despite the ominous slippages of snowbanks, Timothy C. May's real avalanche hasn't even started.

I suggest that Urbit may very well have a similar trajectory. Functional programming language. Small core. Decentralization.

First someone will rewrite Tor in it – a trivial exercise. Then some silly toy-like web browser and maybe a matching web server. They won't get much traction. Then someone will write something cool – a decentralized jukebox that leverages Urbit's privileges, delegation, and neo-feudalist access control lists to give permissions to one's own friends and family, and uses the built-in cryptography to hide the files from the MPAA. Or maybe someone will code an MMORPG that does amazingly detailed rendering of algorithmically created dungeons by using spare cycles on the machines of game players (actually delegating the gaming firm's core servers out onto customer hardware).

Probably it will be something I haven't imagined.

And then, five, ten, or twenty years from now, the new architecture will really start catching on. More and more computation will slide into the black, beneath the waves. Silk Road will have no central server – parts of it will be running on your machine, copper-top. Amazon will still be in the cloud computing business…and so will your wrist watch. All sorts of interesting problems that we don't even think about right now because they're so intractable will become easy.

…and that's even without taking into account the parts of the system that I haven't talked about, like version control built into resources, etc.

In short, if Urbit works as designed, the world will get weird.

At least, that's my take on it.

Comments

  1. Mercury  •  Dec 6, 2013 @11:10 am

    You mean kelp?

    Damn, I'm going to have to set aside half of Saturday to properly dig into this. Fascinating stuff.

    He's still sticking with the whole 0's and 1's thing right?

  2. luagha  •  Dec 6, 2013 @11:14 am

    A wonderful explanation. I had glanced through your original pointer to this but hadn't gotten deep enough.

  3. David  •  Dec 6, 2013 @11:17 am

    For examples of less onanistic people who are actually solving the problems of mutability and scalability in intelligible languages that professionals actually use in production now, see (for background) Tony Hoare's seminal "Communicating Sequential Processes", Communications of the ACM 21:8 (1978): 666–677; and (for example) the popularizing works of Rich Hickey (such as http://www.youtube.com/watch?v=dGVqrGmwOAw), the writings of Michael Fogus, and developments in the burgeoning Scala ecosystem.

  4. Paul  •  Dec 6, 2013 @11:20 am

    Functional programming is great but why not use Haskell or Scala? Both are relatively mature languages. Twitter runs on Scala, for example. With Scala, I can use the Future monad to abstract away asynchronous computation. For example:

    val friendsTweets = for {
      friends <- getFriends
      tweets  <- getTweets(friends)
    } yield tweets

    chains together an asynchronous call to get my Twitter friends with an asynchronous call to get the tweets sent by a list of Twitter users, producing a Future[List[Tweet]].

    Nock is cool, but why use it over these existing tools?

  5. Shane  •  Dec 6, 2013 @11:21 am

    If it ain't broke, don't fix it. Oh yah, and don't re-invent the wheel.

  6. SPQR  •  Dec 6, 2013 @11:27 am

    I hadn't seen this, Clark, I'm going to have to go digest it.

  7. David  •  Dec 6, 2013 @11:30 am

    Also, on the not particularly exotic fact that values over identity in a functional language may be stored as a tree structure, see Hickey's very accessible explanation of his shallow tree implementation:

    And as you note, Clark, Lisp itself is essentially a parse tree.

    You write, "Because a Nock program is functional, it operates without caring what machine its on, what time it is, what the phase of the moon is." But you seem to be confusing the fact that functional languages explicitly isolate side effects and reap the benefits of immutability with the (orthogonal) fact that Nock, like Java and the Microsoft language implementations and many others, is VM-dependent. Well, VMs don't just appear and work magically; they're answerable to the next layer down– and eventually, if not directly, to hardware. (This is similar to your earlier confusing map and reduce, the functional language features, with map/reduce, the data parallelism, aggregation, and processing strategy.)

    You write, "Envisioning and defining Nock took a stroke of genius", but you haven't shown that. In truth (and despite your confusion about the essentials), it appears to be an f(cluster) of better thinkers' ideas, projected as rhetoric to draw in those, unprotected by relevant knowledge, for whom the nexus of ideology, technology, and vocabulary is a moth-baiting flame.

    Finally, a word about the difference between brilliance and wisdom:

    "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." ~Martin Fowler

  8. cdru  •  Dec 6, 2013 @11:30 am

    Is there a CliffsNotes version of this?

  9. Clark  •  Dec 6, 2013 @11:41 am

    @Mercury

    He's still sticking with the whole 0's and 1's thing right?

    Yes.

    …but he reverses their values.

    I wish to hell I was joking.

An earlier version of the FAQ included this insanity:

    We should note that in Nock and Hoon, 0 (pronounced “yes”) is true, and 1 (“no”) is false. Why? It’s fresh, it’s different, it’s new. And it’s annoying. And it keeps you on your toes. And it’s also just intuitively right.

  10. davnel  •  Dec 6, 2013 @11:42 am

    Ken:
    I know I'm picking a lot of nits, BUT, please read through this article, carefully, with the idea of turning it into readable English. I think the article is good, but I keep stumbling over grammar and spelling issues.
    Thank you

  11. Clark  •  Dec 6, 2013 @11:42 am

    @cdru

    Is there a CliffsNotes version of this?

    Sadly, I think I just wrote the one extant example of a CliffsNotes version of what-is-this-all-about.

  12. Wohlfe  •  Dec 6, 2013 @11:43 am

    Very interesting read, my main concern though is with assuring the information you receive from another node is accurate. It will be very interesting to see how they solve common issues.

  13. Frank  •  Dec 6, 2013 @11:59 am

    @davnel

    Ken didn't write this. Clark did.

  14. Darryl  •  Dec 6, 2013 @12:08 pm

    I am not a techno person at all, just a lawyer. What I took from this was that Clark believes Urbit will take down all world governments, and he would love it. :)

  15. Jamie  •  Dec 6, 2013 @12:17 pm

    I will bet you. This will never work.

We have UDP and TCP. I can think of simpler protocols, but they will use similar protocols (space has issues, but it seems to be working there). If we get to a Strossian singularity, all bets are off (or handled in Jupiter's orbit, which might strengthen or weaken the underlying case, depending).

Now. It has been done before, better. See Plan 9. See the GNU project. Hell, I had my own, back before I understood that people don't want it. The problem is security. This does not solve security. You might say, "yes! It does", but it doesn't. PGP solved a problem. PEM solved a problem.

    And here we are.

    We build on accretion. Lots of books of inherited wisdom, and behaviors.

    Tell me when I can easily pipeline the results of my Perl one liner into nomaxes. What? You might say. Nomaxes. See my web page. You'll get it, you're as smart as Plato.

  16. Clark  •  Dec 6, 2013 @12:22 pm

    @Jamie:

    We build on accretion.

    If only I had addressed this point in my post.

  17. francis  •  Dec 6, 2013 @12:23 pm

Functional programming is absolutely the future. I wish I could wrap my head around it faster. I don't think that we'll ever actually see Urbit as a reality, at least not on a large scale. What will happen is it will get worked on, then something else combining its concepts with imperative programming will get implemented, and eventually something that looks nothing like Urbit or C++ but a little like both will be the tool everybody's using. The differences will be subsumed by progress. I just hope I can keep up.

  18. Clark  •  Dec 6, 2013 @12:23 pm

    @Wohlfe

    Very interesting read, my main concern though is with assuring the information you receive from another node is accurate.

    Accurate is hard.

    Authentic is easier.

  19. htom  •  Dec 6, 2013 @12:23 pm

    Yummy. Thank you, Clark.

    (reads more)

Maybe I should not say "Thank you". I can see how this could be a real time-sucker. It reminds me of my discovering the programming languages Icon –> Idol –> Unicon.

    In any case, the x86 architecture is a depleted uranium weight on computing, and has been since the invention of any number of other, usually better, architectures.

  20. Gabriel  •  Dec 6, 2013 @12:26 pm

    Having read through some of the Urbit docs, I'm convinced that Urbit was conceived not as a fix for TCP but as a fix for DNS. Always remember that Moldbug is intensely political; he's not so much a tech geek with political interests as a political geek with technical interests. If you start looking for a way to replace our current centralized, hierarchical, public-identities network naming system (DNS) with a Bitcoin-like decentralized, anonymous-but-reliable identity service, you might well end up on the road leading to Urbit.

  21. Clark  •  Dec 6, 2013 @12:32 pm

    @Gabriel

    If you start looking for a way to replace our current centralized, hierarchical, public-identities network naming system (DNS) with a Bitcoin-like decentralized, anonymous-but-reliable identity service, you might well end up on the road leading to Urbit.

    We are entirely of one mind on the general thrust here.

    The neo-reactionary stuff on Urbit that seems to be decoration is not. It is the whole point.

  22. Malc  •  Dec 6, 2013 @12:32 pm

@Clark: using 0 for true and 1 for false is neither fresh, different, nor annoying. Shells (in the Bourne, Korn, Born Again, etc. etc. etc. sense) have been doing it consistently for something over 40 years, and I'd hazard a guess that shell scripts make up an appreciable portion of deployed software in the world today.

    There are good reasons for Shells to do this (mainly related to the fact that there are many ways for a command to fail, and being able to encode the failure type in the return value is useful, while by definition if a command does what it's supposed to do, there is only one result code: success).

    But there are no good reasons for abstract logic implementations to do this, because regardless of any pseudo-anarchic posturing, logic has rules, and implementing "true" as "0" and "false" as "1" has the sort of side effect that you complained about earlier:

    (A and B) equates to (A * B) and (A or B) equates to (A + B) if and only if (false == 0).

    Of course, there's no universal mandate that logical AND and integer MULTIPLICATION should be equivalent, but absent distinct hardware to handle each (which creates an arbitrary hardware requirement), there is going to be value in the equivalency, and someone who designs a programming language/environment/VM/whatever without a view to the implementation is a fool[1]

    Malc.

    [1] This is a very abbreviated reference to remarks made by my programming languages professor at college. Since he was insisting that we use ML for assignments, too many of my brain cells were consumed trying to get my head around what he had assigned for me to take completely accurate notes on his bon mots like this one, but that was the gist. And since he invented ML (i.e. his name was Robin Milner), I tend to value his comments. Incidentally, he (and others) were working on the issues you mention in 1980. Nock/Hoon/Urbit/Turd/Whatever is not the answer, just a piece of mental masturbation.

  23. Carl  •  Dec 6, 2013 @12:32 pm

    Great post.

    BUT!

    As a physics Ph.D. (and someone who has done his own plumbing once or twice), I should alert you that

    For the lawyer, the plumber, the physics PhD, all programming languages are alike: they're a sequence of letters in a text buffer somewhere that tell the computer what to do, and details beyond that are boring.

is fast becoming false, at least for physicists. Much like the literary greats create impressive turns of phrase out of how it reads, how it sounds, idiom, entendre, etymology, etc., many physicists have long had a similar relationship with math. The math is not a series of expressions to parse into a final quantity – it is an elucidation of hidden truths, deeply tied to the underlying theory and its history and connections across disciplines, etc.

    As the trajectory of sciences becomes more computational, a similar appreciation for the deeper truths revealed by choice of algorithm, expression, how it lives on the hardware, etc is seeping into our cultural veins.

    Just thought I'd mention it so you don't miss out on good chats with any physicists you might run into.

  24. Malc  •  Dec 6, 2013 @12:37 pm

    @Gabriel, @Clark:

    DNS and BitCoin have contradictory goals: any BitCoin spends like every other. No URI (generally) may be mistaken for any other. Webs-of-trust are all very well, but ultimately the current DNS is a web-of-trust.

  25. Clark  •  Dec 6, 2013 @12:38 pm

    @Carl:

    You know, as I typed that, I realized that the majority of physics PhDs I know do write code.

    Good correction.

  26. Clark  •  Dec 6, 2013 @12:39 pm

    @Malc:

    DNS and BitCoin have contradictory goals

    Indeed.

    I point you to Yarvin's words:

    http://www.urbit.org/2013/09/24/urbit-intro.html

    If Bitcoin is money, Urbit is land. (Floating land is still land, if there’s a limited amount of it.)

  27. azazel1024  •  Dec 6, 2013 @12:41 pm

    Genius and insanity sometimes are difficult to prise apart.

As for Nock and Hoon, etc…everything I have read about them sounds more like insanity than genius.

A lot of recycling extant solutions, but in much more complicated ways (that don't solve the problems any better), and most of the "truly novel" stuff is more about adding complexity than it is actually coming up with a better solution or a simpler solution.

There are some interesting things in it, but mostly from what I have seen it's a pile of over-warmed logic gate excrement.

  28. naught_for_naught  •  Dec 6, 2013 @12:43 pm

    There are some who would map the rise and fall of ancient Greece to the increasingly complex styles in their columns — beginning with the elegantly simple Doric column, evolving into the more ornate Ionic column and ending with the elaborately detailed Corinthian column.

  29. David  •  Dec 6, 2013 @12:44 pm

    And those who do would mistake coincidence for causation.

  30. Hoare  •  Dec 6, 2013 @12:44 pm

    This occurs when something on ~doznec or ~zod has (essentially) gone awry, and all of Urth gets big-banged all over again. What does this mean for the Urbit user? Your piers won’t work again, nor will your ships. Delete your piers (everything in URB_HOME except for urbit.pill and zod/) and then build a new pier and (if applicable) rebuild your destroyer.

    Why can’t that stop happening?

    It will, total continuity is expected on October 4th, 2013.

    http://www.urbit.org/faq.html

  31. Clark  •  Dec 6, 2013 @12:46 pm

    @azazel1024

A lot of recycling extant solutions, but in much more complicated ways (that don't solve the problems any better), and most of the "truly novel" stuff is more about adding complexity than it is actually coming up with a better solution or a simpler solution.

    This was absolutely my take on it at first.

    I now disagree with my original evaluation because of the synergies that the individual recycled extant solutions achieve when used together in a certain way.

    I don't think Yarvin is doing this because he finds the pieces interesting; I think Yarvin put the pieces together because that's how he wanted to achieve his end goals.

  32. Clark  •  Dec 6, 2013 @12:47 pm

    @naught_for_naught

    There are some who would map the rise and fall of ancient Greece to the increasingly complex styles in their columns — beginning with the elegantly simple Doric column, evolving into the more ornate Ionic column and ending with the elaborately detailed Corinthian column.

    There are some who would map the rise and fall of the stock market to the rise and fall of hemlines.

  33. anne mouse  •  Dec 6, 2013 @12:48 pm

    @wohlfe:

    assuring the information you receive from another node is accurate.

    You don't and you can't, regardless of what yet-to-be-invented technologies you're using. The only reliable way to verify the accuracy of a result you get after asking another machine "hey, can you calculate this for me?" is to re-calculate it yourself. The best you can do is try to be picky about who you talk to. You can also reduce errors through redundancy (ask lots of people and go with consensus), but if someone is deliberately messing with your results, you may have no way of knowing whether you're dealing with a lone troll and a bunch of honest machines, or a bunch of corrupted machines and a lone honest voice.
    There's been some work around encryption that tries to make some kinds of calculations feasible while hiding what's being calculated upon. Such schemes should make it possible to detect and automatically reject various crude methods of messing with your results (as well as certain accidental errors). But these are really designed for (partial) privacy, not for correctness. Ultimately there's no way to verify that the calculation performed was the one you requested, short of duplicating the work.

  34. dtsund  •  Dec 6, 2013 @12:56 pm

    I think you're arguing that Urbit is the right thing. Which, even if true, wouldn't necessarily be good.

  35. naught_for_naught  •  Dec 6, 2013 @1:00 pm

    @Clark

    There are some who would map the rise and fall of the stock market to the rise and fall of hemlines.

    Yes, but none of those people were in a position to provide me with three units of transferable credit. Therefore, I would have made no effort to retain that little tid-bit. Plus there is in fact no strong correlation between hemlines and the stock market. There is strong correlation in my example, though I doubt any cause-effect relationship.

  36. Craig  •  Dec 6, 2013 @1:01 pm

    If you want a scalable, concurrent, distributable, fault-tolerant functional programming language, I suggest you start with Erlang. Ericsson designed it 20 years ago and has used it to implement huge telephone switching systems and other large-scale massively-concurrent products. And the language itself, including tools, is given away for free.

  37. htom  •  Dec 6, 2013 @1:08 pm

    @anne mouse — Ultimately there's no way to verify that the calculation performed was the one you requested, short of duplicating the work.

And if your hardware is flaky, even duplication isn't trustworthy. Send the same computation to three different FPUs: are the two that agree correct, or is the oddball correct? There are ways to attempt to solve this problem, but they are not pretty. And not reliable. It's much easier on your dreams not to have to imagine such happenings. (I'm looking at you, Intel P5/FDIV.)

  38. Clark  •  Dec 6, 2013 @1:10 pm

    @naught_for_naught

    @Clark

    Yes, but none of those people were in a position to provide me with three units of transferable credit.

    :)

    There is strong correlation in my example

    With a sample size of 1, any correlation you find will be 1.0.

  39. Clark  •  Dec 6, 2013 @1:11 pm

    @htom

Send the same computation to three different FPUs: are the two that agree correct, or is the oddball correct?

    Go to sea with one chronometer or three, but never two.

  40. anne mouse  •  Dec 6, 2013 @1:16 pm
  41. Conrad  •  Dec 6, 2013 @1:17 pm

And I was so certain when reading through the docs on Nock and Hoon that it was satire by a CS grad student.

  42. CHH  •  Dec 6, 2013 @1:28 pm

    @Mercury

    You mean kelp?

    Maybe he meant hemp?

  43. Vincent  •  Dec 6, 2013 @1:35 pm

    The problem with this is that just because a platform might discourage bad code, doesn't make bad code impossible. You can still make bad code that does bad things even in a functional universe.

Your crypto isn't going to magically become secure just because it was implemented in Hoon.

Hard problems remain hard, and abstractions over those problems remain leaky until someone goes and plugs the leaks by deeply understanding the domain and properly modeling it. You can't rely on the computer to do it, unless P == NP, and nobody really believes anybody's going to find that magic algorithm anymore.

Hoon won't change the world, but it will make certain things easier. But the things it will make easier are already made easier by currently existing tools. Hoon will become just another Lisp: a cool idea that will founder because it's not compelling enough to drive widespread adoption, and not social enough to care.

  44. htom  •  Dec 6, 2013 @1:57 pm

    anne mouse — Thanks! How have I missed that strip? Never mind, bookmarked!

  45. CKemp  •  Dec 6, 2013 @2:02 pm

    "Then someone will write something cool – a decentralized jukebox that leverages Urbit's privileges, delegation and neo-feudalist access control lists to give permissions to one's own friends and family and uses the built in cryptography to hide the files from the MPAA. Or maybe someone will code a MMORPG that does amazingly detailed rendering of algorithmically created dungeons by using spare cycles on the machines of game players (actually delegating the gaming firms core servers out onto customer hardware)."

Maybe it's because I've read some odd books, but this sounds like it has the potential to be the seed of a great tree, to grow into something as amazing as a distributed, global smart AI, a technological world brain, with computing power limited only by the number of connected nodes.

  46. Craig  •  Dec 6, 2013 @2:13 pm

    @CKemp: "a distributed, global smart AI, a technological world brain, with computing power limited only by the number of connected nodes"

    … and for a vision of how such an AI might behave, consult Harlan Ellison's classic story, "I Have No Mouth, and I Must Scream".

  47. perlhaqr  •  Dec 6, 2013 @2:32 pm

    Heh. I know TCM.

  48. Gabriel  •  Dec 6, 2013 @2:38 pm

    DNS and BitCoin have contradictory goals: any BitCoin spends like every other. No URI (generally) may be mistaken for any other. Webs-of-trust are all very well, but ultimately the current DNS is a web-of-trust.

    @malc: Of course money systems and real estate systems have contradictory goals; they are different apps. However, both sorts of apps may attempt to use methods which rely on authoritative root authorities or decentralized webs of trust. DNS is absolutely not a web of trust; IANA runs the root zone and if you don't like their take on things you don't get to use the internet. Urbit would do to DNS what Bitcoin hopes to do to government-fiat currency: take the power away from any single authority.

  49. CKemp  •  Dec 6, 2013 @2:52 pm

    Hope for the best and prepare for the worst.

Let's aim for something closer to the end of David Brin's Earth or Michael from Simon Morden's Metrozone Saga.

  50. Marzipan  •  Dec 6, 2013 @3:42 pm

    Clark,

    Actually, with any N = 1, the correlation coefficient is undefined.
r = ∑(z_X · z_Y) / (N – 1)
    or, if you prefer the co/variance form of the correlation, each variance is undefined for N = 1, as each variable's variance will be 0, e.g., {∑X^2 – [(∑X)^2/N]} will reduce to X^2-X^2, yielding zeroes again in the denominator.

  51. G. Filotto  •  Dec 6, 2013 @3:53 pm

Clark, you realize you made me read for about 4 hours trying to understand Urbit when I have important shit to do for work? I mean, come on man, this is way harder than assassination politics to overthrow governments, you know? We should co-ordinate our individualist anarchistic tendencies in meaningful ways. I mean, I actually did some machine code and assembler 25 years ago, and this was all Ancient Greek written in Linear-B to me. Yet it kinda made sense and was interesting but… think of the hours of non-productive anarchy you prevented, Clark! I'm starting to think you're really a government agent sent to cripple the popehat readers from taking action. Including actions like working and eating and…damn you Clark!

  52. J  •  Dec 6, 2013 @4:11 pm

    Reminds me of "We have 10 standards, lets synthesize them into 1 superior standard to rule them all!"…


    "We Have 11 standards!"

  53. Marconi Darwin  •  Dec 6, 2013 @5:01 pm

    It is nice to have made out like a thief so that you can spend time on pure nonsense like this. The top comment on the presentation covers it well.

Throwing away previous work and starting fresh is an indication that someone figures he or she can do it better if not for legacy. Plus it is more fun. Also, designing things is naturally very appealing.

    Very few succeed. Very, very few.

  54. Anon-UV-Squirrel  •  Dec 6, 2013 @7:12 pm

    Why would anyone want to throw away the current technology stack

Because of the flaws of those that program it. Humans make errors even when they try hard not to. It would be much better to redesign our computer systems so that flawed code is much better contained when an error does happen. Unfortunately, the current security setups often lack the ability to contain an exploited error.

  55. Bear  •  Dec 6, 2013 @7:12 pm

    RE: "blast resistant cement"
    …properly known as reinforced Roman concrete, silica powders and all.
    Possibly not the best analogy for a new system, completely from scratch, not based on previous versions. [grin]

    UHPC – https://www.cement.org/bridges/br_uhpc.asp

    Roman Concrete – https://en.wikipedia.org/wiki/Roman_concrete#Material_properties

  56. Clark  •  Dec 6, 2013 @7:31 pm

    @Marzipan

    Actually, with any N = 1, the correlation coefficient is undefined.
    r = (zx*zy)/(N – 1)
    or, if you prefer the co/variance form of the correlation, each variance is undefined for N = 1, as each variable's variance will be 0, e.g., {∑X^2 – [(∑X)^2/N]} will reduce to X^2-X^2, yielding zeroes again in the denominator.

    I meant to say that in a single time sequence of

    { Greek rise | Doric }, { Greek stasis | Ionic }, { Greek collapse | Corinthian }

    which we can plot as

    { 0, 0 } , { 1, 1 } , { 2,2 }

    there is a 1.0 correlation…something you'd be unlikely to get if you threw in a second time sequence of, say, Carthaginian rise and fall versus column types.

    I thought that that would be communicated given the context, but perhaps I condensed my argument too much.

  57. Garrett  •  Dec 6, 2013 @8:01 pm

    Wow. Okay. Going off of your description (because that's the topic of discussion, and because I'm lazy) rather than the source documents:

Imperative languages are dominant for several reasons. One of which is efficiency. More specifically, at the bottom of the software stack is hardware, somewhere. Every piece of hardware I know operates in an imperative fashion. Being able to convert a higher-level description into a lower-level one allows you to get more performance out of the system. Put another way: what kind of performance disparity between your local machine and your remote machine do you need before running native code locally is slower than running whatever levels of condensed abstraction you can manage remotely, even assuming 0 latency for communications in this model? One of the reasons that Java is actually usable is that run-time compilers work reasonably well.

    Next, the reason that people use computers is *because* of side-effects. Reading data from disk is a side-effect. Altering the pixels on a screen is a side-effect. The functional programming community has managed to work around this through the use of Monads (cool hack!). However, this now requires schlepping around the state of the Universe. At a certain point, the time required for transmission/reception of the Universe to a remote service will exceed the time required to perform the operation locally. This doesn't mean it is worthless, but it does mean that the set of problems that this is good for is pretty small.

As mentioned up-thread, there isn't any good way to be able to trust the processing done by an arbitrary remote node on an arbitrary problem. Certain problems, like finding solutions to trap-door problems, work very well for this. In those cases, you can verify the solutions. There's been work done on being able to perform operations on encrypted data without knowing what it is, though I'm not certain that there's any way to be sure that the operations were performed correctly.

    What I see is a problem where it is very hard to verify that an arbitrary remote node has performed the correct calculations while at the same time there being a business model for them to do so. The Bitcoin model works well because the work performed is easily verified through trapdoor functions, and the servers are paid in bitcoins. Thus, even if you find a reason to use functional languages, there's still a lack of incentive for the general-case to work.

  58. jdgalt  •  Dec 6, 2013 @8:23 pm

    Opaque languages are useful only to miscreants who want to sneak code onto your computer that you'd never let in if you understood what it did.

    There's a good analogy to the writing of laws here (*cough*ObamaCare…)

  59. David  •  Dec 6, 2013 @8:28 pm

    "We have to submit to Emmanuel Moldbug's techfascistic obfuscationware in order to find out what's in it."

  60. David  •  Dec 6, 2013 @8:31 pm

    @Garrett

    At a certain point, the time required for transmission/reception of the Universe to a remote service will exceed the time required to perform the operation locally. This doesn't mean it is worthless, but it does mean that the set of problems that this is good for is pretty small.

    It's only necessary to transmit/receive the deltas.

  61. CJK Fossman  •  Dec 6, 2013 @9:16 pm

    @David

    It's only necessary to transmit/receive the deltas.

    Right. But doesn't that postpone the problem rather than eliminating it?

  62. DC  •  Dec 6, 2013 @9:29 pm

    I would address the technical, cultural, and political issues, but others have made the points far more in depth and eloquently than I can.

    On the other hand, am I the only one who noticed that Clark wrote "Huck Finn" where he meant "Tom Sawyer"?

  63. Rhonda Lea Kirk Fries  •  Dec 6, 2013 @10:01 pm

    It is nice to have made out like a thief so that you can spend time on pure nonsense like this. The top comment on the presentation covers it well.

    Maybe he's brilliant, but if so, it's lost on me. He's a terrible communicator. Probably worse in the video–I shut it off the third time he said "basically"–than in print (which I find entirely impenetrable).

    I read a comment on Reddit that claims "Mencius Moldbug (aka Curtis Yarvin) has an intentionally inflammatory style, and often (possibly always) attempts to use language which discourages further investigation of his ideas if the reader succumbs to their superficial emotional response. This kind of filtering mechanism for preventing casual interest in his body of work synergizes well with his hyper-Elitist persona."

If this is a true characterization, then he's not only a lousy communicator, he's also a narcissist. And annoying. Really freaking annoying.

  64. Tom Hunt  •  Dec 6, 2013 @10:37 pm

    If he sets out to write prose that attracts some types of people and repels others, and succeeds at it, then that's a mark of a good communicator, no? Being annoying to the type of people he doesn't want to deal with is a feature, not a bug.

  65. Deathpony  •  Dec 6, 2013 @10:56 pm

So…in essence…is the "Killer App" of this whole mess a way to reclaim individual control of one's internet life…accessibility without compromise?

Or is it a cry from the wilderness for someone to stop the rot, and a subconscious goad to make them try? (*that looks unnecessarily stupid…I bet I could achieve that outcome so much easier, here…*)

  66. E  •  Dec 6, 2013 @11:08 pm

    Clark, have you ever written about programming like this before? I never would have pegged you as someone to write a long piece on functional programming and such. Then again, I wouldn't have pegged you as the kind of person to write anything because until now I felt like I had no clue what in particular you did/were into. Did I just miss something in Popehat history? Or has some part of the veil been pulled back here?

  67. Erbo  •  Dec 7, 2013 @2:31 am

    Let me just leave this little bit of wisdom here to illustrate an issue at the core of this project:

    Q: Why was God able to create the world in 6 days?
    A: He didn't have to contend with an installed base.

  68. J. Bryant Hill  •  Dec 7, 2013 @2:42 am

    Thanks for the objective review of Urbit. I'll stick to tinkering with mainstream programs for now.

  69. Clark  •  Dec 7, 2013 @3:22 am

    @E

    Clark, have you ever written about programming like this before?

    No.

    I never would have pegged you as someone to write a long piece on functional programming and such.

    Some people can waste a weekend on Civ 5.

    I'm one of those people who can waste a weekend on Wikipedia.

    I could write a similar sized essay on Nazi Germany and the Trolley Problem, or the history of bungalow houses as the form relates to Perry opening Japan.

    Shorter: I am good at generating hot air on a variety of topics. I really should move to D.C. and get myself on that phat money train.

  70. Jamie  •  Dec 7, 2013 @4:36 am

    Let me try again.

    Timothy C. May proposed an anarcho-libertarian world based on crypto, back in the day. There was a mailing list, rants, and the usual happened. I was on the list; I have the archives.

    One good thing that came out of it was Julian. I remember him. Another was anonymous remailers. The TOR project is a sort of intellectual heir.

    What sucked was geeks wanting to take over the world, and explaining why, this time, it is different.

    Now, I am a geek. A highly specialized geek. My family doesn't get what I do. I tell my partner about declarative SPAs and coercive convergence, then, we go to dinner.

    I'm a plumber. I almost was a lawyer, but thought better about that. This guy does not get the point of most of what came before. You can write that off as, "exactly". The point is the future is already here. Bitcoins. Assassination markets (look up Jim Bell). Anonymous communication.

    Ok, maybe the future started too soon, and needed a reboot. That is a popular scifi trope. Well, I can't refute that. Here we are.

    JavaScript, as evil as it is, +1.
    Declarative languages (SQL, endless numbers of defined grammars) +1

    Heroes to the throne, +0

    That only looks like a Win.

  71. Grandy  •  Dec 7, 2013 @6:24 am

    @Tom Hunt

    If he sets out to write prose that attracts some types of people and repels others, and succeeds at it, then that's a mark of a good communicator, no? Being annoying to the type of people he doesn't want to deal with is a feature, not a bug.

    A feature? Sure. Does this inherently indicate good communication? No. Not even a little bit. What an absurd claim.

  72. Rhonda Lea Kirk Fries  •  Dec 7, 2013 @8:12 am

    Being annoying to the type of people he doesn't want to deal with is a feature, not a bug.

    Particularly if his goal is to avoid peer review.

    A lifetime ago, I wrote such gobbledy-gook–but only once. The chairman of my philosophy department consistently demonstrated a lack of enthusiasm for a clear and concise writing style, so I embarked on an experiment. I produced for him a content-free paper in his preferred form–Moldbug on steroids, as it were–which even I didn't understand.

    I got an A. I got half a page of glowing praise. I got offered the TA position for the coming year.

    I got out of there as quickly as possible.

    Lois McMaster Bujold said, "Meaning is what you bring to things, not what you take from them." The great joy of screed is that anyone with enough time on his hands can adopt it and make it his own. The sheer density of output leaves it entirely open to individual interpretation consistent with one's own pet theories.

  73. AP²  •  Dec 7, 2013 @8:16 am

    I have to say I'm disappointed with the new announcement. I shouldn't be, since when I read about Nock, Hoon, etc., I assumed it was either a pure thought experiment or an elaborate troll, and that nothing practical would come out of it.

    The DNS proposal in particular just sounds badly informed. He starts by invoking Zooko's triangle (which, by the way, is a conjecture, not a proof), the claim that a naming system can have at most two of three attributes: secure, decentralized, and human-meaningful.

    Then he proceeds to propose two kinds of names: the submarines, which are in fact nothing more than PGP keys, and the shorter ones. To review the latter in light of Zooko's Triangle, we need to define what we mean by 'decentralized'.

    If we mean that a node can generate its own name in an offline way (as submarines and PGP keys can), then we see that his proposal only actually delivers one of the three attributes (secure).

    If we consider a system of shared consensus like his web of friends to be decentralized, then there's already a system that fulfills the Triangle; it's called Namecoin, and its omission from the post is glaring.

    Namecoin offers security (as much as Bitcoin does), decentralization (in the shared-consensus definition) and even human-meaningful names.

    So what new feature does this proposal bring, besides granting him a monopoly over the hierarchy of names?
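
    To make the submarine point concrete, here is offline, self-certifying name generation in miniature: secure and decentralized, but nothing you'd put on a business card. (A random blob stands in for a real public key; this is a sketch, not Urbit's actual scheme.)

        import hashlib
        import os

        public_key = os.urandom(32)   # stand-in for a real PGP/Urbit key
        name = hashlib.sha256(public_key).hexdigest()[:16]
        print(name)   # globally unique and verifiable, but unmemorable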

  74. David  •  Dec 7, 2013 @8:49 am

    …badly informed…
    …a monopoly over the hierarchy of names…

    That's it in a nutshell. Techfascistic obfuscationware.

  75. ...  •  Dec 7, 2013 @11:13 am

    What is the issue with UDP? You consider it 'unreliable'? Just because the protocol itself does not guarantee delivery doesn't mean that UDP is a bad choice. As long as the application is designed to handle loss and/or retries, it doesn't matter if the network drops packets. Frankly, TCP is pretty poor for certain applications over lossy networks; UDP can do a much better job when the application does not count on some features of TCP.
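
    For instance, a bare-bones retransmit loop in Python; the address and the ack convention are placeholders, not anything a real protocol specifies:

        import socket

        def send_with_retry(payload, addr=("192.0.2.1", 9999), retries=3, timeout=1.0):
            """Application-level reliability over UDP: resend until an ack arrives."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            try:
                for _ in range(retries):
                    sock.sendto(payload, addr)
                    try:
                        ack, _sender = sock.recvfrom(1024)
                        return ack                # acknowledged; done
                    except socket.timeout:
                        continue                  # lost or late: resend
                raise TimeoutError("no ack after %d attempts" % retries)
            finally:
                sock.close()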

  76. Narad  •  Dec 7, 2013 @2:41 pm

    Hideous stuff.

    I take it you also failed to appreciate APL.

  77. Malc  •  Dec 7, 2013 @2:51 pm

    @Gabriel DNS *is* a web-of-trust, just a particular degenerate form of one. The fact that (at the moment) there is only a single generally-accepted root (IANA) is irrelevant to the issue. It is trivial to create a new TLD (".popehat", for instance), and it is straightforward to propagate that TLD: all you need is a root that delegates everything else to the existing root servers (this is, effectively, how the Namecoin system exists at the moment).

    As others have mentioned, there exist approaches (Namecoin, Petname) that attack some of these issues, but fundamentally they're unsatisfactory: Namecoin, for example, has a finite supply of names (21 million, I believe), which is great if you want a market in names (they will appreciate in value), but crap in the real world (not learning from the mistakes that resulted in IPv4 exhaustion is unwise).

    Bluntly, the assertion that Clark repeated (that DNS covers a finite address space / amount of land) is just wrong. Sure, implementations would start to roll over and die if you fed them URIs with 1000 subdomain parts, each with a 32-character name… but that's an implementation weakness.
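
    A toy model of DNS as nested delegation, showing how trivially a new TLD slots in once some root answers for it (names and addresses are made up):

        # Each zone delegates labels to sub-zones; leaves hold addresses.
        ROOT = {
            "com": {"example": "192.0.2.10"},
            "popehat": {"www": "203.0.113.7"},   # a brand-new TLD
        }

        def resolve(name, zone=ROOT):
            """Walk the labels right to left, one delegation per step."""
            node = zone
            for label in reversed(name.split(".")):
                node = node[label]
            return node

        assert resolve("www.popehat") == "203.0.113.7"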

  78. John Beaty  •  Dec 7, 2013 @8:00 pm

    It is somewhat amusing to see people's reactions to the whole Hoon/Nock/Urbit proposal, and the most interesting thing to me is how people are mostly picking at each piece to say "It isn't SO bad. It's just badly implemented/improperly used/not really what's wrong," or "It's hubris/egotism/lack of understanding about UDP/TCP, or not really functional." This misses the whole point of the thing, AFAICS, which is that the existing stack, taken as a whole, is a giant kludge built on a clusterfuck and wrapped in kleenex to support it.

    I've been writing programs for a while, nothing particularly interesting, mind you, but the usual range of device drivers, GUI stuff, some graphics, some games, some business stuff, blah blah woof woof. Typical bullshit. But I know, and so do most of the programmers I know, that to build a really good program, you end up doing it twice: once to solve the problems and a second time to do it right. This looks to me like nothing more or less than an attempt to convince people to begin the process of "doing it right".

    Now, I know that "right" and "solve the problem" are not exactly static, nor are they even things that everyone would agree on. But as far as I can see, no one person or group is looking at the whole space from soup to nuts except Yarvin. So it looks worthwhile to me to engage with his ideas, not just reject them (because "DNS would work right if it were implemented properly," which misses the point about it being non-secure and maybe non-securable), but to actively work with them and see if maybe there's something there that isn't visible while you're looking through the lens of what's already here.

  79. Grandy  •  Dec 7, 2013 @9:18 pm

    @John Beaty

    It is somewhat amusing to see people's reactions to the whole Hoon/Nock/Urbit proposal

    You should see your post, from over here.

    This looks to me like nothing more or less than an attempt to convince people to begin the process of "doing it right".

    Just like OO? Agile? TDD? Extreme Programming? Paired Programming? We can spend hours just going over the list.

    1. Sometimes people solve the problems and get it right the first time.

    2. It's often impossible to solve the problems and get it right the first time.

    There is nothing about Urbit that will change this, because the problems that lead to this have nothing to do with the technologies, tools, and techniques we are using to create software.

  80. Orv  •  Dec 8, 2013 @12:34 pm

    To me this has a strong whiff of cultism. Especially the coining of new words for old concepts. It reminds me strongly of Scientology and its particular jargon ("thetan" instead of "soul", "enturbulate" instead of "disrupt", etc.) L. Ron Hubbard was a pretty smart guy, too.

  81. Mike Scott  •  Dec 9, 2013 @1:21 am

    A small correction here. Most declarative languages are not Turing complete (until they get redefined to include imperative bits, like SQL stored procedures and Javascript extensions to HTML), and so they're not all equivalent to each other or to imperative languages.
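
    One concrete instance, taking pure regular expressions (no backreferences) as the declarative sublanguage; the code is illustrative only:

        import re

        # Declarative: describe the set of strings; matching always terminates.
        pattern = re.compile(r"(ab)+")
        assert pattern.fullmatch("ababab")

        # Imperative: the same check as a loop. Nothing in the language
        # guarantees that an arbitrary loop like this one ever halts.
        def matches(s):
            while s:
                if not s.startswith("ab"):
                    return False
                s = s[2:]
            return True

        assert matches("ababab")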

  82. Tom Z.  •  Dec 9, 2013 @7:21 am

    "But once it's all done, it will function like an amazingly solid, square, and robust foundation. "

    That line reminds me of Step 3: Profit!

    In ten years, if any of the underlying components have been built, and fulfill their initial promises, I will take a look at this oddity.

    Carl Sagan said, "Extraordinary claims require extraordinary evidence." This system of languages and operating systems makes some very fantastical claims. I have seen many programming projects with elaborate and elegant designs which collapse under the weight of their own elegant perfection.

  83. Q  •  Dec 9, 2013 @7:33 am

    Maybe I was too annoyed by the health insurance stuff*, but I kind of lost interest when a post about programming theory used examples that had a function to calculate interest also print the result. That's such poor programming practice that it makes me wonder how much the person writing really knows about programming, and makes me think it's not worth investing my time in reading the whole article.

    * If you're surprised and annoyed that a for-profit corporation discontinues a product (even one you use and like), maybe you should understand some economics and this system we call capitalism. If you blindly believe the corporation when they claim it's not about making more money for the corporation, then maybe you should understand this thing called 'corporate public relations'.

    Also, if you think the Health Care Reform Act is an example of throwing everything out and starting from scratch, rather than an example of adding another layer on top of existing complexity, then maybe you should try and understand the Act using a source besides 15-second snippets of Fox News.

  84. Clark  •  Dec 9, 2013 @7:56 am

    @Q

    I kind of lost interest when a post about programming theory used examples that had a function to calculate interest also print the result. That's such poor programming practice that it makes me wonder how much the person writing really knows about programming… Fox News.

    Youtube called; they want their comments back.

  85. Shameless Lurker  •  Dec 9, 2013 @3:21 pm

    No apology for thread drift: shell script to me means a poem written on a seashell, and Lisp is an oral affliction. But as a student I followed the link for Mandarin and was delighted to see David apply Popehattery to arguably our oldest living language. The searches are silent, so I'd like to just ask: was the task laid aside for

    (2) a quick and dirty intro to how the characters work and how to learn them, and (3) an overview of some of the better online resources

  86. David  •  Dec 9, 2013 @6:59 pm

    @Shameless Lurker,
    Here's a link to part 1: grammar.
    Here's a link to part 2: pronunciation.
    Part 3 on the writing system and part 4 surveying online resources are indeed forthcoming. (Patience, Grasshopper!)

    2013 has been an unusually busy season hereabouts: a lot of software-makeage and trip-takeage and thirst-slakery and half-bakery has left little timeroom to langblog. But hey– what's a year among friends? :)

  87. Borepatch  •  Dec 9, 2013 @9:37 pm

    The computer security implications of this are particularly interesting. Assuming that this worked, and you could have a massively distributed program running on hundreds (or thousands) of CPUs in the Internet Of Things, what does a Denial Of Service Attack mean anymore?

    That would be a pretty big paradigm change, because currently the Internet really has no answer to DoS.

    As to anonymity, I would recommend people take a read about Covert Channels. It's likely not possible to prevent CCs, and if you can leverage this into Urbit, then all sorts of Dark Nets become not just feasible, but likely.

  88. Devil's Advocate  •  Dec 10, 2013 @10:11 am

    @Clark

    Youtube called; they want their comments back.

    Oh, Youtube comments are far more highbrow, and use many more vowels.

  89. Shameless Lurker  •  Dec 10, 2013 @10:39 am

    @ David

    Thank you Sir. I sympathize with the demands on a busy life …

  90. John Beaty  •  Dec 10, 2013 @11:31 am

    @Borepatch, don't tell @Grandy. He thinks it's all been invented already, and it all works properly.

  91. Grandy  •  Dec 10, 2013 @6:06 pm

    @John Beaty

    He thinks it's all been invented already, and it all works properly.

    Even the most cursory of readings of my post would find it impossible to come away with this interpretation.

    Emotion is the enemy of reason.

  92. marvo  •  Dec 10, 2013 @8:11 pm

    Sadly, reinventing Lisp is not going to end well. :) What immediately came to mind was "Worse is Better", which describes some of the reasons why the overwhelming majority of code is in C++, Java, and their unexpressive, verbose derivatives, rather than Lisp or some descendant (while talking about Lisp vs. Scheme).
    Switching the truth values of 1 and 0 is not going to endear you to anyone. I won't like you even if it actually amuses me.
    Building network stacks that actually work in the real world is not easy. TCP/IP is the winner now for most applications, but that was not clearly the case 15 years ago; IPX/SPX, anyone? Microsoft had such a bad client for it that some programs used their own TCP implementation. Linux had endless problems writing it robustly. So rolling your own is brave, if not foolhardy.
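
    To spell out the inversion: Nock writes 0 for "yes" and 1 for "no". A toy emulation of the convention (not actual Nock, just a sketch):

        YES, NO = 0, 1   # Nock-style "loobeans": 0 is true, 1 is false

        def loobean_eq(a, b):
            """Equality that answers in loobeans."""
            return YES if a == b else NO

        result = loobean_eq(42, 42)
        assert result == 0             # success is 0...
        assert bool(result) is False   # ...which Python reads as falsy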

    Great post, Clark. I am sort of surprised at how good it is (given you said you didn't know much about it before the weekend). There is more in the realm of discussion of language and OS design than Wikipedia might reference directly. I tried to find the excellent lecture on parallel programming (by Peter Norvig, I think) on YouTube, but that will have to wait.

  93. John Beaty  •  Dec 11, 2013 @8:56 am

    @ Grandy, you're right! Too bad my post was snark, not emotion, but there you go…

    Anyway, I've been thinking about what you said (that some things were solved correctly the first time), and the only one I could think of was gcc. Which were you thinking of? Also, even if you disagree (and I would completely understand, no snark intended), the work that came out of RMS's insanities was worth the ride, IMO. That's kind of what I was thinking when I wrote the first comment: whether or not the whole Urbit thing is itself valid, the attempt to rationalize the whole stack, not just a language, is worthwhile.

  94. David  •  Dec 11, 2013 @9:23 am

    It appeared to me that Grandy was saying there's no magic bullet: some endeavors will pay off immediately, some will be a rocky road through blundersville, and admirable goals (if such there be) will do absolutely nothing to change that fact because it's driven by human nature.

    Wasn't Grandy's list intended to illustrate hopeful endeavors, not perfectly solved problems, and to illustrate our ongoing quest to find a way forward– a quest not likely to find its resolution in the ravings/cravings/handwavings of a smart but self-segregating (and maybe self-segmenting) utopian?

  95. John Beaty  •  Dec 11, 2013 @11:09 am

    I think his point was that there have been numerous attempts to do things right and most of them are dead ends. Or at least not magical solutions, even though they are touted as such when they first come along. But I think Clark's point, and certainly mine, was that it's worth trying, and here's something that is far more together than the bare implementation of a language or ideal. And, in my case, that the objections here (and over most of the net) are about some individual bits that are already "good enough", or not workable, or whatever, instead of thinking about the whole shebang and how it fits together.

  96. CJK Fossman  •  Dec 11, 2013 @11:40 am

    @John Beaty

    Maybe it's worth rethinking the entire computer/network architecture from top to bottom. Maybe, but if and only if the reason is to create a more reliable, faster and affordable architecture.

    As Clark pointed out earlier, the thrust of Urbit is to support and enable its developer's political philosophy: neo-monarchism.

    For me, that's a really great reason to say "No thanks. Ain't got the time."

    Actually I'm surprised that Libertarian Clark would be attracted to something like this. If he thinks the LEOs we have now are thugs, just wait until King Moldbug's crew arrests him for not being Moldbuggy enough.

    Nice cat, by the way. Gray tabbies are almost as good as orange tabbies.

  97. jorgeborges  •  Dec 11, 2013 @2:59 pm
  98. Ken White  •  Dec 11, 2013 @3:25 pm

    @jorgeborges Ken hasn't, because Ken finds that it's mostly word salad, and Ken isn't sure to what extent it's serious, some sort of parody, or some sort of mental illness, or possibly just too smart for dummy Ken.

  99. Grandy  •  Dec 11, 2013 @5:06 pm

    I think his point was that there have been numerous attempts to do things right and most of them are dead ends.

    Most of the things I listed are not dead ends. On the contrary, once the hullabaloo died down, most of them became established methodologies that frequently enriched the projects they were used on (based in part on the degree to which they were used). At some point, somewhere, there were people swearing something to the effect of "this is about getting software right the first time". And that is true of every other thing we've ever come up with to better software development. None of them turned that into a solved problem, because it cannot be solved that way, probably not ever, not fully. Urbit, even if it is an amazing breakthrough, a smash hit, and tastes like that awesome Skittles-flavored beer, will not solve that problem, and anyone who claims otherwise is selling snake oil. Awesomer, more radical things than Urbit have tried and failed.

    The reasons we can't get it right are frequently beyond the realm of our control. When I said "sometimes we get it right up front" I wasn't talking about this or that technology/methodology/eldritch ritual... the things we use when writing software. I'm talking about the actual software, the many millions of software projects we happy few have tried to finish since the dawn of computer time.

    The ideal of bettering ourselves as developers is god's work (to paraphrase the original Bill and Ted). But the thing is, not all such pursuits are created equal, and while the principle is noble, the actual results often are not.

    Maybe, just maybe, Urbit will be divine. Yeah... and maybe I'm a Chinese jet pilot.

  100. John Beaty  •  Dec 12, 2013 @10:19 am

    @Grandy, I'm sorry for my misinterpretation. I meant dead ends in the sense that the passionate believers didn't get what they were hoping for: complete reinvention. Not that they weren't useful, fun, meaningful, worthwhile. Quite the opposite. But they never lived up to the hype, that's all. My point was that all the things you mentioned were not in and of themselves attempts to reinvent the entire computer-human-real-world interactive space, in large part because it simply didn't exist at the time they were developed.
    Anyway, I don't think Urbit will be divine, I don't think we should have a monarchy, and I don't think we should throw everything away and start over. I absolutely think we should look carefully at what is on the table and see if it makes sense to re-think the entire system from the ground up, rather than adding patches over kludges over quick fixes over things that were never intended to be used the way they are.

  101. John Beaty  •  Dec 12, 2013 @10:23 am

    @CJK Fossman, thanks for the kitty-love. This dear one passed away a few weeks ago, and I have her pic up to remind me of her. Our orange male is a dilute tabby who always looks like he rolled in the dust just before you take his picture.

    Be well.
