Future

Developers Writing The Future

I just read this pretty awesome book, ‘I Hate The Internet' by Jarett Kobek. It has a nice cynical irreverence, but also a very astute awareness of the power of modern social networks.

Coincidentally, I also just came across this decent talk from Joel Spolsky which, if you squint a little, touches upon a few of the same subjects as ‘I Hate The Internet', at least in that there are a lot of assumptions and decisions baked into the algorithms of modern software - a lot of hidden power structures and bias. The talk goes off in a different direction towards the end, but it's definitely worth watching.

The book is brilliant too. Check it.

Docker Storage Drivers

Super nice quick overview of the importance of Copy-On-Write filesystems to Docker, going into detail on the benefits and downsides of each of the CoW options - AUFS, Btrfs, ZFS, OverlayFS, Device Mapper - great stuff!
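As a quick practical aside - a sketch assuming a reasonably recent Docker install (the `--format` flag on `docker info` is a newer addition; plain `docker info` shows the same field on older versions) - you can check which of these drivers your own daemon is actually using:

```shell
# Print the storage driver the local Docker daemon is using
docker info --format '{{.Driver}}'

# To select a driver explicitly, set it in /etc/docker/daemon.json, e.g.
#   { "storage-driver": "overlay2" }
# then restart the daemon (on systemd-based systems):
sudo systemctl restart docker
```

Which driver you end up with by default depends on your distro and kernel, so it's worth checking before assuming anything about layer behaviour.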

From Config Management to a proper Scheduling system

This was the excellent closing talk of the recent CoreOS Fest: Kelsey Hightower walks through differing approaches to deployment, from configuration management to container scheduling via Kubernetes.

Boston Dynamics

dang!! Boston Dynamics' four-legged robots are amazeballs:

How to Build a Mind

Hugely compelling talk by Joscha Bach covering the history and possible futures of AI.

Found here

Solomon Hykes, Dockercon14

Solomon Hykes, creator of Docker, speaking at Dockercon - paints a nicely detailed overview of all the new Docker ecosystem libraries released recently - Libcontainer, Libchan, and Libswarm - basically all middle-layer abstractions which seem to have buy-in across all the main platforms and providers. He starts talking about 10 minutes in.

Eric Brewer on Container usage at Google

Eric Brewer’s keynote from Dockercon14 -

Crazy stuff, definitely feels like Containers have gained mass momentum, and we’re about to undergo a major shift in Systems Architecture. Very exciting times!

Brandon Philips, CoreOS

Excellent interview on the Linux Action Show with Brandon Philips - lots of great info on understanding and running CoreOS:

Rob Pike, GopherCon 2014 Keynote

Awesome, awesome talk from Rob Pike - his keynote for GopherCon 2014 - some old Unix/C history and lots of detail on the development of Go.

The Ono-Sendai SLB-10 ORB system

Ono-Sendai SLB-10 ORB System

“The SLB-10 is the latest generation of ORB manufactured by Ono-Sendai, utilizing state of the art EM pulse powered flotation; archive-quality audio and visual recording capabilities; fully CMYK coloured, tactile holographic capability; and guaranteed always-on connection to your personal encrypted data storage facility.

ORBs have become so integral to our lives, and yet they are barely a seven-year-old technology. It is worth reflecting on the brief history of these astounding devices, upon which we now rely so heavily.

In 2027, a group of engineers, skateboarders and film-makers came together to work on a project utilizing the first commercial Shawyer/EmDrive propulsion system released by the British National Space Centre. Their early device was designed initially as a simple anti-gravity camera to be able to follow a moving person, while filming and streaming the data. It was a personal project for their own use, but they also expected it would be useful for other filming niches. What they hadn’t expected was the enormous demand from the general public, as real-time life-recording and live-blogging quickly took off, finding a multitude of personal and business uses. They formed a company to market their ORB devices, calling themselves Ono-Sendai, in joking homage to William Gibson.

The market for ORBs exploded, with feature upon feature being added in response to consumer demand: multiple cameras, higher definition a/v recording, voice control and audio speakers, automated laser protection systems, and crucially - the visual feed became integrated with in-Frame overlays. In those days Frames were still competing with handheld devices, before we perfected the means to spatially interact with data.

Two more pieces were necessary to bring us up to date with the contemporary ORB experience - the addition of the holo-projector in 2030, in order to finally dispense with a physical interface; and, more recently in 2032, the first mature ultrasound-based haptic holographic interface was introduced to provide a truly tactile experience.

As many imitators as there are on the market, Ono-Sendai manages to stay one step ahead of the competitors due to its truly innovative operating system, SOLX - Son Of LinuX, the most advanced and personable decision-taking, realtime smart O/S. With Ono-Sendai and SOLX assisting your life, you can be sure of trusted network access, personal security, and reliable data archive - Get on with enjoying your life!”

// reblog from old Drawing B0ard post

hitchhikers guide to the googleverse

Maybe they ain't doing so well with Social, and so what if people are stupid enough to try Bing - it's visionary efforts like this that show Google still leaves most other companies behind in a dust ball of mundanity…

(( personally, I actually think Google+ is doing fine too - it's full of way more interesting techie people and content than all the FB posts by yer aunties and people you went to school with.. ))

Technology As Society's Engine

Unfortunately I forget where I found this link - Hacker News? The Edge Newsletter? I dunno, but it's a pretty interesting one -

A debate between an MIT professor, Erik Brynjolfsson, and an economist, Tyler Cowen, about the role of technology in driving economic growth. My views side with the MIT professor, as does most of the audience in the debate.

I won't repeat any of the arguments made in the debate, but what I will add is that the unequal distribution of wealth we see around us today is not a symptom of a lack of technological growth; it is purely down to good old-fashioned political manipulation and deep-rooted traditions of cronyism, thousands of years old.

Technology on the other hand: absolutely it's what will drive the economy, but even that view completely misses the big picture, which is the Medium itself, The Universal Network. I believe we have created a whole new dimension, an evolutionary mathematical abstracted form of biology. This is the beginning of History, Year Zero.

One hundred years from now, or two thousand - people will be able to look back in time and know with a rich level of detail what our life is like now. Thousands, upon millions of instances of video and audio, images, writings, geo locations, online trails, all readily accessible, interlinked and searchable. This level of detail will only increase, as we start recording every aspect of life.

With such archives of data, I can easily imagine the kids of 2123 being able to walk through and interact with a virtual London in the swinging 2020s, or San Francisco's roaring 2030s. Whereas, for future generations, any time predating the late 1990s will essentially be a static foreign place in comparison. We have created time-travel - we just don't know it yet.

This Network has already achieved a basic level of independence from humanity - where now it is possible for a Something to exist outwith a single containing computer system, using techniques like redundancy and geographic load-balancing. I don't mean to imply there is any intelligence there, but there is a level of resilience we've never seen in nature before. To give a more concrete example, I'm referring to something like you as a user interacting with the Amazon website to purchase something, meanwhile the power goes out in the datacentre hosting the server your browser was communicating with, and, if engineered correctly, your interaction could continue, picked up by a secondary datacentre with no loss of data, nor interruption of service. This isn't exactly life as we know it, but if you squint your eyes just a little, it's not too hard to see an analogy to biological cell life.

Over the next few years, Society's experience of reality is going to go through the biggest change in history, as our physical world merges completely with this new virtual world of realtime interconnected information and communication, completely warping our sense of time and geography.

The iPhone was stage one; Google Glass or something very similar will be stage two, and it's right around the corner.

The Networked City


[[ image half-inched from here]]

I started studying Sociology a few years back with the Open University, but never managed to complete my course as I got a job here in the States and turned my study-time back towards practical technology. I did however study it long enough for it to have quite a profound effect on my understanding and conceptualisation of networking, and felt especially influenced by the works of Manuel Castells and Stephen Graham.

I just came across a paper I wrote almost a year ago which covers a lot of these ideas and ties in quite well with the general theme of this blog, so I thought I'd post it up here rather than leave it languishing on my hard drive…

Why is it important to understand how a city’s fortunes are shaped by its connections?

The City is our personal gateway to the wider world. It resonates with a dense polyrhythm constructed of the flow and foci of innumerable networks: from the physical infrastructure under and around us - the travel and power networks, water, gas and telecommunications lines - to the more ephemeral flows of culture, people, information, finance, and commodities. These structures, relationships and their interconnections are the very essence of a city, connecting the local to the global. In order to fully comprehend the economic and social wellbeing of an urban spatiality, it is essential to look at how a city is positioned within a wider global system, and conversely to examine how these global flows connect to local networks. I would like to answer the above question by first looking at the role of connections in a city’s formation, and then explaining how these connections extend out to form a global network of influence, including the historically new form we now find ourselves in, the Network Society. Within this framework, we shall then examine how these flows connect locally, explaining the new forms of social division created by a combination of technology and ideology, and why now, more than ever, it is critical to understand how a city’s fortunes are shaped by its connections.

Although we have no concrete proof of the origins of the first cities, Jane Jacobs has a particularly convincing argument for their establishment as trading centres: locations of some geographic or social convenience that became permanent market places. According to the theory, as more people settle in one place, more opportunities for connections are easily made, and local networks of cooperation and competition would grow and drive innovation. Initially they would be trading with local neighbouring lands, but as trade increased, these local connections would stretch out more and more to form part of a larger network. Trade would quickly diversify through the division of labour, where the commodity itself would be “one export. The other export is a service: the service of obtaining, handling and trading goods that are brought in from outside and are destined for secondary customers who also come from outside” (Jacobs, 1970, p21).

Chicago is a good example of one such city whose growth was predicated upon positioning itself within emerging trade networks. A small trading centre since the late seventeenth century, its growth was assured when it was connected to the first rail and telegraph networks in 1848. With travel time between Chicago and the East Coast cut from over two weeks to two days, and the near-instantaneous messaging of the telegraph, “the pace of life had speeded up and the distances covered by flows of goods, people and information were ever greater”. Chicago managed to place itself at the centre of a voluminous pan-American trade and travel network, thereby establishing itself as a thriving and vibrant financial and social hub. As easy as it would be to equate this fortune with Chicago’s geographic positioning, the reality was that it achieved and maintained this dominant position through political and economic manoeuvring - before the construction of the railway, Chicago was competing with St Louis, which had a more capable waterway system. Chicago’s good fortune was due to some canny businessmen who realised they needed an alternative network to compete with the waterways, and who set up a railway company by persuading the local farmers to invest in it. (Pile, 2010, p24-35)

Competition over trade routes is a recurring story throughout the development of the modern interstate system. The early capitalist city-states of fifteenth-century northern Italy, of which Venice was the most prominent, established their wealth and power through a monopolistic control of trade routes to India and China. Other European powers, notably Spain and Portugal, tried to find alternative routes to bypass the Venetian monopoly, and it was through this process of exploration that Columbus “discovered” the Americas, thereby creating a whole new network of connections. The sixteenth century saw a great deal of change as Spain, Portugal and the mini-empires of France, England and Sweden all vied for world power through territorialist expansion of their respective networks (Arrighi, 2010). Although I digress here and talk of nation states, the unit of power and of management remains that of the city. We can see this in the conquest of Mexico City by the Spanish in 1521, which transformed the Aztec city then known as Tenochtitlan. Until that point, its dominant network of social relations and trade was confined around Mexico, but with the arrival of the Spanish, that network changed, as the flow of power now came from Madrid, and Mexico City’s “local dominance was now in turn subordinated to an even greater power, a new imperial capital across the Atlantic” (Massey, 2010, p105).

The Dutch Republic, operating from its capital, The Hague, managed to shape its own fortune and usurp Spain’s might by creating a new network of connections atop trade routes, a level of abstraction beyond the trading of physical goods: financial networks. It became the hegemonic power on the world stage by innovating forms of financial speculation based on capitalist expansion rather than territorialist expansion. “These networks encircled the world and could not easily be bypassed or superseded” (Arrighi, 2010, p46). Over successive centuries we have seen the balance of world power shift through manipulations in these networks of connections, with first the United Kingdom and then the United States leveraging themselves into subsequent positions of economic strength.

Immanuel Wallerstein’s World System Theory provides a useful framework for understanding the nature of these global flows of power, conceiving the world not as separate nations with separate economies but as one interlinked capitalist world economy. His conceptualization of the modern world system distinguishes between the Core (developed) and Periphery (developing) countries, with the core countries exploiting the resources of the periphery through monopolistic control of network connections (Arrighi, 2010). Within this widescreen view, the relationship between a city’s fortunes and its connections to the network of power becomes clear: a city has to be actively aware of, and strive to maintain, its position within the global “’power geometry’ – different cities have their own trajectories and there is a constant process of the making and unmaking of connections” (Massey, 2010, p124).

Society is now entering a new form of space-time experience – the Informational Age, a new form of network and thus new forms of connection for a city. Although today’s main network of power is still finance, Manuel Castells points out a crucial difference: although we have had interconnecting networks of influence and a world economy going back to the sixteenth century, it is only now, due to computer networks, that we have a truly global economy “with the capacity to work as a unit in real time, or chosen time, on a planetary scale” (Castells, 2010, p101). Echoing the societal changes brought about by the train and the telegraph, the mass adoption of broadband Internet connectivity in many aspects of our everyday life is drastically changing our lived experience of space and time. The Internet has its roots in the military scientific work of the 1960s and 1970s; however, the mass adoption of the technology into everyday life only began in the mid 1990s with the advent of the World Wide Web, essentially an easy-to-understand-and-use interface to the Internet. In just over ten years, Core countries of the West have migrated whole areas of life onto this digital network, affecting everything from work and education to banking, government, leisure, travel, media, relationships and much more. A whole new network space of power, from its physical fibre-optic network infrastructure to the new virtual realms it allows.

Since the Haussmannisation of Paris in the mid-nineteenth century, the overarching ethos behind urban planning was a scientific-minded belief in a comprehensive and unified infrastructure. Guided by Keynesian welfare states, the modern infrastructural ideal for most of the 20th century was based on universal access and cross-subsidized provision provided by a government or private monopoly, such as the railway or telephone networks. Since the 1980s, this way of thinking has been eroded as ideas of privatisation and liberalisation of the markets gained popularity. The monolithic conception of a city as a coherent unified machine no longer fit with postmodern ideas of identity, while technological advances allowed for the creation of tiered and premium network services. Technological control now allows secessionary network infrastructure such as private tolled highways, gated communities, enclosed malls, and Business Improvement Districts - an idea originating in the US, but now “found in Europe, the Caribbean, Australia and South Africa” (Hannigan, quoted in Graham/Marvin, 2009, p261). Business Improvement Districts take over the running of their own network connections - street cleaning, lighting, garbage, policing etc. - leaving remaining areas to deal with their own problems. Although in theory the market should provide for all who have a requirement, according to the work of Castells what we are seeing is a new social division of “structurally irrelevant people”: people who have no economic power and whom the market can therefore simply ignore, because the “architecture of global networks connects places selectively, according to their relative value to the network” (Castells, 2010, pXXXV).

Stephen Graham and Simon Marvin call this phenomenon “Splintering Urbanism”. They outline the physical geography of the network society by looking at the powers behind, and embodied in, the urban infrastructure that services the network. They highlight that this area is often overlooked due to its technical nature, often dismissed by architects, sociologists and geographers as a politics-neutral engineering problem, outwith their area of expertise. Graham and Marvin demonstrate a rise in “premium networked spaces of the splintering metropolis” across the globe, a new geography transcending ideas of Core and Periphery countries, centring on cities from Shanghai to Manhattan, Sao Paulo to Montreal, Dubai, London, Bombay, and beyond. We see express highways and train routes connecting business centres to international airports, bypassing surrounding local areas to create virtual network topologies (Graham, Marvin, 2009). We should recognise that not all inhabitants of a city feel the benefits of these global connections of power and financial flows equally - a problem exacerbated by the dual effects of government ideology (the prevailing idea that the market should manage all aspects of society) combined with the technological sophistication to allow very selective and granular unbundling of network services.

However, we must also be aware that the situation is more complex than this binary description of the Included and Excluded. Doreen Massey adds definition to the Network Society concept, using examples of poverty in Bombay and Los Angeles to show how these dominant spaces of capital flows are contested urban areas, with differing rhythms sharing a shaky co-existence. There are a multiplicity of flows within a city which stretch beyond its boundaries, and though certain people may be excluded from certain flows, “they are all the products of complicated interweavings of networks of social relations” (Massey, 2010, p130).

Drawing together the various threads of this topic, we can see that a city’s fortunes are one and the same as its connections to larger network flows. Originally a city’s connections would be with neighbouring regions, but society has evolved over the centuries into one worldwide flow of influences and trade. Now, as an Informational Society, we see the emergence of a singular realtime global economic network, yet in sharp contrast we see a greater division of wealth and power. As we enter this new stage of Society the old systems and language for understanding structure and inequality are no longer adequate to express this new historical reality. An understanding of the mechanics and flows of this new virtual geography and multi-tiered network society becomes an absolute necessity for anyone involved in the planning and governance of urban space, or indeed for anyone simply living or working in a city.

References:
Allen, J., Massey, D., and Pile, S. (2010) Understanding Cities: City Worlds, Oxon, The Open University.
Arrighi, G. (2010) The Long Twentieth Century: Money, Power and the Origins of Our Times, London, Verso.
Castells, M. (2010) The Rise of the Network Society, Oxford, Wiley-Blackwell.
Graham, S., and Marvin, S. (2009) Splintering Urbanism: Networked Infrastructures, Technological Mobilities and the Urban Condition, New York, Routledge.
Jacobs, J. (1970) The Economy of Cities, New York, Random House.

The Edge Question 2012

Edge

The Edge has just published its annual Q&A, in which they pose a question to a group of artists, scientists, and various kinds of intellectuals. This year the question was:

What Is Your Favorite Deep, Elegant, Or Beautiful Explanation?

The list of contributors stretches to 192, most of whom I can't claim to know; however, three names I'm particularly interested in stick out:

Rudy Rucker on Inverse Power Laws

Tim O'Reilly on Pascal’s Wager

Stewart Brand on Fitness Landscapes

Worth taking a dig through, tons of good stuff.

iperf and virtualisation and clouds and clouds

One of the tools I've been using a lot recently is iperf - a really simple and sleek tool for measuring bandwidth between two hosts. Rather than write up a full tutorial myself, I'll simply point you at this one by Jayson Broughton.
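For a taste before you head over there - a minimal sketch, assuming iperf is installed on both machines, and using `server.example.com` as a stand-in for your actual server's address:

```shell
# On the receiving host: start iperf in server mode
# (listens on TCP port 5001 by default)
iperf -s

# On the sending host: run a 30-second test with 4 parallel streams
iperf -c server.example.com -t 30 -P 4

# UDP mode at a target rate of 100 Mbit/s, reporting every second
iperf -u -c server.example.com -b 100M -i 1
```

The parallel-streams flag is handy on high-latency links where a single TCP stream won't fill the pipe.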

more soon!

Vagrant and Chef setup

I've been reading through ThoughtWorks' latest ‘Technology Radar', which led me to look up Vagrant, one of the tools they list as worth exploring.

Vagrant is a framework for building and deploying Virtual Machine environments, using Oracle VirtualBox for the actual VMs and utilizing Chef for configuration management.

Watching through this intro video:

http://vimeo.com/9976342

I was quite intrigued, as it is very similar to what I was looking to achieve earlier when I was experimenting with installing Xen and configuring it with Puppet.

So here's what I experienced during the setup of Vagrant on my Macbook - I decided to start with a simple Chef install to familiarise myself with Chef itself and its own requirements - CouchDB, RabbitMQ and Solr - mostly by following these instructions -

-CHEF INSTALL-

sudo gem install chef
sudo gem install ohai

Chef uses CouchDB as its datastore, so we need to install that too, using the instructions here

brew install couchdb

The instructions I linked above also contain steps to add a couchDB user and set it up as a daemon. They didn't work for me, and after 30 mins of troubleshooting, I gave up and went with the simpler option of running it under my own user - in production this will be running on a Linux server rather than my Macbook, so it seemed fair enough -

cp /usr/local/Cellar/couchdb/1.1.0/Library/LaunchDaemons/org.apache.couchdb.plist ~/Library/LaunchAgents/

launchctl load -w ~/Library/LaunchAgents/org.apache.couchdb.plist

Check it's running okay by going to
http://127.0.0.1:5984/

which should provide something akin to :
{"couchdb":"Welcome","version":"1.1.0"}

- INSTALL RABBITMQ -

brew install rabbitmq
/usr/local/sbin/rabbitmq-server -detached

sudo rabbitmqctl add_vhost /chef
sudo rabbitmqctl add_user chef testing
sudo rabbitmqctl set_permissions -p /chef chef ".*" ".*" ".*"

Ok, Gettin' back to my mission, break out the whipped cream and the cherries, then I go through all the fly positions - oh, wrong mission!

Ok..

brew install gecode
brew install solr

sudo gem install chef-server chef-server-api chef-solr
sudo gem install chef-server-webui
sudo chef-solr-installer

Set up a conf file -
sudo mkdir /etc/chef
sudo vi /etc/chef/server.rb
- paste in the example from:

http://wiki.opscode.com/display/chef/Manual+Chef+Server+Configuration - making the appropriate changes for your FQDN

At this point, the above instructions ask you to start the indexer; however, they haven't been updated to reflect changes in Chef 0.10.2, in which chef-solr-indexer has been replaced by chef-expander.

So, instead of running:
sudo chef-solr-indexer

you instead need to run:
sudo chef-expander -n1 -d

Next I tried
sudo chef-solr

which ran into
"`configure_chef': uninitialized constant Chef::Application::SocketError (NameError)"

I had to create an /etc/chef/solr.rb file and simply add this line to it:

require 'socket'

Startup now worked -
if you want to daemonize it, use:

sudo chef-solr -d

Next start Chef Server with:
sudo chef-server -N -e production -d

and finally:
sudo chef-server-webui -p 4040 -e production

Now you should be up and running - you need to configure the command-line client ‘Knife', following the instructions here under the section ‘Configure the Command Line Client'

mkdir -p ~/.chef
sudo cp /etc/chef/validation.pem /etc/chef/webui.pem ~/.chef
sudo chown -R $USER ~/.chef

knife configure -i

(follow the instructions at the link - you only need to change the location of the two pem files you copied above)

Ok, so hopefully you're at the same place as me, with all of this working at least as far as being able to log into CouchDB and verify that Chef and Knife are both working.

- VAGRANT SETUP -

Now, onward with the original task of Vagrant setup…
Have a read over the getting started guide:

Install VirtualBox - download from http://www.virtualbox.org/wiki/Downloads

Run the installer, which should all work quite easily. Next..

gem install vagrant

mkdir vagrant_guide
cd vagrant_guide/
vagrant init

This creates the base Vagrantfile, which the documentation compares to a Makefile - basically a reference file for the project to work with.

Set up our first VM -
vagrant box add lucid32 http://files.vagrantup.com/lucid32.box

This is downloaded and saved in ~/.vagrant.d/boxes/

Edit the Vagrantfile which was created and change the "box" entry to "lucid32", the name of the box we just saved.

Bring it online with:
vagrant up

then ssh into it with
vagrant ssh

Ace, that worked quite easily. After a little digging around, I logged out and tore the machine down again with
vagrant destroy

- TYING IT ALL TOGETHER -
Now we need to connect our Vagrant install with our Chef server

First, clone the Chef repository with:
git clone git://github.com/opscode/chef-repo.git

Add this dir to your ~/.chef/knife.rb file, i.e.
cookbook_path ["/Users/thorstensideboard/chef-repo/cookbooks"]

Download the Vagrant cookbook they use in their examples -

wget http://files.vagrantup.com/getting_started/cookbooks.tar.gz
tar xzvf cookbooks.tar.gz
mv cookbooks/* chef-repo/cookbooks/

Add it to our Chef server using Knife:
knife cookbook upload -a
(knife uses the cookbook_path we setup above)

If you browse to your localhost at
http://sbd-ioda.local:4040/cookbooks/
you should see the three new cookbooks which have been added.

Now to edit Vagrantfile and add your Chef details:

Vagrant::Config.run do |config|

config.vm.box = "lucid32"

config.vm.provision :chef_client do |chef|

chef.chef_server_url = "http://SBD-IODA.local:4000"
chef.validation_key_path = "/Users/thorsten/.chef/validation.pem"
chef.add_recipe("vagrant_main")
chef.add_recipe("apt")
chef.add_recipe("apache2")

end
end

I tried to load this up with
vagrant up
however received:

"[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: *** Chef 0.10.2 ***
: stdout
[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: Client key /etc/chef/client.pem is not present - registering
: stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: Stacktrace dumped to /srv/chef/file_store/chef-stacktrace.out
: stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: SocketError: Error connecting to http://SBD-IODA.local:4000/clients - getaddrinfo: Name or service not known"

I figured this was a networking issue, and yeah - within the VM it has no idea of my Macbook's local hostname, which I fixed by editing its /etc/hosts file and manually adding it.

Upon issuing a
vagrant reload
- boom! You can see the Vagrant host following the recipes and loading up a bunch of things, including apache2.

However, at this point you can still only access its webserver from within the VM, so in order to access it from our own desktop browser, we can add the following line to the Vagrantfile:
config.vm.forward_port("web", 80, 8080)

After another reload, you should now be able to connect to localhost:8080 and access your new VM's apache host.

Using this setup in any sort of dev environment will still need a good deal more work, but for the moment this should be enough to get you up and running and able to explore both Vagrant and Chef.

Node.js

I keep coming across mentions of node.js, but wasn't sure what it was. This morning I've been watching some tutorials and reading up a little, and from what I understand it's basically a network server framework built on top of Google's V8 JavaScript engine - really an abstraction for socket programming. Its main advantages are speed and scalability, due to it being based on an event-driven I/O model rather than a threaded one, like most other languages or frameworks.

This video from node's creator, Ryan Dahl, is a pretty funny and very informative introduction. I'd recommend programming along while watching it:

Here's some further links:
http://nodejs.org/docs/v0.4.8/api/synopsis.html
http://howtonode.org/

This podcast is also a good source of information:
http://herdingcode.com/?p=299

Record Store Bot

First draft of my Record Store Bot is live over on Github - basically tying a Chatbot::Eliza-style interface to the Last.fm web services for an interactive (hopefully amusing) music recommendation bot.

Works surprisingly well - although it only has one method at the moment..