
TidalCycles Tutorials

Alex McLean’s TidalCycles has been a big influence on my own Soundb0ard, and lately, now that my instruments are mostly built, I’ve been looking for ways to algorithmically modify them more.

This series of howtos by Mike Hodnick/Kindohm has been an amazing source of inspiration - super clear and very entertaining - here’s the first episode:

Android - High Performance Audio

I’m currently working on a Brillo audio component, which will require very low latency performance. While searching for some tips and prior art, I came across this talk from Google I/O 2013. It's a bit older now, but it provides an excellent overview of the problem domain and approaches to solving those issues..

Libmill on Randal Schwartz's FLOSS Weekly show

I’m moving into a new job position soon involving way more code, working in C/C++. I’ve just started watching this podcast series and it’s excellent - this one in particular is super relevant, bringing one of the best features of Golang into C. I like the minimalism of the project; it seems very much in alignment with the philosophy behind Suckless. (There’s also a previous episode with Anselm R Garbe from suckless that I’d highly recommend..)

Docker Storage Drivers

Super nice quick overview of the importance of Copy-On-Write filesystems to Docker, going into detail on the benefits and downsides of each of the CoW options - AUFS, Btrfs, ZFS, OverlayFS, Device Mapper - great stuff!
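
If you're curious which of those drivers your own Docker install is actually using, a quick way to check - assuming a reasonably recent Docker, since the output format may differ between versions - is:

# prints a line like "Storage Driver: aufs" (or btrfs, zfs, overlay, devicemapper)
docker info | grep -i 'storage driver'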

How to Build a Mind

Hugely compelling talk by Joscha Bach covering the history and possible futures of AI..

Found here

AWS VPC best practices

Super detailed and practical VPC design best practices - very cool stuff..

Joshua Davis - Beyond Play

Wow. Just wow…


[Tack](https://twitter.com/tackyy) turned me onto this guy the other day - I love him! Artist/Skater/Hacker - what’s not to love?!

This follows on from Tack’s and my Codetraxx project - which is working: we’re making music, synchronized via RabbitMQ, as we set out to do - however we’ve hit a bit of a wall with the algorithmic math behind musical composition, so we’ve decided to focus on some mathematical hack nights for a while. Another friend of mine, Matt Spendlove, had told me about The Nature Of Code, which I’ve been wanting to read, so I suggested we play with Processing - it seems awesome already! So yeah, Codetraxx is on hold, and we’ll work on a couple of Processing projects for a while …

Go Presentation

More and more I’ve been dabbling with Go, which, mainly thanks to Hacker News, I’ve been reading so many good things about. The syntax is super easy to pick up, but the killer feature seems to be the concurrency primitives - goroutines and message-passing channels seem like a super tight, rock-solid implementation of Hoare’s Communicating Sequential Processes. The following video is a really succinct walk-through of building a concurrent multi-protocol chat application a la Chat Roulette..

Berkshelf - the missing piece

If you've been following my past few posts, you've seen I was investigating how best to integrate the plethora of Chef testing tools that've been coming out — foodcritic, chefspec, test-kitchen, mini-test — and although not testing tools per se, Berkshelf and Vagrant are the other pieces of the puzzle… but how do you fit them all together? What is the directory structure for keeping your Berksfile - at the top of the repo? Inside a cookbook directory? How many Vagrantfiles am I going to create here?

If, like me, you weren't at this year's ChefConf 2013, you may also have missed out on a major conceptual shift that has happened: a move away from the all-inclusive Chef-repo design pattern implied by the OpsCode Chef Repo - https://github.com/opscode/chef-repo - which, when used with all the community cookbooks out there, creates a mess of forked, modified and sub-moduled cookbooks and recipes.

The now-recommended way is to treat each cookbook as a separate piece of software and give it its own git repo, keeping it separate from your Chef-repo. This, combined with a distinction between Library and Application cookbooks, all bundled together via Berkshelf, enables a much cleaner and more modular way of working. Once you accept this move, it's much easier to fit all the testing pieces together, as they all live within each separate cookbook/repo.
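
As a rough sketch of what that per-cookbook layout and workflow looks like - the cookbook name mycookbook and the directory contents here are hypothetical, just to illustrate the shape of it:

# mycookbook/           <- its own git repo, separate from your Chef-repo
#   Berksfile           <- declares this cookbook's dependencies for Berkshelf
#   metadata.rb
#   recipes/
#   spec/ and test/     <- chefspec / test-kitchen suites live alongside the code
#
# Day-to-day, the commands then look something like:
cd mycookbook
berks install     # resolve and fetch the dependency cookbooks
berks upload      # push this cookbook and its dependencies to the Chef server
kitchen test      # build a throwaway VM and run the integration suites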

This Comment Thread was what really drew it together for me, and then to fully clarify this way of working, watch Jamie Winsor's ChefConf talk, which is the original starting point:

Mo' Chef Testing

Following on from my last post about Test Driven Chef, this latest Food Fight show is a great roundup of the current testing tools landscape -

Test-Driven Chef

I'm looking to start using Test-Kitchen and Berkshelf, and basically trying to get my head around setting up a proper test-driven Chef workflow.

I found this video from last year to be quite a good introduction to some of the setup -

Netflix OSS

Found this to be a particularly good episode of The Food Fight show with Jeremy Edberg and Adrian Cockcroft talking about the Netflix tools and architecture:

Usenix/Lisa 12

Just got back from the Usenix/Lisa 12 conference in San Diego, and had a great time - super inspiring talks and content.

Highlight of the conference for me was Brendan Gregg speaking on Performance Analysis Methodologies - most of his talk was based upon a paper he just published in ACM - Thinking Methodically About Performance.

The talks haven't yet been published on the Usenix website, but Brendan's blog has a ton of great-looking content and older talks, including this one on Visualisations for Performance Analysis

Percona Table Checksum

I must admit MySQL replication is something I've never felt too comfortable with - in most of my positions, I've had the luxury of working with a full-time DBA who would handle all database-related work. In my current workplace we have three major pairs of database machines, and have been going through upgrading them all to Percona MySQL 5.5. As you'd expect, data integrity is of the highest importance, so discovering the Percona Table Checksum tool is a real life saver - it provides an amazing way to verify and fix any drift or problems with MySQL slaves.

I can't take any credit for these instructions or the trial and error in assembling them, as they were penned by my workmate, the awesome Trystan Leftwich - these are his notes for use at our place, with some additional clarifications from me after working through them.

First things first, grab the Percona Toolkit and install.
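
For reference, on Debian/Ubuntu the toolkit is packaged as percona-toolkit (adjust for your own distro, or grab the tarball from percona.com):

sudo apt-get install percona-toolkit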

Now on the master DB do the following:

create database BLAH;

This will be the database where you store your checksums, so something like pt_checksums will do.

Now, on the master, as the mysql user, run:

pt-table-checksum --create-replicate --replicate [db_name].[table_name] --databases [comma_separated_list_of_databases_you_want_to_check] --empty-replicate-table --chunk-size=5000 localhost

Where [db_name].[table_name] is the database you created before, and a table name you will be able to remember.

EG pt_checksums.myimportanttables_checksums

(If you get a "can not connect to host: blah" error, this is OK - ignore it.)

Now, when this is complete, go to the slave DB. (Ensure replication is up to date - if you have errors, just skip them to get it caught up.)

Then run the following

connect [db_you_created_above];
select * from [table_name_you_created_above] where this_crc != master_crc;

If this returns an empty set, then your DB is in sync - go straight to Go, collect $200.

If not you will have to try and sync it -

Create a user with the following permissions (pretty much everything - it may not need all of these, but we couldn't find exactly what it needed):

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT

You can create it with:

create user 'pt_checksum_maint'@'%' identified by 'blah';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'pt_checksum_maint'@'%';

Then, back on the master, run the following command:

pt-table-sync --execute --replicate [db_name].[table_name] master_db_ip/hostname --user user_you_created_above --ask-pass --no-foreign-key-checks

(At first I assumed this would be run on the slave to fix it up, however the man page for pt-table-sync explains:

it always makes the changes on the replication master, never the replication slave directly. This is in general the only safe way to bring a replica back in sync with its master; changes to the replica are usually the source of the problems in the first place. However, the changes it makes on the master should be no-op changes that set the data to their current values, and actually affect only the replica.

)

Once this table sync has been run, re-run the pt-table-checksum command, then verify your results on the slave - you should be good.
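
Putting that verification step together, it looks something like this - the bracketed names are the same placeholders as above, not real databases:

# on the master: re-run the checksum
pt-table-checksum --create-replicate --replicate [db_name].[table_name] --databases [comma_separated_list_of_databases_you_want_to_check] --empty-replicate-table --chunk-size=5000 localhost

# on the slave: an empty result set means master and slave now agree
mysql -e 'SELECT * FROM [table_name] WHERE this_crc != master_crc;' [db_name]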

perl parp parp

I updated the IP address for both my name servers tonight, and was monitoring to see how quickly the new addresses were propagating. First stop was the exceptionally useful What's My DNS.

At the host level I also wanted to track the incoming DNS queries using tcpdump. I could see them streaming into the new host, and visually there was an obvious difference when viewing the output of the same command on the old host. I googled around for a timer utility which runs a command for a given time, so I could quantify the difference. The perfect answer was here: a simple perl wrapper function.

Here's how to use it to run tcpdump command for sixty seconds, and count the packets seen:

# doalarm () { perl -e 'alarm shift; exec @ARGV' "$@"; }
# doalarm 60 tcpdump -u -i eth0 port 53 -n |wc -l
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
19504

tcpdump patterns


I use tcpdump a lot, but mostly at a reasonably high level, only really restricting the capture to host and port info, then pulling the dump back into Wireshark for nicer visualisation and easier filtering.
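
For context, the kind of high-level capture I mean looks something like this (the interface name and addresses are just examples):

# capture to a file, filtering only on host and port, then open capture.pcap in Wireshark
tcpdump -ni en1 -w capture.pcap host 172.16.1.200 and port 80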

A couple of months back I read Moonwalking with Einstein, which is a nice pop-science history of the importance of memory in previous societies, alongside the contemporary phenomenon of competitive memory championships. The book is great, and explains how feats of memory are achieved via the technique of memory palaces - spatial memory relationships - a technique dating back to Roman times. I've been using the technique a lot since I read this book, and truly, there's no magic to it, it really works. Basically, when you have a list of items to remember, you weave each item, in order, into a spatially focussed narrative.

So, last night, I got out my copy of TCP/IP Illustrated, Volume 1*, one of my most-returned-to tech books - I've always wanted a more encyclopedic knowledge of the lower-level details of TCP/IP - and applied the Memory Palace technique to the structure of a TCP packet (read the Wikipedia article for more details on the technique).

In my memory palace I was walking down the path towards the house where I grew up, seeing a 'SoRCerer/Src Port' battling with 'Dick DaSTardly and Mutley/Dst Port', then walking into my mother's front hallway with a Sequence Number along the front hall, then my grandfather sitting in a chair in the living room saying "ACK!" because the soccer is on the television and he's complaining about the Header Length … you get the idea - but yeah, you need to make your own memory palace.

Now that I have a complete image of this TCP packet in my head, suddenly expressions like:

tcpdump -ni en1 'tcp[13] == 18 and host 172.16.1.200 and port 80'

are way easier to understand and use - the tcp[13] part refers to octet 13 (counting from zero) of the TCP header, which is the flags octet, and the 18 is a simple decimal representation of the binary flags, from the highest bit to the lowest - i.e. the flags are

CWR | ECE | URG | ACK | PSH | RST | SYN | FIN

so in my example 18 means both the ACK and SYN flags are set - 00010010 - which, if you're used to dealing with netmask math, is quite an easy translation. My example, then, will only capture the first response packet from the server, as it would be the only part of the conversation to have both the ACK and SYN flags set. (I used a separate memory palace for the flags themselves.)
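
A quick sanity check of that arithmetic, in plain shell - ACK is the 16 bit and SYN is the 2 bit:

echo $((16 + 2))       # 18
echo $((2#00010010))   # 18 again, written out as the flag bits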

To capture all SYN packets, including the ACK/SYN ones, you would use:

tcpdump -ni en1 'tcp[13] == 18 or tcp[13] == 2' and host 172.16.1.200 and port 80
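
If remembering the octet offset isn't your thing, tcpdump also defines named constants for the flags octet and its bits, so - as far as I know - the same captures can be written as:

# exactly SYN+ACK set, equivalent to tcp[13] == 18
tcpdump -ni en1 'tcp[tcpflags] == (tcp-syn|tcp-ack) and host 172.16.1.200 and port 80'
# any packet with SYN set, whether or not ACK is also set
tcpdump -ni en1 'tcp[tcpflags] & tcp-syn != 0 and host 172.16.1.200 and port 80'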

Memory Palaces are pretty damn useful!

* Most engineers are aware of TCP/IP Illustrated, however a lot of people I've spoken to aren't aware there was a 2nd Edition published in November 2011, updated by Kevin R. Fall - I would absolutely recommend it; an amazing book, and especially with the updates, it just seems an essential addition to any engineer's library..

Puppet stages and APT


At work, our old code deployment strategy was basically a wrapper script doing an svn checkout and some symlinking. With our move to Puppet for config management, we also moved to using APT packaging for our code deployment, tying them together with a line similar to:

class foo-export {
  package { 'foo-export': ensure => latest }
}

So that whenever we deploy a new version of a package to our apt repo, it can then be installed with:

puppet agent --test
(and with an initial dry-run using --noop)

(I should mention I manage our Puppet runs via our own distributed scripts, rather than having the nodes set up to check in every 30 mins - while I'm doing so much work on our Puppet setup and config, I'd rather not have machines check in automatically in case the config is in a broken state.)

Inevitably I would run the above Puppet command and it would not find any new packages, because - d'uh! - of course I still need to run an apt-get update first.

I've been using Puppet stages for a while now, in order to group package installations in a broader sense rather than manually spelling out every dependency with a require => stanza, so it was a simple addition to add in a pre stage, and have the nodes run apt-get update before any runs.

In order to use stages, you need to first define them in your site.pp. By default every defined class runs under Stage[main], so you just need to add the new stages and define the running order. (full Puppet stage documentation is here)

At the top of my site.pp file, I added a pre and post stage, then defined the execution order via:

stage { [pre, post]: }
Stage[pre] -> Stage[main] -> Stage[post]

Then I created a class called apt-hupdate (sorry, I use stupid naming conventions!) in
modules/apt-hupdate/manifests/init.pp

which contained:
class apt-hupdate {
  exec { "aptHupdate":
    command => "/usr/bin/apt-get update",
  }
}

And finally, include that in your site.pp with:

class { apt-hupdate: stage => pre }

Now every time you do a Puppet run, apt-get update will be the first task run.

Vagrant and Chef setup

I've been reading through ThoughtWorks' latest 'Technology Radar', which led me to look up Vagrant, one of the tools they list as worth exploring.

Vagrant is a framework for building and deploying Virtual Machine environments, using Oracle VirtualBox for the actual VMs and utilizing Chef for configuration management.

Watching through this intro video:

http://vimeo.com/9976342

I was quite intrigued, as it is very similar to what I was looking to achieve earlier when I was experimenting with installing Xen and configuring with Puppet.

So here's what I experienced during the setup of Vagrant on my Macbook - I decided to start with a simple Chef install to familiarise myself with Chef itself and its own requirements - CouchDB, RabbitMQ and Solr - mostly by following these instructions -

-CHEF INSTALL-

sudo gem install chef
sudo gem install ohai

Chef uses CouchDB as its datastore, so we need to install it using the instructions here

brew install couchdb

The instructions I list above also contain steps to install a CouchDB user and set it up as a daemon. They didn't work for me, and after 30 mins of troubleshooting, I gave up and went with the simpler option of running it under my own user - in production this will be running on a Linux server rather than my Macbook, so it seemed fair enough -

cp /usr/local/Cellar/couchdb/1.1.0/Library/LaunchDaemons/org.apache.couchdb.plist ~/Library/LaunchAgents/

launchctl load -w ~/Library/LaunchAgents/org.apache.couchdb.plist

Check it's running okay by going to
http://127.0.0.1:5984/

which should provide something akin to :
{"couchdb":"Welcome","version":"1.1.0"}
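
Or, the same check from the terminal:

curl http://127.0.0.1:5984/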

- INSTALL RABBITMQ -

brew install rabbitmq
/usr/local/sbin/rabbitmq-server -detached

sudo rabbitmqctl add_vhost /chef
sudo rabbitmqctl add_user chef testing
sudo rabbitmqctl set_permissions -p /chef chef ".*" ".*" ".*"

Ok, Gettin' back to my mission, break out the whipped cream and the cherries, then I go through all the fly positions - oh, wrong mission!

Ok..

brew install gecode
brew install solr

sudo gem install chef-server chef-server-api chef-solr
sudo gem install chef-server-webui
sudo chef-solr-installer

Set up a conf file -
sudo mkdir /etc/chef
sudo vi /etc/chef/server.rb
- paste in the example from:

http://wiki.opscode.com/display/chef/Manual+Chef+Server+Configuration - making the appropriate changes for your FQDN

At this point, the above instructions ask you to start the indexer; however, the instructions haven't been updated to reflect changes in Chef 0.10.2, in which chef-solr-indexer has been replaced by chef-expander.

So, instead of running:
sudo chef-solr-indexer

you instead need to run:
sudo chef-expander -n1 -d

Next I tried
sudo chef-solr

which ran into
"`configure_chef': uninitialized constant Chef::Application::SocketError (NameError)"

I had to create an /etc/chef/solr.rb file and simply add this to the file:

require 'socket'

Startup now worked -
if you want to daemonize it, use:

sudo chef-solr -d

Next start Chef Server with:
sudo chef-server -N -e production -d

and finally:
sudo chef-server-webui -p 4040 -e production

Now you should be up and running - you need to configure the command-line client 'Knife' following the instructions here, under the section 'Configure the Command Line Client'.

mkdir -p ~/.chef
sudo cp /etc/chef/validation.pem /etc/chef/webui.pem ~/.chef
sudo chown -R $USER ~/.chef

knife configure -i

(follow the instructions at the link - you only need to change the location of the two pem files you copied above)

Ok, so hopefully you're at the same place as me, with this all working at least as far as being able to log into CouchDB and verify that Chef/Knife are both working.

- VAGRANT SETUP -

Now, onward with the original task of Vagrant setup…
Have a read over the getting started guide:

Install VirtualBox - download from http://www.virtualbox.org/wiki/Downloads

Run the installer, which should all work quite easily. Next..

gem install vagrant

mkdir vagrant_guide
cd vagrant_guide/
vagrant init

This creates the base Vagrantfile, which the documentation compares to a Makefile - basically a reference file for the project to work with.

Set up our first VM -
vagrant box add lucid32 http://files.vagrantup.com/lucid32.box

This is downloaded and saved in ~/.vagrant.d/boxes/

Edit the Vagrantfile which was created and change the "box" entry to be "lucid32", the name of the box we just added.

Bring it online with:
vagrant up

then ssh in with
vagrant ssh

Ace, that worked quite easily. After a little digging around, I logged out and tore the machine down again with
vagrant destroy

- TYING IT ALL TOGETHER -
Now we need to connect our Vagrant install with our Chef server

First, clone the Chef repository with:
git clone git://github.com/opscode/chef-repo.git

add this dir to your ~/.chef/knife.rb file, i.e.
cookbook_path ["/Users/thorstensideboard/chef-repo/cookbooks"]

Download the Vagrant cookbooks they use in their examples -

wget http://files.vagrantup.com/getting_started/cookbooks.tar.gz
tar xzvf cookbooks.tar.gz
mv cookbooks/* chef-repo/cookbooks/

Add it to our Chef server using Knife:
knife cookbook upload -a
(knife uses the cookbook_path we setup above)

If you browse to your localhost at
http://sbd-ioda.local:4040/cookbooks/
you should see the three new cookbooks which have been added.

Now edit the Vagrantfile and add your Chef details:

Vagrant::Config.run do |config|

  config.vm.box = "lucid32"

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url = "http://SBD-IODA.local:4000"
    chef.validation_key_path = "/Users/thorsten/.chef/validation.pem"
    chef.add_recipe("vagrant_main")
    chef.add_recipe("apt")
    chef.add_recipe("apache2")
  end
end

I tried to load this up with
vagrant up
however received:

"[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: *** Chef 0.10.2 ***
: stdout
[default] [Fri, 05 Aug 2011 09:27:07 -0700] INFO: Client key /etc/chef/client.pem is not present - registering
: stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: Stacktrace dumped to /srv/chef/file_store/chef-stacktrace.out
: stdout
[default] [Fri, 05 Aug 2011 09:27:28 -0700] FATAL: SocketError: Error connecting to http://SBD-IODA.local:4000/clients - getaddrinfo: Name or service not known"

I figured this was a networking issue, and yeah, within the VM it has no idea of my Macbook's local hostname, which I fixed by editing its /etc/hosts file and manually adding it.
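
For reference, the fix from inside the VM was along these lines - the IP here is just a placeholder for whatever address the guest can reach your Mac on:

# run inside the VM (vagrant ssh); substitute the IP your Mac is reachable on
echo "192.168.1.10  SBD-IODA.local" | sudo tee -a /etc/hosts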

Upon issuing a vagrant reload - boom! You can see the Vagrant host following the recipes and loading up a bunch of things, including apache2.

However at this point, you can still only access its webserver from within the VM, so in order to access it from our own desktop browser, we can add the following line to the Vagrantfile:
config.vm.forward_port("web", 80, 8080)

After another reload, you should now be able to connect to localhost:8080 and access your new VM's apache host.
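
A quick way to confirm the forward from the Mac side is to hit the port with curl and check that Apache answers:

# should return the Apache response headers from inside the VM
curl -I http://localhost:8080/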

Using this setup in any sort of dev environment will still need a good deal more work, but for the moment, this should be enough to get you up and running and able to explore both Vagrant and Chef.