Logic Puzzle: Truth Tellers and Deceivers

January 22nd, 2004 | 16:37

Steven Den Beste has a discourse on an old Martin Gardner puzzle (which is more interesting than the political point he was trying to make; I’m more inclined to think a better explanation lies less in active malice than in being trapped in orthodoxies and mindsets). This logic puzzle is an old one that I remember appearing in the old Tom Baker Dr. Who episode, The Pyramids of Mars, and saw recently in some Japanese anime thing. The Gardner variation is:

Suppose you have two villages, one whose members always tell the truth and one whose members always lie. At a fork in the road, you meet one member of each village, and you can only ask one question of either villager. Down one branch of the fork is your destination village; down the other is more wilderness. What do you ask to find out which way to go?

Basically, you phrase your one question so that the truth values of both individuals are taken into account. You ask either one of them, “Which road will your companion point me down?” Then you go the other way: whether you’ve asked the truth-teller or the liar, exactly one lie gets folded into the answer, so the road indicated is always the wrong one.
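
Since the puzzle is pure binary logic, you can brute-force it. Here’s a minimal Python sketch of the truth table (the encoding as boolean negation is my own framing, not Gardner’s):

# True = the correct road, False = the wrong one. The two villagers are
# always from opposite villages, so exactly one of them lies.
def indicated_road(speaker_lies, correct_road=True):
    companion_lies = not speaker_lies
    # What the companion, asked directly, would point to:
    companion_points = (not correct_road) if companion_lies else correct_road
    # The speaker reports that answer, inverting it if he's the liar:
    return (not companion_points) if speaker_lies else companion_points

for speaker_lies in (False, True):
    print(speaker_lies, indicated_road(speaker_lies))
# Prints False both times: whichever villager you ask, the road he names
# is the wrong one, so you take the other branch.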

The discussion of the puzzle becomes more interesting once you remove the constraints of binary logic. In the original variation, the villagers are like computers: they either answer truthfully or falsely, without regard to the intention of your question. Because they’re constrained by logic, you can use logic to get the right answer. However, suppose that the village of liars is peopled with real deceivers, whose goal is to mislead you. In this case, the deceiver would know the intent of your question, and act to confound your intentions. There is no logical way to get the truth out of this situation.

One of Gardner’s readers notes this, and proposes that the proper question is, “Did you know there’s free beer at my destination?” The truth-teller would head down the right road to get the free beer. The deceiver would be in a quandary: does he want free beer more than he wants to deceive you? Following him, in the worst case, may mean that you don’t get to your destination any time soon, but you’ll have the satisfaction of denying the deceiver free beer.

Just an extra note: Brad DeLong has a page of what will eventually be 100 interesting math calculations that he’s using to teach his children about logical/mathematical thinking. Interestingly, it’s a Wiki, so other people can contribute.

Digital Makes Better Photographers

January 20th, 2004 | 17:53

This MetaFilter post points to a BBC pop-tech column giving a few reasons why digital makes people into better photographers. Out of five reasons, only two sort of touch on the real one (at least as I see it), and not in the correct way: yes, experiment and shoot at will. You’ll get a lot of pictures, some good, some interesting, though perhaps more in a million-monkeys way than anything else.

No, the main reason is that digital gives instant feedback: you can tell much more quickly that you’ve composed badly, or missed the exposure, something you can only guess at with film. You quickly get the hang of exposing correctly on the first shot, composing on the fly, moving for better lighting, and so on. It’s sort of like learning to use a piece of software: it’s better to play around with it (presumably with a manual at hand) than to try to learn it without touching a computer.

Oh, I used an older version of Grimm’s Basic Book of Photography to learn about F-stops for my manual film camera. It’s a very good book. I don’t have the hang of all the details, but at least I have a good idea about what the dials do.

Actually, that brings up a different point: with my Nikon FM10, I feel I have very fine control over the image, in terms of twiddling all the dials the way I want. With the digital, I don’t feel I have much control at all. Yes, many of the same functions are available through various menus, but they’re a pain to get to; the process is simply clumsy. Basically, it’s a snapshot camera, with all the limitations of a snapshot camera. I shouldn’t be complaining about it, because it is what it is. However, you can’t really learn photography without the ability to twiddle the dials, so to speak. So, even though I have instant feedback with my Sony digital, it’s not as useful as it could be. There’s feedback, but there’s nothing to be done about it. At most, one can discard obviously bad images.

What I really need is a digital SLR.

Cisco IOS Notes

January 16th, 2004 | 18:30

We recently had to set up an IPSEC VPN from one of our offices to a client. The client recommended a Cisco 806 with the crypto module, which we picked up on eBay for about $150 or so. The main problem was figuring out IOS in a couple of days, with a bit of help from the client on the crypto sections. I had played a little with IOS before to set up an ISDN box for a particular project, but that was a long time ago.

Here’s a tutorial: http://www.fantek.org/cisco/wpbascom.htm, which is nice for starting out in IOS but not too useful for what we wanted. The main resources were the two O’Reilly books, Cisco Cookbook and Cisco IOS in a Nutshell, which we ordered from Barnes & Noble, mainly for the same-day delivery in Manhattan. Both seem pretty good, with the Cookbook particularly interesting: you get scenarios, sample scripts, and discussions of those scripts.

The script provided by the client established the basic VPN:

crypto isakmp policy 1
 encr 3des
 authentication pre-share
crypto isakmp key XXXXXX address 999.999.999.999
!
crypto ipsec transform-set MyVPN esp-3des esp-md5-hmac
!
crypto map MyVPN 1 ipsec-isakmp
 set peer 999.999.999.999
 set transform-set MyVPN
 match address 100
!
access-list 100 permit ip my.vpn.ip.addr 0.0.0.255 client.vpn.ip.addr 0.0.0.255
access-list 199 permit udp any any eq isakmp
access-list 199 permit ahp any any
access-list 199 permit esp any any
access-list 199 permit ip client.vpn.ip.addr 0.0.0.255 my.vpn.ip.addr 0.0.0.255
!
interface Ethernet1
 ip address 999.999.999.999 255.255.255.0
 ip access-group 199 in
 no cdp enable
 crypto map MyVPN

So, we define the IPSEC parameters (pre-shared key, peer, mechanisms) and define the crypto map with an access list to match against for the transformation. This is different from, say, CIPE, in that we don’t set up virtual ethernet interfaces and route through those. The crypto map is then assigned to the external Ethernet interface of the router.

The problem with this basic script is that my.vpn.ip.addr was assigned to us by the client, and doesn’t match up with any of our defined networks. Ethernet0 on the Cisco actually uses a completely different IP address. To accommodate this, we have to define a NAT that will take our IP addresses and map them to the my.vpn.ip.addr network:

ip nat pool NATPOOL my.vpn.ip.2 my.vpn.ip.100 netmask 255.255.255.0
ip nat inside source list 15 pool NATPOOL
ip nat inside source static my.real.ip.1 my.vpn.ip.1
access-list 15 permit my.real.net.0 0.0.0.255
!
interface Ethernet0
 ip address my.real.ip.1 255.255.255.0
 ip nat inside
interface Ethernet1
 ip nat outside

This creates a NAT pool, so that we have 1-1 mappings of real hosts with NAT’ed hosts, or at least 99 such hosts. Any traffic matching list 15 that passes from the inside interface to the outside undergoes this mapping. We also create a static NAT so that the router’s Ethernet0 interface is pingable by the client. Ethernet0 is then tagged as the NAT inside interface and Ethernet1 as the outside.

This setup then fulfills the crypto requirement of translating our IPs to the client-assigned IPs required for the VPN (or at least by the firewall on their side). This took a while to figure out, since I kept leaving out a keyword or two in various places when trying to set up the NAT. Also, debugging was slowed because I didn’t have a box in the DMZ from which to test: I think half the testing time went into looking for IOS scripting issues when the problem was most likely with the iptables rules I had set up for DMZ access.
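
For future reference, a few standard IOS show commands that help when poking at a setup like this (the annotations are mine; exact output varies by IOS version):

show crypto isakmp sa       ! is the phase 1 (ISAKMP) negotiation up?
show crypto ipsec sa        ! are packets actually being encrypted and decrypted?
show ip nat translations    ! are inside hosts getting pool addresses?
show access-lists           ! hit counts on lists 15, 100 and 199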

Conclusions so far? IOS is mysterious and powerful. The O’Reilly books are surprisingly large for a language that is meant to just set up routers. What we did was also relatively minor stuff: the real magic, why Cisco CCNAs get paid the big bucks, is in setting up routing. That’s the bulk of these books.

Fish Poached in Tomato Sauce

January 16th, 2004 | 12:51

1/4 cup garlic, chopped
1/4 cup ginger, chopped
1 onion, chopped
1 28oz can whole tomatoes
1.5 lb basa fillets, cubed
1 bunch basil, de-stemmed, loosely chopped

Saute the garlic, ginger and onion until they start to brown, then pour in the can of tomatoes, with juice. Use a spatula to break up the tomatoes, and bring to a boil.

Reduce the temperature to a simmer, then add the fish. Adjust the temperature so that the mixture will poach the fish (just below a simmer). Stir occasionally until the fish is done, about 15 minutes.

Scoop out the fish into a serving dish, then turn up the heat to a boil and reduce the remaining tomato mixture for 10 minutes for the sauce. Pour this over the fish, add the basil and mix. Serve over rice.

The Guy Responsible For That Thing Sitting On My TV

January 10th, 2004 | 08:31

Slate has a neat piece on Kenneth Snelson, a mathematically oriented artist. Apparently, he and Buckminster Fuller invented tensegrity, where structural elements both push and pull on each other (the origins are murky, though Snelson seems to have more responsibility, with Fuller coining the term). Snelson has a FAQ on the whole concept behind his art. It’s the triangulation of forces found in high school physics books made manifest as steel and cable.

The Slate piece notes that he recently had a show in a New York gallery, but there was very little notice of it. (Sadly, I missed it; Slate is where I first heard of the artist, so I wouldn’t have known about the show at all. Admittedly, I’ve let my New Yorker subscription lapse. If anything, that magazine might have had a blurb I would have seen while browsing the events listings.) The main reason he’s ignored by most art critics is that he’s one of those rare mathematical artists, so he doesn’t fall into any well-defined school or movement. And, anyway, “art critics were the kids who failed high-school math.”

Like buckyballs, tensegrity has wider applications than just art. The piece finishes with a note about a biologist at Harvard who, after seeing Snelson’s work, realized that animal cells must follow similar principles; they’re not just little sacks of fluid. This work has implications in how tissues form, how cell structure fails, how humans might have to deal with space.

In another life, I bought a mathematical toy based on tensegrity. After I got it into its current shape, I never tried building other ones from the pamphlet. If only I had a larger kit, I could build something wacky that looks like Snelson’s Rainbow Arch.

Beyond Fear

January 8th, 2004 | 13:01

I finished Bruce Schneier’s new book over New Year’s, while passing through various airports around the country, wondering what the recently raised Orange Alert actually means and reading the Al Qaeda Threat of the Month online. This arguably wasn’t the best book to read while navigating the new air travel universe, especially when you hit the passage about how security screening bunches a large number of people (who haven’t yet been screened) into a small space.

I haven’t flown that much before, but the TSA seemed very professional in handling the long lines, even if they could only process everyone with a certain deliberateness. Of course, we get silly stuff every once in a while, like the Scissors Babe episode at Newark, and lines can be catastrophically long at airports that aren’t prepared. But if the line itself becomes a target, the TSA screeners at work there won’t be at fault; instead, they’ll be victims, as much as anyone else, of a flawed security policy where a minor risk was displaced to create a larger potential danger.

And that’s basically what Schneier talks about: where does security policy come from? What are the costs and benefits of a given policy? A lot of these topics appeared in his previous book, Secrets and Lies, though more specifically applied to IT security than to the world at large. Beyond Fear is his post-9/11 book, and attempts to analyze real-world security policy in general, while giving us tools to do so on our own.

Probably, the main lessons to draw from Beyond Fear are that security policy tends to be political, that security failures are inevitable, that security systems are best characterized by how they break, and that there will always be criminals whose motives haven’t really changed for millennia.

  • Security policy is political: this is relatively obvious, but should be firmly restated. Consider TSA screening policies, especially the early ones where anything vaguely pointy or sharp got confiscated. This came about because of the need to do something right after 9/11, mainly to restore public confidence in air travel, resulting in a lot of silly items on the prohibited list. Note that, because there are always costs and trade-offs with any given security policy, there will be a political process deciding on the policy, whether it’s a CEO asking that holes be poked in the firewall to increase convenience, or the choice to deploy air marshals on flights. The hope is to manage the political process so that all interests are well represented (air travellers were not well represented when airport security policy was decided).

    But even a well-managed process may result in contradictory policy. Different interested parties will want to secure different things. The Club in a car is perfect for the individual driver, since a thief will just go over to an un-Clubbed car, but is not useful for a police department interested in cutting down auto theft overall. Something like LoJack would be preferred to reduce overall theft, but is relatively costly to individuals.

  • There have always been and always will be assets that need protecting from attack, be it oneself (a repository of good eats to a tiger) or one’s possessions. Defenses can be pierced with sufficient or unexpected force, or may be made obsolete by improved technology or reconceptualizations (high castle walls being vulnerable to cannon fire, the West Coast Offense, Al Qaeda’s 9/11 plan). What may be secure now may not be in the future, and we have to plan for that possibility.
  • When a breach happens, how do the security systems respond? Are they brittle, where a security breach compromises everything? Do they instead bend, so that eventual breaks are localized? Techniques such as defense-in-depth and chokepoints help to improve security, by localizing breaches and giving the security system time to respond. One characterization of the brittleness of a system is how many secrets are required to keep it secure: do we require the algorithm to remain hidden, in addition to the password? The more required secrets, the riskier the system.

    Note that, given the absence of flexible machine intelligence, humans are always, always, always required in the system, because they can better respond to unexpected situations. Security systems should be built with human response to security breaches in mind. For example, safes are rated by how long it would take a skilled safecracker with the right tools to breach them (30 minutes, 1 hour, 1 day, etc.). You then have to design your security system so that guards will come by and check on the safe within that time period. Any security system will have humans at the heart of it, with technology used to leverage human response rather than replace it.

  • Basically, there have always been people who want to kill you or steal your stuff. The only difference now is that they no longer are required to come up close to you with a sword to do this. Schneier argues that the motivations of the bad guys can be broken down to greed, a desire for fame or publicity, revenge, curiosity, and so on. Knowing why someone wants to break your security helps in planning security. You would erect different barriers against thieves compared to people who are merely curious. You would take different measures against people who are out to kill you rather than steal from you.

There are also sections on the elements of security policy, such as detection, response and audit, and the difference between authentication (you are who you say you are), identification (how can I figure out who you are?) and authorization (what are you allowed to do?), and how confusing authentication and identification leads to security lapses. One example of the latter is the attempt to use face recognition technology to identify terrorists in airports, which failed miserably. Face recognition is useful as an authentication technique (we do it naturally when we look at people), but not for identification. Further, we run into the base rate fallacy if we sound the alarm on every hit.
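
To put rough numbers on that last point (the rates here are mine, purely for illustration, not Schneier’s): suppose the system is 99% accurate, and one traveler in ten million is actually someone we’re looking for.

# Base rate fallacy, illustrated with made-up numbers: even a 99%-accurate
# face recognition system is swamped by false alarms when the thing it's
# looking for is extremely rare.
p_bad = 1e-7              # prior: fraction of travelers actually on the watch list
p_alarm_given_bad = 0.99  # true positive rate
p_alarm_given_ok = 0.01   # false positive rate

p_alarm = p_alarm_given_bad * p_bad + p_alarm_given_ok * (1 - p_bad)
print(p_alarm_given_bad * p_bad / p_alarm)
# ~0.00001: fewer than one alarm in 100,000 points at an actual bad guy.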

There are, of course, a number of anecdotes on security. Schneier recounts a flight he was on, where one of the pilots went to the lavatory: after the stewardess blocked the aisle with a beverage cart and let him into the lavatory, she took his place in the cockpit, so the remaining pilot would never be alone in there (even if she didn’t know how to fly). While this was happening, a husky gentleman in the third row took off his headphones to watch over the process. Schneier saw it as good procedure, though he wasn’t sure where it came from. Possibly, it was because of EgyptAir 990, so that one pilot cannot take the plane down by himself without at least someone trying to stop him immediately (armored cockpit doors won’t work if the screening process for flight crews is faulty). Alternatively, two people in the cockpit make the plane more difficult to take by external assault. Or perhaps both.

Schneier makes a reference to Martin Shubik’s remark that technological eras can be characterized by how many people ten determined men can kill before being killed themselves; technology since the industrial and post-industrial eras has raised this number exponentially. However, he doesn’t quite follow up this frightening thought beyond saying that we should think more clearly about security than we are now. In particular, he doesn’t seem to apply this thought to Al Qaeda and the swamp from which Al Qaeda arises, a swamp which seems infinitely capable of sending forth murderous, determined men.

While Schneier is correct that there will always be bad guys, no matter what we do, he seems to be mistaken about the best way to deal with Al Qaeda, whose terrorism he ascribes to publicity seeking on a grand and deadly scale. Yes, we should create security systems that can help against the suicide hijacker — air marshals, armored cockpit doors, etc. — and we should be more conscious of the trade-offs we’re now unconsciously making. Schneier thinks it’s hopeless to expect any better, that there will always be people trying to kill other people. But we can drain the swamp: Al Qaeda seems to be driven by a mass pathology, a cult of death bent on apocalyptic murder. Are there any strategies we have that can address this mass pathology?

We’ve done it before, when we purged Japan and Germany of their cults of death, though it cost a great deal of blood, treasure and time (war between, say, Germany and France is now unthinkable, in the far better sense of the thought never arising, rather than being too horrible to contemplate). And now, after 9/11, we will have to do it in the Islamic world, hopefully with less blood, though probably with more treasure and time. Steven Den Beste has an overview of this war and what we’re doing in it (or at least should be doing in it), which I agree with in general. In a sense, the War on Terror is misnamed; a better description of what’s happening may be to see it instead as a War on Bad Philosophy: our safety in the long term hinges on our ability to convince the populations of nations that apocalyptic murder is something that cannot be permitted, to free them from the mass pathologies that make such thoughts thinkable and reasonable. This will be the work of generations, a change in psychology similar to but greater than our domestic Civil Rights struggle, but over many more people, using fewer instruments of compulsion, and out of a far deeper pit of hatred. And we won’t know if we’ve won.

We can’t stop everything, and there will still be individuals bent on mass murder. Tim McVeigh is an example. This is where Schneier’s security policy recommendations would be useful: we can harden ourselves against the “small” attacks, so they become more difficult to pull off successfully and cause less damage when they get through. But we may be able to do something about the willingness and ability of hundreds of individuals to band together to commit mass murder. We may be able to do this by freeing their societies — freedom for others is safety for ourselves. The alternative may be far darker.

Residency Match and Cooperative Game Theory

January 5th, 2004 | 16:21

Grace is interviewing for the residency match right now at various hospitals in the area and further out. The residency match system is actually interesting from a math/game theory point of view: the system arose out of a need to address a market failure for residents back in the early 1950s, and followed an algorithm shown to be more or less optimal by Gale and Shapley in the early 1960s.

According to various capsule histories of the match, medical residency was introduced around 1900 as a form of postgraduate education for medical students. The capsule histories don’t go into details on why, but salaries were standardized across the various residency programs: every resident earns an equal salary no matter how exceptional they are, with small adjustments for local cost of living. (Presumably, there’s a notion that residency is a learning opportunity/apprenticeship, so salaries should be equal.) Given that hospitals can’t compete on wages, they started competing on timing, presenting offers to students earlier and earlier, so as to bind up their resident work pool before rival hospitals could. This became ridiculous: by the early 1940s, residencies were being finalized by the beginning of Third Year, before students had any clinical experience to make intelligent decisions on what they wanted to do. To address this problem, schools were prohibited by their governing association from disseminating transcripts or reference letters before early Fourth Year.

This caused grief to the hospitals, which now faced severe uncertainty as to their resident labor force: if a student rejected a hospital at the last minute, that hospital would wind up short of doctors, or at least with a less desirable applicant in that slot. Students, on the other hand, had no incentive to respond quickly to offers from hospitals: if their third choice offered first, why not wait for their first choice to respond? So hospitals began requiring students to decide on their offers very quickly. By the early 1950s, students were given only hours to reject or accept offers.

Because no one was happy with this situation, all the parties decided to centralize the process and developed what would be called an ordered-list matching algorithm. Remarkably, they stumbled on what’s more or less the optimal solution, as shown by Gale and Shapley a decade later. This SIAM paper has a nice lay exposition of their work: suppose there are n boys and n girls (yes, Gale and Shapley wrote in the late 1950s), each boy with an ordered list of preferences for the girls, and vice versa. We proceed through a number of rounds (at most n² rounds, actually). In the first round, each boy asks his first-choice girl to marry him. Each girl considers the proposals she’s received, and says “maybe” to the highest-ranked boy on her list, and “no” to everyone else. In the second round, the rejected boys go to their second-choice girls and ask the same question. Each girl looks at all her received proposals (including any from the first round), and then says “maybe” to the highest-ranked boy and “no” to the others; she may change an earlier “maybe” to a “no” if she receives an offer from a higher-ranked boy. We continue until there are no rejected boys, at which point the existing “maybes” are changed to “yeses” and the matched pairs are finalized.
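
The algorithm is short enough to sketch in Python (the round-by-round structure above is collapsed into one proposal per loop iteration, which comes to the same thing; the toy preference data is mine):

def gale_shapley(boy_prefs, girl_prefs):
    """Each prefs dict maps a name to a list of names, best first."""
    girl_rank = {g: {b: i for i, b in enumerate(prefs)}
                 for g, prefs in girl_prefs.items()}
    next_ask = {b: 0 for b in boy_prefs}  # index of the next girl each boy will ask
    engaged = {}                          # girl -> the boy she's said "maybe" to
    free = list(boy_prefs)
    while free:
        b = free.pop()
        g = boy_prefs[b][next_ask[b]]
        next_ask[b] += 1
        if g not in engaged:
            engaged[g] = b                # first proposal: "maybe"
        elif girl_rank[g][b] < girl_rank[g][engaged[g]]:
            free.append(engaged[g])       # she trades up; her old "maybe" is rejected
            engaged[g] = b
        else:
            free.append(b)                # rejected; he'll ask his next choice later
    return {b: g for g, b in engaged.items()}

boys = {"al": ["xena", "yola"], "bo": ["yola", "xena"]}
girls = {"xena": ["bo", "al"], "yola": ["al", "bo"]}
print(gale_shapley(boys, girls))  # al gets xena, bo gets yola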

The result of all this is a “stable” list of pairings: there is no boy and girl who prefer each other to the partners they are matched with. The algorithm thus produces an equilibrium, what economists would call Pareto optimal (there exists no rearrangement that would benefit one person without making someone else worse off).
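
Stability is also easy to check mechanically (same preference format as the sketch above): just look for a “blocking pair,” a boy and girl who like each other better than their assigned partners.

def is_stable(matching, boy_prefs, girl_prefs):
    partner = {g: b for b, g in matching.items()}
    for b, g in matching.items():
        # Girls this boy prefers to his own match:
        for better_g in boy_prefs[b][:boy_prefs[b].index(g)]:
            if girl_prefs[better_g].index(b) < girl_prefs[better_g].index(partner[better_g]):
                return False  # b and better_g would run off together
    return True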

The actual residency match is more complicated than this — hospitals don’t rank all the students that rank them, there are provisions for couples to be matched together, there is the possibility of not matching anywhere, etc. — but it follows the model well. The remarkable thing is that the algorithm was developed ad hoc, without a theoretical underpinning.

There’s currently a lawsuit by a number of residents who feel that the match system is unfair, possibly because they don’t like the idea of computers “governing” their fate. The problem with their lawsuit is that they would have to come up with a system to replace the current match, and this system would have to be shown to be more “fair” or optimal than the one in place right now. Given the mathematical background behind the current match, it’s unlikely they’ll be able to come up with anything better; in fact, they haven’t attacked the math behind the match, only the fact that scary computers are running it (as opposed to, say, reliance on personal connections).

One further implication shown by Gale-Shapley is that there’s an asymmetry: the above model is “boy-optimal” and “girl-pessimal”, where each boy gets the best girl he can across all stable pairings, but each girl gets the worst possible boy. In the case of the residency match, where hospitals were the “boys” and students the “girls”, the hospitals had an “advantage”. The roles were reversed after a revision in the match process in the late 1990s, with the students currently playing the “boys”, though, empirically, only about one resident in a thousand would wind up in a different place.
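
Running the toy preferences from above through the same gale_shapley sketch, once in each direction, shows the proposer’s advantage directly:

boys = {"al": ["xena", "yola"], "bo": ["yola", "xena"]}
girls = {"xena": ["bo", "al"], "yola": ["al", "bo"]}
print(gale_shapley(boys, girls))  # boys propose: al-xena, bo-yola
print(gale_shapley(girls, boys))  # girls propose: xena-bo, yola-al
# Both matchings are stable, but each side gets its first choices only
# when it does the proposing.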

(Actually, from an economics point of view, all this match process does is try to come up with the best response to an initial market distortion, the inability of hospitals to use wages to fill out their residency slots. Arguably, a better system would be to allow wages to float. The problems with this idea are that it may be culturally distasteful (wages were fixed for a reason to start with), it may harm smaller hospitals that depend on the cheap labor (which brings up other issues touching on the right to health care, and whether there are compensatory mechanisms to address this), it may hamper the teaching aim of residency, and so on. There may also be less obvious market failures involved, so floating wages may lead to worse outcomes; this may all touch on the theory of the second best. I’m not sure how many studies have been done on the effects of freeing wages for residents, though the above-cited SIAM paper does refer to some work done to produce differently priced “residency slots”, to allow hospitals to offer varying wages. On the other hand, it’s not clear whether any of the more elaborate match models would have a practical effect; they may just increase stress and confusion in an already stressful and confusing process: not only would hospitals have to be ranked, but salary levels would have to be ranked, and so on.)

In a totally unrelated aside, when I first read about the cooperative game theory behind the residency match, I was all excited because Shapley had a hand in the theoretical work. Long ago and far away, I did a summer math internship for the Research Experience for Undergraduates NSF program, and spent a couple of months in suburban northern New Jersey contemplating the Shapley-Shubik power index, which describes the relative power of the members of, say, a voting body like the UN Security Council. We actually got a paper out of this that I presented at some math conference in San Francisco; granted, it was a “look at what the undergrads did!” thing, but still, it was an opportunity for me to flub a public speaking engagement. We did submit the paper for publication, but it was rejected in a couple of places, and we didn’t pursue it much further.
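
(For the curious, the index itself is simple to compute by brute force for a small weighted voting game: a player’s power is the fraction of all voting orders in which that player’s vote is the one that pushes the coalition past the quota. The weights and quota below are made up for illustration.)

from itertools import permutations

def shapley_shubik(weights, quota):
    players = list(weights)
    pivots = {p: 0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        running = 0
        for p in order:
            running += weights[p]
            if running >= quota:  # p is pivotal in this ordering
                pivots[p] += 1
                break
    return {p: pivots[p] / len(orders) for p in players}

print(shapley_shubik({"A": 40, "B": 30, "C": 20, "D": 10}, quota=51))
# A ~0.417, B 0.25, C 0.25, D ~0.083: note that B and C wind up with
# equal power despite having different weights.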

This morning, after I dug up my old paper from the Mac archive .ZIP file, I typed “shapley shubik banzhaf” into Google, and, holy crap, it looks like someone did similar work on the axiomatic foundations of these indexes. I don’t have the mental circuitry anymore to evaluate what they did, nor do I remember what I did particularly well, but it looks a whole lot like what I did. Sigh.

Notes on Wireless

January 3rd, 2004 | 15:54

A couple of weeks ago, I picked up a Netgear MA111 USB 802.11b adapter for an older laptop we took on a trip. I was about to buy a PCMCIA version (it was a laptop, after all), but realized in the store that I wouldn’t be able to use it for anything but a laptop once I got home. The USB one, on the other hand, can be plugged into my Mini-ITX router, giving me a Linux-based AP once the software becomes available. If the HostAP software never comes out, then in the worst case I pick up a Linksys, and my network topology will still have my wired computers behind my own firewall, rather than the Linksys one.

This isn’t happening anytime soon: we don’t own a laptop, so the wonders of WiFi are lost to me in our apartment. On the road, it was great: pull into any Starbucks, Kinko’s or the odd Borders, and you get Internet access through T-Mobile. The days of hunting for obscure, out-of-the-way Internet cafes are past.

So, for the future, here are some notes on securing the whole thing. The main point of departure is a Slashdot posting discussing wireless risk management. This leads to an Ars Technica wireless security how-to. I’ll read these later; the main point is to bookmark them. BugTraq also had a thread on wireless best practices, but their web servers are down right now, so I can’t link to it. Anyway, Googling for those terms gives pages like this one.

Oh, one amusing thing from the BugTraq thread was to use an SSID of SST-PR-1. Apparently, this is the SSID used by Sears service trucks, and is therefore weirdly ubiquitous and less likely to draw the attention of wardrivers.

New York City in the 1940s

December 31st, 2003 | 16:41

Gothamist has a nice link to Charles Cushman’s photos of New York City from the 1940s. The neat thing about these photos is that they’re in color, which gives a far different view of the city than the usual black and white posters and archival photos you usually see.

The Bowling Green shots (45 through 52) are interesting, mainly because my office overlooks the area. I had no idea there was a statue where the fountain is now. And we have this photo of skyscrapers that no longer exist, or no longer stand out among the tall buildings built after the war. I guess all this can be paired up with Celluloid Skyline, a web site and book about the idea and myth of New York City’s skyscape as a backdrop to Hollywood movies. Sic transit gloria mundi.

Aikido

December 31st, 2003 | 16:01

While in Seattle, I had the opportunity to stop by a couple of aikido dojos around lunch time, to see what was offered in the area. I would have preferred to go to a judo club, but only the aikido guys had anything scheduled for the early afternoon when I was free.

The first place I went to was Two Cranes Aikido. They actually didn’t have a class that day, but there were brown and black belts doing exercises. Everyone seemed friendly (though everyone is relatively friendly in the smiling/waving sort of way on the West Coast), and I watched for about twenty minutes.

Their movements are arguably prettier to look at than what we do: more circular, and with a tendency not to have uke thud on the ground at the end of techniques. There were a couple of techniques that seemed to require more “cooperation” from uke: something with a shomen strike and tori entering to be behind uke, and then both turning to face each other for the completion of the technique. I wouldn’t have assumed that uke would turn without contact, though, on the other hand, if uke doesn’t turn around, so much the better. There was also a bit of body bending when getting out of the way of shomen, where tori would bend over to the side, or turn the hips and bend backwards. This happened on a couple of techniques with a tori wearing a hakama. This is different from what we would do, where maintaining posture and not bending (backwards!) so early in the entry are signs of good technique.

The other dojo I went to was Tenzan Aikido at the Seattle Holistic Center. This was a big place: it has the largest mat I’ve seen, complete with very high ceilings. From the signage at the Holistic Center, they offer a lot of yoga classes, which must help pay the bills in a way a dedicated dojo may not. The main instructor wasn’t there, but I got to take a class in my sweat pants and t-shirt. By the time the class started, there were three or four more senior students (one in hakama, the others wearing white belts but relatively experienced), along with me and a white belt who was taking her sixth or seventh class.

The class was a bit over an hour long: warm-ups/stretching was very yoga-like (assuming my limited yoga experience at the gym is any guide) and not particularly physically challenging. Then we walked around in suwari for a few minutes, which is useful practice, if only to save myself the embarrassment of stumbling up to shake sensei’s hand after a test. Then we did ukemi, which, as far as I could tell, consisted of just forward rolls for this class. Since I didn’t know how they did forward rolls, I volunteered to go with the newbie white belt for instruction while the other students rolled around.

Their rolls put the hands down around the front foot, leading hand’s palm down, fingers pointing back. I think the other hand goes down next to or behind it, fingers facing forward. The resulting roll is fairly soft for putting energy down at the feet: in Eizan-ryu, we try to project energy outwards, putting out the lead hand as far away from the lead foot as we can, and roll along the edge of that hand.

Later, we were doing an exercise where, in Eizan-ryu, uke takes a back fall or side fall. I was actually told to do a back roll for this. They spotted a flaw with my ugly back roll — more stepping back, and then tuck the back leg, with the instep of the foot on the ground, rather than toes as I had been doing — and helped me fix that, which is a very good thing. I’ll be able to do Eizan-ryu ukemi games a bit better now.

The differences in the ukemi might have to do with the expectations of energy in techniques, at least for beginners. We tend to use a higher level of energy, with a lot more down than I saw and felt in these classes. Expecting to back roll or putting energy down at the lead foot may not be advisable in these cases.

In particular, the main exercise we did was iriminage. The senior students did a variation that involved more movement and circles, and what looked to me like uke and tori not maintaining contact during this transition circle, which makes me think this isn’t going to work. But the newbie white belt and I, and later, one of the more advanced kyu students, did an iriminage very much like what we do in jujitsu (the instructor actually mentioned that there should be something in my repertoire that looks like aikido iriminage). The main difference was that uke backrolls out at the end of the technique, whereas we put more down into the effort and uke is forced to breakfall. It’s easy to do the same with the aikido version, but perhaps they reserve that for when uke’s ukemi is better (though, arguably, one gets better ukemi by doing ukemi), or, more likely, it’s a difference in philosophy — both dojos I visited emphasized peace and cooperation in their literature. I didn’t slam uke into the ground during this iriminage. (As a side note, I was chatting with the newbie before class started, and she mentioned that she didn’t watch a class before taking one, otherwise she would have been too frightened of the ukemi. I thought of sensei’s observation that a lot of people who watch us from the start of class get up and leave when we do breakfalls, and that the aikido people must think that we’re some sort of Visigoths with sharpened incisors and tattoos covering our faces.)

That was the class: warm-ups, ukemi and three or four techniques. One thing I noticed was that they didn’t have a section devoted to taisabaki, which the newbie and I wound up doing anyway when we started off on iriminage, since she couldn’t get the footwork down. So, the instructor had us stand on the seams in the mat and practice taisabaki/tenkan for a little while. I guess in the context of a one-hour class, it’s not feasible to devote ten or fifteen minutes to this exercise.

This whole dojo touring was useful: I’ve now taken some aikido and had the instructor tell me I have to lead and extend more, even if I have a handle on the basic mechanics; I have a good idea what’s wrong with my back roll, so I can work on it; and I got to practice iriminage a bit, a technique I kind of suck at (though last month Sensei Coleman got me started on keeping my momentum/energy going forward while actually moving backwards in a small circle, which helps fix our iriminage variation where uke is led around and around until tori gets bored and slams uke into the ground).