Monday, September 30, 2013

Technologist in the Architectural Discipline

There is a difference between architects and architectural technologists. I obviously have a bias because I was trained as an architect but I will try to state the differences as fairly as possible.

Architects are trained to design. We are taught to think about things like narrative, tension, positive and negative space - in short, the art of building. Technologists are taught how buildings are constructed. They learn the technological side of a profession with two faces. The difference is sort of like that between the clothing designer and the pattern maker. In a previous post (Separation of Church and State) I discussed some ways this division of labour is reflected in architects' offices and the work they produce. Now I want to discuss something with more immediate ramifications for me.

The OAA (Ontario Association of Architects) has a job-posting page; their version of classified ads. In the latest posting there are 6 positions for Technologists, 5 for Intern Architects, and 6 for Architects. This is only unusual in the number of positions for Architects - it's typically lower. When a firm hires someone fresh from school, they can either get a technologist or they can get an intern architect. The technologist will be able to start working on drawing sets immediately and will require less on-the-job training. Technologists do not need to be taught design as thoroughly as interns need to be taught technology, because technologists will never be expected to master both sides of the profession - architects will. The investment involved in training a graduate of an architecture school ultimately results in a person who can design buildings as well as draw them. But it costs money in the short term. And graduates of an architecture school expect to be paid more than technologists. Architecture school takes 7 years. When I graduated (from a co-op program, meaning I was earning money as I went) I had six months before I had to start paying back my student loans - at the rate of $650/month. That's a financial pressure technologists don't have. I only include that to point out that interns' expectations of higher pay are not based purely on snobbery.

Unfortunately, architecture is often a short-term business. The owner of a firm I worked for told me that, for the purposes of long-term planning, she considered her family a "one-income household" - and that income wasn't hers. This was a successful firm that had been around for more than a decade, but if the economy slows down, or you don't get any new jobs for a couple of consecutive months, you are in the deep weeds. The only architecture firms that can plan more than six months into the future either have repeat clients who are always building or sustain themselves by sheer size (a firm of 5000 has different problems than a firm of 50). A firm of a dozen or so people can't possibly plan more than six months in advance with any accuracy. It is feast or famine.

I don't have a problem with technologists. I don't think they are stealing architects' jobs. I don't think they dilute the purity of the profession. I do think it is extremely short-sighted for a firm not to balance the number of technologists and interns they hire. No matter how expert a technologist becomes, they cannot get a license to practice architecture. So any firm that relies on technologists exclusively is setting itself up to fail in the long term.

High levels of uncertainty combine with economic pressure to lead firms with one or two architects to hire as many technologists as are required to produce the drawings. The architects spend all day on the phone, the technologists spend all day on the computer. No matter how successful the firm is, they will eventually face a time when one, or even two, architects just aren't enough to handle the workload. And that's the worst possible time to bring in an intern. Since the architects are already over-worked, they don't want to take on even more work training someone to replace them. The temptation is to hire an architect who interned somewhere else. But instead of the $45-50k/year an intern would expect, they are going to want $80-100k. And what kind of loyalty are they going to have? If they get a better offer from another firm, why wouldn't they take it? Your firm hasn't done anything for them. An even more pressing question is where the licensed architects are coming from if no one is making the investment in training interns. The most recent posting was exceptional in that firms are looking for either interns or technologists. Most ads are for "Technologist or Intern" - meaning if you are an intern who can do the job of a technologist (and you are willing to work for less) you have a shot. These ads often ask for interns with 5 or even 10 years' experience. As discussed below, if you have 10 years' experience, you shouldn't be an intern anymore.

There is a place for technologists in architecture firms but (excuse me for stating the incredibly obvious) there must also be a place for intern architects! Right now most interns are training in giant firms - the kind that can weather a downturn. That's problematic for a number of reasons. Big firms are highly compartmentalized. Technically, the OAA requires around 3,750 hours of experience before an intern can qualify to take the exams for licensure, but those hours need to be in very specific categories. So in a big firm it might take 5 or 10 years to get the required hours instead of 2. It's really inefficient. And it produces precisely the loyalty problem I discussed above.

Why is loyalty (or its lack) a problem? Because institutional knowledge is hugely important in a profession like architecture and it only gets passed on through stability in the institution. The OAA is trying to simulate something like institutional knowledge by requiring every intern to have a "mentor" - someone with whom they will, presumably, forge the connection they do not forge with their firm. But it is a stop-gap solution.

Saturday, September 28, 2013

On David Gilmour (Author not Guitar G*d), Sexism, and the University

Canadian author David Gilmour is taking a lot of shit right now for comments he made here. The gist of it (most often misconstrued, intentionally I suspect) is that Gilmour only teaches books by men. Most often American men. He won't teach books by women or Chinese people. The (completely unsurprising) reaction is the claim that Gilmour is sexist, racist, homophobic, and an all-round asshole.

I disagree. We could go through the points in detail - for example, the reason he doesn't teach Virginia Woolf isn't because he is a misogynist but because she is too sophisticated for his students (first and third year). The quote he is really getting heat about is, "Usually at the beginning of the semesters a hand shoots up and someone asks why there aren't any women writers in the course. I say I don't love women writers enough to teach them, if you want women writers go down the hall." To me this quote says a number of things - David Gilmour doesn't love women writers and, significantly, there is a course (offered just down the hall) that either has a large number of women writers in the syllabus or is explicitly about women writers. What it does not say is that women writers are shit. Only that David Gilmour doesn't love any of them except Virginia Woolf. I don't love Virginia Woolf. If you read the whole interview you will also see he is not saying Canadian authors are shit - only that he doesn't love any of them enough to teach them. I do. I don't love David Gilmour enough to teach him but I suspect he has absolutely no fucks to give about that.

There is a reason professors cannot be told what to teach - so they do not get embroiled in bullshit identity politics / political correctness shit fights like this. You want David Gilmour to teach something by a gay Chinese woman? Why? Would the quality of his teaching improve? Of course not. The issue is not what David Gilmour teaches but what authors he loves.

Here is the complete list of all the authors he loves enough to mention in the short interview: Proust, Tolstoy, Chekhov, Woolf, Elmore Leonard, Scott Fitzgerald, Philip Roth, Henry Miller. Gee, no wonder everyone wants this guy away from impressionable youth!

There is another way to look at the problem. As far as I know, Gilmour isn't teaching a survey course nor a "Modern Literature 1850-2000" kind of course. He has no obligation to offer a thorough view of literature in any part of the world at any time. He is teaching what he wants to teach - which, as I have written previously, is the best way for professors to teach.

When I was given the opportunity to teach a Cultural History class at the University of Waterloo I had to think of twelve novels for my curriculum. This is amazingly difficult. I started off with a list of about one hundred serious contenders. I whittled down from there by comparison (1 is similar thematically to 2 but 2 has something extra, more compelling, whatever). In many cases that "whatever" was fewer pages - the curriculum demanded the students read a book each week. And that's fine for an English program but this class was for a design school. So I limited myself to three books of more than 300 pages. Certain authors absolutely had to be represented - Michael Ondaatje, Don DeLillo, Haruki Murakami, Primo Levi. Five remaining. I wanted to introduce the course with an experimental short story - John Barth's Lost in the Funhouse. Four remaining. One graphic novel. Three remaining. Anne Carson's Autobiography of Red would have made the list if it were half the size, so two remaining. The students would be on a field trip for 5 days during which they were still supposed to read a book, so toss in an easy-to-read piece of "trash". One remaining. I don't know, or care, if any of the authors I taught was or is gay. Some are Canadian, some are American, Murakami is Japanese. As for the rest, I don't know. Nor do I care. That isn't why I chose them.

I had a chance to force one group of students to read a very small number of books and I wanted the ones that meant the most to me.

I honestly don't know why this interview is getting so many people so mad. I think it has more to do with the massive demographic dominance of straight white guys in academia than with David Gilmour. He will probably lose his job for this and that isn't a terrible thing as far as this one situation goes - U of T without David Gilmour isn't very much different than U of T with him. It is a terrible thing if professors start writing syllabi with a checklist beside them - "Gay", "Canadian", "First Nations", "Female (x4)", and so on.

Monday, September 23, 2013

High Tech Product Low Tech Mind

I just upgraded my computer - another way of saying I just demonstrated how far technology has left me behind. This new one (a MacBook, despite the suicidal Chinese workers and the kids being forced to mine minerals in Congo) has the App Store built right in.

One of the first apps I got is called 1Password. It has a simple premise that appealed to me - go to every site you have a password for, use the app to generate an incredibly long and basically random new password, and then the app will fill in the form for you every time you visit the site. All I have to do is remember the enormous password with which I protected the password generator. That's the theory anyway.
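
The core idea is simple enough to sketch in a few lines of Python. This is just my guess at the general approach - random characters drawn from a big alphabet - and has nothing to do with 1Password's actual code:

```python
# A toy version of the premise: build a password by drawing characters
# at random from a large alphabet. My sketch of the idea only - not
# anything 1Password actually does.
import secrets
import string

def generate_password(length=20):
    """Return a random password of letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # something like 'kH%N6kdin(sm$nald2(d'
```

The hard part, as I was about to discover, is not generating the password. It's getting it back.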

Instead, I have changed every one of my passwords to random strings of 20 characters and can't figure out how to get the app to fill the forms in for me (or even tell me what the new passwords are). The only reason I'm able to write this is because Google sent me an email saying "You got hacked, no one would be stupid enough to change their password to 2kH%N6kdin(sm$nald2(dnas)#an1kO" and let me change it back.

On the downside, since I have a terrible memory for things like passwords, I use the same one for absolutely everything. It used to be "BobaFett" but I thought people would be able to guess that, so I changed it.

Every other change Apple has made in the years since I switched systems seems designed to annoy me. The scroll function on the pad works in the opposite direction - why did they do that? And one finger does something different from two fingers or three fingers. That makes a kind of intuitive sense - if you are desperate to avoid the right-click (as Apple are, for some reason) you want as much from your pad as possible - but I end up spending 20 minutes a day licking my fingers so the trackpad will understand they are actually human appendages and not something else. As to what they might be if they weren't fingers, I have no idea. Still, on the plus side, I am keeping this computer as clean as an operating theater - since I am licking it all day.

Saturday, September 21, 2013

Twenty Minutes in Manhattan

I stopped reading about technology because it was getting depressing. I started reading Michael Sorkin's Twenty Minutes in Manhattan. It's hard to say what it is yet; I'm only a chapter into it. One thing I can say is Sorkin drops names like mad - but the names of places, not people; he seems to expect his entire audience to have PhDs in Architectural History. I was thinking about a long-term project where I post images for everything he references (excepting his own buildings and personal stuff). I would take on the role of Google Image Search. The only reason I would even consider this is I'm certain I would benefit from actually looking the things up myself and it isn't that much harder to go the extra step and post the images here.

Still, there is an added element of reverence here I am aware of. Sorkin edited a book called Variations on a Theme Park, which is both the best title of an architecture book ever and the closest anyone has ever come to Robert Venturi's Complexity and Contradiction in Architecture in a strange category: "exactly what you would expect given the title". The text of Complexity adds almost literally nothing to the title. By almost literally, I mean two words: "I like...". Having given his view, Venturi lists some prominent examples. My view is the title is so captivating, like an epigraph from Tacitus, it convinced an entire generation of architects to follow Venturi's lead. They sold themselves for a song (and the rejection of the pitiless, cheap, careless shell of a style Modernism had degenerated into). Variations is composed of essays, so there is more meat to it, but you still don't need anything more than the title to understand the argument. I tried to buy a signed first edition once because of that title (which edges out Colin Rowe's The Mathematics of the Ideal Villa only because Rowe means geometry, not math) but it was strangely inexpensive so I assumed the copy was damaged. It turns out the book just isn't as popular as the great title would suggest.

On Advertising, 2

I am learning, slowly, that super-computers can do things other than offer extremely accurate aggregate statistics about the average person. They can microtrade and manipulate financial regulations and hide money from taxation. But these were things we already knew. And they don't account for the enormous value of Amazon, Google, or Facebook.

Of the big three, Facebook is the most difficult for me to understand (in terms of its valuation). It is possible Amazon might one day displace Walmart as the place you buy everything. And that is valuable without question. Google seems to be taking over the Internet - the format for this blog and, presumably, everything written in it belongs to Google. Google also owns many, many other internet sites and applications. Owning the internet would be about as valuable now as owning Standard Oil was 100 years ago. But what does Facebook own? Just a bunch of data about a highly coveted demographic. It is purely the domain of advertisers and their algorithms. I think this accounts for its plunge from a $100 billion company to a much-less-than-$100-billion company once people actually started trading the stock.

One point I failed to make in the previous entry concerns advertising and its interaction with human nature. There is much more to the "fuck you mechanism" than I let on. Say your phone rings and it's your service provider. They want to give you $1 million. How many people actually get that money? I wouldn't. I would hang up before they could spit out the offer. And even if I didn't (for some inexplicable reason) I would still never agree.

There is something deep inside us that resents being told what to do. As soon as we can see through the tricks advertisers use they mostly stop working. The extent to which they continue to work is something of a mystery or maybe it's just the desire we all have to believe the world is better, simpler, prettier than it actually is. So maybe if I drink that energy drink I will have friends that beautiful and rich (with nothing to do other than run around and go to parties). Of course, at the same time one part of my mind is thinking that, another is screaming, "Wake up you dumbass!!!" And I don't think advertising can ever overcome that reaction. The only time we like being told what to do is when we ask very specifically. And this is something technology currently sucks at.

The combined servers of the world might know more about me than my own parents do but they can't tell me why I can't get my DVD drive to work, or how to hook my computer monitor to a television screen. And the advertisements for services that do just that are ignored because no one believes them. I paid a substantial amount of money to a service my dad could call when his computer didn't work - predictably, it fucked his computer up worse. The worst part was I knew this was going to happen even as I was giving them my credit card number. But it was my dad, what are you going to do?

It isn't ignorance advertisers have to overcome. It isn't even cynicism, which would be much more difficult. There is a contrarian strain in everyone (more pronounced in some than others) that prevents us from doing what is good for us. Or even what is best for us. This is what advertisers must overcome and it just can't be done. We are contrary with our friends. We are contrary with ourselves. How do advertisers expect us not to be contrary when it comes to little bots that do nothing but apply algorithms to statistical data?

Why do people smoke? I offer this as ultimate proof of our fundamental resistance to advertising - both in the forms of persuasion and pure data. Everyone knows it's terrible for you. It's expensive as hell. It kills 50% of people who smoke more than a pack a day. Strokes, heart attacks, yellow teeth, $10 a day, erectile dysfunction, I know all these things. I smoke and I don't want to quit. If the picture on my pack of cigarettes of a 20 year old who has suffered a stroke or the guy with a tracheotomy won't convince me to quit, how are you going to convince me to buy your cell phone?

Smoking is about as stupid as something can be. And yet people continue to do it. Those who wish for a rational world find this fact depressing. I find it encouraging. In the last few posts I've been writing about technology - people trying to make a simulacrum of a person, an artificial intelligence. No one is trying to make a machine that embodies all the idiocy and contradictions of humanity. And I think that's a good thing. Sometimes, maybe a lot of the time, it's the really stupid and contrary shit we do that is the interesting stuff. We do a lot of things for no good reason. Advertisers can take advantage of that. We also do a lot of things for really bad reasons. That is more advertising resistant.

Burning Books

I found an email in my inbox today from someone calling him- or herself "Burning Books". They claimed to know me and offered a link to a podcast - LINK

I have a deep and abiding fascination with Nigerian Princesses desperately in need of a North American to help them claim a fortune stolen from their murdered father by a tyrant of some kind. This is the new century's first completely new literary form. If I wasn't interested in the many and various ways people will attempt to scam bank information from me via email I probably wouldn't have followed the link. Plus, anyone who claims to know me ought to know "Burning Books" is a good way to get my attention - but not in a positive way. Instead of a banking scam, I got an 18-minute review and discussion of a book I've never read - written and delivered by "Burning Books", nom de diffusé (courtesy of Google Translate, so it may or may not make sense).

I'm passing it along. Why? I spend between 10 and 20 minutes writing these posts. Read aloud, each would take less than 2 minutes. The time required to write something increases exponentially with the time required to read it. This person (whom, it turns out, I know - but since Burning Books is the name sent to me and the name on the podcast, it's the one I'll stick with) wrote 18 minutes' worth of essay, criticism, highly specific pondering for no better reason I can think of than the love of books. I must assume BB is a faster writer than I am. Certainly BB is better. Still, I would have needed 40 hours to write 18 minutes. How can I not recommend, in the strongest possible terms, both the activity and the results it produced? I really hope this isn't a one-off. I subscribed (although I'm not sure how that works) and if and when more are uploaded, I'll pass the news along.

I've written before about bad writing. And I've written about really good writing. Most of the former were amateurs who should never touch a keyboard for anything other than tweets. Most of the latter were professionals at the top of their game. This is a rare case of someone who is technically an amateur (but think Olympics, not mini-putt) with fantastic prose. The podcast seems like a wonderfully quixotic enterprise for such a gifted writer and highly trained thinker. In essay form the contents of the podcast could have been published (an absolute necessity in the world of academia); instead it was uploaded anonymously and notification was sent out to friends via email. I kept wishing there was a button I could click to recommend or +1 or "Like". I don't know what the podcast equivalents of those things are. So I'm writing this.

Friday, September 20, 2013

On Advertising

In one of Malcolm Gladwell's books he makes an effort to convert best-estimates of historical fortunes into a common currency. In this way we can see Carlos Slim Helu's massive $70 billion fortune or Bill Gates's $65 billion or even the $19.5 billion made by Spain's Amancio Ortega last year alone are really not as astronomical as they seem. Don't get me wrong, they are insultingly huge accumulations of personal wealth but compared to the great robber barons of the era immediately following the American Civil War, they are not that impressive. Both Rockefeller and Carnegie had personal fortunes bordering on $300 billion - more, according to the sources Gladwell cites, than Cleopatra or any of the Roman Emperors. But if we look at the list of the richest people we see, unsurprisingly, they all had products to sell. Helu's product is telecom (more of a service than a product but still something we can easily understand paying for). Rockefeller's fortune was based on oil, Carnegie's on steel, Gates's on software. Still, the most impressive fortunes, those accumulated most quickly and by the youngest billionaires, are only superficially in software.
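
I don't know Gladwell's exact method, but the usual tricks are easy to sketch: scale the old fortune by a price index (which gives modest numbers) or by its share of the whole economy of its day (which gives the eye-popping ones). Here is a rough illustration in Python; every figure in it is a hypothetical stand-in, not real data:

```python
# Two common ways to restate an old fortune in today's dollars. The
# figures below are hypothetical stand-ins for CPI and GDP data; this
# illustrates the methods, not Gladwell's actual calculation.

def by_price_index(amount, cpi_then, cpi_now):
    """Scale by the ratio of consumer price indexes."""
    return amount * (cpi_now / cpi_then)

def by_economy_share(amount, gdp_then, gdp_now):
    """Scale by the fortune's share of the whole economy."""
    return amount * (gdp_now / gdp_then)

# A $1.4 billion fortune held in 1937, against hypothetical indexes:
print(by_price_index(1.4e9, cpi_then=14.4, cpi_now=233.0))     # ~$23 billion
print(by_economy_share(1.4e9, gdp_then=92e9, gdp_now=16.7e12)) # ~$254 billion
```

Presumably it is something like the second method that gets a Rockefeller to $300 billion; a price index alone never will.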

Amazon's founder Jeff Bezos, while not as famous as some other internet tycoons, is worth an astounding $27 billion. Google founders Larry Page and Sergey Brin are both worth around $23 billion. Facebook's Mark Zuckerberg is worth around $20 billion. While these people technically sell internet services, their astronomical worth is generated by the hypothetical potential value of the information they collect about their users, if and when it is sold to advertisers. In fact, it seems like the best way to get (incredibly, immorally) rich is to be able to advance the cause of advertising.

This strikes me as very strange indeed. Amazon's advertising works. They recommend books to me and I occasionally buy them. But that is value for Amazon, not for any third-party advertiser. Google's advertising (in the sense of the "sponsored links" that appear on the sidebar) never works. I have only ever clicked on one of those links and that was purely out of curiosity. The closest I come to buying things from Google is using their search engine to get product recommendations (or directions to a physical store). And Facebook? Forget about it. What am I going to buy? A cheater kit for Farmville? Still, advertisers must know their business better than I do.

Yet it seems to me like if you want to get rich, you might want to make something people can buy. Like a product or service. When tech writers talk about the enormous power of the servers Amazon, Google, and Facebook possess to generate wealth, they mean people are going to pay them for access to that information. And the people who pay are advertisers. I don't know if I am so out of touch with the realities of modern economics I am suffering from compound ignorance (I don't even know what I don't know) or whether tech writers have skipped everything about the economy that does not involve a better way to sell things. Seriously, these people talk about the mega-servers as if they will soon control every aspect of our lives through their control of advertising.

I respect advertising - in the sense I am aware it has a profound effect on human actions. I respect it like I respect radiation, not like I respect Salman Rushdie. But isn't this assigning too much power to an inexact science? If someone (or some group) invents a really cool product like, say, a generator the size of a canned ham that can power an average household forever and all it gives off as waste is flowers and cute pictures of kittens - isn't advertising power going to be beside the point? Because I'm buying one.

There is a cynicism to the value technologists place on advertising; a special kind of cynicism that is demeaning to both products and people. The basic tenet is there are categories of things and within each category every unique example is essentially equal (the presumption being "equally shit") and people can be swayed into buying one particular example of shit from each category by advertising because we lack the critical apparatus to differentiate any other way. There is a limited sense in which this is true - the trends in beers or jeans or various other temporary "must-have" objects. But in another sense it is just insulting. There are categories of things I take seriously (another way of saying I am interested in them). These include fountain pens and other writing implements, watches, men's cases (brief or messenger), furniture, etc. In each of these cases advertising actually plays a negative role. I lust for a pre-WWII Sailor 1911 fountain pen. I have never seen an advertisement for one and if I did I would want one less, not more. To be clear, I'm not entirely sure such a thing exists. Fountain pens from before WWII have more give in the nibs - they are more flexible and consequently "inflect" the lettering more. I also know the Sailor 1911 is widely regarded as the best "writer" - the most comfortable to use. When Sailor started making the 1911 is something I could easily find out, and I think I might have already, but I forget and, at the moment, don't feel like looking it up. This might seem like the perfect place for advertising but it isn't. My initial statement still holds: I would want one less if I saw it advertised. I love Charles and Ray Eames's lounge chair and the more times I see it in design magazines the less I want one (and the more I consider a chair by Adrian Ferrazzutti instead). Advertising works on mass sales, not on what some people call "objects of desire". Unless you really desire a new pair of Dockers.

There is a lot of fetishism that advertising seeks to build upon but in many cases the effect is paradoxical. A commodity can't be both artificially scarce and advertised. It doesn't work that way.

No matter how successful advertising gets at finding precisely the right product for me, there will always be a reaction. I think of this as the "fuck you mechanism". I have experienced it most frequently in conversations with friends and family. It is also known as "truth hurts syndrome"; someone will tell you something about yourself you know to be both true and something you should have seen for yourself and the response is almost never, "Gee, thanks."

Perhaps I am being arrogant. Maybe the entire world economy can be reduced to the point where people and the products they buy are largely irrelevant compared to the massive power (and value) of the mechanisms that connect the two. But I find William Gibson's "Hubertus Bigend" and his speculations on advertising much more entertaining and informative than those of people who actually understand the technology in play. What makes Bigend both interesting and frightening is he doesn't rely on algorithms and massive data - he goes straight to personal manipulation. He inserts himself (through his agents) into the lives of others in a way big data can't.

Still, advertisers can't take 100% of the profit of selling actual things. So I don't understand how the makers of things will ever be relegated to second-class capitalists. It seems to me like the financial world is, at the moment, more interested in the pronouncements of the Great Oz than they are in the man behind the curtain.

After the 2007 crash, The Onion ran the headline "Enraged Public Demand New Bubble to Invest In". And I think they found it.

"Niceness" as a Quality in Architecture

I have written before about my theory of niceness as it relates to architecture. Briefly stated, niceness (as in "that's nice") is one of the most desirable characteristics architecture can possess. If one makes it a necessary characteristic it voids the possibility of really terrible architecture.

There are big problems with this theory. The biggest is architects. I don't know if it is nature or nurture - whether architecture attracts people who think they are destined to produce the next Fallingwater or if architecture school either creates that expectation or eliminates those without it. I do know the slightest criticism of a new design (particularly a shocking new design) will cause an uproar of trolling - accusations of timidity, Luddism, praise of mediocrity, etc. If you don't celebrate the avant-garde (even if it is terrible) you are ignorant and should A) shut up, and B) get out of the way of progress (beautiful, beautiful progress).

A big part of why niceness seems like a concession to mediocrity is what we consider important architecturally. Look at any magazine and you will see museums, restaurants, private houses, stores, maybe university buildings. What you won't see are public schools, retirement residences, community centres. Why is it we place more value on the design of places we hardly ever go than on those in which we (statistically) spend most of our lives? If you went to a well-designed public school, then to a well-designed college or university with well-designed dorms, then moved to a well-designed apartment (or house or condo) while working in a well-designed office, architecture played a significant role in making your life better. To be clear, I'm not talking about efficiently designed, I'm talking nice. So where are the schools, workplaces, residences and apartments in our architecture magazines?

I interviewed for a job at a company that designs buildings that will never get published in major architecture magazines - not because they are low quality but because they are not flashy. There is no great claim to authorship in the form or materials, no desperate attempt to have an individual genius recognized. They are just well-designed, well-built parts of a community. Not accidentally, the firm specializes in building types that don't grab the spotlight - schools (but not universities), small churches, retirement residences. Parts every community needs but no one thinks about much from an architectural perspective. This is niceness's wheelhouse.

I think it is clear architecture has been focusing on the wrong things. And architects are abetting this mistake when they celebrate fantastic (but otherwise irrelevant) design. If our duty is to society (as I believe it is) we should be focusing on those places where society happens - and I don't mean the playgrounds of the society pages. I mean where regular people live their regular lives.

Peter Zumthor put the town of Vals on the map with architecture. Gehry did the same for Bilbao. This is going to sound like heresy but who cares? By celebrating these achievements (very rare achievements) to the extent we have, by making that level of design and that type of claim of authorship (where everyone can recognize a Gehry or a Libeskind) the only goal worth having, the only level of success we can agree on, we have effectively marginalized ourselves. We are now makers of museums and over-priced restaurants.

Anyone can recognize bad typography. I maintain this is because no one would dream of publishing something without a typographer. So why are so many buildings designed without architects? Are the aesthetics of books and webpages really more important than the aesthetics of the built world? Of course they aren't. Is there something for a typographer equivalent to a major museum for an architect? Kerning is kerning, leading is leading. Please understand I'm not diminishing or making light of typography; I have the utmost respect for the art. And I appreciate how it is approached. There is text to be set, let's set it. I think architects could do a lot worse than (I was going to say "taking a page" but, G*d, I hate puns) following the example of typographers. Really great typography disappears until and unless you look for it.

I dream of a time when the typical quality of our buildings is high enough that even well-designed buildings blend in, and it isn't until someone looks for it that they see, "Hey, this is a really nice building!"

Professional Darts versus Professional Golf

There is only one thing interesting about professional darts competitions - the guys with the microphones. I think these guys possess some of the most under-appreciated skills out there. A typical darts broadcast has three people commentating. The first handles the call. He says things like, "Davis will be trying for a triple eighteen" and "He can still keep the pressure on with a double nine". This guy has one amazing skill - he can do math in his head better than an engineer at NASA. As you may or may not know, the rules in the most common variation of darts state the winning dart must be a double. So if it's your turn and you need 97 points (wait a second, this is going to take me a long time to calculate whereas the professional commentator would have solved the problem in less time than it took me to type "97") you will want the most possible chances to score a double (because doubles are larger in area than trebles). So you will aim for treble 19 (19x3=57, 97-57=40), leaving you two tries at double 20. But if you miss the treble 19 you might get a single 19 or treble 7 or treble 3. The commentator will then solve the resulting math problem almost instantly. So will the player - and this strikes me as something most people don't give professional darts players enough credit for; not only are they good at throwing darts, they are quick with very particular math problems.
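
Out of curiosity, I tried sketching the commentator's arithmetic in Python. This is pure brute force - nothing like whatever shortcut the pros carry in their heads - but it shows the shape of the problem: find throws that add up to the remaining score with the last dart on a double:

```python
# A brute-force checkout finder for "double out" darts. Every scoring
# segment: singles, doubles, and trebles of 1-20, plus the outer (25)
# and inner (50) bull. The inner bull counts as a double.
SINGLES = [(f"S{n}", n) for n in range(1, 21)] + [("Outer Bull", 25)]
DOUBLES = [(f"D{n}", 2 * n) for n in range(1, 21)] + [("Bull", 50)]
TREBLES = [(f"T{n}", 3 * n) for n in range(1, 21)]
ALL = SINGLES + DOUBLES + TREBLES

def checkout(score):
    """Return throws finishing exactly on a double, or None."""
    for name, value in DOUBLES:                    # one dart
        if value == score:
            return [name]
    for n1, v1 in ALL:                             # two darts
        for n2, v2 in DOUBLES:
            if v1 + v2 == score:
                return [n1, n2]
    for n1, v1 in ALL:                             # three darts
        for n2, v2 in ALL:
            for n3, v3 in DOUBLES:
                if v1 + v2 + v3 == score:
                    return [n1, n2, n3]
    return None

print(checkout(97))   # ['T19', 'D20'] - the finish described above
print(checkout(170))  # ['T20', 'T20', 'Bull'] - the biggest possible finish
```

The commentator does this, in his head, between throws.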

The second commentator supplies the colour. This role was best personified by Sid Waddell; click the link for almost ten minutes of his classic bombastic style. He was the uncontested master of the one-liner. People like Waddell are the only reason people like me ever watch darts. I should make it clear I have spent about two hours in my entire life watching darts, but when I do it's because of people like Waddell.

The last member of the commentary team is the man who stands next to the board and announces the results of each player's turn. This seems like an easy job. All you have to do is 1) add very quickly, and 2) provide the precise inflection for each announcement. These guys are mostly famous for their various renditions of "one hundred and eighty!" The most common is "ONE hundred and EIGH-TY!" but some announcers scream it, some work up to a crescendo, some go for a more precise and analytic delivery, equal emphasis on each syllable. But that's the easiest part of their job. They have to use tremendous discretion for every other score. If a player screws up badly and scores 24 (which is easy enough to do since 20 is bordered by 3 and 1) the announcer can't sound disdainful or try to pretend it didn't happen. And what if someone scores 80, which is not a good turn at the professional level? I have to admit, I'm kind of fascinated by these guys (and they are always guys). They have to be interesting but can never be more interesting than the guys who are competing. A difficult task, since professional dart throwers are not known for their charismatic stage presence.

Professional golf, on the other hand, has only one thing going for it - everybody whispers. I have never known why this is. The golfer is lining up a putt so obviously the people watching (the ones who are actually there in person) should be quiet, but the commentators might be somewhere else on the course or in another state. Still, they whisper. This is very good if you want to take a nap. The person who invented the screen saver that keeps changing between pictures of beautiful scenery must have been a golfer. That's what televised golf is to me, a series of beautiful images of places I can't afford to go accompanied by whispering.

I learned to appreciate golf as a younger man, when I still drank. When you wake up with a crushing headache from a hangover you pray will kill you, televised golf is just the thing. Even the cheers are muted. Golf is as good as non-prescription soporifics get. Anything stronger and you risk becoming an addict.

The crowds at golf tournaments and darts tournaments could not be more different. If they met accidentally, there would be a riot - one the darts fans would win easily. Darts (at the professional level) seems to exist only to provide an excuse for organized drunkenness. Going to either type of tournament makes zero sense to me. There is no possible way even the front row at a darts match can see what is happening; even the announcer has to lean in to see where the darts hit and the cameras have zoom lenses on them the size of wedding cakes. I've actually been to a professional golf tournament. When I was a kid my Dad took my brother and me to the Canadian Open and unless you have a front-row seat at one of the greens, you might as well be watching on TV. You can watch the golfer swing but once the ball is hit, it takes off at about 2000 kph. You have no idea where it went, whether it was a good shot or a bad one, so you clap anyway (quietly).

I have to admit no professional sports make any sense to me (except professional wrestling and monster truck rallies). I admire professional athletes for what they are able to do and even more for the dedication it takes to get that good at it. Amongst the professional sports I like golf and darts the best because you can (or at least used to be able to) be a fat drunk and still win. There should be more sports where fat people can still win. I don't mean fat like sumo wrestlers; that's a kind of professional fatness. I mean regular fat. You should be able to show up with your gut hanging out, wearing an old pair of jeans and a Metallica t-shirt, and win a big trophy. I can't think of a single sport where that applies.

Thursday, September 19, 2013

Architecture and Insecurity

Some of you might remember that when Kanye West released his latest album he said Yeezus was inspired by Le Corbusier: "I would go see actual Corbusier homes in real life and just talk about, you know, why did they design it? They did like, the biggest glass panes that had ever been done. Like I say, I'm a minimalist in a rapper's body." Or so West told the New York Times.

This was huge news among architects. The biggest thing since Brad Pitt started hanging out with Frank Gehry. Architects are profoundly insecure - especially when it comes to other artists. Today, I read West wants designer Peter Saville to create a logo for him. You can read about that here. I have two favourite quotes from this article. The first is what Saville says about West:

"He said today he likes great people and wants to put them together and get them to do some great things and get some great people to check the things by these great people and really end up with some great things."

It's possible Saville is mocking West. That's not the way the interview reads as a whole but that sentence is so fucking bizarre it seems like he must be taking the piss. Either that or Kanye really does want great people to do great things (which will be vetted by other great people). And the final product of this procedure will be some great things. It might also be an artifact of translation (Saville is from Manchester). 

The second line that grabbed my attention was Saville's explanation for why he agreed to meet West in the first place, "Someone said to me that he [West] would like to meet you so I thought it would be rude to say I'm not available."

Saville isn't an architect. He's a graphic designer. The feeling that it would be bad form to be anything other than available at a complete stranger's convenience is an artifact of celebrity, not of anything characteristically architectural.

As a mental experiment I swapped Kanye West's name with Kazuyo Sejima's (one half of SANAA and one of the best architects in the world). I don't think a graphic designer in Manchester would think it was rude to be unavailable for her. I don't think anyone would ever trust her with another project if she said the inspiration for her previous project was a Kanye West album.

It would be even stranger trying to imagine her gathering a group of really great people (musicians, fashion designers, graphic designers, etc.) to do some really great things. She and Ryue Nishizawa already did that. It has architects, graphic designers, digital modelers, photographers and others besides. It's called SANAA.

Something I Said I Wouldn't Do

I just spent half an hour 'fixing' the appearance of this blog. I said, in the very first post, that was something I would never do because blog templates are inherently ugly and the only way to get a really nice one is to pay a graphic designer to do it. What prompted the change?

There is a tracker on each of these posts by means of which people can +1 them. I don't know what +1ing something does or whether it is desirable. But a couple of these things got +1ed. One even got +2ed. I knew the site was getting visited because the counter told me so but I assumed it was by data miners scraping every bit they could for the purpose of creating a more accurate digital model of middle-aged males. It really hadn't occurred to me that real people were reading this stuff. Now I believe at least two people have read at least two posts. So I'm throwing the welcome mat down.

On Redundancy

I'm reading books about the advances in technology (current and expected) and it leads to some questions almost as frightening as those caused by NTE. Imagine a world in which every single remunerative activity, everything that constitutes a skill or task for which one would currently be employed, can be done faster, cheaper, and better by a machine. It struck me there will inevitably be some holdouts - poetry, music, fine crafts, design. For some reason I think the safest jobs belong to those on Savile Row, although it isn't hard to imagine a combination of software and hardware that can produce clothes of a higher quality than any human. Cobblers also seemed safe for a while. I have immense respect for cobblers. Here, for example, is the website for Gaziano Girling and Alfred Sargent, who make bespoke shoes and boots that are at once extraordinarily lovely and expensive. Still, if a robot can make a fine suit, a slightly different one should be able to make equally fine shoes.

But indulge your imagination for a moment and allow yourself to conceive of a world in which humans can contribute nothing of material value. Our technology creates music, poetry, novels, regulates our economy, allocates material wealth to everyone, leaves nothing that requires human participation. People are not even necessary for the maintenance or construction of the machines. Each generation of machines is capable of both repairing itself and creating the next, improved, generation. This situation will almost certainly never come to pass - there will always be at least a few things requiring human participation - but imagine that leaves 99.9% of us with nothing useful to do. It raises an interesting question - what can human beings only do for themselves? What cannot be simulated, replicated, or performed for us?

The answer that leapt to my mind was sex. Robots might be able to handle human reproduction better than we can; as it currently stands, human reproduction is a haphazard and inefficient process but there are substantial rewards for trying. And those rewards bring up that other kind of sex - recreational. That is something robots might be able to do for us, either through physical manipulation or virtual reality, but there will always be something about the real thing. At least, I hope there will be.

There are other behaviours tied inextricably to our physiology that will not be replicable - we might be able to get our nutrition some other way than eating but drinking has provided pleasure for thousands of years and can, perhaps, be relied upon to continue to do so. Still, not a lot to hold on to philosophically speaking. I get drunk, therefore I am. The real value of humans, if we are to be something more than entirely solipsistic, is in our interactions with others. There are currently many projects (and many very bright people) working on machines that are sophisticated enough to convince us of their reality. I believe there is also a Spike Jonze movie on the topic.

Computer scientists and other technologists are relying on the seemingly endless human capacity for invention to prevent their own irrelevance. Still, if a machine is able to convince us it is real (a successful human surrogate) and we've already stipulated robots capable of improving themselves, where is human invention still required? I, for one, am not ready for a world where the only humans who live useful lives are fashion designers. I use the example of fashion designers rather than, say, musicians, because music operates according to a constantly evolving structure of rules - one that might actually be better comprehended by a machine than by a person. What machines aren't good at, and might never be good at, is arbitrariness. But if I press on with the fundamental question I must allow that even fashion designers will ultimately be replaced by software. And the question remains, what do we do when we have nothing to do? Or is it, what do we do when we have nothing useful to do? We couldn't possibly be relegated to lives of endless vacations. Suicides would sky-rocket.

This scenario forces me to an unexpected conclusion: the highest form of human activity is labour. I have long assumed it was either art or causing social change of such magnitude and inventiveness that it resembles art. Either the creation of new objects, ideas, or worlds. Yet it seems we could live without these things - or, more accurately, if another agent stepped in to perform these tasks for us (assuming such a thing might one day be possible) our society would change but our ontology would remain. But if labour is taken from us, both our society and our ontology become unrecognizable. I have inadvertently proven, to my own satisfaction, an axiom I used to consider dubious: "There is dignity in labour." That is, assuming there is dignity in humanity, something I am willing to grant.

Wednesday, September 18, 2013

On TED

I really don't like TED and am deeply conflicted about it. I love the premise: really smart people (the best in their respective fields) giving short lectures that can be distributed electronically and introduce literally millions of people to people and topics they had no idea were so interesting.

I've spent a lot of time trying to figure out precisely what it is about this that bugs me. At first I thought it was the duration of the talks. If you look for Noam Chomsky on Youtube you'll see videos featuring him are rarely shorter than an hour. He will sometimes make television appearances where he has about half an hour, but less time than that is insufficient for him to do anything more than spout aphorisms and build his personality cult.

Chomsky is a wonderful and rare figure who has built a personality cult by doing all the things you are not supposed to do and none of the things you are. I love that grumpy old man. Anyway...

I am currently reading Jaron Lanier's books and one thing he makes abundantly clear - and he would be in a position to know - is the internet tends inexorably towards monopolies. This makes intuitive sense. Most of the internet is massively complex and this complexity is daunting. For people like me (and I think that is a large demographic) both time and a fundamental inability to evaluate for myself are factors. When I want an app that does a particular task I don't set about evaluating the options for myself, I ask people which is the best (or the simplest). And the most popular answer is the one I go with. This is especially relevant for those apps that involve sharing (as most do). If my app can't communicate with yours it isn't much good to me. Even with apps and sites that don't involve two-way (or many-way) sharing, once a site establishes a slight dominance it will generally progress into a massively dominant position. I think TED is at this threshold right now. I used to have several outlets that served a similar function - NPR, the Canadian equivalent of NPR, Big Ideas (a provincial TV program that also broadcast on the internet), and others - but they have all either disappeared or radically changed their formats.
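
That snowball mechanism is simple enough to fake with a toy simulation. This is my own illustration, not anything from Lanier's book: give one service a modest head start, let most newcomers follow the crowd, and watch the split run away:

```python
# A toy model of internet lock-in: most newcomers join whichever
# service is currently bigger (they ask around and go with the most
# popular answer), so a modest head start turns into dominance.
import random

def simulate(a=60, b=40, newcomers=100_000, follow=0.9, seed=1):
    """Return final user counts for two competing services."""
    rng = random.Random(seed)
    for _ in range(newcomers):
        join_bigger = rng.random() < follow  # 90% follow the crowd
        if (a >= b) == join_bigger:
            a += 1
        else:
            b += 1
    return a, b

print(simulate())  # a 60/40 split ends up roughly 90/10
```

The point isn't the numbers, which I invented; it's that the early leader, not the better product, collects the market.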

Most forms of academic expression have a quality-control system that has been carefully constructed to maximize the quality of the expression. In the case of journals (the most popular form of expression) this is done through peer review. A potential article is submitted for review to a group of the author's peers. This is one case where anonymity is beneficial and typically preserved - the author does not know who the reviewers are. So politics, friendship, social pressures have no impact on the outcome. The work either meets the criteria for publication or it doesn't. The system was constructed to prevent natural, human flaws from colouring the outcomes. Now, because the reach of TED is orders of magnitude greater than the reach of peer-reviewed academic journals, our intellectuals are being selected (curated might be a better word) based on a system that is opaque but depends, with something approaching certainty, on things like popularity, the ability to act like you aren't on camera, and the ability to reduce your work to between ten and twenty minutes.

My situation also reveals something about the current state of "learning" (and why it earned scare quotes). New ideas should be frightening. The bigger the idea, the more frightening it should be. We each have a constructed world that includes the amount of uncertainty we can handle; part of the reason for universities and other forums for education is to shake your constructed world apart and force you to create a better one - one that is more faithful to the state of things. I recently wrote a couple of posts on NTE (Near Term Extinction) but didn't delve into it because that idea is about as scary as ideas get. I have deliberately ignored the merits of the argument simply because it is too unpleasant to contemplate. This is a luxury education should not give us. The Civil Rights Movement scared the hell out of a whole lot of people because it was such a radical change. So it isn't just theories that scream "You are about to die!" that frighten people - anything sufficiently new and radical will do it. TED will not and cannot present these kinds of ideas. It is ultimately entertainment. And, for that reason, it must confine itself to things people already know or want to believe.

On Cities

At some point in human history we stopped living in groups that were exclusively based on family or tribe and started living in larger conglomerations. My experience in academia has been that no one seriously questions why this happened. Off-the-cuff answers have been given from time to time - the most common being that larger groups of people generate economies of scale, so things become cheaper and economies more efficient. To me, the patently obvious fact that no one knew anything about economics when cities started forming (in the case of Rome, which I know best, we are talking sometime between 750 and 600 BCE) makes this answer less than convincing. The second most popular answer is people formed larger groups out of the necessity for protection. A large tribe will defeat a small tribe if a conflict cannot be resolved by any means other than collective violence.

I think this is a misreading of history. Early historians wrote about wars (often to the exclusion of almost everything else) because they were recording events that seemed important. And since wars are large affairs, involving big groups of people, they often seem more important than they are. I think there is a tendency to assume the ancient world (particularly the pre-urban world) was more violent than it actually was. There are numerous examples of peoples who have maintained a traditional way of life up to the present day, and one thing that is frequently remarked upon is that among these peoples violence is rare - organized violence even rarer. Amongst ancient civilizations war was often a matter of stylized and ritualized display rather than physical violence.

Historians encountering new cultures bring their assumptions with them, and in Western society one of those assumptions is that politics prevents wars. Societies whose political systems historians and explorers (they are often one and the same) cannot understand are assumed to be more violent.

These are just some of my thoughts on the subject of pre-urban violence. I'm not an anthropologist. I can't marshal empirical evidence or academic work to support this thesis. My point is this - I'm not convinced people chose larger (urban) collectives as protection from predation. In fact, I think the opposite might be true.

Michel Serres was the first person I read who discussed just how violent Livy's history of the foundation of Rome is. It can be read as a collection of lynchings. It's easier to relate them chronologically than in the order they appear in the history. Rhea Silvia (Romulus and Remus's mother) is raped by Mars and gives birth to twins who are sent off to be murdered. She is then buried alive as punishment for being raped (lynching #1). Romulus and Remus grow up, form a gang, learn their great-uncle usurped their grandfather's throne and convince the people of the town to kill the usurper and return their grandfather to his rightful place (lynching #2). But the twins have bigger plans than some Podunk little town in central Italy so they go off to found their own city. Then the famous scene with the wall happens and Remus gets killed. This is usually attributed to Romulus but in Livy's telling "in a storm of blows Remus was felled" (lynching #3). Then the Romans get in a fight with their neighbours, the Sabines. Tarpeia tells the Sabines how to sneak into the Roman encampment and is murdered for her trouble (lynching #4). Finally, at a military review, while sitting amongst the dignitaries of his new city, Romulus is snatched up by a tornado and ascends to heaven. Or, and this is the version I find more believable, the dignitaries murder Romulus (lynching #5).

Livy seems to be indicating the city is not only a violent place but that violence is essential to its creation. I think this is true. At least metaphorically. The history of Saskatoon, Saskatchewan is not marked by serial lynchings. The metaphor indicates either a form of violence or the possibility of it. Cities are where violence makes sense. I think I wrote this before but you never hear of a group of people being murdered in a corn field. Or, if you do, it's national news because it's so weird. Murders happen in cities all the time. They only cause a fuss if the killer was a police officer or the victim was so obviously innocent the crime makes no sense. And that qualification, obvious innocence, is important. It isn't that there is an assumption of guilt for all other victims. No one thinks, "well, he had it coming." The acceptance of murder is based on its location.

"A man died early this morning from stab wounds. Police say he was attacked at roughly 3:00 a.m. in his apartment. This is the nth homicide this year in Toronto, up 2% from last year." That's the news. Unless the killer turns out to be someone unexpected - a priest or a politician - or unless the victim is someone society believes ought be immune from violence - a kid or a person with a disability - that's all that needs to be said.

I think one of the major reasons cities started to form as soon as our population got big enough to support them was to take violence out of the family or tribe. We gathered ourselves into collectives so we could kill people to whom we owed no ritual or religious obligation. I'm not saying we approve of killing, or that at any point someone said, "Hey! If we form a larger group we can all kill people!" I'm saying violence, even violence unto death, is a component of humanity. Killing is something we do. Or better said, something a very small percentage of us can be relied on to do with some regularity. And the formation of larger groups allowed for that aspect of human nature with the least disturbance to the political whole.

There is another type of violence (besides the one-on-one murder scenarios I've been discussing above) that is a predictable aspect of human nature: we form crowds that turn into mobs and act violently. This is rarer than simple murder but potentially far more dangerous. One thing it has in common with murder is that it only makes sense in cities. Mobs form where people are. Contrary to what television suggests, mobs are infinitely less likely to form in small towns than in big cities. It is a common trope in movies and television that outsiders show up in a small town and wind up facing a lynch mob. In the United States, organized lynchings (associated with the Southern and Mid-Western states) are statistically insignificant compared to the "random" crowd violence in cities. Detroit, Chicago, Los Angeles (repeatedly), New York, Washington, Miami, Atlanta, Seattle, Philadelphia, Toledo, and many more cities have all been the scenes of riots in the last half century. The most serious were political, but other causes have included sporting and music events, campus disputes, and celebrations that turned into something else.

I believe the human predilection for violence has been an important, and unrecognized, factor in our urban development. At times I am tempted to suggest it is the reason for urban development.  


Tuesday, September 17, 2013

On Jaron Lanier's Brand

I just finished reading You Are Not A Gadget by Jaron Lanier. It's a summation of ideas that took Lanier about two decades to form, so I can't comment on it, or even summarize it, for a little while. One thing that struck me is that Lanier is probably the most recognizable commentator on digital culture (and on the overlap of digital culture and culture more generally). Strange that this realization should come from reading a book, because it concerns his appearance - his personal equivalent of a logo. Some people are extremely changeable in their appearance. Think of David Bowie's first decade. Lanier, like another favourite of this blog, Slavoj Zizek, is remarkably constant.

About ten years ago, Lanier provided a metaphor for the digital landscape. It is, he said, like a new continent. The first people to go there are the explorers. They are (and here he used himself as an example) funny looking or unkempt, maybe not that great in social situations, unconcerned with fashion. The next group to arrive are the settlers. Their goal is to make the new place look as much as possible like the place they came from. So the result (in this context) is stores showing up where there had been none previously, systems of navigation (akin to streets) where previously there had only been paths accessible to those who knew the terrain. But it is the first generation of settlers who produce the first generation of natives and those kids have an experience no one else has ever had - they grew up in this new place and, consequently, know it in a deep and personal way not even the explorers can match.

This metaphor makes sense and appeals to our desire to understand the digital realm through analogies to the physical one. But if it were literally true, Lanier would have disappeared from the scene long ago. He would be as obsolete as the computer systems he was working with 20 years ago. That he isn't is due to his astounding intelligence, creativity, ability to become fascinated with new projects, and (I maintain) the instantly recognizable brand he has cultivated. Jaron Lanier is not just a musician, a computer scientist and theorist, an engineer, and a social commentator. He is an imago of the virtual explorer. That is the cornerstone of his brand.

Lanier is a brilliant man. You Are Not A Gadget is a very good book - the prose is not especially great, but he is writing exclusively from his wheelhouse. No one has given this material more thought (or thought about it more creatively) than Lanier. At least, no one so instantly recognizable. But as I was reading it, the question that kept popping into my head was, "Would I be reading this if Lanier were a skinny guy without dreadlocks?" The question got so obtrusive at times I had to put the book down and take a minute to reconsider what I had just read - evaluating the ideas separately from my image of the man.

Here are two pictures. One is Jaron Lanier. The other is a man named Pete Lee, about whom I know precisely nothing. I did a Google Image Search for "boring guy" and his face showed up. Sorry, Mr. Lee - I don't know why Google thinks you are boring, and I assure you this is nothing personal.

Regardless of its virtues, I never would have purchased or read You Are Not A Gadget if it had been written by the guy in the bottom picture. And neither would you, I suspect.

That Lanier is overweight is meaningless in itself. As is the fact he has dreads. But together, and with the classically Lanieresque shaggy beard and t-shirt, they constitute a kind of logo, a stable iconography that has accompanied every clever thing I have ever heard him say, every fawning introduction by an interviewer, every appearance on TED or Charlie Rose or some other socially sanctioned venue where smart people are curated from amongst the rest of us.

I don't mind Lanier cultivating a brand. I admire him for staying at the forefront of an industry so blatantly predisposed to youth for so long. What I do mind is the subtle manipulation branding exercises on my thinking. As Lanier goes to great lengths to point out in the book, people do not think like machines. We have very little idea how people think. Scientists are getting good at figuring out certain aspects of brain activity, but this is a classic case of the thing being greater than the sum of its parts. Many millions of dollars have been spent figuring out how to make people buy things in extremely subtle ways. The people spending that money don't care how it works; they care whether it works.

You Are Not A Gadget deserves more than some cursory observations about the fact that Lanier is a brand as recognizable as most of the internet projects he writes about. And I will get down to that - probably after I read Who Owns the Future? But at the moment I am having difficulty overcoming my own assumptions and prejudices about the author's brand. I don't know which impressions are genuinely mine and which are the product of expectations. How does my expectation of Lanierism change my reading of the book? I suspect it will take a while for the fact that it is Lanier's to wear off. In the interim, it does bring up some intriguing ideas I will want to address.

On Getting My Hair Cut

Haircuts used to be my single biggest guilty pleasure. I never would have admitted it because I would have then been called upon to explain why. The truth is I used to be a handsome man. This is pure vanity but I do have something approximating empirical proof. A few years ago I was trying to convince my roommates (more than a decade younger than me) that when I was their age I had very long hair. They refused to accept my word for it and demanded photographic evidence. I tried to explain to them that they were referring to an era they did not understand - when photographs were made exclusively by exposing a material coated in a "film" of silver nitrate (or something) to light for a very short period of time, and the only way to see the resulting image was by having it "printed" by shining a light through the exposed and chemically treated film onto another chemically coated medium. They refused to be put off. When I found an image of myself with long hair I produced it triumphantly, "How dare you doubt my word!" But before she could help herself one of my roommates exclaimed, "Oh Sean, what happened to you?!? You used to be so pretty!" The answer, I suppose, was twenty years happened.

In the interests of full disclosure, I was only pretty from the neck up, and then only from certain (very specific) angles and under (very specific) lighting conditions. I have a head like a concrete block. A friend once drew a caricature of me by sketching a concrete-block-like shape (standing on the thin end), rounding the corners very slightly, and adding a nose. It was surprisingly convincing. I'm not pretty at all now. If I grew my beard out (and had someone to sculpt it on a daily basis) I would look a lot like Nicholas II, the last Tsar of Russia. Still, people would only notice the resemblance if I dressed like I belonged on the cover of Sgt. Pepper.

Getting a haircut used to be fun. I would wait until I was so scruffy my friends started looking at me funny and then go sit in the chair, have that piece of paper wrapped around my neck, the sheet spread over me and stare into the mirror while the slovenly, unkempt me disappeared and was replaced by a fresh new me.

Now trips to the barber are a sophisticated way of measuring just how middle-aged I am. I don't remember the first time a barber had to trim my eyebrows. The men in my family have pronounced eyebrows so I didn't think much of it. I do remember the first time a barber had to trim my ears. It started off as a quick buzz with the trimmers. These days it involves scissors, trimmers, and a straight-razor. Not less than four minutes per ear. I timed it. He also used the straight-razor on my forehead and the bridge of my nose - both of those were firsts.

I don't mind the big events that tell me I am, like everyone else, aging. I read somewhere that a man is an adult the first time he goes a week without masturbating and doesn't notice. That was a long time ago. My brother once posited that you are not an adult as long as you consider "getting drunk" a thing to do in and of itself, without provocation or social justification. Well, I don't drink anymore so that one doesn't apply. Women (for whom the social and psychological consequences of aging are so much more severe than for men) have that dreaded moment when someone calls them "Ma'am" for the first time. Men go from "kid" to "Mister" and it doesn't really mean much.

Maybe the big events have just been kinder to me than to others. I don't know. It is the little, cumulative events that force me into contemplations of my own mortality. Like a barber trying to trim my nose hair with a straight-razor. Still, why it should bother me is something of a mystery.

I already know I'm middle-aged. I have gotten used to my friends from high school posting pictures of their kids on Facebook. I'm getting used to people I went to architecture school with posting pictures of their weddings. It doesn't bother me - except in that it takes up valuable space that could otherwise be used for interesting links to the collected detritus of the internet. I suppose it's a form of grieving - for something I can no longer enjoy. And the vision of the future it suggests, when I've made the transition from "not pretty" to "old and hairy", is difficult to avoid and impossible to enjoy.

Saturday, September 14, 2013

On Sherlock Holmes Reboots

I'm going to keep the list fairly short. There were Sherlock Holmes (2009) and Sherlock Holmes: A Game of Shadows (2011), Guy Ritchie films starring Robert Downey Jr. and Jude Law; the British television series Sherlock (2010), starring Benedict Cumberbatch; and Elementary (2012), an American television series starring Jonny Lee Miller and Lucy Liu. Those are the English language works that leap to mind. There was also Detective Dee: Mystery of the Phantom Flame (2010), directed by Hark Tsui and starring Tony Leung Ka Fai. Detective Dee is the only one not to explicitly reference Sherlock Holmes, but I think this has less to do with a desire on the part of the writer and director to distance themselves from the body of Conan Doyle's work than with the difficulties Mandarin speakers would have pronouncing the r-l combination in "Sherlock". In a side note, I have often wondered why so many peoples are saddled with names they cannot pronounce themselves. The Scottish are "Sco-itch", the Irish are "Oirsh", and I had a professor of Oriental Philosophy who could only manage "Hor-rendl". Perhaps this accounts for my earlier question about why Asian is now the preferred term. Back to the Holmes question. I'm not going to say much more about Detective Dee but it is a very entertaining movie, well worth watching.

I am anxiously awaiting the third season of Sherlock. Cumberbatch's is the least civil, most arrogant, least likable of all the reboot incarnations. All of the recent English language Holmeses have a disarming arrogance and a compulsive need to solve problems, but Miller's and Downey's versions are made to seem more human - they are given emotions we can relate to. They feel things we can understand and react accordingly. Cumberbatch's Holmes has a few moments (most notably in Season Two's The Hounds of Baskerville) in which he struggles to express something like friendship for Watson (and fails miserably), but his motives are the most obscure. And, of the three, the visual style of Sherlock appeals most to me. Elementary has less sophisticated crimes than Sherlock but it also has Lucy Liu. Given the choice between looking at Liu or looking at Martin Freeman, I'm not going to strain myself pondering. Liu, it should also be said, makes Watson a more complete and complex character than either Law or Freeman, but I am biased for the reason stated in the previous sentence.

The obvious question presenting itself is why no fewer than four reboots in as many years. What Holmes, in any of his incarnations, does is create the semblance of omniscience by careful observation and deductive reasoning. At least that's how the writers explain it. He is a hero, and narratively (or dramaturgically) more satisfying the less he acts like one. This is why Cumberbatch is my favourite Holmes; he's the rudest, least sensitive, most egotistical, and least likable. The sense in which he is a hero is more like Achilles than, say, Martin Luther King Jr. He is not a great person. He is a hero because he is capable of things beyond mere humans. His heroic gift is understanding.

The Hulk is strong, Spiderman shoots webs, Wolverine regenerates, Sherlock Holmes figures things out. Not people, of course; they are completely beyond him. All heroes need a tragic flaw, and the inability to bridge the gap between himself and everyone else is Holmes'. He is capable of understanding because he exists in a world where cause and effect follow each other in a completely rational manner. One thing all of the reboots have in common is that there are no crimes of either passion or stupidity. Even in those cases where irrational motives cause the crime (hate, jealousy, envy, etc.) the crimes themselves are meticulously planned. Sherlock Holmes' world operates according to rules and there are never exceptions. This is what makes the Holmes stories so satisfying (or more accurately, reassuring).

We can choose to believe the world we occupy functions like the world Holmes occupies - that things happen for reasons and the whole is ultimately comprehensible. The first of those choices (the belief in cause and effect) might be correct; I don't believe it entirely but it is a useful belief in many cases. The second is manifestly incorrect. The world, in its entirety, is not comprehensible. Not even those parts created by and for humans can be understood. Complexity, on a massive scale and with all its emergent properties, makes this an impossibility. Holmes' real super-power is the ability to reassure us that someone out there understands what's going on.

Look at the release dates again. The earliest is 2009. Given how long it takes to get from the first idea to the finished project, Ritchie's Holmes probably started to seem like a good idea around the same time the world's financial system started falling apart. In other words, about the same time some of the smartest and most knowledgeable people in the world had to admit even they had no clue how things really worked or what was causing them to stop working.

I know I have a fixation with the financial collapse. I relate it to our fascination with zombies, to various tropes in architecture, and now to Sherlock Holmes reboots. I think it marks an extremely important moment in our history. Previously the only proof we had that the world was ultimately incomprehensible was provided by mathematicians and physicists - the practical definition of "people who can easily be ignored". In the last five years we have been forced to decide whether the financial systems that run our societies are controlled by wicked people who wish us harm or by people who are basically well-intentioned but are the metaphorical equivalent of monkeys trying to fix a space shuttle. I vacillate. Most days I prefer to believe the people "in charge" (to the extent one can be in charge of something one doesn't understand and can't control) are not especially good people, but that their incompetence causes more damage than their ill-will.

And it isn't just our financial system that defies understanding. In my post on Piblokto Madness I agreed with (the hypothetical opinion I put into the mouth of Haruki) Murakami - our entire existence is staggering complexity and continuous randomness combined in a world we can nevertheless navigate. Holmes, the hero, is the guide to this world. Nothing is too complex and randomness is a non-factor.

Friday, September 13, 2013

More on Colleges and Money

In an earlier post, On Money and the Lack of Same, I wrote about the wonderful collection of material on Jeremy Bentham - utilitarian philosopher, jurist, and all-round smart guy - held at University College London. I also wrote that the reason I never pursued the issue was the savagely expensive tuition at UCL. Interesting side note: UCL also made an appearance in the posts about the Carbuncle Cup. I'm not out to get them and have no personal grievance. Sometimes coincidences are just that.

So, lacking the resources to go to London and study the Bentham material firsthand, I thought I would buy the books produced by UCL's team of Bentham scholars and follow along in a spectatorly kind of way. Then I checked the price. You can see for yourself here. The list price is $525 for a book 512 pages long. Books printed by scholarly presses are almost always over-priced - the people who buy them are people who have to have them for professional reasons and, in this case, me, if I had $525 (reduced to $420 if I act now). You might also have noted this is volume eleven! So scholars who want to keep up with all the latest in Bentham studies are out roughly $6k. And that is just for the correspondence. There are many other volumes on different topics, all at the same (seemingly insane) price. Keeping up with the Bartletts will cost not less than $10k.

I don't get it. For me, books come in three price categories. 1) $5-20: mass market novels and things I crush through in a day or two. 2) $50-150: books I need for reference material, or collectibles. 3) $200-300: books I shouldn't buy but can't resist because it's 4 a.m. and I haven't slept in two days and I need (and deserve) a reward. I almost inevitably regret type 3 books (but keep them anyway, in a special box marked "Rare and Valuable"). The most I ever paid for a book was $350, for the complete works of Piranesi. I have lusted after books that cost more than $350, in some cases much much more. An ultra-rare first edition of Gibbon's Decline and Fall of the Roman Empire, a complete set of signed firsts of Cormac McCarthy's Border Trilogy (two-colour, first print), etc. But I can't bring myself to buy them. If I ever bite and scratch my way into the middle class I will spend a disproportionate amount of my income on books, but I still won't pay $500 for a book published less than a decade ago.

I suppose what really bothers me is the possibility (or probability) that UCL students are forced to buy these books. There is only so long universities can keep piling insult on insult on insult on injury. Enormous tuition plus exorbitant rent to live in a building certified as the ugliest in Britain plus booklists that run into the thousands. I have, in the past, argued against university reform on the grounds that the institutions provide a stabilizing influence in societies where the pace of change is already too fast. But sometimes I think university reform is happening constantly, and the only way to exercise any control over the process is to take overt control of it. I don't mean I should do this. I would if anyone were willing to let me, but I can't imagine even a single person thinking that is a good idea. I mean society, the stake-holders, taking control to prevent the evolution (or, more accurately, unintelligent design) of our universities into something we no longer recognize.

On Rendering 2 - Arthur Erickson Chimes In

I recently wrote a quick piece on rendering and its implications for architecture (particularly competitions and such). In that piece I bragged about being smart one day a week. It turns out, and this might not come as a shock, that Arthur Erickson was smarter than I am. Or perhaps he spent more than one week writing a lecture he delivered at McGill University.

Let me find a link. What he notes about the practical and theoretical problems with using photography to document and communicate ideas about architecture is something I missed completely. You can only photograph positive objects - things that have a physical existence. You can take a picture of a wall but you can't take one of the space it delineates. So what architectural photography ends up being is a catalogue of floors, walls, ceilings, and (most particularly) stairs. Architects and photographers love stairs. There are whole books dedicated to stairs. I was going to add a whole bunch of links but it's too depressing. The reason both architects and photographers love stairs is that they are the only sculptural objects in most buildings. Space, on the other hand, is neither sculptural nor an object. Yet it is absolutely essential to architecture.

The inability to photograph what isn't there goes a long way to explaining why certain architects have exploded in popularity in the last couple decades. It is surface that makes good pictures, so the most interesting surfaces get photographed most frequently, and consequently published most frequently. I guess this is yet another symptom of the Bilbao Effect. Significantly, I'm not saying architects who produce buildings that photograph well (Gehry, Calatrava, etc.) are producing bad buildings. I'm saying the fact they photograph well doesn't enable one to judge them. I've seen a million pictures of Bilbao and the only reason I think it is probably a very great building is that people I know and trust have been there and told me it is.

To be clear, no matter what grandiose claims architects might make, architecture does not create space. What it can do and, at its best, does very well, is something like defining space. Define isn't the right word - what I am looking for is a word that captures precisely what a haiku does to (or for) its subject. I wanted to find some pictures of Erickson buildings to illustrate what I'm on about but I don't know his work that well. Instead, I'll choose an extremely famous building - Mies van der Rohe's Farnsworth House.
Almost all pictures of Farnsworth are taken from the outside. I think that's because there isn't much inside to photograph. Here's an example. Despite the complete lack of walls, the photographer can't help documenting what is inside the building.
It's almost as if the view through the enormous windows were photo-realistic wallpaper. This is documentation of "what there is" despite the fact that the whole point of the building is that it's transparent. I found another image online - I have no idea who is in it, and for that I apologize, because it's a really great shot.
This might not be a very good photo - it's certainly a terrible one in terms of documenting the building. But the person behind the camera actually gets the point of the design. He doesn't even put the person in the centre of the frame. The house is yelling at this guy, "Look where you are!!!" He doesn't care about the Barcelona Chair or the travertine tiles. He's taking a picture of the river through the trees. I love it.

Since I bailed out of trying to explain architecture's relationship with space by referencing haiku (and since "borrowed views" are an established technique in Japanese architecture) I should show you my favourite example of "created space" - Tadao Ando's Church on the Water:
I wish I could credit the photographer - I found the image on Ando's Pritzker webpage. There is almost nothing here to photograph - in the sense of positive objects - just a floor and a ceiling. But what this image captures better than any other I've seen is the sense of precariousness created by the space. That edge is so unnaturally severe the result is a palpable tension. It isn't unpleasant (and I certainly don't think it would be an unpleasant space to be in) but you wouldn't fall asleep during the service.

Ando has an unmatched hand at creating non-things that can be photographed. For example, the Koshino House:
Again, barely anything actually there. Floor, walls, ceiling, and two super-puffy sofas. But the light not only illuminates and charges the space, it forces it into your awareness the same way a poem can charge ordinary objects with special meaning. The extreme contrast between the light wall and the surrounding darkness also forces an awareness of the space of the room.

Since I've tried to give actual illustrations for my points, I am forced to admit this isn't the best argument against the influence of rendering in architecture. A talented renderer could have created all these images (except the candid of the guy in the chair). I can only hope the points the illustrations support clarify my argument (which, it bears remembering, is actually Arthur Erickson's argument).

Eating with my Hands (and Wearing Denim)

It is generally considered a faux pas in civil society to eat with one's hands. That's what cutlery is for. Sitting in the food court (Urban Eatery, to use the marketing department's name for it) at the Eaton Centre, I was struck by the great diversity of diners, from all cultures and ethnicities, joined in the process of evolutionary back-sliding.

We judge our level of civilization in very odd ways. It is strictly forbidden to smoke within 9 meters of the entrance to any public building. I wonder who came up with that distance and whether it was the subject of intense negotiations. 9 meters is 29'6" - a number with no significance whatever. It isn't a half chain (33'), or two rods (also 33'), or 39 spans (29'3"), or any other division of any unit of measurement I can think of. But it is a measure of how civilized we are. We don't dump our refuse on the streets, we don't spit indoors, and we don't smoke anywhere we like. But we do eat with our hands.

I can hardly remember the last meal I ate that required cutlery. This is mostly attributable to my unhealthy diet rather than to a complete lack of social grace. I guess it just seems strange to me that the desk I used in the Physics and Astronomy building at the University of Western Ontario had two indentations in the top - one for an inkwell and one for an ashtray. The first students to use those desks wore suits to class, smoked continuously, and wouldn't dream of eating with their hands.

We have, as a society, made trade-offs in the last half century. Convenience in return for - what? I'm not bemoaning the availability of an incredible assortment of food (all of it at approximately $10 per meal) I can get in less than ten minutes, eat with my hands, and walk away from. This is one thing I wouldn't care to give up. But I would like to know precisely what it was I traded for this convenience.

I'm not going to suggest the trade was one for one. It was a lot of things all combined, some gained, some lost. And the gains weren't necessarily gains, nor the losses necessarily losses. Similarly, what seemed like a gain might have been a loss in the long term and vice versa. But here is where denim enters the equation.

If you look at a picture of any public gathering 50 years ago (a parade, for instance) you will see the men all wearing jackets (and most wearing ties) and the women all wearing dresses (and most wearing hats). In a picture from a parade taken this decade, you will see guys in t-shirts, women in sweatpants with slogans across the ass, and so on.

The terms blue and white collar might not seem to have much relevance these days. Financial workers and lawyers are white collar - everyone else is something else. The availability and stability of blue collar work simply do not exist anymore. The categories are broken, largely irrelevant. But I think it is worth considering what they once meant - what they originally meant. A blue collar worker wore a blue collar at work. Everywhere else he dressed in a shirt and jacket - same as a white collar worker.

We gave that away at some point. More accurately, we won the right to surrender it incrementally. My sister-in-law's grandfather doesn't come to family events much anymore. He is old and has reached the point where infirm might be a better description. He worked in agriculture - I forget whether he owned a farm or sold farm equipment, but either way he was about as blue collar as you can get. Yet he always wore a shirt and tie, typically under a sweater, to family gatherings. His clothes always fit and he was always tidy. He wasn't a dandy; no one would dream of accusing him of being a fancy boy. He just looked dignified. It wasn't that he took pride in his appearance so much as he maintained a very high minimum standard for himself. He wore the most comfortable clothes he could while dressing in a way he considered appropriate.

That is what I see when I look at pre-denim era pictures. The people captured in them are not slaves to fashion. They are not automatons afraid to express their individuality. They are people who would consider going out in public without a certain level of formality in their clothing undignified - their standards are higher. Somewhere along the way, in our quest for (what? convenience? self-expression?) we have surrendered the opportunity to assert the dignity of equality, of community. We have lowered our standards for ourselves and consider it a victory.

Many people would say the trade-off was actually the freedom from the expectation of conformity (and all the repercussions should that expectation fail to be met). I disagree. Today, jeans and a t-shirt have replaced the jacket and tie of a half century ago. You can go to almost any public event in jeans and a t-shirt and not draw attention to yourself. Go wearing anything else and you stand out. The pressure to conform hasn't gone anywhere; all that's changed is the standard. Given the choice, I would rather conform to a standard that allows me to look dignified.

There are so many ways society conspires to strip us of our dignity. Those ways of preserving it that remain to us are worth fighting for. So I wear a shirt and jacket almost everywhere now. Certainly everywhere I expect to speak to people. I'll go to the store in jeans and a sweatshirt but not many other places. To be clear, I don't dress particularly well. I can't afford it. Good suits are extremely expensive. The effort I make is not in the hope I will be mistaken for someone with style - it is with the expectation I will be recognized as someone who will stand up for himself and not allow his dignity to be stripped from him. At least, not without a fight.

Wednesday, September 11, 2013

On Rendering

A couple days ago I was informed that a group of students from my alma mater had swept the podium in a design competition. This is less prestigious than it sounds. Some professors deliberately assign projects that precisely meet the criteria for submissions to a particular competition and then encourage their students to enter. I see nothing wrong with this strategy - it gives the students something impressive to put on a resume, it brings prestige (of a limited sort) to the school, and it makes the professor look good. Three birds, one stone. Efficient. In fact, the only thing I have against it is its efficiency. Anyway, I looked through the entries to see for myself what it takes to win competitions these days and, while flipping through (or paging through, whatever the correct internet terminology is), I immediately spotted the winner. I was looking at the entries page, not the results, but I had absolutely no doubt. One project practically leapt off the screen screaming "Professional!"

What made this project so spectacular was the quality of the panels - everything about them was top notch: the composition, the drawings, and, most importantly, the renderings. I don't know what software the winning team used (my guess is 3ds Max with an add-on rendering tool like V-Ray) but the results were spectacular. The effect was like finding an image by a professional photographer mixed in with a bunch of cell-phone pics taken by (slightly drunk) people. That is kind of demeaning to the other competitors but, trust me, the difference really was that dramatic.

I pride myself on being intelligent at least once a week, so I'm not embarrassed to say it took me two days to start wondering whether really excellent rendering (and graphic design) is sufficient to win an architecture competition. Or rather, whether it should be sufficient. And since this is my smart day, I can say the answer is not as clear cut as I would like it to be.

Architecture is, as I wrote in an earlier post, something that is either a) built or b) not really architecture. Completed buildings never look like their drawings. Ever. I know of some cases where the differences are very slight. I have a friend who can model the world with a degree of precision I find frightening. More frequently, the differences are huge. At the very least, images are deliberately constructed to show the proposed building in the best possible circumstances. And then they are edited to make both the building and the world just a little more perfect (a scary and fascist idea when applied to the world). In school I heard a professional renderer give this odd bit of advice: "add more cats." Apparently, in his experience, people associate cats with happiness. He also advocated including a lot of children holding balloons (something of a cliche now). What that lecture drove home in a very un-subtle way is that potential clients can be manipulated by rendering tricks into approving buildings of dubious architectural merit. If an image of a building requires a truckload of cats and large groups of ethnically diverse children holding balloons to convince a client, the building itself can't be very convincing. And it is the building that ultimately matters. It will be there, occupying space in the real world, with or without cats and balloons, for a long time. If it is ill-conceived and poorly designed, the world is measurably worse for its existence. Architecture should stand on its own merits or not at all. Tricks used to make buildings more attractive are under-handed and unsuited to (what I consider) a noble profession.

On the other hand: architecture is a business. It is the business of getting people who often know almost nothing about architecture to pay millions of dollars for a building. Reading orthographic drawings is not a talent; it is a learned skill. And not a skill people engaged in the earning of millions of dollars are likely to acquire. The design, the ideas behind it, its architectural merits have to be communicated to the client one way or another. This used to be done with models and perspective drawings, but both of these have one major drawback when compared to computer models - they are difficult to edit. Once a physical model is built, any change to the design necessitates a new model. A digital model can be altered (and any number of new images generated) in a fraction of the time and at a fraction of the cost.

I think architectural models are beautiful objects. If I owned a firm (and had sufficient money to do so) I would have models built for every project. One of my former professors used to build models from brass - gorgeous sculptural objects I lusted after. I also think the very best computer renders are beautiful objects; the friend I mentioned above can create amazing images that look like the work of an incredibly skilled photographer who happened to be in exactly the right place at exactly the right moment.

The contradictory demands made on architecture by its twin nature as art and business are problematic, to say the least. Selling architecture is a good thing. Convincing people good design is worth good money is a good thing (stop me before I write something controversial). But the manner in which architecture is currently sold can be very bad for the profession. No building can be summed up in a single image. Even the simplest building is too complex. Whether the image is hand-drawn, a photograph of a model, or a digital image doesn't really matter against the ruthlessly reductive impulse to present a building with a single iconic image. That one image, reprinted again and again in advertisements, is what the building becomes in the public mind. Architecture periodicals and books combat this with the (perhaps ill-conceived) remedy of printing many images. The strategy is still reductive - buildings judged by how well they photograph - but I don't know a better way to do it. Until 3D smell-o-vision is a reality, this is what we have. And if completely immersive virtual reality does become a reality, touring libraries isn't going to be high on the list of things people use it for.