The Myth of the All-Nighter

I’m about to enter my sixth year of architectural education. It’s been a long, often frustrating, but fruitful journey, and with the final stretch looming, this is a natural moment for reflection. One of the things that has intrigued me throughout, both when I looked at the profession from the outside (before I began my architectural education) and now as a young “newbie” in the professional world of architecture, is this fascination with the all-nighter.

It’s sort of expected that the architecture student must labour continuously on their projects, whether their body yearns for sleep or their mind has become a tangled mess of meaningless mulch. The architecture student is expected to pull off countless all-nighters whilst still maintaining a particular standard of work, and failure to do so means instant discrediting of one’s entire stature as a student of the field. It somehow suggests that one is not putting in the requisite “effort”, that a little more time spent on the work might have meant a different letter grade – and in a field as abstract as design, doubt becomes a prevalent spectre that haunts the self-critique of ongoing work.

I find this fascination disturbing and entirely unhealthy, both physically and in its fixation on working hard rather than working smart. The distinction between the two is subtle, but it marks the difference between a productive, happy young architect who is energised to start a promising career in the profession, and a burnt-out student on the verge of giving it all up for something else.

A serious paradigm shift is needed: away from working hard, where the number of hours is assumed to correlate, to some degree, with the quantity and quality of work produced, and towards optimising workflows, exploiting the benefits of technology and ultimately adopting a smarter way of getting things done. Of course, I’m not arguing for a generation of lazy architects who find every excuse to avoid work. Work is an essential part of our culture, and it’s a fundamental aspect of living, of building something meaningful both to society and to the builder’s life, of leaving a true legacy to benefit future generations. But this morbid fascination with a culture of sleep deprivation, which itself propagates an aura of anxiety, stress and unpleasantness, needs to stop. Right. Now.

Much needs to be done to reform architectural education today. One place to begin is a critical rethinking of what studio culture is. Sleep deprivation and the disregard of physical and mental health run diametrically opposed to the kinds of environments that we as architects are expected to produce for the betterment of society.

Judging work by the hours put into it does not paint a proper picture of the final product. Rather than overworking oneself to satisfy this arbitrary, time-centric measure, a more intelligent workflow is needed. This is the exciting part: designing is intrinsic to us, so why can’t we design better means of production? Instead of shying away from advanced computational technologies, now is the time to adopt those tools. BIM technologies, parametric tools and modern productivity strategies such as the Pomodoro Technique are just a few examples of the potential lurking beyond that sleep-deprived horizon.
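As a concrete illustration of one such strategy, here is a minimal sketch of a Pomodoro-style work/break scheduler in Python. The 25/5/15-minute durations are the technique’s conventional defaults, not anything prescribed here, and the function name is my own invention for the example:

```python
# Minimal Pomodoro-style scheduler: alternates focused work sessions
# with short breaks, inserting a longer break after every fourth
# session. All durations are in minutes.

def pomodoro_schedule(sessions, work=25, short_break=5, long_break=15):
    """Return a list of (label, minutes) tuples covering `sessions` work blocks."""
    schedule = []
    for i in range(1, sessions + 1):
        schedule.append(("work", work))
        if i == sessions:
            break  # no break needed after the final session
        # Every fourth completed session earns a longer rest.
        if i % 4 == 0:
            schedule.append(("long break", long_break))
        else:
            schedule.append(("short break", short_break))
    return schedule

if __name__ == "__main__":
    for label, minutes in pomodoro_schedule(4):
        print(f"{label:<12}{minutes} min")
```

The point of the sketch is the shape of the workflow, not the exact numbers: bounded bursts of focus punctuated by deliberate rest, rather than one continuous slog through the night.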

It’s time we got over this myth that the all-nighter is a necessity to architectural education, and embraced a healthier, smarter way of learning and working.

The Business of Aesthetics

Patrik Schumacher, partner at Zaha Hadid Architects, recently took to Facebook to voice his opinion of Alejandro Aravena’s Pritzker Prize win earlier this year. His formidable position in contemporary architectural discourse, coupled with his work on parametricism as a style and his collaboration with the “queen of the curve”, the late, great Dame Zaha Hadid, adds a certain gravitas to his sentiments and indeed reignites the debate over architecture’s societal role:

The PC takeover of architecture is complete: Pritzker Prize mutates into a prize for humanitarian work. The role of the architect is now ‘to serve greater social and humanitarian needs’ and the new Laureate is hailed for ‘tackling the global housing crisis’ and for his concern for the underprivileged. Architecture loses its specific societal task and responsibility, architectural innovation is replaced by the demonstration of noble intentions and the discipline’s criteria of success and excellence dissolve in the vague do-good-feel-good pursuit of ‘social justice’.

– Patrik Schumacher, ZHA

Architecture as a practice has long sought to root itself within a societal discourse. And rightly so, for the artefacts it produces stand as anchors in time, reflections of the zeitgeist, and responses to various social flows – the flows of people, of money, of technology and of power.

Yet at its core, I believe, architecture holds firmly to the business of producing artefacts. For once a piece of architecture is stripped of all the intellectual mist that surrounds it, what remains, the concrete and brick and mortar that form the geometries so intricately laboured over by the practitioner, lies firmly within the realm of aesthetics. It is an artefact, an object created to appeal, at its very base level, to certain rules of beauty that have been argued over for millennia.

In pop culture, the way something looks is paramount to its success. Let’s not kid ourselves about this. The aesthetic conception is something that pervades contemporary society; it’s the veil that draws one in to whatever intellectual (or “abstract beauty”) lies behind.

Perhaps there is another debate lurking here – what is it that defines beauty? Ideas of cultural bias, of historical prejudice, mathematical proofs and geometric arguments all play pivotal roles in that discussion. But that’s not the point of today’s post.

My argument is that, in a world that has become susceptible to politically correct language, it is very easy for discourse around architecture to become dramatically defensive and deny the unavoidable (if perhaps harsh) truth: that aesthetics is the name of our game. We should rather embrace this discourse, and begin to tamper with it: to engage in the idea of aesthetics being a crucial part of architecture, and to interrogate its various virtues and disadvantages.

Architects are not politicians. We’re not activists, nor are we philosophers. Yes, we may harbour sentiments that are shared with these other groups, but at the heart of our profession is a desire to shape worlds, through imagination and the pursuit of the creative spirit. We are ever aware of the gravitas that underscores our duty to society, yet that doesn’t mean we can’t have a little fun too.

Alejandro Aravena on the Force of Architecture

“So be it the force of self construction, the force of common sense, or the force of nature, all these forces need to be translated into form, and what that form is modelling and shaping is not cement, bricks, or wood. It is life itself. Design’s power of synthesis is just an attempt to put at the innermost core of architecture the force of life.”

– Alejandro Aravena

Aravena was this year’s Pritzker Prize laureate. The Chilean architect has redefined the role of the architect in society, and the relationship between architecture, the economy, and the forces of life. As he beautifully articulates in the quote above, architecture moulds life, whether overtly or subtly. His social housing projects (the famous “half a house” concepts like Quinta Monroy in Chile) prove that inventive ideas, combined with close collaboration with the communities who are directly affected by the projects and for whom they are designed, can truly change the world.

Design is a Function of Infinity

When does design end and use begin? Is there a definitive beginning and end to the process of design, and does design concern itself with the solving of problems, that is, working within a specific problem set, defining the end goals and working toward those within a closed system?

No. Obviously not.

Nothing in design is ever straightforward. Design is a function of infinity. It never begins, and it never ends. It exists as a fluid system where ideas come and go, where they are moulded by ideologies and philosophies, in the hope that the present iteration may benefit society in some way.

As a function of infinity, there is no definitive solution that will solve all problems, or that will please everyone. And there is no designed solution that will be perfect, by the very fact that design is a human construct, and we are a notoriously imperfect species. Therein lies one of the age-old challenges of any design discipline: how can value be attached to something so vague, to a service with no concrete end? The key, most likely, is to attach value to the process: educating clients and end-users on the value of collaboration, and on the techniques and tools required to sculpt a formless idea into a tangible object in the material world.

And at the other end of the spectrum, of course, is that feeling every designer has: the endless possibilities of a blank page, akin to the infinity of our Universe. Everything is possible, with the only constraint being the designer’s imagination (itself a function of infinity).

Infinity itself looks flat and uninteresting. Looking up into the night sky is looking into infinity—distance is incomprehensible and therefore meaningless. The chamber into which the aircar emerged was anything but infinite, it was just very very very big, so big that it gave the impression of infinity far better than infinity itself.

– Douglas Adams

Digital and Analog: A Tale of Two Mediums

I watched Quentin Tarantino’s latest film, The Hateful Eight, recently. The big hype about this picture – apart from it being a Tarantino western, a surefire classic if ever there was one – is that he chose to film it in Ultra Panavision 70, a format that hasn’t been used in mainstream cinema since the 1960s. In fact, The Hateful Eight was filmed with the same lenses used in classics like Ben Hur.

Tarantino is one of the most vocal proponents of shooting on film. He believes that this traditional method should be preserved, and that it adds an ineffable quality to the cinematic experience, one he feels is lacking from modern digital cinema.

This argument – film vs. digital – is an age-old debate. It’s not unique to the realm of cinema, in fact. As a young architect, I have already faced this debate, first encountering it in my undergraduate years. Architecture is a field that is constantly facing the challenges of technological progress. On either end of the spectrum (construction and design) technological development poses fundamental issues that deeply affect the very core of the practice.

The argument for an analog approach to creativity is that it brings one closer to the work. There is no denying that the connection between brain and hand is inextricably strong. So yes, when introducing a new stream of knowledge (in this case, the many facets of architectural education), beginning with analog methods is critical. Most importantly, it gives the student a better sense of proportion, geometry and scale – aspects that are often distorted when one begins in the virtual sphere.

However, a paradox ensues when we face the fact that most architecture students are educated by teachers who were themselves trained in the older craft of analog production. At the higher levels, this method is anachronistic in a highly digital world. Digitally produced work is thus frowned upon as a seemingly easy-way-out approach to the creation of architecture. Yet in the “real world”, a digital production environment is critical to the bottom line – to staying relevant in a fast-paced and rapidly developing economy.

Here’s the thing: the virtual world is just another type of canvas. In many cases, digital form-making brings forth accelerated creativity as complex geometries become possible. Proponents of the analog method will argue that digital work lacks the je ne sais quoi of hand-produced work. I would rebut that digital work can be just as expressive as its analog counterpart – it all comes down to the practitioner, to how the creator wields the tools available to them.

By the end of The Hateful Eight, it didn’t really matter to me that the movie was filmed in Ultra Panavision 70. Yes, it looked beautiful. And the cinematography accented an already entertaining and gripping story. But what mattered most was the experience: I cared less about how it was made, and more about how entertained I was – the final goal of any cinema. Likewise, it matters not how a piece of architecture was conceptualised. What matters is that it conveys the right information, describes the idea in the best way possible, and ultimately enriches the viewer or occupant.

The Nature of Design

Perhaps it is because design encompasses such a wide range of economic tiers — from the high-end ultra luxury to low-cost housing and solutions for disaster relief efforts — that we tend to become confused about its purpose. Thus we tend to fixate on its nature as an entity aligned with exclusivity, where it has an aura that is seemingly detached from the plight of the everyday. This is its aesthetic conception – its surface value – something that is far easier and neater to understand than the complex beast that it really is.

It is unfortunate that our society sometimes perceives the vocation as such, because design is such an intrinsic part of what makes us human. It’s an inherent part of our evolutionary story; it’s a validation of our ability to have adapted as a species that has emerged triumphant from every challenge nature has thrown at us. In essence, design has played a significant role in getting us to where we are today: a highly evolved, intelligent, dominant species capable of astonishing feats. We were able to overcome these challenges through innovation: through using our intellect to design solutions, to streamline mundane tasks and thus free our minds to begin contemplating the deeper issues that began presenting themselves, and thus continuing this cycle of development. Design has brought us mobile phones, bridges, cities that claw at the skies, and eyes that see into the dawn of time.

For me, design isn’t about what something looks like. Aesthetics form such a tiny part of the entire story. Design is about how something works. It’s about how a multitude of pieces have been intricately woven together to form a coherent whole. It’s about the collation and understanding of seemingly disparate ideas, of making unconventional connections and sifting through a multitude of thoughts to retrieve those tiny fragments that are the true gems, the ones that will assemble to provide a meaningful solution. It’s a messy, daunting, multifaceted pursuit. It’s much, much more than just the skin of an object.

Bionic Citizen

Wearables are the hot topic in technology today. What once seemed like something out of a Star Trek episode is now reality. And whilst Samsung, Motorola and others are veterans (if you can call more than two years in the product category “veteran”), I truly think that Apple’s design clout and brand respect will be the final tipping point in solidifying this curious new niche as a distinct product line.

The Apple Watch’s positioning of itself as a piece of haute couture is a bold move; I have many questions about the longevity of a device in which the ephemeral (technology) is juxtaposed with the eternal (high-end watch design). It seems like a product conceived of contradiction as much as a device seeking to bridge two seemingly disparate factions of society (fashion and technology). Yet it represents a progression of technology, and a powerful one: we are on the cusp of transcending the notion that technology exists as a realm separate from the organic body that is us. The Apple Watch’s ideal of being a fashion piece means it encompasses the design traits distinctive of fashion: personality, individuality, intimacy.

But even more than that: wearables are a step closer to the assimilation of the inorganic — the world of binary code and cold, calculated lines of code — with the organic: us, human beings, sentient creatures who have created these strange contraptions. Are we on the brink of becoming bionic? Is it possible that, by following this progression to its logical destination, these two worlds will properly collide: that we will become bionic creatures, beings made as much of technology as of blood and muscle?

The singularity theory postulates a point in time in the distant future, called an event horizon. Beyond this point, we cannot predict, using the cognitive skills we have honed over millennia, what will occur next. The idea that we are assimilating ourselves with the technology that we use can be considered as an event horizon. The current state of technological development suggests that this is the case, but I think that if we examine the nature of our current society, we can further understand why this might very well be the eventual outcome of our species.

We live in an increasingly connected world. The Grid rules our lives: we are beings that connect to it daily; it provides us with a large amount of our daily intellectual sustenance. We cannot function without the Grid. Our society is hyper-connected, and devices like smartwatches illustrate how reliant we are on it. That such a device exists signifies the ever-encroaching grip our digital lives have on us. Our lives are lived as much in the virtual sphere as in the realm of reality. When technology begins to disappear yet exerts an even more potent force upon our existence, we step closer towards that event horizon of the bionic being. Technology is beginning to disguise itself. It no longer takes the visage of a “device that looks like a computer” — that traditional aesthetic is being challenged as microchips get smaller and more powerful, making it possible to embed them in objects we have been accustomed to for centuries.

As design becomes invisible, it will allow technology to ingratiate itself far more easily into our daily lives, slowly tightening the grip the Grid has on our psyche. As microchips become smaller and their power more potent, we will begin to subconsciously slip into a symbiotic reliance with the devices and the virtual networks we’ve created. This is going to change society in incredible ways. Everything from culture to politics to economics to the built environment will be affected. One area I’m particularly curious about is, of course, the urban architectonic. How will our bionic beings navigate an architecture of the future? How will our built structures evolve to support the new lifestyle that will emerge? How can virtual reality coexist with the concrete and steel that is so intrinsic to maintaining our human existence at the base level?

One simple device, oft derided and questioned for its very purpose, can have the potential for a significant sociocultural impact. For now, though, we play the waiting game: watching, patiently, as the progression of technology and society slowly merge, an intricate dance orchestrated by a plethora of parameters and the organic ebb and flow of time…