Perhaps it is because design encompasses such a wide range of economic tiers — from high-end ultra-luxury to low-cost housing and solutions for disaster relief — that we tend to become confused about its purpose. We fixate on its nature as an entity aligned with exclusivity, with an aura seemingly detached from the plight of the everyday. This is its aesthetic conception, its surface value: something far easier and neater to understand than the complex beast that it really is.
It is unfortunate that our society sometimes perceives the vocation this way, because design is such an intrinsic part of what makes us human. It is an inherent part of our evolutionary story: a validation of our ability to adapt as a species, one that has emerged triumphant from every challenge nature has thrown at us. In essence, design has played a significant role in getting us to where we are today: a highly evolved, intelligent, dominant species capable of astonishing feats. We overcame those challenges through innovation: by using our intellect to design solutions and streamline mundane tasks, freeing our minds to contemplate the deeper issues that then presented themselves, and so continuing the cycle of development. Design has brought us mobile phones, bridges, cities that claw at the skies, and eyes that see into the dawn of time.
For me, design isn’t about what something looks like. Aesthetics form such a tiny part of the entire story. Design is about how something works. It’s about how a multitude of pieces have been intricately woven together to form a coherent whole. It’s about the collation and understanding of seemingly disparate ideas, of making unconventional connections and sifting through a multitude of thoughts to retrieve those tiny fragments that are the true gems, the ones that will assemble to provide a meaningful solution. It’s a messy, daunting, multifaceted pursuit. It’s much, much more than just the skin of an object.
“Only a very small part of architecture belongs to art: the tomb and the monument. Everything else that fulfils a function is to be excluded from the domain of art.”
– Adolf Loos, architectural theorist
There is an elegant truth to this statement. Whilst I might argue that all built structures have architecture in their DNA – that architecture is an inclusive aspect of society rather than something relegated to the realm of the privileged – architecture also carries a certain gravitas. It forms a sociocultural marker in time; it is a manifestation of this most abstract of human comprehensions, one that leaves an indelible mark on our landscapes and cityscapes.
Monuments are an important part of our collective architectural language. More than anything, they serve as markers in time: iconic images that remind us of particular periods in our history. Parts of that history may be good and parts of it may be bad, but it is history, and as such it serves an incredibly intricate and vital purpose in the human experience. History teaches us: it teaches us how to live, it teaches us about the mistakes made by human beings driven by passionate purpose and ideology, and about the great triumphs of mankind and how they were achieved. But above all else, it equips us with the cognitive skill set requisite for making complex decisions as we chart a brighter future for our society.
There are far greater issues plaguing society than the mere destruction of landmarks. Yes, the argument for destruction in order to create something new out of the debris is a rather romantic and thus enticing notion, especially for someone of a creative inclination. But selectively destroying portions of history in order to create a tailored version of it siphons off valuable intellectual energy that could be employed to better effect in doing positive work that uplifts our existing society.
The eradication of these edifices is therefore counterproductive to building a stronger, intellectually healthier society. Instead, it diminishes all that was done to achieve the present victories. We cannot be selective when it comes to something like history. History, time… these are entities far greater than any single human being. All we can do as citizens keen on architecting a better society is to learn from them, both the good and the bad, to internalise those lessons, to properly comprehend them, and then begin to formulate our blueprints for the future.
Wearables are the hot topic in technology today. What once seemed like something out of a Star Trek episode is now reality. And whilst Samsung, Motorola and others are veterans (if you could call more than two years in the product category “veteran”), I truly think that Apple’s design clout and respected brand will be the final tipping point in solidifying this curious new niche as a distinct product line.
Positioning the Apple Watch as a piece of haute couture is a gutsy move; I have many questions about the longevity of such a device when the ephemeral (technology) is juxtaposed with the eternal (high-end watch design). It seems like a product conceived of contradiction as much as it is a device seeking to bridge two seemingly disparate factions of society (fashion and technology). However, it represents a progression of technology. A powerful progression, I might add: we are on the cusp of transcending the notion that technology exists as a realm separate from the organic body that is us. The Apple Watch’s ideal of being a fashion piece means it encompasses the design traits distinctive of fashion: personality, individuality, intimacy.
But even more than that: wearables are a step closer to the assimilation of the inorganic — the world of binary code and cold, calculated lines of code — with the organic: us, human beings, sentient creatures who have created these strange contraptions. Are we on the brink of becoming bionic? Is it possible, following this progression to its logical conclusion, that these two worlds will properly collide: that we will become bionic creatures, beings made just as much of technology as of blood and muscle?
Singularity theory postulates a point in the distant future, an event horizon of sorts, beyond which we cannot predict, even with the cognitive skills we have honed over millennia, what will occur next. The moment we assimilate ourselves with the technology we use can be considered one such event horizon. The current state of technological development suggests that this is where we are headed, and I think that if we examine the nature of our current society, we can further understand why this might very well be the eventual outcome for our species.
We live in an increasingly connected world. The Grid rules our lives: we connect to it daily, and it provides much of our daily intellectual sustenance. We cannot function without the Grid. Our society is hyper-connected, and devices like smartwatches illustrate just how reliant on it we have become. That such a device exists signifies the ever-encroaching grip our digital lives have on us. Our lives are lived as much in the virtual sphere as they are in the realm of reality. When technology begins to disappear yet exerts an even more potent force upon our existence, we step closer towards that event horizon of the bionic being. Technology is beginning to disguise itself. It no longer takes the visage of a “device that looks like a computer”: that traditional aesthetic is being challenged as microchips become smaller and more powerful, making it possible to embed them in objects we have been accustomed to for centuries.
As design becomes invisible, it will allow technology to ingratiate itself far more easily into our daily lives, slowly tightening the grip the Grid has on our psyche. As microchips become smaller and their power more potent, we will begin to subconsciously slip into a symbiotic reliance with the devices and the virtual networks we’ve created. This is going to change society in incredible ways. Everything from culture to politics to economics to the built environment will be affected. One area I’m particularly curious about is, of course, the urban architectonic. How will our bionic beings navigate an architecture of the future? How will our built structures evolve to support the new lifestyle that will emerge? How can virtual reality coexist with the concrete and steel that is so intrinsic to maintaining our human existence at the base level?
One simple device, oft derided and questioned for its very purpose, can have the potential for a significant sociocultural impact. For now, though, we play the waiting game: watching, patiently, as the progression of technology and society slowly merge, an intricate dance orchestrated by a plethora of parameters and the organic ebb and flow of time…
A good friend of mine, Wazir Rohiman, recently started a blog for creatives, by creatives. I met Wazir whilst studying architecture at UCT, and we’ve become excellent friends over the past three years. We share an appreciation for all the possibilities that exist at the intersection of science and design, and it is here that Wazir’s new blog excels.
Creatamin (love that name) posts inspiration, tips, interviews and thought pieces all related to living the creative life. It can be daunting for young designers embarking on their journey out in the big world. Creatamin’s posts aim to provide a platform for us young thinkers to share our opinions and get inspired by our peers.
I’m really excited to be following this blog, and I highly recommend it to all creatives, whether you’re in architecture, product design, web development, writing or fine art.
I invite you to check out Creatamin at www.creatamin.com.