In Imaging the Urbane: Control and Representations of “Culture,” I suggest that Serlio’s backdrops for the three Greek performances increased the agency of Architecture through the use of carefully constructed architectural images. The effectiveness of the backdrops was increased by the performance itself, as the characters in the performances were validated by the image behind them. Furthermore, in each case, the street becomes the space in which the spectacle of class and culture plays out.

If the street is where the spectacle plays out, are there other instances of this performance? In Walking in Roman Culture, Timothy O’Sullivan argues that there is a long history of awareness associated with walking in public spaces, illustrating purpose and class in society. What is not often taken into account is that this behavior was, and is, present in North American culture.

Hence the intention of the three following quick montages, layering images of people cakewalking on top of three backdrops prepared by Sebastiano Serlio. The cakewalk was a type of performance present in slave communities in the United States. Slaves who were able to leave the plantation and go to more urban areas would carefully observe the behaviors of slave owners and other whites as they promenaded through the city. Upon return, they would imitate the behaviors they witnessed to the amusement of the other slaves.

[Montage: cakewalk over the Tragic backdrop]

[Montage: cakewalk over the Comic backdrop]

[Montage: cakewalk over the Satyric backdrop]

But what is also interesting is that this active form of display gave way to passive forms of learning, potentially paralleling the relationship slaves had with religion in the United States.

 

In 1545, Sebastiano Serlio published the first two volumes¹ of his Treatise on Architecture: On Geometry and On Perspective. As part of the emerging discipline of Architecture, they served to develop and complicate the role of the designer as the curator of form. Collectively, the five books made a case for styles of architecture based on history, form, geometry, and representation, while at the same time making the discipline accessible to a wider public through printing and illustrations. With this conscious decision to use a more accessible format, based on a combination of common language and images, Serlio places the Architect in the role of cultural curator. This is especially notable in three images in On Perspective.

In On Perspective, Serlio explores the role of perspective as a mode of construction, a way to both image and imagine how geometry would be represented in physical space. The argument culminates in constructed images of urban spaces for three types of Greek theater: the Tragedy, the Comedy, and the Satyric.


The Tragedy

The Tragic backdrop is the most formal of the three, constructed with a high degree of control along the edges of the street. An arched gateway with figures suggests a formal entry to the city, along with an obelisk. Additional monuments in the background suggest the surrounding landscape while referencing the antiquities, thus grounding this hypothetical city in history.

 


The Comedy

 

In comparison, the streetscape of the Comedy has more variation in building placement and suggested construction methods. Where the buildings in the Tragic streetscape are suggested to be made of stone, the Comedy has buildings that appear to be made of wood. The tower in the background appears to be incomplete or damaged. The entire collection of building forms is more chaotic, with a greater degree of vertical variation.

 


The Satyric

 

The Satyric backdrop represents the greatest degree of contrast among the three. In this setting, the viewer is decidedly placed in an exurban setting. The clearly delineated street present in the other two settings gives way to a path that is more a byproduct of habitual use than of designed intent. The buildings lining the path are dominated by trees, and the steps illustrated in the foreground are rusticated and show a great deal of deterioration.

While these comparisons may not appear to be significant, it is important to remember that theater was not only entertainment but also provided instruction in behavior. As a result, when Serlio illustrates the Tragic performance with a more formalized approach, he is making recommendations about the types of persons who are meant to occupy the spaces and the type of construction that is best suited to their station. When combined with the presence of the performers, urbane life is described.

This makes Serlio’s decision to use the vulgar tongue instead of the learned language of Latin an important one. The choice to use Italian creates access for a larger base of individuals, encouraging patronage. But it also creates a group of people who are consuming the images and the implied behavior, arguably crossing over from information into one of the earliest forms of media.

1- The Third and Fourth Volumes had been published in 1537 and 1540, respectively.

Arguably, two significant historic situations must be taken into consideration: the mapping practices of Ian McHarg, and the projects that actually brought GIS to fruition. Firstly, landscape architecture already has a computational culture embedded within it, which became significant in the late 1960s to early 1970s and is associated with prominent practitioners and educators such as Ian McHarg. Secondly, the actual development of computational practices related to landscape analysis took a different trajectory than that of landscape architecture. This resulted in parallel trajectories of development over time, and while landscape architecture did take advantage of maps made by computers, they were not wholly folded into the discipline, arguably due to the limitations of representation and of the datasets available at the onset.

Among landscape architects, Ian McHarg, the author of Design with Nature is frequently referred to as the “grandfather of GIS,” based on his physiographic maps. The process of making these maps involved making a set of drawings diagramming isolated spatial qualities (referred to as themes) keyed to a base map of a specific geographic region. These drawings were subsequently overlaid in a serial fashion and used to curate a given landscape as part of a larger decision-making process. In fact, McHarg states:

Overlay maps from “Design with Nature”

“We can identify the critical factors affecting the physical construction of the highway and rank these from least to greatest cost. We can identify social values and rank them from high to low. Physiographic obstructions- the need for structures, poor foundations, etc.- will incur high social costs. We can represent these identically. For instance, let us map physiographic factors so that the darker the tone, the greater the cost. Let us similarly map social values so that the darker the tone, the higher the value. Let us make the maps transparent. When these are superimposed, the least-social-cost areas are revealed by the lightest tone.” [1]

In this description of a specific project, McHarg makes an implicit argument for a standardized approach to determining best land use practices through mapping. He also suggests that there are two types of conditional sets that need to be considered: those of need or program (in this case a highway), and those of social value (the aesthetics of landscape). These sets are placed in contrapposto to one another in order to determine best use.
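McHarg’s procedure can be restated computationally. The sketch below is a minimal illustration, not McHarg’s method verbatim: the factor layers and their cost values are hypothetical. Each transparency becomes a grid, superimposition becomes cell-by-cell summation, and the “lightest tone” is simply the minimum combined cost.

```python
# A minimal sketch of McHarg-style overlay analysis. Each thematic
# layer is a grid where a darker tone means a higher cost; stacking
# the transparencies is equivalent to summing the grids, and the
# lightest cells of the composite mark the least-social-cost areas.
# The layer names and cost values below are hypothetical.
slope_cost = [
    [3, 2, 1],
    [2, 1, 1],
    [3, 2, 2],
]
foundation_cost = [
    [1, 1, 2],
    [2, 1, 3],
    [3, 3, 3],
]
scenic_value = [
    [2, 3, 1],
    [1, 2, 1],
    [1, 1, 2],
]

def overlay(*layers):
    """Superimpose the transparencies: sum the layers cell by cell."""
    rows, cols = len(layers[0]), len(layers[0][0])
    return [[sum(layer[r][c] for layer in layers) for c in range(cols)]
            for r in range(rows)]

def least_cost_cells(composite):
    """The 'lightest tone' cells: positions of the minimum combined cost."""
    lo = min(min(row) for row in composite)
    return [(r, c) for r, row in enumerate(composite)
            for c, v in enumerate(row) if v == lo]

composite = overlay(slope_cost, foundation_cost, scenic_value)
corridor = least_cost_cells(composite)
```

The point of the sketch is that McHarg’s graphic operation (darkening by superimposition) and the later digital operation (raster summation) are the same calculation in different media.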

However, it is important to note that the process described is not a digital but an analog form of evaluation. The overlay drawings were created not by rasterizing previously mapped content but through the use of then-conventional ink-on-transparency drawings, with hand-applied halftone dot patterns. This spoke to the lack of access to plotters at the time and, more importantly, to the standard conventions by which artwork was made “press ready” for publication. It was also the method then used to coordinate architectural drawings, using reprographic machines to overlay drawings among disciplines and to produce blue-line prints.

As the process matured, it took advantage of resources made available through GIS and its associated technologies, including plotters and large-format printers. Furthermore, maturation also led to a shift from treating thematic content as a series of overlays that were physically arranged and organized to treating it as a series of layers that could be called up at will on the computer. This is one foundation for computational thinking in landscape architecture, and arguably one of the earliest forms of computational process in architecture. The use of GIS in landscape architecture embodies many of the assumptions regarding the best use of computer applications in the design process, based on misassumptions regarding the origin of GIS, along with the reasons for using the software and, most significantly, the limitations of the software with respect to how data is used. It is also interesting to note that McHarg opted for a qualitative mode of analysis over the quantitative modes offered by ecological and economic analysis.

However, the suggestion that McHarg is in some way an originator of GIS fails to acknowledge that geographic information systems have an older, more complex history. At the same time as (and prior to) McHarg’s promotion of the physiographic method, there were several projects in North America with the intent of treating landscapes as forms of data.[2] Of these projects, two stood out as being significant in how the software evolved over time. The first was the creation of the Canada Geographic Information System and the second was the founding and development of the Harvard Laboratory for Computer Graphics and Spatial Analysis.[3] Both of these projects explicitly explored the use of computers as an instrument to create thematic maps for decision-making purposes. However, it is interesting to note that both projects failed to address issues of representation in a manner that was seen as appropriate within landscape architectural practice at the time. Presumably this had to do with the conventions of reproduction within architectural practice versus the emerging technology of GIS.

The Canada Geographic Information System was initiated by Roger Tomlinson in the 1960s, and came about as the confluence of three situations. The first was that Tomlinson was already employed making large-scale photogrammetric maps of the Canadian landscape using aerial mapping, and was very familiar with that labor-intensive process. The second arose while Tomlinson was working for an aerial survey company named Spartan Air Services: the company was contracted to complete a survey in South Africa, which made Tomlinson openly speculate about the use of computers to expedite the mapping process.[4]

In 1962, Tomlinson had the opportunity to test this theory with the help of the Canada Land Inventory.[5] Tomlinson met Lee Pratt of the Canadian Department of Agriculture, who had recently been tasked with mapping 1,000,000 square miles of resources in the Canadian landscape. Pratt’s desired outcomes were very clear: he required content that would enable the Department of Agriculture to assist the nation’s farmers. Tomlinson argued that conventional mapping systems would prove costly, labor-intensive, and time-consuming.

 

From “Data for Decision”

 

Tomlinson makes his case for using computers for mapping clear in his 1967 film “Data for Decision.”[6] Made for the Department of Forestry and Rural Development, the film depicts the problem of mapping such a vast landscape. In the film, a small number of people are seen walking through shelves of maps that presumably cover areas of the Canadian landscape. The combined voice narration and footage present the problem that the number of technicians available to evaluate the maps was limited. Furthermore, the maps varied in content and scale, placing a burden on technicians to make correlations by hand. This often required that technicians correlate the maps using criteria based on requested information, essentially creating datasets manually. Tomlinson’s solution was to digitize all existing maps without any scale associated with them. Using a drum scanner, each analog map could be rasterized and later referenced to vector-based maps to accurately identify boundaries and measure areas. As part of the film, a series of separate maps are shown being overlaid on top of one another, illustrating a technique and a graphic representation method that have since become convention.
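The payoff of rasterization, measuring areas directly from the grid rather than by hand from the paper map, can be sketched in a few lines. The cell size and the land-class mask below are hypothetical, chosen only to illustrate the arithmetic; they are not CGIS data or code.

```python
# A minimal sketch of area measurement from a rasterized map: once an
# analog map is scanned into a grid of classified cells, the area of a
# land class is just a cell count times the ground area of one cell.
# The mask and cell size below are hypothetical.
CELL_SIZE_M = 100.0  # each raster cell covers 100 m x 100 m of ground

# 1 = cells classified as, say, arable land; 0 = everything else
raster = [
    [0, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
]

def class_area_m2(mask, cell_size):
    """Area of the mapped class in square metres: count cells, scale."""
    cells = sum(v for row in mask for v in row)
    return cells * cell_size * cell_size

area = class_area_m2(raster, CELL_SIZE_M)
```

A hand count across several sheets of differing scales becomes a single summation once every map shares the same grid, which is the efficiency Tomlinson’s film argues for.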

The notable benefit of using a computational system was that technicians were able to provide maps to administrators in a much more efficient manner. Numbers that had originally been compiled by hand counts from several different sources could now be associated within a single drawing. The digital format also gave technicians the ability to supplement, replace, and rescale requested data and content far more efficiently. Overall, the film presented a method by which data and content, or information in situ, could be correlated in such a manner that the administrative decision-making process within the Department of Agriculture could be expedited. However, the data resources were limited, primarily representing census data in the form of numbers, and the representation of spatial content was limited. Output was also limited, taking the form of vector graphics generated for further examination. Upon approval, a drawing could be output to a large-format flatbed ink pen plotter.

 

Perhaps the most limiting factor of the Canada Geographic Information System was its limited client base. As a project that was financially supported by the government for national and provincial purposes, the project goals were specified from the onset. This shaped the process of writing the code, which was designed to perform within a highly specified process based on a culture of technicians and administrators that predated the software. The software was also designed to address a specific set of problems and outcomes, which affected the types of data that were seen as beneficial in the process of making maps.

In contrast, the Harvard Laboratory for Computer Graphics and Spatial Analysis was not limited to working with a single government client. Founded in 1966[7] by architect Howard Fisher, the lab was instrumental in experimental cartography and in designing hardware/software interfaces that were to be disseminated to interested parties. Of note was the Synergraphic Mapping and Analysis Program (SyMAP), initially coded by Betty Benson while Fisher was at Northwestern University.[8] It was the predecessor to several other mapping systems designed in the laboratory,[9] and had a similar technical and hierarchical workflow to that of the CGIS. Analog drawings were turned into vector content that was then stored as data. One point of interest is that the administrator was replaced by a master programmer who would examine the punch cards prepared by technicians, but the overall hierarchy remained in place.

SYMAP user manual, 1972

The SyMAP program was capable of producing three basic map types: conformant, proximal, and contour. However, the drawing types appeared more or less similar to one another in the content they represented, and were difficult to read compared to the halftone patterns used by McHarg. This was the result of a key output feature built into the system, which assumed that all potential users of the software would have access to line matrix printers. SyMAP had the ability to prevent the printer from advancing to the next row of text, allowing a line to be overwritten with different characters. The resulting effect was a set of patterns and textures that created discernible areas, made through the use of characters from the standard QWERTY keyboard. Areas defined using the overwritten patterns were discernible, but lacked the level of clarity and detail present in the hand-applied halftone patterns used in analog overlay maps. In addition, lines were not as clear as the line work output from the flatbed plotters used by the CGIS.
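The overprinting technique can be simulated. The sketch below is not SyMAP’s actual code, and the character ramp is a hypothetical one, but it shows the core idea: suppressing the line advance lets several characters strike the same cell, and the accumulated ink reads as a gray tone.

```python
# A minimal sketch (not SyMAP itself) of line-printer "overprinting":
# printing a line, suppressing the paper advance, and printing again
# lets several characters strike the same cell. More overstruck
# characters deposit more ink, producing a darker tone. The ramp of
# overprint combinations below is hypothetical.
OVERPRINT_LEVELS = [
    " ",      # lightest: no strike
    ".",
    "+",
    "O",
    "OX",     # two characters overstruck in one cell
    "OX-",    # three characters overstruck: darkest
]

def render(values, vmin=0.0, vmax=1.0):
    """Map a 2-D grid of floats to overstruck-character cells."""
    n = len(OVERPRINT_LEVELS)
    rows = []
    for row in values:
        cells = []
        for v in row:
            # scale the value into the ramp; clamp to the darkest level
            i = min(n - 1, int((v - vmin) / (vmax - vmin) * n))
            cells.append(OVERPRINT_LEVELS[i])
        rows.append(cells)
    return rows

picture = render([[0.0, 0.5, 0.99]])
```

Each cell holds the set of characters struck in that position; on a real line printer they would occupy one column of ink, which is why the resulting areas were legible but coarser than hand-applied halftones.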

Examples of overprint output from the SYMAP manual

The patterns did have the benefit of allowing cartographers to locate data directly in a place. This was a great deal more beneficial than the maps generated by the CGIS, as it enabled data to be shared with a larger audience through the use of a visual language rather than a referential system. This attention to representing data spatially carried through a number of projects in the lab, including a film by Alan Schmidt.[10] This represented a breakthrough at the lab as the first instance in which this data was visualized as a film. More importantly, it was the first occasion in which GIS content was animated in order to describe change.

Contour map illustrating the populations of Chicago in 1960. Made with SYMAP.

 

Both the CGIS and the Harvard Laboratory for Computer Graphics and Spatial Analysis were instrumental in the production of software and associated conventions of use. The relationship between the cartographer, or technician, and the administrator, or curator, became something of a standardized workflow. It is not uncommon for GIS to be understood not as something important to design practices, but as a niche skill that ensures job security. Data organized in GIS is also seen as a discrete entity that is not capable of being manipulated for design speculation or projection. Most importantly, output from both organizations was not in alignment with the standards of representation in landscape architectural practices, nor was it seen as relevant to the postmodern aesthetic that was predominant at that period of time.

The representational and technical limitations experienced in purposing GIS are hinted at in James Corner’s essay “The Agency of Mapping: Speculation, Critique and Invention.” Corner argues that mapping is not the process of presenting what is already evident, but something that describes and discovers new conditions, things that had not been seen prior to that point. In his words:

As a creative practice, mapping precipitates its most productive effects through a finding that is also a founding; its agency lies in neither reproduction nor imposition but rather in uncovering realities previously unseen or unimagined, even across seemingly exhausted grounds. Thus, mapping unfolds potential; it re-makes territory over and over again, each time with new and diverse consequences. Not all maps accomplish this, however; some simply reproduce what is already known. These are more ‘tracings’ than maps, delineating patterns but revealing nothing new. In describing and advocating more open-ended forms of creativity, philosophers Gilles Deleuze and Felix Guattari declare: ‘Make a map not a tracing!’ [11]

 

James Corner, from Taking Measures Across the American Landscape

 

Within the context of this statement, early modes of GIS would be tracings. Despite innovative uses of data and technology, representation was limited to recording what was already present and offered little, if any, projective potential. Taken further, the three key points described by Corner (speculation, critique, and invention) are valuable in identifying the outcomes of these projects. The maps made by McHarg were a form of critique, but relied upon “reading” landscape as a compositional act, with limited numerical sets of information. However, the data was not spatialized, limiting accessibility to the information represented. The CGIS focused on critique, providing administrators with data referenced to landscapes in order to make decisions. Decisions were made as a separate part of the process, marginalizing technicians and suppressing the agency of data. This had the effect of reinforcing existing hierarchical structures and reiterating normative landscape types. The Harvard Laboratory for Computer Graphics and Spatial Analysis focused primarily on invention, given its focus on making new software and hardware platforms for dissemination. While this was a great benefit, the project still relied upon programmers to curate how data was organized and represented, supporting another set of existing hierarchical structures.

The reliance upon existing hierarchical structures of making, analysis, and decision-making has continued to have the effect of marginalizing data as part of the design process, resulting in normative solutions. This has had two potentially negative effects on the use of data to create maps and invent landscapes. The first is that the workflow to make a map has become standardized, limiting its design agency. In fact, Charles Waldheim has argued that the potential of contemporary mapping practices to provide design insight has been exhausted,[12] and argues that projective modes of modeling with respect to time (e.g. animation) provide new trajectories for representation and research. Implicit in his argument is the need to reconcile the compositional approach taken by McHarg with the data-driven approach taken by the CGIS. Added to that would be the need to create platforms that make these resources accessible, eliminating the need for the curator/technician dynamic present today.

Given the development of computational systems over the past five decades, there are a number of factors that make this possible. Landscape data is more readily accessible, making it easier to represent landscapes using digital modes. As an example, land surveys are increasingly made as digital models with contours located in three-dimensional space, versus the historic convention of two-dimensional representations. Datasets are more readily available, especially in urban environments. Indeed, an individual can compile their own datasets using traditional measuring tools or through the design and creation of electronic devices based on open-source hardware and coding platforms, such as Processing and Arduino. Perhaps the most significant development over the course of the past 50 years has been the advancement of computer drafting software. Visual programming environments such as Grasshopper and Dynamo have given designers the ability to use data to make decisions directly in the form of parametric design.
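The kind of data-driven parametric decision-making attributed here to environments like Grasshopper and Dynamo can be illustrated with a short sketch. The surveyed elevations and the riser rule below are hypothetical, invented only to show data flowing directly into a design parameter.

```python
# A minimal sketch of data-driven parametric design: a set of sampled
# ground elevations (hypothetical survey data) drives a design
# parameter, here the number of 150 mm risers needed to terrace each
# segment of a path. Change the data and the design updates.
RISER_MM = 150  # hypothetical maximum riser height

# surveyed elevations along a path, in metres (hypothetical data)
elevations = [10.0, 10.3, 11.2, 11.2, 12.05]

def risers_per_segment(points, riser_mm):
    """For each segment between samples, count risers absorbing the rise."""
    out = []
    for a, b in zip(points, points[1:]):
        rise_mm = abs(b - a) * 1000  # metres -> millimetres
        out.append(round(rise_mm / riser_mm))
    return out

steps = risers_per_segment(elevations, RISER_MM)
```

The design decision (how many steps each terrace needs) is no longer drawn and then checked against the survey; it is computed from the survey, which is the shift from overlay to parameter that the paragraph describes.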

There are limitations to applying parametric modeling to landscape processes. In the field of architecture, parametric design is applied to the design of discrete objects. Most notably, it has been closely associated with the direct fabrication of architectural components capable of being assembled to create a greater whole. Landscapes are not discrete; they overlap in type, scale, and systems. Furthermore, computational systems are not capable of processing an entire dataset for a given landscape, given the complexity of ecological processes and material relationships. Rather than treat this as a shortcoming of the technology or of landscape, this presents an opportunity to leverage the technology and advance it beyond conventional practices, creating new modes of mapping and new modes for making landscape.

 

[1] McHarg, Ian L. Design with Nature. Garden City, N.Y: Published for the American Museum of Natural History [by] the Natural History Press, 1969. Page 34.

[2] Coppock, J. Terry, and David W. Rhind. “The History of GIS.” Geographical Information Systems: Principles and Applications 1.1 (1991): 21-43.

[3] Ibid.

[4] Ibid.

[5] “Fall 2012.” The 50th Anniversary of GIS. N.p., n.d. Web. 20 June 2014.

[6] “CGIS History Captioned.” YouTube. YouTube, n.d. Web. 23 June 2014.

[7] Cf. Nicholas R. Chrisman, Charting the Unknown: How Computer Mapping at Harvard Became GIS (Redlands, CA: ESRI Press, 2006).

[8] Nicholas R. Chrisman, History of the Harvard Laboratory for Computer Graphics: A Poster Exhibit.

[9] “SYMAP Time-lapse Movie Depicting Growth of Lansing, MI 1850-1965, ESRI 2004 Award.” YouTube. YouTube, n.d. Web. 24 June 2014.

[10] Ibid.

[11] Corner, James. “The Agency of Mapping” in The Map Reader: Theories of Mapping Practice and Cartographic Representation. Dodge, Martin, Rob Kitchin, and C R. Perkins. Chichester, West Sussex: Wiley, 2011. Internet resource.

[12] Waldheim, Charles. “Provisional Notes on Landscape Representation and Digital Media” in Landscape Vision Motion: Visual Thinking in Landscape Culture. Girot, Christophe, and Fred Truniger. 2012. Print.

It’s in the Water:
An Eccentric Polemic on Washroom Propriety and Corporeal Hedonism

Abstract:
This paper explores the role of the washroom and its associated electrical technologies as a gateway into architectural hedonism. Automated sensor technology in this location passively exploits our bodily functions and corporeal discomforts, presenting itself as a solution to a problem. This is one of the first steps in enabling people to accept the increase in computer-automated systems within our built environment. While they may not be architectural, they are corporeal hedonisms, calling into question the relationship between our bodies and the materials we use to maintain them.

Hedonism! It’s becoming more and more common in the built environment these days. Buildings are being designed to take the ethical discomfort out of being in the physical comfort zone. Mechanical systems are being made more efficient, so the energy you put into the system is put to better use. VAV boxes allow spaces to be environmentally customized to suit the needs of the occupant. Lamps are being reengineered to use the latest light-emitting diodes, reducing heat loads, increasing lamp efficacy, and giving you a longer lamp life. Lighting systems can be controlled remotely to shut down until someone arrives to use the space. Computers control entire buildings, reading them as dendritic systems that can be selectively placed in dormant states, so long as everything can be connected to an electrical sensor.

Systems and services have become such a driver in design methodology that we find ourselves in a position where we are providing clients with solutions and strategies that allow for both a level of relative luxury and a conservation of material. As a result, contemporary architectural practice is increasingly based on “both/and” scenarios with respect to project delivery. Within practice, we are expected to provide both low-impact solutions and high-tech systems. This is not to say that these approaches are invalid, because they aren’t. Material conservation is a real concern and should be a part of practice. Standards of excellence in conservation such as LEED should be recognized for what they bring to the table with respect to drawing our attention to construction waste streams. Further to that, the visualization of architectural performance on large electronic displays has done a lot to make operational (in)efficiencies a tangible part of occupation and management.

This is the basis for the architectural hedonism that is being heralded by some as the next age of building. Scarcity is no longer a concern when the performance of a system can be calibrated against another resource such as money or time. The ability to model performance in the computer provides the designer with the ability to predict the impact of usage patterns. More importantly, the occupants or the operational analyst are able to make real-time decisions based on real-time data. As promised, all that data, small, medium, and otherwise, is being used to save us from ourselves almost immediately after we have sinned.

Given the amount of control being ceded to computational systems, it is interesting that the level of discomfort among people is so low. The cold fact of binary code is replacing the rationalizations and fears of the individual with what appears to be little or no complaint, despite decades of cautionary tales about the dangerous authority of computers and their ability to render decisions that are not in the name of the common good. We’ve seen robots violating prime directives, computers in control of spaceships, robots chasing children and, eventually, other robots. We’ve even seen how smart computing and big data can make a potently dangerous mix. Still, we stand by our machines and let them make systemic decisions for us.

There are a number of reasons for this. The first is that we love technology. Specifically, we love gadgets that allow us to do other things, like parking our cars and making decisions based on large amounts of information. We like them for their capacity to do things for us, for the opportunity to show off new gadgets (“I got the latest phone”), and for the technologies that make us appear more efficient (“hey, check out this app I got”). It’s no wonder that we want to behave in a similar manner with our constructed environment. All of this appeals to our higher, cognitive functions. It’s brain candy.

Immediate access to technology isn’t the only enticer. This is only one part of a more complex set of situations. After all, the digital feeds the mind, not the body. A physical counterpart was needed that encouraged us to accept the presence of smart technology, one that engaged every body at a fundamental level. So the material counterpart to the computer was based on water. By this I am not suggesting that bottled water was the counterpart. That would be too easy and too superficial. Bottled water follows a pattern of consumption, being selected for its mineral content, whether for the subtle flavor, the carbonation, or for enhanced performance. Added to that, bottled water is still a luxury good, and one that is in decline. Not everyone buys bottled water off the shelf, and the increased popularity of taps designed to be thermos-accessible has only helped in this decrease. The taps are sophisticated enough to estimate the number of plastic bottles being kept out of the environment, reassuring those who use the station, via a bright digital display, that they are doing a good deed. Bottled water is too conspicuous. Instead, the counterpart to digital hedonism had to cut across classes, cultures, and creeds. It had to be a shared experience. It had to be based on bodily fluids.

Why bodily fluids? There are two key reasons. First, we all have them. Show me a body devoid of fluids and I’ll look at the museum placard stating how old the desiccated remains are. Secondly, our relationships with water are intrinsically tied to our need to be hydrated. It’s connected to our need to replace and replenish fluids, meaning depleted or excess fluid needs to be evacuated. If there is anything that ties us together, it is our need to use a washroom now and then, making it the perfect unifier. It’s a place of habit, or of ritual for some. It’s also a very private and compromising experience. Also consider how we mentally occupy the space of a washroom, especially one we are not familiar with. Your first goal is to avoid all other fluids, to avoid any potential transfer of bacteria or any other waterborne ailments. In this room you confront your bodily limitations directly, and it is all based on rituals related to water.

This makes the washroom the perfect counterpart to the positive feedback given to the brain by digital technology, and in some respects a better one. This is one of the most vulnerable moments in any person’s day. Anything that can be done to expedite this process becomes welcome, as it allows you to get back to the really important part of your day: thinking. This is a prime location to introduce a little luxury. The clincher is that anything placed in a washroom that is more conducive to cleanliness is seen not as a luxury but as progress.

So what is this simple piece of luxury? The infrared sensor, an electric device connected to a solenoid, complete with a plastic shield over a flashing “eye” reminiscent of yet another set of robots bent on destroying humanity. Typically there are as many as three sensors you encounter in a washroom. Two of these deal directly with the application of water, and the third deals with its removal. In each case, if the eye is visible, it implicitly promises to make your time in that washroom a more pleasurable experience.
You place immediate faith in that light, hoping that it does work; otherwise you will need to spend even more time trying to determine where the button to manually trigger the motor is located. If the sensor is located at a sink, you hope it works because there are no taps to control the water, only wet hands.

What is not as clear is what sensor-automated washroom fixtures offer to “both/and” scenarios. Consider that the sensor gives you both a means to manage water in a potentially contaminated environment and the security of remote control. It provides this interface with the luxury of electronic gadgets (so it must be a good thing), and it appeals to some of our basic fears with a technological solution. Even more exciting is that it pairs water and electricity while guaranteeing life safety. Access to this technology is available to anyone who needs to use a washroom in a high-traffic location. It’s this passive experience that makes automated sensor technology the other gateway into architectural hedonism. It’s hard to say no to technologies that work in such a direct manner, even as they are scaled up to address larger issues and complexity. While they may not sell products, they sell the idea, which is a far more effective marketing campaign.

But as a hedonistic matter, automated sensor technology loses its luster. Consider that a great deal of the sensor systems out there operate on battery power. The battery of choice these days is made using lithium, a resource gathered from brine pools. Suddenly the technology both provides water on demand and consumes it at a dissociated location. More significantly, it makes a strange pairing between two very different states, water and energy. We’re often taught that water and electricity don’t mix, and for good reason. Water is a finite material, whereas electricity, or energy, is a matter of abundance. The association, or assumption, that the production of one can be used to manage the application of the other is a potentially dangerous one, encouraging best practices at an unrealized cost. Until such time as the computer model is capable of evaluating the total impact of the system, it is incumbent upon people to watch the robots.

Output = The direct result of an activity or process. A drawing could be an output of a particular process, communicating some form of content.

Artifacts = Byproducts of a process. This is a difficult thing to describe because it is so embedded in outputs. However, artifacts are traces of a process, not so much the end result. In some respects you look to the artifacts to make findings and discoveries that move a process forward.

Craft = The capacity to exert effort towards a particular intent. The negotiation between intent, clarity and ability (not to mention material) creates artifacts.

Work = The process of demonstrating craft. Within work, two things are manifested: artifacts and outputs. Artifacts manifest themselves in the cognitive process, being present in the work. Outputs mark significant points of completion in the work.

Scored Space is a series of photographic explorations examining how bodies create space that I have been developing with Kim Wilczak, a graduate of the Landscape Architecture program at Cornell University. The project uses two cameras and synchroballistic photography to record movement. The images are then compiled and arranged to create stereoscopic images, recording motion as a spatial gesture.  Our goal was to generate images that capture the movement, both formally and spatially, in a continuous fashion. The intent behind the creation of these images is to document how liminal space is created by physical iterations without relying upon geometry. This paper describes our process and progress to date.

Screen Shot 2013-04-21 at 8.36.53 PM

The project is inspired by two significant film-based projects that record the motion of persons. The first of these is the work of Eadweard Muybridge, recording the procedural movements of persons. Muybridge’s work stemmed from a series of images that he had taken to document the motions of the horse “Occident” in an effort to determine how its gait worked relative to the ground. The horse’s movements were captured as a series of still images from a series of cameras positioned parallel to the track. Panels were arranged on the other side of the track to create as much visual contrast between the horse and the background as possible. The resulting images were then compiled into a viewing device made by Muybridge that relied upon the persistence of vision in the human eye to create an early form of cellular animation.

Screen Shot 2013-04-21 at 8.37.04 PM

More importantly, the work proved to serve as the foundation for a series of examinations of how the body moves procedurally. After documenting the movements of Occident, Muybridge worked in a studio space photographing the movements of people. Working in this environment he was able to optimize lighting, and he placed measured lines on the background, enabling movements to be measured relative to a space. Also notable in these studio constructions was the ability to set up multiple cameras to record the movements of people spatially. Given the arrangement of the studio, the images are set up on Cartesian axes, but they do create a set of referential images that allow the viewer to imagine the space occupied by the body in a manner very similar to the architectural sectional elevation drawing.

Screen Shot 2013-04-21 at 8.37.17 PM

The Muybridge photographs provide us with three points of consideration. The first is that movement has been of long-standing interest, not solely as a gestural set of movements, but as something capable of being measured, based on the size and shape of the body, and through repetition. This differs slightly from the prescriptive nature of choreography, in which the movement of the body is described prior to the actual movement. The second point is that despite the limitations of the technology, three-dimensional space was of interest. Therefore, it can be postulated that Muybridge’s photographs are as much about making space as they are about motion. However, the studio environment limited Muybridge’s ability to record the motions of individuals in casual environments. The images were also limited in how they record the physical relations of multiple bodies moving through shared environments.

Screen Shot 2013-04-21 at 8.37.28 PM

This is in contrast to the second precedent, which used the “outdoors” as the lab space. William H. “Holly” Whyte is celebrated for his work recording how people used space in urban environments. Central to his work was an interest in how people interacted in public spaces. Almost a century after the work of Muybridge, Whyte engaged in a disciplined observation of streets and plazas in New York City called the “Street Life Project.” The primary goal of the project was to identify why people use certain spaces more than others, and the findings became widely recognized through his book “The Social Life of Small Urban Spaces.” While the findings described in the book and the associated film of the same name have become invaluable to the practice of making public spaces in urban environments, the methods of recording movement in the spaces are more valuable to our project.

Whyte employed time-lapse photography to record how people occupied numerous spaces within the city. Here we have an application of film in a fashion similar to that of Muybridge, although a more intentional one. Muybridge was limited by his technology: strings placed in strategic locations triggered the camera lens, and once “tripped,” the photograph would be taken. This severe limitation was overcome over time through mechanical systems that would repeatedly trigger the camera based on a programmed increment. Thus time-lapse became a significant part of recording the body in urban environments.

This simple calibration manifested a significant change in how the body was recorded in urban environments. Rather than trying to describe the body through a series of physical or spatial manipulations, time becomes the driving factor. Therefore, in addition to addressing questions of “relative to what,” the matter of “at what time” is brought to the forefront. The work has the effect of confirming some information that was already observed anecdotally, but it also brought to light numerous subtle patterns of use in the spaces that were examined.

Despite this rigor, there are still some limitations to Whyte’s work. The first of these is embedded in the camera technology and the scale of the spaces being examined. Whyte was working at the scale of the block and the urban plaza, thus losing some of the immediacy with the body present in the observations of Muybridge. More importantly, given the use of cameras from a relatively close point of view, the spatial relationships present between persons on the street become two-dimensional. Compared to Muybridge’s use of two cameras within his studio to record motion simultaneously, there is a loss of content in the work. Therefore, while the broad brushstrokes revealed in the time-lapse photography are invaluable, they are also problematic in that details are lost as part of the process of recording the space from medium- to long-range distances. The time-lapse photographs also lack some of the subtleties of movement through space, given the increment of time between frames.

Given these two precedents, one that attempted to spatialize the movements of the body and one that recorded adjacencies in space over a period of time, we were still faced with the problem of resolution. In both cases the period of time between each image was not capable of displaying movement with any great reliability. This led us to use synchroballistic, or slit-scan, photography as a means to record motion. In analog formats, this method of photography involves exposing a continuous strip of film along a thin slit aperture. The resulting image is distorted, revealing time through the blur of the slit scan and the presence of movement across the aperture.
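The basic operation is simple enough to sketch in a few lines. The following is a toy illustration of my own of the slit-scan principle (not the Processing code we actually use): each incoming frame contributes a single one-pixel-wide column, and the columns are stacked left to right so that the horizontal axis of the resulting image reads as time.

```python
def slit_scan(frames, slit_x):
    """Build a slit-scan image from a sequence of frames.

    frames : list of 2-D grids (rows of pixel values), all the same size
    slit_x : column index of the one-pixel-wide aperture
    Returns a grid whose width equals the number of frames.
    """
    height = len(frames[0])
    # one output column per frame: the x position encodes capture time
    return [[frame[y][slit_x] for frame in frames] for y in range(height)]

# A toy 3x3 "video": a bright pixel (9) descending through the middle column.
frames = [
    [[0, 9, 0], [0, 0, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 9, 0], [0, 0, 0]],
    [[0, 0, 0], [0, 0, 0], [0, 9, 0]],
]
image = slit_scan(frames, slit_x=1)
# The moving pixel traces a diagonal: rows are y, columns are time.
```

Anything stationary at the slit produces an unbroken horizontal streak, while anything moving through it traces a shape stretched or compressed by its speed.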

Screen Shot 2013-04-21 at 8.38.04 PM

Common applications include finish-line photography for track and bicycle races. More technical applications include evaluating the rotation of high-speed projectiles. In these cases the body in motion is the primary concern, which usually requires that the camera be positioned perpendicular to the plane of movement. In our case, we are not interested exclusively in how the object moves; we are just as interested in the spatial relationships that are revealed as the images are generated.

We use a digital platform, utilizing the open-source programming environment Processing and a program sketch based on one created by Golan Levin. We capture an aperture that is one pixel wide, and control the image width and frame rate to describe periods of time. Using a digital platform, we are able to tune the images to the motion being captured. We also intentionally positioned the camera at oblique angles to the area of study. This generated some three-dimensional qualities based on how spaces overlap or stack in the image, especially when multiple modes of movement are evident. More significantly, the size of objects and their form become exaggerated relative to their speed and the frame rate. Objects moving across the lens at a faster rate become proportionally smaller compared to those that move at a slower rate. In this image of people playing kickball in a park adjacent to a street you can see how size becomes distorted as part of the image. If you were to apply rules of perspective to determine the height of the people, they would be taller than the vehicles in the foreground.
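The distortion can be reasoned about with some rough arithmetic. The following is a back-of-envelope simplification of my own (not a measurement from the kickball image): an object occludes the slit for roughly its width divided by its speed, so it occupies about (width / speed) × frame rate columns in the final image.

```python
def apparent_columns(object_width_m, speed_m_s, frame_rate_hz):
    """Approximate width, in image columns, of an object crossing the slit."""
    # Time occluding the slit = width / speed; columns = time * frame rate.
    return (object_width_m / speed_m_s) * frame_rate_hz

# Hypothetical figures: at 30 fps, a 4.5 m car passing at 10 m/s spans
# roughly 13.5 columns, while a 0.5 m-wide pedestrian walking at 1.5 m/s
# spans roughly 10 columns.
car = apparent_columns(4.5, 10.0, 30)
person = apparent_columns(0.5, 1.5, 30)
```

On these rough numbers, a slow pedestrian occupies nearly as many columns as a fast-moving car, which is why slow-moving people read as implausibly large next to traffic in the slit-scan images.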

Screen Shot 2013-04-21 at 8.38.23 PM

While depth of space is suggested in the images, a “real” sense of three-dimensional space is absent from images made with a single camera. At this stage, we switched from using a single camera to two cameras aimed at the same location. The first set of test images was constructed using two smartphones with slit-scan camera apps, allowing us to quickly adjust the focus of each camera towards the same point in three-dimensional space. The paired images are later combined on a page for viewing. Three-dimensional space “appears” when the viewer crosses their eyes at the correct distance from the image (which in itself presents multiple problems).

Screen Shot 2013-04-21 at 8.38.42 PM

Aside from the issues related to people who cannot cross their eyes, and the ensuing headaches for those who can, we determined that a major limitation of this version of the project was the reduced view of the landscape caused by the blurred background. The lines across the images are a representation of points that are not in motion: the successive lack of motion at these points in the single-pixel-wide images, stacked horizontally, results in a streak. This we found to be too abstract, limiting our ability to see how spatial conditions within the landscape also determine the movement of individuals.

Screen Shot 2013-04-21 at 8.39.16 PM

In the latest iteration we are using another sketch written for the Processing environment. This one is based on a script originally written by Heino Boekhout, which captures photographic images in a conventional manner but then layers them into a single image at a defined frame rate. Movement in this case registers as staccato blurs across the environment being recorded. We have modified the sketch to record movement from two cameras simultaneously, allowing us to create stereoscopic images without the need to assemble the images separately. The significant advantage of this method of recording movement in space is that the space becomes an active part of the image, allowing us to identify how people interact with static objects.
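The layering approach can be sketched roughly as follows (a paraphrase of my own in Python, not Boekhout’s script or our modified Processing sketch): full frames from each camera are averaged into a single long exposure, and the two results are joined side by side as a stereo pair.

```python
def layer_frames(frames):
    """Average a list of equal-sized grayscale frames into one image."""
    h, w = len(frames[0]), len(frames[0][0])
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def stereo_pair(left_frames, right_frames):
    """Layer each camera's frames, then join left|right into one image."""
    left = layer_frames(left_frames)
    right = layer_frames(right_frames)
    return [lrow + rrow for lrow, rrow in zip(left, right)]

# Two toy frames per camera: static background (0) with a moving mark (8).
left_cam = [[[8, 0], [0, 0]], [[0, 8], [0, 0]]]
right_cam = [[[0, 0], [8, 0]], [[0, 0], [0, 8]]]
pair = stereo_pair(left_cam, right_cam)
# Motion registers as a half-intensity blur (4.0) in each half of the pair.
```

The static background stays crisp while anything that moves averages down into a blur, which is exactly what lets the space itself remain legible in the final image.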

Despite the promise we can see in constructing images using this method, we are presently running into technical issues. We are using two different web cameras, and as you can see in the differences between the two images, the difference in how the lenses are constructed has a significant effect on how the two images do (or do not) match up. The construction of the CCDs also plays a significant role in how light is recorded in the image. In addition, the two cameras must be focused and centered on the same point to create stereographic images that are capable of being easily viewed. Therefore we will need to construct a harness that holds the two cameras at a distance approximating that between the pupils of the eyes (6.5 cm), while allowing each to be focused independently on a single point. Finally, while we have not tested our cameras for this, we will need to address issues of frame rate paired with image size in order to get the “best” images possible without overclocking the web cameras and pushing the CCDs to failure.

Therefore we find ourselves at an interesting point in the project, one shared by all the preceding work. We are investigating movement through space in order to observe how the body moves, in a “contemporary” manner, using software as the instrument for recording and subsequent image production. In relying upon software, we find ourselves pushing against the limitations of the technology, which becomes an active part of making the images and recording the content. In this way we find ourselves in a position similar to that of Muybridge and Whyte, determining ways in which we can best use the technology to accomplish the goals of the observations.

DSC_0149

Recently, I decided to make significant changes to the tools that I choose in making architectural representations. I resigned myself to the fact that the two small scales that sit on my desk were to be used almost exclusively to stir cream into my tea, and I moved the mayline on my drawing surface to the bottom edge so it would be out of the way. In fact, the drawing surface became a table, a ground for my new investigations in making that rely heavily upon digital interfaces. However, my apparent shift away from the motive practice of architecture is not in itself a significant change in practice, given that digital technology in drawing and representation eclipsed the practice of analog drawing some time ago. Indeed, the mechanics of making have an interesting relationship to the process of making drawings and models, and have evolved over time.

My contemporary tools include a set of calipers to measure material thickness, and stepper motors have replaced the gears and pulleys present in the mayline. The rigor and control offered by the mayline itself is now provided in the form of aluminum extrusions and metal plates assembled to create a Cartesian framework in which surfaces and volumes can be fabricated and envisioned. While assembling the machine, questions related to architectural representation have arisen and become integral to the process of building it. In short, I have abandoned pens and pencils in favor of other devices, but not at the expense of considering craft. This is an important revelation, as representation appears to be increasingly removed from an immediate relationship with the author. Design pedagogy has historically been formulated assuming a proprioceptive relationship between the draftsperson and the drawing sheet, based on the mechanical relationships between the hand, arm, shoulder and eyes. Despite this, the devices that are used to construct drawings, tools and objects are transforming the way design becomes manifest as a set of material relationships.

As architecture increasingly embraces a model-to-factory or model-to-model mode of making, robotics has taken on an increasing level of significance. In its most obvious example, the desire to have access to industrial robots has become a point of competition and envy in schools. The scale and power of these machines offers students a very real possibility of fabricating architecture at a 1:1 scale. The number and location of motorized points makes it possible to carve space, creating dynamic forms and gestures as the machinery manipulates materials, making the motion of the machine as seductive as the byproducts of its movements.

 

DSC_0005

 

 

Some of the more “humble” robotic technologies capable of fabricating things, if only small parts, include selective laser sintering (SLS) and fused deposition modeling (FDM) 3D printers, or additive printing. These increasingly popular robots are significantly smaller and not as robust as the industrial robot, yet they have the ability to generate models with a fine level of detail, depending upon the resolution to which the printer has been calibrated. Both machine types must be designed to respond to the material that is being used to create a part. This is a point of distinction between the industrial robot and the 3D printer. While the robot works primarily by modifying material or surfaces, the SLS printer requires that the part be assembled through an aggregate process, and the FDM printer uses material that has been brought to its melting point. Still, motors are a critical part of how the machine works, enabling it to operate in a volume described by Cartesian space to create a physical object.

 

The mechanics of the laser cutter are not far off from those of the additive printer. In this case, however, one motor is used to compress the three-dimensional volume of the cabinet into the smallest space allowed by a material. This is a curious situation in that it begins to emulate the analog mechanics of drawing and model making, through the process of treating a material as something that is primarily two-dimensional and modifying it to represent a three-dimensional object. Material parts and architectural projects become the same thing, allowing the designer to transform historic approaches to surface into a material strategy.

These relatively new modes of making rely upon mechanical processes as part of their operation. What makes them distinct from one another is how they describe surfaces or occupy space, but they are all motational tools that require interfaces to translate geometry into form. This is an important thing to consider given that this is not a new concept in itself. Plotting technology has been present since the 1950s, using one motor to pull a pen carriage across a surface while a second motor rotates a roll of paper over a drum. In fact, the drum plotter is more abstracted from analog forms of drawing than the laser cutter, given that the motors are arranged to operate relative to a single line along the drum, versus manipulating an entire sheet of material. Two dimensions are achieved by continuously feeding the roll of sheet material across that single line of interface. The laser printer abstracts this further by using only one motor to rotate sheet material across a surface while another set of motors rotates drums with pigmented material applied to them. In this case space is relative, given that the surface area of the drum is not equal to that of the sheet material.

Taking these mechanical operations into consideration, designers have been working with abstract procedures for some time, raising questions regarding where craft lies in the process of making. Robotic tools and their predecessors change the base assumption about what constitutes craft in architecture. While it is safe to assume that it still relies on the basic relationship between the intent of the designer, the materials at hand and the precision of the mechanical operations, where these things actually reside in practice is called into question. Mental acuity is increasingly working in parity with software designed to describe associative conditions or emergent patterns. Material has come to describe a wide range of conditions and physical states, and is beholden to the devices used to manipulate it. This leaves the machines used to make these manipulations as the point of critical introspection.

In the same way that the conventional draftsperson is expected to have command over the tools and materials used to make a drawing, digital practitioners are expected to understand the mechanical limitations of their digital tools. Quite often this exploration addresses the relationship between intent and material, or how to use a tool to produce a desired result. However, in adopting robotics a third point of resistance and complexity has been added to the design process: the resistance of the tool. The robotic system itself provides a fertile ground for architectural investigation, as it does not create a direct translation between intention and material but serves as an interlocutor between the two conditions. It is in this mechanical resistance that craft resides in digital and fabrication practices. So in making the machine, the architectural investigation is immediately activated. The limitations of the tool constructed, for either unique instances or repetitive practices, become a framework for subsequent modes of material practice and representation.

The following is based on an essay I wrote with Jamie Vanucchi for Nadia Amoroso’s collection of essays “Representing Landscapes: A Visual Collection of Landscape Architectural Drawings.” For my part, it was the start of two conversations for the future. Firstly, the essay reflects a discomfort I had (and still have) with the reliance on two-dimensional media to represent highly complex spatial relationships along with the process of internalizing them. Secondly, this is when I started articulating the significance of process versus byproduct and artifacts.

I’m taking the opportunity to come back and reconsider some of the statements made in the original version, and it has turned out to be a rebuild.

The creation of a drawing or model is an important process for an individual engaging a site and problem. While this process is arguably the most valuable part of making these recordings, it also creates proprioceptive relationships and parallel opportunities for self-discovery within the context of the design problem. In many respects, the act of making could be described as multivalent. It serves as a means to investigate the immediate project, allowing the designer to delve into the conditions and problems of the project using a synthetic process. But it also enables the designer to interrogate her- or himself, creating a ground for an internal discourse regarding process and intent. In the context of this brief essay I explore the notion of the act of making and discovery in an analog operating environment, considering the relationship between drawings and models. This is not to suggest that digital methods do not lead to discoveries. However, they are cognitively different methods, requiring different considerations (in addition, digital workflows were not central to my considerations in the original essay).

Drawings and models are often thought of as sequential pieces rather than iterative elements in a process. A more active description of the relationship between drawing and model is a synthetic construction. This allows representation to be engaged as a form of persuasive argument. In this scenario, the image or model is crafted to describe a problem with the intention of proposing a potential solution, within the context of investigating the problem. This mode of thinking also enables the designer to leverage the two modes against one another, taking advantage of their strengths and weaknesses as representational mediums.

Drawing is an act central to the recording of process, and as a convention it is inherently linked to its historical reliance on the page, or paper, as a driving medium. In fact, paper plays more than a passive role, being one of the first agents of making for the architect. The limitations of a sheet of paper set up a number of conditions that continue to define how it is used as a medium for representation. The page is primarily thought of as a flat surface, requiring the exploration of means of projection to create an image. Binocular vision must be emulated through the use of tricks of perspective. As something that only emulates space, paper’s ability to suppress or distort three-dimensional space is highly prized. Orthographic projection is one of the most frequently used methods of projection. What is important to note is that the limitation of the surface, its lack of depth, is what is leveraged in the process of making a drawing. Drawing, or more specifically projection, is not just an act of creating an image but is also the construction of a gaze. This is accomplished by controlling the content on the page and using compositional arrangement to make the viewer aware of the physical limitations of the sheet. In some instances, this is further supported by the use of drawing conventions to create an image capable of being shared and discussed by multiple persons: the gaze is the ground for discourse.

In contrast, the analog model creates a different set of cues and relationships. Whereas the drawing creates a series of highly specified lines and projections, the model creates scenarios with the aim of inventing potential; models illuminate scenarios. This is based on the multivalent conditions that can be revealed in three-dimensional space. This plays itself out in a number of ways, ranging from access to the model from all directions to highly controlled views from a single standpoint in a manner similar to the page. However, the model differs from the page in that a range of possibilities is hinted at, given that it is a three-dimensional construction. Adjacencies that cannot be explored in the drawing are revealed and drive how the model is constructed. In some respects, where the rigor of the page is its strength, it is the potential for spatial ambiguity that activates the model. In addition, the inherent physical presence of a model makes it a fertile ground to explore issues related to materiality. This is not so much a direct expression of the materials as it is an exploration of those material qualities that you are interested in. Again, this level of specificity, while working to emulate the intentions, actually creates potentials. Control in some respects is lost, and requires additional investigation using different rules (materials, tools, scale, modes).

The ambiguity built into the model is a limitation, given that it does not engender the same level of conceptual control that can be demonstrated in a drawing. This is not a reflection of craft, but a condition of materiality. To regain a precise level of specificity, one must return to the page, where absolute control of the problem is expressed through line and projection. Therefore, it is incumbent on the designer to leverage the two modes of working against one another in an iterative fashion. Working in parallel creates gaps and inconsistencies, incomplete readings and translations of an idea.

This blog reflects investigations and thoughts regarding work through three themes: craft, instruments and computation. As a separate entity, each one of the themes serves as a means to unpack particular modes of making. When combined, they reveal the complexities associated with design practices and the byproducts (outputs) associated with the work.

Craft is defined as an occupation that requires skill and creativity. As an architectural concern, craft has always been linked to an ability to demonstrate ideas or intent. More specifically, the ability to effectively use modes of representation such as the drawing and the model has historically been tied to the talent of the designer.

Embedded in the use of craft in this context, is the implicit assumption that the architect is working by hand. The drawing and/or model are products of a proprioceptive process, created through a feedback loop connecting the hands, eyes, brain and page. Contemporary modes of practice have reduced the need for hand drawing to a minimum through the use of digital mediums.

In some respects this has created a dichotomy between the practice of making and the act of seeing. There are camps in which digital byproducts are evaluated within a mental framework based primarily in analog methods. It would be too easy to label these people simply as the old guard, given that there are countless people who still use software that favors orthographic projections. On the other hand, there are individuals who are so dogmatic in their use of digital media that they consider any reference to craft a nostalgic mode of thought. This position negates any consideration of craft in a digital environment, suppressing the implications of how code is structured and the subsequent impact on the byproducts. In fact, the need to understand how lines of code are written into a script is central to the act of making the desired outcome.

This need to understand the procedures associated with making a desired outcome points towards another important aspect of craft: process. This is a word that gets a fair amount of use, and to some degree, abuse. Process requires two key elements. The first is a desired outcome or solution. The second, which is more important in my opinion, is familiarity with the material being used. In manual practices of craft, feedback (or resistance) is a critical part of the loop, being the basis for haptic relationships between the material and the hand. Granted, digital methods lack the direct physical relationship, but they still provide feedback. Often this loop is even more severe, lacking degrees of adjustment, demonstrating its capacity to operate in a binary pattern.

As has been suggested, instruments play an equally important role in formulating work. More accurately, understanding instruments and material is critical in the process of making, given that this is where resistance (feedback) is present. Instruments are defined across a range of experiences and outcomes. They can be defined as objects that are manipulated to orchestrate a series of cause/effect relationships, serving as a means toward an end. They are things capable of measurement and performance, with both embedded rigor and whimsical elements. All of these qualities point toward the agency of the instrument in making.

Typically, when we refer to material, we are suggesting a physical condition, such as paper, clay, wood, or metal. The relationship, or interface, that a person has with the material is defined by the instruments that are used to modify the surface. While this part is a given, the lesser considered point is that the relationship between material and instruments creates a metric by which outcomes can be predicted and evaluated. This is a mode of thinking that allows craft to be an active part of the discussion of making. Given the increased reliance on digital interfaces (note: feedback loops), “the computer” is also a form of material capable of generating work that is equally important. It is also capable of emulation, creating a byproduct that is similar in many respects to its analog counterparts while having a significantly different work process.

This leads to computation, the last of the triad. Computation has most recently been associated with computer processes and interfaces. However, at its most fundamental level, computation refers to the process of making mathematical calculations. Certainly, a digital operating environment is more efficient as a computational tool, but computation is still present in the most fundamental analog practices. One that comes to mind immediately is the use of scale to represent spatial conditions that do not correspond to the page size (1:1), or to create relationships between multiple types of scales and conditions. This requires a translation, a mental calculation, to transform the subject into something that can be represented within the boundaries of the page.
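That mental calculation of scale can be sketched as a simple arithmetic translation. The function name and the 1:100 ratio below are illustrative assumptions, not prescriptions from the text; they simply make the "computation" embedded in an analog drawing practice explicit:

```python
def to_drawing_length(real_length_mm: float, scale_ratio: float) -> float:
    """Translate a measured dimension into the length it occupies on the page.

    scale_ratio is the denominator of the drawing scale: a 1:100 drawing
    uses scale_ratio=100, meaning 1 mm on the page stands for 100 mm
    in the world.
    """
    return real_length_mm / scale_ratio

# A 12-metre (12000 mm) wall drawn at 1:100 occupies 120 mm on the page,
# the same translation a draftsperson performs mentally with a scale rule.
wall_on_page = to_drawing_length(12000, 100)
print(wall_on_page)  # 120.0
```

The same division, run in reverse (multiplying a page measurement by the ratio), recovers the real-world dimension, which is the two-way relationship between subject and page described above.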

The ability of computation to enable emulation also creates a problem for the tool. In one sense, “the computer” is little more than a material, a ground that seemingly lacks any material property. In another sense, it is a tool capable of manipulating a ground that is internal or external, depending on how it has been optimized to perform a given task. Once it has been optimized with the appropriate code, it becomes an instrument capable of performing tasks with high levels of specificity. These ever-present conditions allow the computer to repeat work previously associated with the body, or to manifest innovative solutions.

Two points come out of this. The first is that as a ground, the digital environment is incredibly generic, lacking any clarity of intent. Software becomes the means by which intent is expressed and implemented. Intent creates the basis for evaluating the clarity and efficiency of the software, which in turn provides a basis for evaluating the craft within the code. Secondly, as a limitless ground, the digital environment requires a means by which to record work. This often takes the shape of a monitor, but it can also manifest itself in a host of mechanisms, including printers and robotic arms. This presents yet another means of evaluating the craft of the digital.

Therefore the digital presents a great opportunity to evaluate the direction of craft. This is based on its ability to represent a range of processes and to couple with multiple forms of output. But that is not to say that analog methods are going the way of the dinosaur. Code still must be written, and output devices still must be made. Therefore it may not be a matter of asking “is craft going away?” but instead asking where craft is present in the work of making.