Longhouse 3.0.5

Based on all of the great feedback and some excellent research leads, in Stage 3.0.5 of our virtual Iroquoian longhouse project we look at fur, bark and pole positioning to envision sleeping platform construction within a 3D environment. There isn't a considerable amount of reference material available to guide our visualization process, and while we will go into further detail later on the visual staging of the interior environment, we have relied heavily on Dean Snow's 1997 research, The Architecture of Iroquois Longhouses, to determine how our interior bunks will be constructed. We especially wanted to visualize the concept of actual "cubicles" for each sleeping compartment.

According to early European historical accounts, the sleeping platforms that occupied either side of the fire hearths along the interior length of the longhouse were raised 4-5 ft from ground level (Snow, 1997). Snow challenges this assumption by citing later, 1700s-era European accounts indicating that the sleeping compartments actually consisted of a sleeping level, or bottom platform, roughly 30 cm (1 ft) from the ground, with a canopy or storage shelf on top no more than 1.5-1.8 m (5-6 ft) off the ground (1997), and storage for additional firewood and possessions below (Heidenreich, 1972).

Clearly, if we follow the earlier historical accounts of sleeping platforms 4-5 ft from ground level, the young and old, as well as most adults, would not only have had great difficulty climbing up into a platform of that height, but would also have been exposed to the intense layer of smoke from cooking and heating hearths, making it difficult to breathe or see (Sagard, 1939; Smith, Williamson, Fecteau, & Pearce, 1979; JR 10: 91-93). These contested ethnohistorical observations fail to account for seasonal sleeping preferences or even actual longhouse height, which, if architecturally higher as Wright suggests, would have kept the smoke layer well above standing height (1995).

Further, using references from oral history, the common Iroquoian building measurement is believed to have been 1.5 metres in length, equal to the normal size of a body in the sleeping position (Allen & Williams-Shuker, 1998; Kapches, 1993). Based on the archaeological record, Dodd found that the standard range of the sleeping compartments would have been 1.5-2 m in depth, judging from the bunk-line pole positions (1984). This is consistent with French missionary descriptions of the time: the missionaries' own average height in the 16th and 17th centuries, about 1.6 m, was roughly the same as that of their Iroquoian hosts (Komlos, 2003). Others have suggested, primarily in fictional narratives, that family members also slept on the top bunk.

Therefore, based on support post positioning within the archaeological record, it is generally accepted that sleeping platforms/family cubicles were generally 1.1-1.8 m in width, 3.7-4 m in length and 1.8-2 m in height. The actual sleeping platform itself has been recorded to be anywhere from 0.30-1 m off the ground, with the roof of the platform, where personal storage is commonly thought to have been, 2 m from ground level.

Measurements_Metres

Our first attempt in Longhouse 3.0 had the bunk slats running the width of the platform in short 1.8-2.0 m poles. Keeping in mind that pre-contact Iroquoian longhouse builders only had the use of stone axes and fire for the initial harvesting of trees, the notion that they would chop multiple platform poles into even-length slats seemed like a considerable amount of work for relatively little benefit. In F.W. Waugh's Iroquois Foods and Food Preparation, he states:

A method described by David Jack was to tie some saplings around the tree, forming a small, scaffold-like structure. Sods were placed on this, water was poured over them and a fire built up below. By alternately hacking with stone axes and burning, the tree was finally cut through. If it was desired to cut it into lengths, a double pile of sods was made around the trunk where it was to be divided, and fire applied to the space between. Chief Gibson's description of tree-felling was essentially the same, except that, according to him, a quantity of rags was tied to the end of a pole and used for wetting the trunk and localizing the action of the fire. Both Lafitau and Kalm give similar descriptions, indicating the method to have been one in common use. *Lafitau, Moeurs des Sauvages Ameriquain, pt. 2, p. 110 & *Kalm, Travels, vol. II, p. 38. (1916; p. 8)

Thus, we decided that it was probably more efficient to harvest fewer but longer poles, which would act as the bunk platforms running horizontally along the length of the longhouse.
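The labour trade-off behind that decision can be roughed out numerically. The short Python sketch below compares stone-axe cuts for the two decking strategies; every figure in it (slat spacing, rail count) is our own assumption for illustration, not a number from the archaeological sources.

```python
# Back-of-the-envelope comparison of stone-axe workload for two ways of
# decking a 12 m bunk run. All figures are our own assumptions for
# illustration -- none come from the archaeological record.

bunk_run_m = 12.0       # bunk length along one side of the longhouse
slat_len_m = 1.8        # cross-wise slat length (platform depth)
slat_pitch_m = 0.15     # assumed centre-to-centre slat spacing
rails_per_side = 8      # lengthwise 12 m poles per side (our estimate)

# Cross-wise decking: every slat needs its own fire-and-hack cross-cut
# in addition to the felling cuts for the source saplings.
slats_needed = round(bunk_run_m / slat_pitch_m)   # 80 slats
cross_cuts = slats_needed                         # one cross-cut per slat

# Lengthwise decking: each long rail needs only its felling cut.
lengthwise_cuts = rails_per_side                  # 8 cuts

print(slats_needed, cross_cuts, lengthwise_cuts)  # 80 80 8
```

Even under generous assumptions, the cross-wise approach demands roughly an order of magnitude more cutting, which is why the lengthwise rails won out.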

Also keep in mind that poles were generally harvested at around 8-12 m in length, and that White Ash was likely used for the sleeping benches. White Ash tends to grow straight with few branches and maintains a consistent diameter even at length. According to the USDA silvics manual (http://www.na.fs.fed.us/pubs/silvics_manual/volume_2/fraxinus/americana.htm), a 20-year-old White Ash will generally be 4 inches (10 cm) in diameter and 12 m in length. So for a 24 m longhouse, we could run two 12 m long, 10 cm diameter poles end to end as sleeping platform support beams. My estimate would be 16 beams (8 for each side of the sleeping platform). The diameters had to be substantial enough to support at least 400-500 lbs of weight (3-4 people) without buckling in the middle, and the poles long enough to be tied down at both ends, and likely in the middle, to the main structural elements.

Double_supports

In switching the direction of the poles, however, we quickly realized that a couple of additional enhancements to the bunking system could reinforce the poles and handle the weight of family members and their daily activities on the platforms. Additional support poles were added at the major support posts (see above), and Craig suggested that it would have been better to tie down such long poles in the middle to keep them from shifting (see below).

Middle_Strapping

Posts (anything in the ground was cedar) and beams (White Ash) were typically tied together using basswood cordage (wood rope). J.V. Wright supports this approach, although we don't have much visual or oral history to back it up. Hitches or knots aren't explained at all in the historical accounts, but a 1500s image shows a cross hitch/knot where the posts were lashed together (http://www.virtualjamestown.org/paspahegh/structure8.html). We used a threaded looping knot and will use the cross hitch for the major support poles.
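The 400-500 lb load figure above can be sanity-checked with a textbook beam formula. This is only a rough sketch: the span, the load sharing between poles, and the ~12 GPa modulus for white ash are all generic assumptions on our part, not measured values.

```python
import math

# Rough structural check: midspan sag of a simply supported round beam
# under a centre point load, delta = P * L**3 / (48 * E * I).
# Material and load figures are generic textbook assumptions.

d = 0.10                    # pole diameter in metres (the 10 cm average)
span = 4.0                  # metres between tie-down points (assumed)
E = 12e9                    # Young's modulus of white ash, ~12 GPa (approx.)
load_n = 2200 / 2           # ~500 lb (~2200 N), assumed shared by 2+ poles

I = math.pi * d**4 / 64     # second moment of area of a circular section
sag = load_n * span**3 / (48 * E * I)

print(f"midspan sag ~ {sag * 100:.1f} cm")   # ~ 2.5 cm
```

A sag of a few centimetres under several people's weight suggests the 10 cm ash poles were plausible as bunk rails, especially once tied down at mid-span as Craig suggested.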

Another issue on our first try was the rounded look of the ends of the poles. Obviously they wouldn't have been uniformly rounded, so we attempted to roughen up the ends of the poles a little more, while recognizing that over time and use, the ends themselves would become rounded and dull. There aren't many visual references available for wood cut by stone tools, but Sensible Survival had a blog post on how to make a stone axe. Below is an image from that blog post which clearly demonstrates how rough the ends of a pole would be.

12 tree cut 5

Below is a still frame from a YouTube video by freejutube, which shows a larger-diameter tree that has been freshly cut by a stone axe. As discussed above, the effort is extensive even to cut small-diameter trees, and the finished product is substantially rough in texture and feel.

maxresdefault

The image below has two end caps that haven't been treated, while the middle end caps have been modelled to mimic the roughness. A texture map will be applied to further enhance the visual look.

LengthWise_Bunks_Ends

Although we will talk further about these little details, a lot of this fine detail will be lost in the final gaming environment, mainly because of lighting effects and the need to reduce model complexity so the game runs in real time. However, seen or not, we are trying to logically address all of the visual elements that may be representative in this virtual reimagination of the archaeological record.

 

Another part of the last blog's discussion was the question of whether bark was removed from the support posts and bunking poles or left on. This is obviously pure speculation, because the oral, historical and archaeological records contain no information either way. The general consensus among commentators was that removal of the bark would have been preferred. Completely by accident, when I was starting to enquire about the storage of foodstuffs within longhouses, Dr. Jennifer Birch suggested a great quasi-ethnographical account by F.W. Waugh entitled Iroquois Foods and Food Preparation, written in 1916 (mentioned above). In it, Waugh speaks extensively on the use of bark for a multitude of household and work-related tools; so much so that it seems impossible that Iroquoian longhouse builders wouldn't have also harvested the bark for other needs prior to building the longhouse. In the latest test below, among tests of possible bedding, we ensured that the bark was either partially or almost entirely stripped from the poles. In addition to the removal of the bark, the next step will be to add dirt, creosote, hand prints and other stains to the exposed wood to give the benches a lived-in feeling.

screenshot005

Additionally, we started looking at what the potential bedding would be. Again, there isn't much written on the subject, but everything from cedar boughs and woven mats to various furs has been suggested. Originally we thought Black Bear or Grey Wolf (current species that inhabit Southwestern Ontario), along with the common Deer, would be represented in the form of bedding. However, the faunal (animal) remains within most archaeological sites near the Lawson Site area include limited or no Black Bear or Grey Wolf skeletal remains. Deer, along with medium-sized fur-bearing animals such as Raccoon, Rabbit and Beaver, is much more representative.
The test image below shows a mixture of bear, wolf and deer.

screenshot006

Upon further discussion, we decided the next iteration would be a mixture of cedar boughs and primarily deer skin for bedding material. As discussed above, the top level of the bunk may or may not have been used as a sleeping platform. The historical references suggest that the smoke layer sat somewhere in the 4-5 ft range within a longhouse when all of the fires were going. Ron Williamson reports from an experiment done at Ska-Nah-Doht in the middle of winter during the 1970s that, when a few warming and cooking fires were at full capacity within the reconstructed longhouses, the smoke was so dense it was difficult to breathe or see. Based on the references from the Jesuit Relations and Ron's experience, I would speculate that the top bunk was used primarily for storage, and thus for our next round of renderings we'll start placing household objects that might have been stored there.

screenshot007

At this point, the next stages will be to add cubicle walls, the exterior walls, roofing, fire hearths and vestibules. Again, there are several roofing methodologies and theories that can be visualized and easily reconstructed in 3D, as we've seen in Longhouse 1.0 and Longhouse 2.0; however, we will go with the Kapches model of bent wall poles that terminate at the roof's centre, forming an arbour effect along the roof line. Our decision will be discussed further in the next few posts, but for now we have provided one vision of how the initial internal structure may have been represented within Northern Iroquoian longhouses of the 15th century.

Longhouse 3.0

In starting our virtual archaeology project to visually reproduce a 15th century virtual Iroquoian longhouse from the archaeological record, our assumption right from the beginning was that we would follow the process J.V. Wright initiated so many years ago when reconstructing a longhouse from the archaeological record. Through experimental archaeology, Wright used the exact pole positions of an excavated longhouse floor at the Nodwell site to position and build the longhouse. Pole diameters were matched with the archaeological record; however, certain logical decisions were made in the building process to determine which archaeological post hole positions were relevant for the rebuild.

Nodwell A

Traditionally, if a longhouse was to be physically rebuilt from the archaeological record, the existing pole positions would act as a guide in the reconstruction process, and as in Longhouse 1.0 we intended to use existing excavation maps to guide our 3D virtual longhouse build. However, our pivoted goal was the phenomenological experience of being in and around a longhouse within virtual space. Thus, we chose instead to use substantial quantitative data to build a representative version of a Northern Iroquoian longhouse just prior to, or at the point of, European contact in the 15th century.

As discussed in Longhouse 1.5, J.V. Wright, Mima Kapches, Dean Snow and Christine Dodd, along with Ron Williamson, John Creese and others, generally agree, based on the archaeological data, that there is a basic building process that Iroquoian builders used when constructing longhouses. What differs, based on historical European visual and written accounts, the oral histories and language of the Iroquoians themselves, and the speculations of practicing archaeologists, is how the roofing structure was built and the possible positioning of the sleeping platforms. I will go into more detail later, but these are just a small sample of the research questions being raised as we start to build.

Following Dodd, the basic building blocks of a 15th century Northern Iroquoian longhouse are:

  • An average of 18 m in length.
  • Height as tall as the width (note that the archaeological record only provides data on width; oral history provides data on height). Generally the average width is 7.6 m.
  • A centre corridor width of 4.0 m.
  • Sleeping platforms/family cubicles generally 1.1-1.8 m in width, 3.7-4 m in length and 1.8-2 m in height.
  • The actual sleeping platform itself recorded anywhere from 0.30-1 m off the ground, with the roof of the platform, where personal storage is commonly thought to have been, 2 m from ground level.
  • Average interior support posts of 8.6-9.1 cm in diameter.
  • Exterior wall posts of 1-3 cm in diameter, with on average 4.5 poles per metre along the length of the longhouse.
  • Typical fire hearth spacing of 2.9-3.6 m between hearths, each hearth supporting two families, one on either side of the longhouse.
  • Exterior roof and wall shingles of 1 x 2 m cedar or elm.
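For our procedural build, these averages were most useful captured as a single parameter set. The sketch below (names and derived counts are our own interpretation, not excavated data) shows how a few whole-house quantities fall out of Dodd's numbers:

```python
# Dodd's average building parameters as one editable parameter set, so a
# procedural model can be regenerated whenever an assumption changes.
DODD_AVERAGES = {
    "length_m": 18.0,
    "width_m": 7.6,                    # height taken as equal to width
    "corridor_width_m": 4.0,
    "cubicle_w_m": (1.1, 1.8),
    "cubicle_l_m": (3.7, 4.0),
    "cubicle_h_m": (1.8, 2.0),
    "interior_post_diam_cm": (8.6, 9.1),
    "wall_post_diam_cm": (1.0, 3.0),
    "wall_posts_per_m": 4.5,
    "hearth_spacing_m": (2.9, 3.6),
}

def derived_counts(p):
    """Counts implied by the averages for an average-length house."""
    wall_posts = round(p["length_m"] * p["wall_posts_per_m"]) * 2  # both walls
    mean_gap = sum(p["hearth_spacing_m"]) / 2                      # 3.25 m
    hearths = int(p["length_m"] // mean_gap)
    families = hearths * 2      # one family on each side of every hearth
    return wall_posts, hearths, families

print(derived_counts(DODD_AVERAGES))   # (162, 5, 10)
```

Even an average 18 m house implies something like 160 wall posts and five hearths serving ten families, which gives a sense of the labour and occupancy the measurements represent.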

The difficulty is that most academic literature describes longhouses in a similar fashion, leaving the reader to visually imagine what a longhouse might look like.  How do these measurements equate visually if they were to be represented?

In addition to the basic measurements that Dodd was able to collate from the archaeological site data of over 400 Iroquoian longhouse excavations, there is the debate over the roofing structure, which is highly dependent on the initial support posts, or internal skeletal structure, of the longhouse. Currently there are three major internal structural forms that have been theoretically suggested to account for the external visual differences in longhouse construction described in historical accounts (Snow, 1997; Williamson, 2004):

  • Wright's reconstruction of a longhouse at Nodwell suggests a π-shaped internal support infrastructure, which would have supported a 4:1 height ratio between the main building and a separate arbour roof (1971, 1995);
  • Based on extensive European historical accounts and two specific visual representations of Seneca longhouse floor plans from the 1700s, Snow suggests that longhouses might have had a 60/40 split between the longhouse body and a separate upper roof (1997);
  • Kapches, using Iroquoian oral history, suggested that the longhouse walls and roof might have been entirely integrated, with long exterior posts lashed at the centre roofline forming a continuous arbour effect (1994).

Snow_framing

So our initial variables in the construction of a digital longhouse are: width/height, length, inner support post diameters and exterior roofing/framing style. As discussed in Longhouse 1.0, several 3D animation and modelling software applications can create a dependent procedural modelling environment: basically, the ability for the modeller to change any parameter at any time during the model creation process. In traditional animation and VFX production, this flexibility would be severely constrained due to the danger of clients changing their minds and the massive technical interdependencies involved in creating assets for a film or TV production. In this particular project, however, the procedural approach does allow us to experiment visually with the known archaeological data.

Longhouse_v1a

Using Autodesk Maya, we started with the initial framing design based on the average building parameters discussed. As seen in the image above, basic geometry represents the interior and exterior framing elements, and a metric measurement standard was used within the 3D modelling environment to mimic the size and object relationships of real-world data. Ten-centimetre-diameter interior support posts were used, with 3 cm diameter exterior wall posts bent in an arbour effect, similar to the Kapches theory of longhouse construction.

Longhouse_v1b

As seen in the image above, we ensured that the longhouse height was equal to its width and that the sleeping platform widths and the corridor width were distributed appropriately based on the averages within the archaeological record. On the left of the image, the support posts were positioned roughly 4 m apart, which corresponds to both archaeological and written data.
Lastly, the middle section of the image demonstrates the average number of exterior support poles per metre.

ao_test

With regard to the sleeping platforms, written accounts from the Jesuit Relations indicate that Iroquoian longhouse members would sleep head outwards toward the main corridor (and the heating source) and feet towards the exterior walls. The Jesuits indicated that the Iroquois men were, on average, their own height or slightly larger. The average height of a French male in the 1500s was 5'6", which is just a few inches shorter than the normal 1.8 m width of the sleeping platforms, allowing individuals to lie fully prone on the bed. The image above is a previous test to determine whether a 5'6" 3D character could lie comfortably within a 1.8 m wide platform, supporting the observations of sleeping berth dimensions the Jesuit priests discussed in the Relations.

Longhouse_v3a

Our next iteration of the model added placeholder bunks, supported by long horizontal posts running the length of the longhouse, with short platform and roof slats for the family cubicles. The gap in the upper roof is based on several modern interpretations of how families might have accessed goods typically stored above and/or the "loft" for additional sleeping. Currently the diameters of all the wooden elements are uniformly 10 cm. Also, we have had to rely on common sense to determine how the bunk itself was constructed, as there are no written, oral, visual or archaeological references that describe this building process.

Longhouse_v2a

In an attempt to better understand how the bunks might have been constructed, we borrowed the technique of making an "h" support system on either side of the main corridor from the modern architectural test version in Longhouse 2.5.
This made complete sense, as it would be almost impossible for the outer 3 cm diameter exterior wall posts to support the weight load of not only the bunks, but the numerous people and goods they would hold.

Longhouse_v3b

This next image was a simple ambient lighting test. Basic grey non-reflective shaders (surfaces) are used to determine how the light diffuses, as well as to identify any potential modelling or lighting issues early on. Additionally, we added a slight taper to the support posts, from 10 cm at ground level to roughly 9.5 cm at the top, to mimic the natural change in diameter as the tree matured.

Longhouse_v3c

A closer image reveals the typical 3D uniformity of assets that are built and copied. What immediately sticks out is that the bunk poles and the other pole surfaces are flat-faced tubes, lacking any taper, diameter variation or surface detail: a sparse virtual environment that lacks any connectedness to the real-life building materials or even construction techniques. Our next task was to add some visual variables in order to convey a more realistic material environment.

screenshot000

The first remodelling request was to give the support posts more thickness. In the previous images, when the poles were visualized at the upper range of Dodd's average 10 cm thickness, they looked too thin to support the benches. Now, this might have been my own artistic interpretation of what I was seeing, but after talking with Ron Williamson, he suggested that data gleaned recently from the 99 longhouses at the massive Mantle Site indicate that Mantle's inner support posts actually averaged 15 cm in diameter.
We applied this diameter, along with an adjusted taper along the length, which produced a more satisfying visual result.

screenshot001

As we started applying textures to our wooden posts, the first question was: did the Iroquois strip bark from the posts before they were erected, and would that act as a fire safety measure given the proximity of the support posts to the fire hearths? After discussions with Dean Snow, Neal Ferris and Ron, along with an exhaustive search of the historical writings, the answer was inconclusive. A chance discussion with Namir Ahmed about the problem led to the suggestion that bark might have stayed on a support post as it was erected in place, but that over time, out of boredom or necessity, the bark would have been stripped away. Thus we mimicked bark removal in the areas directly adjacent to the sitting or lying parts of the sleeping bunks, where it would be easily removed.
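As an aside, the sleeping-berth arithmetic from the Jesuit Relations discussion above is easy to reproduce; the conversion constants are standard, and only the 5'6" figure and 1.8 m platform width come from the sources cited earlier.

```python
# Unit check for the sleeping-berth argument: does a 5'6" (1500s French
# male) figure fit prone across a 1.8 m wide platform?
FT_TO_M = 0.3048
IN_TO_M = 0.0254

occupant_m = 5 * FT_TO_M + 6 * IN_TO_M   # 5'6" ~= 1.68 m
platform_w_m = 1.8                       # recorded berth width

clearance_cm = (platform_w_m - occupant_m) * 100
print(f"occupant {occupant_m:.2f} m, clearance {clearance_cm:.0f} cm")
# occupant 1.68 m, clearance 12 cm
```

A dozen centimetres of clearance is consistent with the Jesuit observation that a person could lie fully prone with head to the corridor and feet to the wall.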

An additional layer of texture mapping will be applied later to visually suggest a buildup of creosote which would have most definitely been present within the rafters of longhouses as numerous fires would have been contributing to the smoke layer within the structure.

Additionally, no tree grows perfectly straight. Thus the 3D posts were given a slight randomness and curvature to represent typical tree growth patterns. Tree knots and protrusions on the support posts were also added in an attempt to better visualize the natural material being used.

screenshot002

Finally, the end surfaces of the poles were rounded off in an attempt to visualize a rough cut made by stone tools. Texture maps with lateral cracking were added to the ends to mimic the drying of the wood as it aged. Test 3D cordage was added to determine how the poles would have been secured to the main supports.
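The per-post variation described above can be sketched procedurally. This is an illustrative stand-in, not our actual Maya code: it generates a ring profile for one post with a randomized base diameter, a taper toward the top, and a gentle bow, so that copied cylinders stop looking identical. All of the ranges are our own assumptions.

```python
import math
import random

# Illustrative sketch of per-post variation: each post gets a slightly
# random base diameter, a taper toward the top, and a gentle bow.
def post_profile(base_diam=0.15, height=7.6, segments=8, seed=None):
    rng = random.Random(seed)
    diam = base_diam * rng.uniform(0.9, 1.1)   # natural size variation
    taper = rng.uniform(0.93, 0.97)            # top diameter / base diameter
    bow = rng.uniform(0.0, 0.05)               # max lateral offset in metres
    rings = []
    for i in range(segments + 1):
        t = i / segments                       # 0 at the ground, 1 at the top
        ring_d = diam * (1 - (1 - taper) * t)  # linear taper with height
        offset = bow * math.sin(math.pi * t)   # bow peaks at mid-height
        rings.append((t * height, ring_d, offset))
    return rings

for z, d, x in post_profile(seed=42):
    print(f"z={z:4.1f} m  diam={d*100:4.1f} cm  offset={x*100:4.1f} cm")
```

Feeding each ring's height, diameter and lateral offset to a lofted cylinder (in Maya or any modeller) produces posts that taper and lean individually while staying within the archaeologically plausible diameter range.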

In visualizing the initial framing process, we were able not only to raise more questions about traditional longhouse construction, but to experiment in order to arrive at variants from the existing data. We immediately recognized that building bunks with multiple 1.8 m length poles for the platform and roof surfaces would be a highly labour-intensive endeavour; it was more likely that fewer, longer poles were used along the length of the longhouse instead of across the width. Also, our textures and modelling of the end caps of all poles and posts had to be rougher in order to mimic the use of stone tools. Lastly, issues like the cordage type, and even the knotting of the ropes, will have to be researched further.

 

 

Longhouse 2.5

Longhouse 2.5 came about during a long conversation with Ron Williamson of ASI. Ron has been very generous with his time in discussing some of the issues of longhouse archaeology, the theories and methodologies for data acquisition, as well as some of his personal experiences in building longhouse reconstructions. True to his supportive nature, Ron provided me with a set of architectural drafts he had commissioned a few years back: a modern architectural interpretation of how a longhouse might be built using modern tools but with a mix of current and traditional building materials.

Plan 1sm

The drawing represented an excellent interpretation of the archaeological data from a structural perspective. It also blended in the more simplistic building code requirements of the time, which have since become the bane of recent attempted longhouse reconstruction projects (see Crawford Lake Longhouse Village; personal communication with Conservation Halton staff).

Plan 2sm

I wanted to get an understanding of 3D building techniques from an architecturally trained specialist. Jamie Kwan, a recent architecture graduate from Ryerson University and a current Master of Digital Media student, agreed to take on the challenge. Using the plans provided, Jamie reinterpreted the material within Rhino3D, a robust but very simple-to-use modelling package.

structure

As in real-life construction, the 3D material has a nature of its own. Jamie encountered some of the same questions we had been chewing over since starting Longhouse 1.0: placement of the inner and outer support posts, spacing, bench attachments and smoke hole positioning, to name a few.

exterior

The difficulty of modelling the curvature of the roof has been discussed at length by Wright, Kapches and Snow, and it is just as apparent when interpreting it in 3D virtual space. As soon as the 2D plans were visualized in 3D, questions began popping up with regard to how our final longhouse project would be interpreted.

RhinoLonghouse

However, the low resolution shaded rendering does allow the viewer to experience the potential expansiveness of this modern interpretation of a traditional longhouse.  One can also start to envision populating the space with potential cultural material, textures, surfaces, atmospherics and light.

The interpretation of an "h" framing methodology has also prompted us to look for support posts immediately along the outer walls in the archaeological record, both when excavating longhouses and when reviewing existing data sets. Overall, Jamie and Ron's original plans provided a unique opportunity to start tackling longhouse construction methodologies from a modern architectural design perspective.

Longhouse 2.2

Longhouse 2.2 became a watershed moment in our research, primarily due to two seemingly inconsequential decisions: a port of the 3D assets to the Unreal game engine, and a chance tour by local high school students.

The purpose of the Loyalist College animation students' time at the SA was to help develop a pipeline for the mass scanning of 3D artifacts (see Longhouse 2.0). However, as the co-op students were winding down their two-week pre-training project of building and rendering a 3D longhouse, there was a delay in the delivery of the 3D scanning equipment. A decision was made to keep the students enhancing their skills until the scanning technology was available by attempting to port the longhouse test assets over to an Unreal game engine to see if the interactivity between player and environment would work out. At this time the students decided to include both the new section of the Museum of Ontario Archaeology and Sustainable Archaeology and the actual Lawson archaeological site.

Lawson Site Map

They took the original excavation map and started positioning the 3D test longhouses within a palisaded environment. Although the physical reconstruction on the grounds of the Lawson site was semi-accurate in terms of the front palisade, there was only one fully reconstructed, but severely deteriorating, longhouse. Thus, the students needed to map out how many longhouses they would represent digitally, and the actual palisade sequencing, if there was to be any interaction for the users.

site_overview

Once a rough plan was drawn up, the Unreal gaming environment was populated with longhouse and accessory assets. The virtual palisade was copied from the existing one and enhanced to what the Museum thought represented an expanded site. Atmospherics, additional assets, and land and sky proxies were added to create a full, all-encompassing environment. To incorporate their 3D scanning outcomes, virtual activity stations were built which, when activated, would inform the player of the material or social importance of the space or artifact. With the test successfully ported over to a gaming environment, the students began their 3D scanning research. Although inaccurate in many ways archaeologically, it did provide an interesting approach to non-scientific visualization.

The real "magic" happened after the game was completed. During a random local high school class visit to the Museum of Ontario Archaeology and Sustainable Archaeology, Namir Ahmed, the project lead, was explaining the work the animation unit was doing. Of course, the class wanted to test out the game, and most of the excitement grew around the interactivity of an environment these students, the first generation born completely immersed in digital technology, were thoroughly accustomed to. The "a-ha" moment came when the students, after playing the video game, attempted to relive the same virtual experience outside in the partially reconstructed palisade and single longhouse! Only at that moment did we realize that the research was not about accurately reconstructing longhouses, but about connecting stakeholders to the archaeological landscape through real-time, virtual, phenomenological experience.

Longhouse 2.1

Longhouse 2.1 was originally intended as a preliminary introduction for our 10 Loyalist College animation interns to basic archaeological research and the visualization of archaeological material. As Sustainable Archaeology is located directly within the Museum of Ontario Archaeology, the students had direct exposure to the partially reconstructed Lawson Neutral Iroquoian longhouse village.

picture of longhouse

Additionally, they were within driving distance of the Ska-Nah-Doht Village & Museum, a reconstructed Early Iroquoian longhouse village site, which provided an excellent example of different architectural styles as well as interpretive visions.

Skanahdoht-Longhouse

The students had the opportunity to physically experience the reconstructed spaces, understand the materials used in the reconstruction and get a sense of the sound, light and atmospherics produced in such a building.

DSC_0200

Following traditional Film & TV methodology, the students used these physical references and the archaeological data from the Lawson site to start envisioning what a 3D representation of a Longhouse would look like.

longhouse_alanb

In representing what was essentially a reinterpretation of the archaeological data, the risk of this process is that multiple voices and competing visions, from the initial physical longhouse construction to the reimagined 3D representation, are played out visually by each artist.

longhouse_interior_light1

Yet an opportunity exists in that the assets, as we like to call them in 3D lingo, are easily moved, reconfigured or even reinterpreted, allowing for a more user-centric approach. Even in the two artists' renderings above, little details like the direction of the support slats on the bench seating differ, each representing a different interpretation of the physical reconstructions of the longhouses visited. Within 3D space, these tests can be played out with little effort, representing an opportunity for public stakeholders to engage with the archaeological record through their own perceptions.

Additional 3D models were made to represent the typical material potentially in daily use within and around a longhouse.  These assets then become props within the greater phenomenological experience; moreover, through 3D scanning, artifacts from the actual archaeological landscape can now inhabit the virtual archaeological landscape as well.

hanging_tobaccopotterycedartreecornstalks

Construction of the virtual longhouse became an interpretation of the existing physical reconstructed houses, the visual historical material and some archaeological data.  Again, the purpose was not to accurately recreate a longhouse per se, but to see what process these trained animators would use to reconstruct a longhouse within 3D space.

LHSpin200

As the models began to materialize, the students started asking the same questions posed by Wright, Kapches and Snow.  Additionally, the challenges of modeling the objects in 3D also determined the visual outcomes, or interpretation, of the subject matter in question.

LHSpin452

Modeling within 3D space sometimes lacks the randomness that real life constantly provides.  Assets are replicated, such as the cedar shingles in the image above, and thus the interpretation loses some of the key features we would assume to be present in a typical longhouse construction.

LHSpin610

The final product, although representative of the subject matter, is in essence a copy of a copy.

 

This was a wonderful first run for the students and the archaeologists alike.  It provided a unique opportunity for SA to see the production process of a traditional 3D animation methodology, and it prompted the very same questions archaeologists would ask themselves when visualizing the archaeological record.  It also provided a jumping-off point to explore the necessity of real-time, user-defined and engaged content delivery systems.

The exercise provided the assets needed to continue the development process.  In Longhouse 2.2, we move into the real-time, user-discovery environment.  It also represents a major pivot towards a sustainable and interactive approach to 3D visualization of archaeological material.

Longhouse 2.0

Longhouse 2.0 started in the summer of 2012 as a joint project between Dr. Neal Ferris at Sustainable Archaeology (SA) and theskonkworks (SKW) to explore the possibilities of developing a mass scanning pipeline for 3D artifacts.  Working with Namir Ahmed, a Master's student in Archaeology at UWO with previous animation and archaeology expertise, this project was one of the first MITACS-granted research initiatives to combine industry and archaeological research needs.  The project was twofold in its application: to work with animation students who understood the technology but not the content, and to use existing Film & Television techniques to develop a mass scanning pipeline.

SAScanTeam

The project recruited 10 Loyalist College Animation Program Co-Op students to intern at Sustainable Archaeology for a 14-week period.  The students were all in their last year of studies and, as such, had a good working knowledge of 3D animation techniques, tools and basic pipelines.  SA provided the equipment, which consisted of several variants of professional 3D scanners, and SKW provided production management, pipeline expertise, and 3D animation equipment and software.

The research team proved highly successful, demonstrating not only that archaeologists and animators could effectively and quickly work together on very complex systems and data, but also that the SA facility, when properly provisioned, could easily scan over 100 artifacts per week.  The pipeline itself consisted of developing protocols for tools specific to artifact sizes, complexity and surface quality, as well as the practical application of data acquisition, lighting, mesh integration and texture mapping.  3D3 Solutions, a technology supplier, produced a case study which outlined the process (case-study-SAAU-3D3Solutions-final).

archaeology-3d-scanning-ceramic_rim-quote-resized-600

This study proved quite valuable in understanding the scanning needs of artifacts and how to manage both the data and the expectations and limitations of the technology.  Our research also spawned a paper for World Archaeology entitled Sustainable archaeology through progressive assembly 3D digitization.  However, prior to starting our 3D scanning pipeline research, the students warmed up with an ancillary project in which they applied standard Film & TV development techniques to replicate a longhouse in rendered 3D space, which became the start of the phenomenological gaming research into user engagement within extant archaeological landscapes.  Thus Longhouse 2.1 began as an exercise to engage the students with the archaeological record.


Longhouse 1.75

Longhouse 1.75 came about as a request from Dr. Neal Ferris to organize and present at a session on Virtual Archaeology at the Canadian Archaeological Association (CAA) Conference in London, Ontario, in May 2014.  The presentation, entitled VFX Methodologies for Scientific Visualization in Archaeology, was an opportunity to expand on some of the tools and methodologies being developed, as well as to provide some insight into the world of visual effects and animation for the archaeological community.  Once again I worked with longtime VFX Technical Director Andrew Alzner to establish a more robust procedural virtual longhouse modeling tool based on our previous Longhouse 1.0 and Longhouse 1.5 research, and to start exploring the concept of the phenomenological experience of the viewer.  Several additional prototypes were designed; however, the technology and computing power required to create an essentially real-time, fully rendered visual tool put it out of reach of most general consumers and archaeologists alike.

Coincidentally, during this time, fellow colleague VFX Supervisor Noel Hooper had just completed VFX work for Yap Films Inc. and Dr. Ron Williamson, co-founder of ASI, on a new documentary called Curse of the Axe.  The documentary narrates the discovery of a European trade good found in a massive pre-contact Huron-Wendat palisaded longhouse village now called Mantle.  Beyond the discovery of the cultural material, the site itself is stunning in terms of occupation length, community size and the town- or city-like organizational systems apparent throughout the archaeological record.  Archaeological data indicate there were 98 longhouses within a 3-row palisaded enclosure occupying 9 acres of living space, which housed an estimated 2,000 inhabitants.  This town/city harvested over 60,000 trees over its lifetime to build the community and may have farmed over 80 square kilometres of land to feed its population.  The image below is a reimagined representation of a partially constructed longhouse created for the film's publicity.

mantle_reconstruction

Lost in the rush to embrace 3D visual effects in representing material culture, I failed at the time of the presentation to mention one of the first representations of Iroquoian longhouses in visual media: Bruce Beresford's 1991 movie adaptation of Brian Moore's novel, Black Robe.  The phenomenological experience of the practical set gave the audience a sense of what it was like to live within a communal longhouse.  Although heavy on practical effects and, some scholars would say, highly Eurocentric in vision, it alludes to how longhouse life might have been: densely populated, laden with everyday goods and heavily saturated with atmospherics such as smoke, firelight, dust and external light.  It was a gritty, artistic visual account of what longhouse living was like.

Blackrobe

Additionally, in 2012 Ubisoft released Assassin's Creed III (AC3), set during the Revolutionary War, which presented a new direction in experiential narratives.  AC3 included a main character of Haudenosaunee descent, and part of the gameplay included Haudenosaunee-inspired longhouse reconstructions.  Below are screenshots of the gameplay associated with some of the longhouse sequences.

ACLH2

In developing AC3, Ubisoft brought on Thomas Deer and Dr. Kevin White, both of Mohawk descent, to consult throughout the project.  Obviously, artistic license plays out liberally throughout AC3; however, there are some areas of longhouse design and construction which seem to correspond to the archaeological record.  In the image above, the shingles are roughly 1 x 2 metres in dimension, which corresponds to historical accounts and oral histories of longhouse building.  The entrance and the height are obviously designed for gameplay, yet the outer support latticework is suggestive of European historical accounts and drawings.

ACLH3

Although lost in the middle-back portion of the image above, we see a partially constructed longhouse missing the rounded vestibule of the finished versions in the front and to the right of the image, again acknowledging the archaeological record.  Although Dean Snow has indicated that Haudenosaunee longhouses were narrower, and subsequently lower in height, compared to Northern Iroquoian examples, these game examples are gigantic.  Nevertheless, AC3 allowed users to interact with the 3D environment and, in doing so, opened up possibilities for further expanding how the public could interact more effectively with the archaeological record.

The VFX for Curse of the Axe, the game design for Assassin's Creed III and the set design for Black Robe all focused on the esthetics of the narrative being told: the audience had to suspend disbelief in order to be enveloped by the story.  From a scientific perspective, our procedural longhouse model building methodology was more closely aligned with Paul Reilly's concept of Virtual Archaeology: the combination of actual archaeological data in the creation of 3D visualizations.  Yet our attempt to concentrate exclusively on the mechanics of the actual longhouse build lost sight of the personal experiences and narratives that today's public, and more importantly stakeholders, not only desire but expect.  A pivot had to occur in order to better embrace what we were attempting to develop visually within the archaeological environment.

In tandem with the traditional virtual archaeology approach to our longhouse research, two additional projects were started. Longhouse 2.1 explored more of the interactivity of the user within 3D gaming space and Longhouse 2.5 delved into the practical application of longhouse construction through the eyes of modern architects and architectural visualization.

 

Longhouse 1.5

Longhouse 1.5 was a further attempt to test the notion of user engagement through procedural model building within 3D space.  My understanding of the visualization of longhouses from the archaeological record arises principally from the work of four archaeologists: J.V. Wright, Mima Kapches, Christine Dodd and Dean Snow.  Due to the lack of any real physical evidence, models of longhouse use, style, agency and construction have been hotly contested for decades (Kapches, 1994; Snow, 1997; Williamson, 2004; Wright, 1995).  The work of these archaeologists, in combination with continued observations and challenges from other exemplary researchers, forms a base of understanding that helps to frame how longhouses were constructed.  Using Dodd's extensive quantitative research, gleaned from an exhaustive review of longhouse data derived from field excavations (1984), and the qualitative and quantitative observations of Wright (1971), Kapches (1994) and Snow (1997), among others, a basic template for the construction of longhouses emerges.  It is this template we seek to replicate virtually.

The integral structural element in any longhouse was its major support posts (Wright, 1971, 1995; Kapches, 1990, 1994; Snow, 1997).  These elements framed the interior structure, provided guidance for the construction of the living areas and supported the external shell of the longhouse.  Three major internal structural forms, or supports, have been theoretically suggested to account for the external visual differences in longhouse construction described in historical accounts (Snow, 1997; Williamson, 2004):

  • Wright’s reconstruction of a longhouse at Nodwell suggests a π shaped internal support infrastructure existed which would have supported a visual ratio of 4:1 in height between the main building and a separate arbor roof (1971, 1995);
  • Based on extensive European historical accounts and two specific visual representations of Seneca longhouse floor plans from the 1700s, Snow suggests that longhouses might have had a 60/40 split between longhouse body and a separate upper roof (1997);
  • Kapches, using Iroquoian oral history, suggested that the longhouse walls and roof might have been entirely integrated by long exterior posts lashed at the center roofline forming a continuous arbor effect (1994).

Snow_framing

It is clear that framing techniques would have varied from one Iroquoian group to another, and the material archaeological record is entirely devoid of any tangible references that could support or refute these framing theories (Snow, 1997).  However, when support posts are identified, they present a pattern that is consistently 5-15cm in diameter, with an average of 8-10cm (Dodd, 1984; Kapches, 1994; Snow, 1997; Williamson, 2004; Wright, 1971).  All framing theories support the notion that external walls were constructed by lashing pliable, smaller-diameter new-growth poles onto the internal framing structure (Dodd, 1984; Kapches, 1994; Snow, 1997; Williamson, 2004; Wright, 1971).

One of the main questions of architectural design that remains enigmatic is actual longhouse height.  It has been suggested, both academically and historically in the annals of European chroniclers, that a longhouse's height was equal to its width, but we have no archaeological evidence with which to verify this notion (Bartram, 1751; Kapches, 1994; Heidenreich, 1972; Snow, 1997; Thwaites, 2008; Wright, 1995).  We know, based on Dodd's extensive analysis of Huron and Neutral longhouses, that mean longhouse widths were between 6.5-7.2m (1984), with Wright (1971), Snow (1997) and others indicating a range of 6 to 7.5m as minimum and maximum width/height variables.

Archaeologically, total longhouse length is easily measured from the physical record when excavated (Dodd, 1984).  There is a substantial historical and archaeological range in length, from 5 to 72m, with unique examples both above and below that range, but Dodd and others have suggested a mean of about 19.8m for the most common longhouse lengths (Heidenreich, 1972).  Length is also correlated with the number of hearths within a structure (Dodd, 1984).  Champlain and Sagard reported seeing longhouses with 8 to 12 hearths, and the archaeological record supports this (Heidenreich, 1972); however, as Bartram also demonstrates, exceptionally long longhouses can have single hearths, fitting into the category of structural-use anomalies (Snow, 1997).  Varley and Cannon's (1994) work on hearth spacing, house length and use shows that hearth position and numbers are not always consistent within the archaeological record, and hearth positions could, and likely did, move throughout the interior of common longhouse structures (Heidenreich, 1972).  Generally, however, archaeologists acknowledge that most residential longhouses had 3-5 hearths, with two families sharing each hearth and a bark-enclosed raised compartment on either side (Allen & Williams-Shuker, 1998; Chapdelaine, 1993; Heidenreich, 1972; Wright, 1974).

Using low-resolution 3D proxy model objects within SESI's Houdini and the published longhouse architectural data from Dodd, Snow, Wright and Kapches, a 3D template was developed based on basic archaeological assumptions.  The sequence below is an example of the procedural engine, in which changing one variable, like height or width, also changes other variables that depend on those unique architectural features.  For instance, when the length increases, so does the number of fire hearths.
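The dependency idea behind that procedural template can be sketched in a few lines of Python.  The derivation rules and spacing values below are illustrative stand-ins loosely based on the published ranges discussed above; they are not the parameters of the actual Houdini network:

```python
# A minimal, hypothetical sketch of procedural dependency: one driving
# parameter (length) ripples through the dependent variables (hearth and
# compartment counts). Values are illustrative only.

def longhouse_template(length_m: float, width_m: float = 7.0) -> dict:
    """Derive dependent architectural variables from driving parameters."""
    # Height assumed equal to width, per the ethnohistorical accounts.
    height_m = width_m
    # Roughly one hearth per ~6 m of interior length (illustrative spacing).
    hearths = max(1, round(length_m / 6.0))
    # Two families per hearth, one sleeping compartment per family.
    compartments = hearths * 2
    return {
        "length_m": length_m,
        "width_m": width_m,
        "height_m": height_m,
        "hearths": hearths,
        "compartments": compartments,
    }

short_house = longhouse_template(18.0)
long_house = longhouse_template(36.0)  # only the driving parameter changed
```

Doubling the length doubles the hearth and compartment counts without any other edit, which is the ripple-through behaviour the procedural engine demonstrates.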

Although not clear in the video above, we were also able to switch between the Wright, Kapches and Snow interior support framings automatically, with all other architectural elements rippling through accordingly.  The initial goal of this test was to see if a procedural model could be developed from the archaeological data.  Additionally, user controls were created to allow other stakeholders to change parameters easily without having to know 3D animation.

A second test was conducted using the same methodology, but with further refined controls and additional architectural elements.  In this attempt the model elements were greatly simplified to allow for faster rendering and procedural calculations when changes were made in real time.  However, the model was not "birthed" from an actual archaeological site map; it became a representation of the data presented by Dodd, Wright, Kapches and Snow, based on the architectural variables present in the archaeological record.

This exercise provided a unique opportunity to create new tools that could be deployed to the general public as a means of archaeological engagement.  With further work on the interface and real-time optimization, we can envision a deployable interactive tool set that could be installed in museums or delivered through an App/Web platform for school curriculum needs.  From a research perspective, it provides an excellent base for the design, development and implementation of a 3D, virtual phenomenological experience of the archaeological record.  Next, we expanded on this procedural methodology to test other longhouse construction variables in Longhouse 1.75.

Works Cited:

Bartram, J. (1751). Observations on the Inhabitants, Climate, Soil, Rivers, Productions, Animals, and Other Matters Worthy of Notice, Made by Mr. John Bartram, in His Travels from Pensilvania to Onodago, Oswego and the Lake Ontario, in Canada. Printed for J. Whiston and B. White, London.

Chapdelaine, C. (1993). The sedentarization of the prehistoric Iroquoians: A slow or rapid transformation? Journal of Anthropological Archaeology, 12(2), 173-209.

Dodd, C.F. (1984). Ontario Iroquois Tradition Longhouses. Archaeological Survey of Canada, Mercury Series 124. Ottawa: National Museum of Man.

Kapches, M. (1994). The Iroquoian longhouse architectural and cultural identity. Meaningful Architecture: Social Interpretations of Buildings, 9, 253.

Heidenreich, C.E. (1972). The Huron: A Brief Ethnography. Discussion Paper Series No.6. Toronto: Department of Geography, York University.

Snow, D. (1997). The Architecture of Iroquois Longhouses. Northeast Anthropology, 53, 61-84.

Thwaites, R. G. (1896-1901). The Jesuit Relations and Allied Documents, 73 Volumes. Burrows, Cleveland, Ohio.

Varley, C., & Cannon, A. (1994). Historical inconsistencies: Huron longhouse length, hearth number and time. Ontario Archaeology, 58, 85-101.

Williamson, R. F. (2004). Replication or Interpretation of the Iroquoian Longhouse. The Reconstructed Past, John H. Jameson, Jr., editor, 147-166.

Wright, J.V. (1995). Three dimensional reconstructions of Iroquoian longhouses: A comment. Archaeology of Eastern North America, 9-21.


Longhouse 1.0

Longhouse 1.0 began in the winter of 2013 through a series of discussions with long-time 3D animation and VFX collaborator Andrew Alzner, as a starting point for my Ph.D. research into phenomenological experiences within virtual environments (see my blog on methodology & research).  Andrew and I had met in 1996 at Side Effects Software (SESI) and had travelled to Japan and LA regularly for customer support.  SESI was one of the three original animation and VFX software companies founded here in Canada that dominated the animation and VFX production industry.  SESI was known for its procedural animation methodology, which allowed users to build 3D objects, animation or VFX sequences through a dynamic, interrelated and real-time pipeline in a software application called Houdini.  Basically, you built a 3D object using operators, each representing a single stage in the modeling process.  If you changed one operator, the change would ripple through all of the operators, essentially creating a living document of the model being made in 3D.

Procedural 3D Modeling is a dynamic building block technique for organically creating digital assets.  The proposed pipeline has been specifically designed to allow stakeholders (public, private, academic and descendant) to access a procedural 3D model library in order to build in real-time and within 3D space, interactive visualizations of extant cultural heritage structures. Beyond initially allowing users to “build” their own archaeological engagement, stakeholders are able to experience the association between the physical structure, spatial relationships and the phenomenological experiences of these archaeological landscapes. These built digital assets can also be reapplied within any numerous engagement tools such as mobile Apps, Internet Websites or even within 3D gaming engines, further extending the narrative beyond the individual’s brief but personal archaeological experience.

In simple terms, procedural modeling is a process in which all of the steps needed to create an object in 3D are held in a dynamic relational network of building blocks, allowing the user to alter, change or experiment with the final model at any stage of the building process.  In this example, a picture of a pot is superimposed in the display window.  A NURBS spline is built by placing points along the outline of the pot, and then a procedural operator called a "revolve" skins that single outline spline 360º, creating the 3D surface.  Finally, a transform operator is inserted in the middle of the procedural network; when one of its parameters changes, that change affects the next modeling operation within the network, causing the model to alter accordingly.
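The operator-network idea can be illustrated with a toy chain in Python: each stage is a function that recomputes from its input, so editing an upstream parameter re-evaluates everything downstream.  The function names and profile data here are invented for the sketch and do not mirror Houdini's actual node API:

```python
# Toy "operator network": profile curve -> revolve -> transform.
import math

def curve_points(n=5, radius=1.0):
    """Stage 1: profile curve (a stand-in for the traced NURBS outline)."""
    return [(radius * (1 + 0.1 * i), float(i)) for i in range(n)]

def revolve(profile, steps=8):
    """Stage 2: sweep the 2D profile 360 degrees around the y-axis."""
    surface = []
    for (r, y) in profile:
        for s in range(steps):
            a = 2 * math.pi * s / steps
            surface.append((r * math.cos(a), y, r * math.sin(a)))
    return surface

def transform(points, scale=1.0):
    """Stage 3: a transform operator inserted mid-network."""
    return [(x * scale, y * scale, z * scale) for (x, y, z) in points]

def cook(radius=1.0, scale=1.0):
    """Re-evaluate the whole chain; one parameter change ripples through."""
    return transform(revolve(curve_points(radius=radius)), scale=scale)

pot_a = cook(radius=1.0)
pot_b = cook(radius=2.0)  # only the upstream parameter changed
```

The point of the design is that no stage stores baked geometry; every model is a live re-evaluation of the chain, which is what makes the "living document" behaviour possible.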

It is possible to use this methodology to develop a process in which the archaeological landscape can be methodically reconstructed while retaining the ability to experiment with assumptions in near real-time visualization.  Further, once the method is in place, the technology can be packaged to allow for pre-excavation or mid-excavation interpretation, stakeholder or public engagement, and further research.

Using this concept of total user control, we started to develop a dynamic pipeline for the creation of 3D longhouses using SESI's Houdini procedural method.  We first started with a standard post-excavation site report map.  ASI (Archaeological Services Inc.) provided an example in PDF form, which was then inputted as a base image into Houdini.

post_map_trace

Using the post-hole positions and selecting a certain diameter range from the site map, we spawned simple 3D pole models for every post hole.  Essentially, we "birthed" posts where they were recorded in the archaeological data provided.  This allowed us to visualize the initial positioning of the poles and how they related to each other in 3D space.
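Conceptually, the birthing step reduces to filtering digitized post-mould records by a diameter band and spawning a proxy cylinder at each surviving position.  The record format, values and spawn function below are hypothetical illustrations; in the project this happened inside Houdini from the traced site-plan image:

```python
# Hedged sketch of "birthing" posts from digitized post-mould records.
post_moulds = [  # (x_m, y_m, diameter_cm) -- fabricated example values
    (0.0, 0.0, 9.0), (0.0, 2.0, 8.5), (0.5, 4.0, 14.0), (1.0, 6.0, 9.5),
]

def spawn_posts(records, min_d_cm, max_d_cm, height_m=7.0):
    """Create a proxy pole for each post mould in the diameter band."""
    return [
        {"pos": (x, y), "radius_m": d / 200.0, "height_m": height_m}
        for (x, y, d) in records
        if min_d_cm <= d <= max_d_cm
    ]

# Wall posts: the smaller diameters (archaeologically 5-15 cm, averaging
# 8-10 cm); larger interior supports are selected in a second pass.
wall = spawn_posts(post_moulds, 5.0, 12.0)
supports = spawn_posts(post_moulds, 12.0, 20.0)
```

Running the same filter twice with different bands mirrors the two passes described above: one for wall poles and one for the larger support poles.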

This process was repeated using the same technique, but this time larger pole diameters were selected in order to differentiate the mixed use of pole sizes recorded within the archaeological record.  What we were attempting to do was create an automatic pipeline that would size pole diameters from the field mapping and then cluster and group poles of equal diameter and position.

post_map_point_colour

This technique worked well on site plans that had been prepared so that post positions were the only data being detected.  However, substantial labour-intensive work had to occur on the raw 2D data for this technique to work.  After discussions with Side Effects Software, they prototyped an additional procedural modeling network that would allow any site plan to be inputted, with post points detected, isolated and converted into 3D posts.  The notion was to allow non-3D users to pick any site plan material and upload it into the pipeline to create their own 3D model.

Although much slower in real time, the process proved successful in automated post-point pre-selection and modeling.  However, it was abundantly clear that if the public was to use the system, extensive 2D map clean-up had to occur first to allow for a faster visual experience.

In an attempt to refocus the process on archaeological research needs, and after discussions with Dr. John Creese, we wanted to test his kernel density estimation (KDE) post-clustering theories using this technique, but with another popular 3D animation application, Autodesk Maya.  Working with Toronto-based VFX Supervisor Mahmoud Rahnana, we took a site plan from John's 2009 paper, Post-moulds and Preconceptions: New Observations about Iroquoian Longhouse Architecture, and animated the birthing of the posts from the excavation data map.

This technique, which we coined "3D Post Clustering," allowed us to birth poles from site excavation maps automatically.  Additionally, the technique would grow the height of each pole in relation to the width of the longhouse, which the literature indicates was equal to its height (Bartram, 1751; Dodd, 1984; Kapches, 1994; Snow, 1997; Thwaites, 2008; Wright, 1995).  Visually, it allows archaeologists to see within 3D space how the poles might have looked and which poles would be associated with each other based on time and space.  We immediately saw a need to determine old vs. new posts within the archaeological record, and whether a technique could be developed to determine which posts were associated with specific longhouse construction and repair periods through 3D visualization of the data.
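One plausible way to sketch the grouping side of 3D Post Clustering is a simple proximity merge, where posts within a small linking radius are collected into candidate clusters for rebuild/repair inspection.  The greedy single-link approach and the 30cm threshold are assumptions for illustration, not the method used in the Maya implementation:

```python
# Greedy single-link clustering of (x, y) post positions: posts within
# link_radius of any member of a cluster join that cluster.
import math

def cluster_posts(positions, link_radius=0.3):
    """Group post positions into proximity clusters (illustrative only)."""
    clusters = []
    for p in positions:
        placed = False
        for c in clusters:
            if any(math.dist(p, q) <= link_radius for q in c):
                c.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters

# Two posts 20 cm apart (a possible repair pairing) plus one isolated post:
groups = cluster_posts([(0.0, 0.0), (0.2, 0.0), (5.0, 5.0)])
```

A cluster containing several near-coincident posts is exactly the kind of pattern one would flag when asking which posts belong to the original build and which to later repairs.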

Although a simple use of procedural modeling techniques, this process formed the base of future experiments in 3D longhouse construction using archaeological data, bringing our research to the next stage, Longhouse 1.5.

Works Cited:

Bartram, J. (1751). Observations on the Inhabitants, Climate, Soil, Rivers, Productions, Animals, and Other Matters Worthy of Notice, Made by Mr. John Bartram, in His Travels from Pensilvania to Onodago, Oswego and the Lake Ontario, in Canada. Printed for J. Whiston and B. White, London.

Creese, J. L. (2009). Post-moulds and Preconceptions: New Observations about Iroquoian Longhouse Architecture. Northeast Anthropology 77-78, 47-69.

Dodd, C.F. (1984). Ontario Iroquois Tradition Longhouses. Archaeological Survey of Canada, Mercury Series 124. Ottawa: National Museum of Man.

Kapches, M. (1994). The Iroquoian longhouse architectural and cultural identity. Meaningful Architecture: Social Interpretations of Buildings, 9, 253.

Snow, D. (1997). The Architecture of Iroquois Longhouses. Northeast Anthropology, 53, 61-84.

Thwaites, R. G. (1896-1901). The Jesuit Relations and Allied Documents, 73 Volumes. Burrows, Cleveland, Ohio.

Wright, J.V. (1995). Three dimensional reconstructions of Iroquoian longhouses: A comment. Archaeology of Eastern North America, 9-21.


Extracting Useful Data from Twitter for Methodological Evaluation – Part II

university_of_leicester_richard_III

11.35: The hashtag RichardIII is now trending on Twitter.  This was reported by Telegraph reporters on Sept 12, 2012, during a minute-by-minute timeline of the announcement that the University of Leicester archaeology team had discovered bones believed to be those of Richard III, buried under a council parking lot.  Suffice to say, this was a seminal event in archaeology, as it was the first time embedded reporters covered an archaeological event live and in real time.  Twitter brought that news to the world.

In the second part of our investigation into extracting useful data from Twitter for methodological evaluation, I'm going to use Topsy again to try to provide a view of digital media, archaeology and public engagement.  Does an event such as this also help expose the public to archaeology and archaeologists, or are these terms co-opted byproducts of a pop culture event?

To recap, two events occurred surrounding the discovery of Richard III.  The first happened on September 12, 2012, when the University of Leicester announced that they had discovered what they believed might be the bones of Richard III, generating almost 1,565 unique Tweets under the hashtag #richardIII.  The second event was the official news on February 4, 2013 that the archaeological team had confirmed the bones to be Richard III's.  On that day, 66,696 Tweets were made worldwide.

A couple of things need to be considered with this query.  In Twitter, when users want to home in on a subject of interest, they tend to use a hashtag such as #richardiii.  Prior to the original announcement of the discovery of Richard III's bones, the hashtag #richardiii was used by an assortment of users who primarily discussed the works of Shakespeare, specifically the play The Tragedy of Richard the Third.  In the course of the archaeological discovery, this hashtag was hijacked by people wanting to connect or Tweet about the discovery of the bones and the subsequent news related to it.  More importantly, however, that hashtag was not the only way people Tweeted about Richard III.

Keywords are important when mining Twitter data.  Using additional related terms such as "Richard III", "King Richard III" and "King Richard", a fuller picture begins to emerge of the extent of Twitter activity surrounding the archaeological event.  Combining the total Twitter counts of just these four terms, the total number of Tweets jumps to 430,079 over a 24-hour period.

OvR3ActivityFeb4-5

Essentially, we are looking at the gross number of actual Tweets that contained the search terms above over a 24-hour period.  However, the story doesn't stop there.  Inferences can be made about how many Twitter users were actually exposed to the Richard III terms above on Feb 4, 2013, by taking the gross number of followers of each Twitter user who posted any message with "#richardIII", "Richard III", "King Richard III" or "King Richard".  Using this methodology employed by Topsy, the system estimates that 1,280,087,045 Twitter users were exposed to a Tweet of some sort on Feb 4 around this archaeological event.

EstExpoR3Feb4-5

Topsy describes its methodology this way: "Topsy calculates exposure by summing the follower counts of all the authors of tweets that match the keywords being queried. This calculation returns overall gross exposure (vs. unduplicated net exposure) so multiple tweets from the same author or authors with common followers may result in audience duplication."  To better understand the margin of error, Topsy would have to predict and/or calculate how many times the same Tweet was distributed by the same author.  As with the search terms "#richardIII", "Richard III", "King Richard III" and "King Richard", there is no clear indication of how much duplication the gross calculation contains.
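The arithmetic Topsy describes is straightforward to reproduce on fabricated data.  The author handles and follower counts below are invented, and the "net" variant shows only one crude step toward de-duplication; it still ignores followers shared between different authors:

```python
# Gross exposure = sum of follower counts over matching tweets,
# with no de-duplication (the same author tweeting twice counts twice).
tweets = [  # (author, author_follower_count) -- fabricated values
    ("@history_fan", 1200),
    ("@uk_news", 250000),
    ("@history_fan", 1200),  # same author, second tweet
]

def gross_exposure(matching_tweets):
    """Sum follower counts per tweet: gross, not unduplicated net, exposure."""
    return sum(followers for (_author, followers) in matching_tweets)

def net_exposure_upper_bound(matching_tweets):
    """Counting each author once is one crude step toward net exposure."""
    return sum(dict(matching_tweets).values())

print(gross_exposure(tweets))            # double-counts @history_fan
print(net_exposure_upper_bound(tweets))  # counts each author once
```

The gap between the two numbers illustrates exactly the duplication caveat in Topsy's description: gross exposure can only overstate the true unduplicated audience.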

Finally, one of the interesting elements of this type of real-time, machine-language data mining, from an anthropological perspective, is the ability to estimate the gross number of Tweets by country of origin and the positive, neutral or negative value of the quantitized Tweet.  Let's first look at the geographic makeup of Tweets over the 24-hour period of Feb 4, 2013.

OvGeoActiR3Feb4-5

Twitter can "geo-tag" a Tweet, and generally there is 90% confidence that Tweets attributed to a certain country are attributed correctly.  Topsy states: "The Geographic view shows country-level metrics at a high confidence and coverage rates. The confidence rate will be 90%, meaning that 90% of tweets that are geo-tagged by country are correct based on our validation methods. The targeted coverage will be 90%, meaning that 90% of tweets that come from Twitter will be geo-tagged at the country level at the 90% confidence rate."  When using this methodology, researchers must therefore be cognizant that "volume" is an estimate, qualitative in nature rather than strictly quantitative.

Going beyond the margins of error, however, it is interesting to see that the largest number of Tweets (328,340) was generated from the United States.  Next was the UK, the actual country of origin of the archaeological event, with 49,439 Tweets.  Surprisingly, Indonesia had the third largest number of original Tweets on the subject, followed by France and then Canada.  The Canadian ranking of 5th was surprising solely because the actual identification of Richard III’s remains would not have been possible without the DNA sample from Canadian Michael Ibsen, a 17th great-grand-nephew of Richard’s older sister, Anne of York.

If you compare the top 5 Tweets listed beside the geographic totals, 4 of the 5 original Tweets are from the UK and one is from the USA.  Unfortunately, of those top 5 Tweets, 3 are jokes about Richard III’s situation.  Which brings us to the skewing factor.  If one dives down past the quantitative gross counts to examine the qualitative nature of the actual Tweets, a substantial number turn out to be original or retold jokes!  This was not lost on some, as a Feb 4th post in Maclean’s Magazine, almost 16hrs after the original UK announcement, points out: Richard III’s skeleton found; Twitter gets buried in jokes.  Now, neither Topsy nor any other Twitter data-mining tool set has a “no joke” filter, but there are some interesting observations that can be made to discern how to filter the actual jokes from the data set.

OvActSentR3Feb4-5

As discussed in Part I of last week’s blog, Topsy and other data-mining applications use Sentiment Analysis, or natural language processing (NLP), to determine a quantitized value for the actual Tweet.  Topsy uses an NLP methodology that ranks words with a value from 0 to 100.  As Joe Masciocco, Social Analytics Consultant over at Topsy, points out: “in layman’s terms, we have language coding specialists on staff. We score every word that comes through within each tweet on a scale from 0-100 (very negative – very positive), we then take a look at how the words interact and score the tweet as a whole from 0-100. This all happens in real time for all tweets.”  Hence Topsy quantitizes the content of the Tweet to determine its overall Sentiment (Driscoll et al., 2007).
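A toy version of that word-scoring approach might look like the sketch below; the lexicon, the neutral default for unknown words and the simple averaging rule are all my own assumptions, since Topsy's actual model (and how it weighs word interaction) is proprietary:

```python
# Toy version of the 0-100 word-scoring approach Masciocco describes.
# The hand-scored lexicon and the averaging rule are invented for
# illustration; 0 = very negative, 100 = very positive.

LEXICON = {
    "amazing": 90, "discovered": 70, "king": 55,
    "buried": 35, "joke": 30, "hunchback": 15,
}
NEUTRAL = 50   # words not in the lexicon contribute a neutral score

def tweet_sentiment(text: str) -> float:
    """Average the per-word scores to get a 0-100 score for the tweet."""
    words = text.lower().split()
    scores = [LEXICON.get(w, NEUTRAL) for w in words]
    return sum(scores) / len(scores)

print(tweet_sentiment("amazing king discovered"))  # (90+55+70)/3, about 71.7
print(tweet_sentiment("buried joke"))              # (35+30)/2 = 32.5
```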

Again, there is no “joke” filter in NL processing; however, I did discover something interesting when reviewing the graph above of the quantitative data displayed by Topsy.  By clicking on the end points of each graphed line, the user can get a listing of the top 5 positive Tweets.  When we go through all four search terms, almost exclusively in this small sample set does the search term #RichardIII reveal where the “jokesters” live!  It seems #RichardIII, by the end of Feb 4th, had been co-opted yet again, but this time by people looking to plant or supplant a good joke!

Unfortunately, like any interesting data, we have only scratched the surface.  In all the jumble of understanding how one archaeological event could potentially expose over 1.2 billion Twitter followers to archaeology in a single day, we also need to examine how archaeology and archaeologists were affected.  In Part III, I’ll compare our Richard III event alongside a mixed methods analysis of archaeology and archaeologists to see if there is a correlation between pop event culture and public engagement archaeology.  I leave you with an article from the Washington Post that I found in a Tweet from an archaeologist the day after the big event: On social media, archaeologists roll their eyes at Richard III skeleton discovery.

Cheers,

Michael

References:

Driscoll, D.L., Appiah-Yeboah, A., Salib, P. and Rupert, D.J., 2007. Merging Qualitative and Quantitative Data in Mixed Methods Research: How To and Why Not. Ecological and Environmental Anthropology 3(1): 19-28.

Extracting Useful Data from Twitter for Methodological Evaluation – Part I

A facial reconstruction of King Richard III, based on an analysis of his recently identified remains and artist portrayals over the years, was unveiled by an eponymous historical society on Tuesday. (Rex Features / AP Images) @SmithsonianMag on Twitter

There has been a lot of talk recently about “Digital Archaeology” being “Public Archaeology”.  As part of my Methods class this semester, I wanted to put that assumption to the test and decided to analyse Twitter feeds on specific subjects against possible uptake on other subjects.  In the last 365 days, the major archaeological event to occur has been the discovery and, more importantly, confirmation of Richard III’s bones buried under a Council parking lot in Leicester.  The story is ripe for public engagement, especially since the Bard skewered Richard III in Tudor times to such an extent that he’s readily seen as a dastardly villain today!

Extracting meaningful data from any source is always a challenge.  With Twitter, it’s what David L. Driscoll et al. describe as mixed methods research (2007): both qualitative and quantitative material, extracted sometimes in meaningful chunks.  Several tools exist to do this type of analysis, but I found Topsy.com to be a great tool for first-time Twitter data extractors like myself.  It’s as easy as typing in a Twitter hashtag or a subject heading, and the system generates a report on the quantity and quality of Twitter results for that particular subject.

Richardiii_365day_Twitter_count

Doing a scan today, March 7, 2013, over the last 365 days, we find that the hashtag #richardiii has had 96,516 Tweets (as seen in the chart above, generated in Topsy).  In that time, as displayed in the graph, there are two major events which helped to accelerate and promote Twitter engagement around the topic of Richard III.

RichardIII 365 Twitter Analysis

They occur roughly at the Sept 12, 2012 mark, when archaeologists confirmed they had discovered what they thought were Richard III’s bones, and on Feb 4, 2013, when the University of Leicester confirmed that DNA testing and physical analysis of the bones, supported by qualitative means through oral histories and written accounts, showed that Richard III had been discovered.

Digging further down, on Sept 12th there were 1,565 #richardiii Tweets.  Topsy can return data on the top Tweets and the content for that day, which reveals that the top number of Tweets came from a journalist from BBC History Magazine who was supposedly embedded with the archaeologists at the moment of discovery.  The second top set of Tweets came from Medievalists.net, which was tweeting news from the Richard III Society about the success of finding Richard III.

Feb4 TopTweets

In comparison, on Feb 4th, when Richard III’s bones were confirmed, there were 66,696 Tweets.  The top tweet, with over 1,000 tweets and re-tweets, was from a Twitter handle by the name of @queen_uk, Elizabeth Windsor, whose message was: Don’t even think about putting one under a car park in Slough, a tongue-in-cheek reference to the industrial town just North of Windsor.  The second highest was from BBC Breaking News, reporting that the Mayor of Leicester had announced that Richard III was going to be reinterred at Leicester Cathedral (which, as we will see in next week’s blog, has some interesting elements of its own).  The last three top tweets were from the BBC and The Guardian, reporting official archaeological and scientific information.

Now, you’ll notice that the Top Tweets seem to be skewed in the Topsy screen shot above.  This is because Elizabeth Windsor, with over 1K tweets and retweets, actually had the fastest uptake amongst other Twitter users.  BBC Breaking News, with over 4K tweets and retweets, has the largest volume, yet the news spread more slowly than the joke.

Topsy also has the ability to break the tweets down into quantitative data.  However, as Driscoll et al. (2007) discussed, Topsy can also quantitize, albeit with mixed results, the qualitative nature of the tweet into the Twitter-industry-accepted terms of Positive, Neutral and Negative Sentiment.  That is, the emotional value of the Tweet as written by the Tweeter, determined through Sentiment Analysis or natural language processing (NLP).

In Part II of our exploration into mixed methods research using Twitter analysis next week, I will explore some of the issues around data generation in Twitter, and specifically Topsy, as well as see if my assumption is correct from a Twitter perspective: that when an archaeological event like the discovery of Richard III happens, people become more publicly engaged in archaeology overall.

Cheers,

Michael

References:

Driscoll, D.L., Appiah-Yeboah, A., Salib, P. and Rupert, D.J., 2007. Merging Qualitative and Quantitative Data in Mixed Methods Research: How To and Why Not. Ecological and Environmental Anthropology 3(1): 19-28.

TV Producing and Thesis Writing!

Last week was spring break at Western, which gave me some time to get caught up with hunting down current literature for my thesis.  It also gave me a great break from driving between Toronto and London, generally in the weekly Friday snowstorms!  I had however, the opportunity to stop in at Sheridan College to give my yearly lecture on Producing and Business in Animation to the latest cohort of 3D animation students.

I’ve enjoyed giving this lecture for about 10 years now.  As I had the spreadsheets and budgets projected on stage, it dawned on me that maybe, just maybe, I could use my 17 years of production management experience in writing my thesis.  After all, to be a Producer you must have highly developed management, organizational and analytical skills.  And, no matter how many times my wife says only women can multitask successfully, I think I’ve mastered that one as well.

Students are always amazed when I recount that as an animation expert, my single most used software application now is MS Excel!  Practically every animated project must start by translating the creative and artistic style into schedules and ultimately budgets.  The process becomes repetitive, and when one becomes good at it, all a client has to do is mention how many minutes a series is or how long a film might be, and generally the process can calculate the cost down to the last penny.  Although I miss the creative part, there is a certain artistic mastery in developing a budget and schedule.

I’ve been using an on-line tool called RefWorks, which has been extremely useful in automating the referencing process.  For students and teachers, it’s a free service provided by your university library.  Occasionally it’s a little buggy and I’ve had to develop strategies to get around some deficiencies, but overall it’s been an excellent tool.  One particularly nifty feature is the ability to link the reference within RefWorks with the actual PDF, whether on-line or uploaded as a file.  This little feature has helped me to “relocate” reference material quickly when it has been improperly filed on my hard drive.

However, I’ve been thinking about “how” I track those references within my thesis and, more importantly, “where” to insert those references when needed.  That got me thinking about Excel and Producing.  Essentially my thesis, or any thesis for that matter, consists of parts.  Simplistically it could be an opening, a middle and an end or conclusion.  However, in archaeology we’re about the narrative.  So a good thesis should tell a story; whether it’s about scientific data or a qualitative experience, it’s still a story in which the reader must be engaged.

Excel is great for organizing data, so why not have it organize reference material as well?  The columns can be the overall paper split into thematic sections.  The rows are subsections in which very specific reference points are made.  Each cell is a specific reference which, in pure Excel functionality, can then be referenced and tracked in other cells throughout the entire set of thematic sections.  Visually, it can allow the writer to see weak points in their referencing, either by the lack of references within a section or by a particular reference being used too much.
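The cell layout above could be mocked up outside of Excel too.  Here is a minimal sketch of the same idea, where the section and subsection names, the references, and the over-use threshold are all invented for illustration; it flags both kinds of weak point at once:

```python
# Sketch of the spreadsheet idea: sections as columns, subsections as rows,
# each cell holding the references cited there. All names are placeholders.

from collections import Counter

# cells[(section, subsection)] = references used in that cell
cells = {
    ("Intro",   "Context"):   ["Snow 1997"],
    ("Methods", "3D tools"):  ["Snow 1997", "Wright 1995"],
    ("Methods", "Survey"):    [],            # a visible weak point
    ("Results", "Platforms"): ["Snow 1997"],
}

usage = Counter(ref for refs in cells.values() for ref in refs)

empty = [cell for cell, refs in cells.items() if not refs]     # no support
overused = [ref for ref, n in usage.items() if n >= 3]         # leaned on

print(empty)     # [('Methods', 'Survey')]
print(overused)  # ['Snow 1997']
```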

Visualizing my references made me then think about all of the infographics out there and how those connections are made between references within a thesis.  I found this really neat infographic which provides a good visualization of how data is connected in the writing process.  I think it would be a useful tool to visualize how the references within my thesis are interconnected as well!

Copyright Playtime-Arts.com

So I started this blog thinking about how to manage data more effectively using my Animation Producing skills.  Now that I’ve reflected on how to organize my reference data, I’m also keen on how that data is interconnected and, more importantly, how I personally make those connections between references.  A visual roadmap, if you will, to guide the writing process.

As an Animation Producer, I’ve been able to combine my two favourite things: Excel and Visualization!  Now if I could only hand in an animated thesis, my job would be done!

Cheers,

Michael

 

Missing the Point? It’s the experience Dummy!

The last two weeks I was busily developing and presenting a draft of my proposed Research Flow Chart.  My old age must be setting in, because I find it harder and harder to develop succinct research ideas!  In an attempt to make sense of what I am trying to accomplish, I drafted a short paragraph to flesh out the idea and then to act as a guide for my Flow Chart.

Visualizing Southwestern Ontario Socio-Cultural Implications

in Longhouse Morphology and Use

Understanding Longhouse morphology amongst the Southwestern Ontario archaeological landscape as it relates to extinct and descendent populations is problematic.  Historical accounts can be romanticized or even intentionally misleading, while socio-cultural practice within homogeneous cultural groups varies widely based on outside cultural influences, landscape, and environmental resources and factors.  Visualization of these variable Longhouse features may provide a unique opportunity to engage all stakeholders (public, private, academic and descendent) in redefining what it means to live within a Longhouse community by experiencing it phenomenologically through the archaeological record.

My research will focus on engaging with the archaeological landscape by creating a 3D virtual tool-set specifically designed to allow stakeholders (public, private, academic and descendent) to use a procedural 3D model library in order to build, in real-time within 3D space, interactive pre- and post-contact Longhouses of Southwestern Ontario.  Further, when deployed, stakeholders should be able to experience multiple senses through sound, lighting, environmental and atmospheric controls, to focus on the association between the physical structure, spatial relationships and the phenomenological experiences of Longhouse landscapes.

The aim of my project is to develop a new way to engage with the archaeological landscape that will help to broaden our understanding of longhouse construction, community organization and external cultural and environmental influences with an eye towards challenging our current assumptions of longhouse communities within the archaeological record.

Visualizing Longhouse Morphology and Use

Combined with what I think is a good start to a traditional Research Flow Chart, I’m relying heavily on Landscape and Phenomenological Archaeology.  When I initially presented the concept, my colleagues became engaged when I started talking about having stakeholders actually experience the environment virtually, but with the aid of sound and smell.  One colleague who has been a site interpreter for Sainte-Marie among the Hurons in Northern Ontario indicated that when school groups first enter their reconstructed longhouses, people stop in the doorway to adjust their eyes… I stopped myself to think: how can I create the same effect in 3D?  The museum also uses the smell of a fire burning in the hearth or sweetgrass smouldering, along with the sounds of everyday life, to bring the landscape to life!  These phenomenological experiences, combined with visual elements like light, atmospherics (smoke, rain, snow, dust) and texture, help to extend that experience.

Maybe it’s the experience that is more important than how one builds that experience?  Can that experience be reproduced repeatedly?  Should it?

It took a while, but I think “it’s the experience, Dummy!” is what I’m finally catching onto.

Cheers,

Michael

The Methodology before Theory? The search for my Research Question!

This post is going to start with a story.  In 1993, as a newbie field archaeologist, I had, as most do, a horrible time differentiating between soil or root stains and post holes.  I can’t tell you how many post holes I mangled, much to the dissatisfaction of my field supervisor.

Sweating every soil stain, I began to wonder if there was a way to visualize in 3D what we assumed to be post holes, to determine if they actually belonged to the archaeological landscape.  A sort of post hole detection methodology.  More than that, it would give stakeholders the ability to visualize an actual structure, as opposed to trying to explain to the client that these stains were important!

It was that frustration which drove me to understand how to create 3D objects, and eventually into a long career in the animation and VFX industry.  Part of that journey included Sheridan College, which I was extremely lucky to attend in the early 90’s at the beginning of the second wave of artistic talent and immense technological change.

However, it was my very first job in the industry which has now framed my research methodology to build interactive, real-time 3D Longhouses.  Kim Davidson, an industry legend and founder of Toronto-based Side Effects Software, an animation software company, created and continues to build upon a procedural animation tool set called Houdini.

In Houdini, any function, from model building to texture-map making to compositing or animating, can be done procedurally.  What this actually means is that every function has a node or parameter that is never locked and, as such, can be reworked at any point in the creation of a model, animation or VFX shot.  All changes “ripple” down the nodal network, allowing the user ultimate flexibility without having to recreate or retrace their steps.
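The ripple-down idea can be illustrated with a toy dependency graph.  This is my own sketch, not Houdini's actual cooking engine, and the longhouse-flavoured nodes and formulas are invented: changing an upstream parameter marks everything downstream "dirty", and only dirty nodes get recomputed on the next evaluation.

```python
# Toy procedural network: nodes stay live, and an edit to any node
# "ripples" down to its dependents, which lazily re-evaluate.

class Node:
    def __init__(self, name, compute):
        self.name, self.compute = name, compute
        self.inputs, self.outputs = [], []
        self.dirty, self.value = True, None

    def connect(self, downstream):
        self.outputs.append(downstream)
        downstream.inputs.append(self)

    def invalidate(self):
        """Ripple the change down the network."""
        self.dirty = True
        for node in self.outputs:
            node.invalidate()

    def evaluate(self):
        """Lazily recompute only nodes marked dirty."""
        if self.dirty:
            self.value = self.compute([n.evaluate() for n in self.inputs])
            self.dirty = False
        return self.value

posts   = Node("post_holes", lambda _: 24)            # surveyed post count
frame   = Node("frame", lambda ins: ins[0] // 2)      # pole pairs
texture = Node("bark_cover", lambda ins: f"{ins[0]} bark sheets")

posts.connect(frame)
frame.connect(texture)
print(texture.evaluate())   # '12 bark sheets'

posts.compute = lambda _: 30   # rework an early parameter...
posts.invalidate()             # ...and the edit ripples downstream
print(texture.evaluate())   # '15 bark sheets'
```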

Using this methodology, I can theoretically build a Longhouse App with total flexibility, allowing regional, cultural, societal and historical variables in Longhouse construction to be “mashed up”.  This technique can then free stakeholders of all types (archaeologists, descendent groups, researchers and the public) to build and, more importantly, experiment with how Longhouses may have looked, and uniquely with how one can then interactively engage within that space, always refining based on the individual’s own unique perspective.

Procedural Arc Research Tool

This theoretical procedural network simplistically outlines how we can start with a basic field survey of post holes and “build”, or more precisely “rebuild”, one of multiple variations of Longhouses based on an almost infinite number of parameters.

Prototypes for visualizing and manipulating 3D Longhouses constructed from site maps have already proven successful and the next stage will be deployment against a set of research questions.

One such question comes from the Droulers site on the border of Quebec and Ontario.  Claude Chapdelaine from the University of Montreal has been researching the site for many years.  The archaeological landscape has yielded some interesting questions regarding Longhouse construction, in particular how massive structures could be built in and on totally rocky/stony terrain.

Essentially, there are no soil stains to determine “what” the Longhouses might have looked like.  There are, however, hearths that have been discovered.  So, is it possible to visualize in 3D the dimensions of the Longhouses through hearth positioning alone?  The archaeological landscape will quite literally guide and frame my research.
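A first, heavily hedged stab at that question might parameterize the unknowns.  The hearth positions and end allowance below are placeholders of my own, not published Droulers figures; the point is only that with hearths sitting along the central corridor, structure length can be expressed as a function of hearth span plus some end space, and the parameters then left open for stakeholders to experiment with:

```python
# Illustrative only: estimate a longhouse length from hearth positions.
# All numbers are hypothetical placeholders, not site data.

def estimate_length(hearth_positions, end_allowance=3.0):
    """Hearths lay along the central corridor; add storage/vestibule
    space at each end beyond the outermost hearths (metres)."""
    span = max(hearth_positions) - min(hearth_positions)
    return span + 2 * end_allowance

hearths = [4.0, 10.5, 17.0, 23.5]   # hypothetical centre-line positions (m)
print(estimate_length(hearths))     # 25.5 (m, wall to wall)
```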

It’s important to note that I’m not simply attempting to reconstruct Longhouses in 3D; I’m attempting to provide the tools necessary to allow non-archaeologists and archaeologists alike to play with the historical and current data in visual 3D form.  I also hope this technique can work in both real-time and stereoscopic 3D, providing a virtual interface for users not only to build in 3D space but to be immersed within it.

As always, any thoughts, opinions, leads to other research or examples of other sites are greatly appreciated!

 

Starting My Research – 3D Visualization and Procedural Longhouses

After 2 years of coursework, I’m now getting down to the research portion of my PhD.  It’s been quite a journey since I first started my undergraduate degree in Archaeology and Visual Arts at the University of Western Ontario (UWO) in 1989, but the ride has been great and I’ve learned a bunch along the way.  For the next couple of years, my PhD research will be the culmination of over 20 years of commercial work, academic studies and personal interest, in the effort to bring 3D visualization to the forefront of archaeological theory and methodology.

Saau_trowel

It’s full circle for me now.  24 years ago I was really excited about the cool “Computer Animation” being used in archaeology, primarily in the UK.  Back then, desktop computers were still rare and expensive, with the software being even more expensive!  It was all vector-based and/or the start of GPS/GIS visualization.  When I saw what they were doing for Jurassic Park in terms of lighting and rendering in 1993, I was hooked and immediately saw a lot of relevance in 3D visualization of archaeological sites.  It was on a fateful sunny and hot afternoon at AIGU “Oversite” in North York, Toronto, Ontario, while doing fieldwork, that a colleague suggested I go to Sheridan College to learn to use computer animation software.  That started an 18-year journey within the animation business, which also spurred on my interest in 3D visualization within science.

To answer your questions about who I might be, feel free to jump over to my film & TV corporate website, theskonkworks.com; sign up to follow me on Twitter; check out my industry profile and animation projects at CASO; or watch my homage to Archaeology and Entertainment in a project our animation team did in 2003-2004 called “Johnny Thunder” (OMG – 93,835 views!).  Of course, don’t forget to check out what Namir Ahmed and I’ve been cooking up at Sustainable Archaeology!

So this is the beginning of a new journey to contribute to the continued adoption of 3D visualization in archaeological research, public engagement and long term preservation.

My PhD will focus on creating a virtual tool-set specifically designed to allow stakeholders (public, private, academic and native) to build, in real-time within 3D space, interactive pre- and post-contact Longhouses of Southwestern Ontario using a procedural 3D model library.  It is my hope this new methodology will help to enlighten our understanding of longhouse construction, community organization and external cultural influences, with an eye towards challenging our current assumptions of longhouse communities within the archaeological record.

As this will be an on-going process of refinement, I’m hoping that blogging will help generate new directions of research, theories and understandings of a unique and somewhat assumed area of study.

Welcome along and I hope you enjoy the ride!

Cheers,

Michael

EdgeLab!

Just got back from the Digital Media Zone open house!  I’ll devote some more time to the DMZ later, but I ran into the EdgeLab, which shares space at the DMZ.  They were making the coolest Social Innovation tools with Arduino that I’ve ever seen: interactive clothing with predefined words that can be emitted from a speaker or read off an LED.  Designed for children and adults with motor function difficulties, this is just an amazing use of Arduino, and designed in such a way that the clothing can be produced by anyone, anywhere!

I really encourage everyone to take a look at the website (http://edgelab.ryerson.ca)!

In Defeat, There Is Victory!

'In his 1914 painting A Hundred Years Peace, artist Amedee Forestier illustrates the signing of the Treaty of Ghent between Great Britain and the US, 24 December 1814 (courtesy Library and Archives Canada/C-115678).'

In my mad rush to try and get this project to work, I completely lost sight of Bill Turkel’s initial comment when I and my fellow History 9832b Interactive Exhibit Design classmates first started: “it’s okay to fail”.  I think as students we’re subconsciously ingrained to think that success is only measured in the completion of an assigned task or the delivery of an end product, rather than in the path of discovery itself.  While preparing for another sleepless night, I arrived at the sobering realization that the project was definitely a lot grander than expected and I might have to scale back on my plans to have it work.

Taking stock of the original project concept, I had an epiphany: the project was all about exposure, buy-in and public engagement.  My first blog entry, Twitter War of 1812!, immediately generated challenges from both sides of the border by rival 1812 reenactor/history groups.  @Navy1812Bicentennial immediately retweeted the blog post, and I picked up Samuel Woodsworth @_thewar1812 and Maryland Milestones @ATHeritageArea as Twitter and blog followers.  Another local 1812 supporter, @BrianPMacLean, joined as I’m writing this, and it was great to get an inquiry from the folks over at Historica – Dominion about the proposed project and when we’re rolling it out.  From this varied sample of supporters, the objective was a success.

As I hunkered down in my foolish attempt to decipher the myriad of Arduino, Processing and Twitter hacks that litter the internet, I picked up support from Processing guru Marcus Nowotny, whose Tweet Balloon was really the key example to use.  I really thank Marcus for, through a couple of wonderfully supportive emails, helping me come to the realization that what I was proposing was indeed a bigger kettle of fish than I had anticipated.  I’m regretful that I hadn’t found his example sooner in the semester.  Additional thanks have to go out to Nicholas Stedman for allowing me to participate in his Twitter-to-Processing class at Ryerson, which helped to get me over the hump in terms of having Arduino’s LED respond to a Tweet.  My car-pooling partner Namir (@Namir), who bounced various solutions back and forth to get this thing working, jumped in when things became too confusing.  Finally, many thanks to my creative business partner of 15 years, Romelle Espiritu, for jumping in last minute with a stylized visual based on my original creative prototype.

Although I never really got the project to work, I did get Arduino to blink on a per-Tweet basis, as demonstrated in the blog post Twitter to Arduino Hack!.  Admittedly, this has been one of the most frustrating, challenging and ultimately rewarding classes I’ve taken in my 20+ years of post-secondary education.  There were times in which nothing worked and I had to reevaluate the way I actually learn.  The project did force me to understand it as a “teachable moment”, something as an educator I sometimes overlook when conducting my own classes.  Bill was supportive when needed and hands-off enough at other times to allow for student discovery.

Overall, I’m satisfied with how the project engaged public, professionals and peers alike.  As someone who has taught, developed curriculum and been professionally engaged in Digital Media for over 20 years, the course has also redefined for me what “media” is and how the public can be engaged with it.  As a graduate student firmly entrenched in cultural resource management, these types of projects, whether failures or successes, enable us to connect at a different level with the public: to engage them interactively, without constantly relying on expensive virtual simulators or displays, and to bring the tactile and other senses to life when a static display cannot convey the depth of the subject matter or discovery.

In closing, the path to discovery has really been the success.  Now that this project has come to an end academically, I hope to engage a team to make it a reality professionally!  After all, what other way can we challenge our southern neighbours to a lively debate on the only war they lost : )

Twitter War Creative/Arduino Update!

(This Post is a class requirement for History 9832b Interactive Exhibit Design)

Now that we’re down to our last 2 weeks, the pressure is on!  I’ve run through multiple hacks, various examples and a couple of my own very poor attempts to code.  The archaeologist in me says that I’ve met my match when it comes to coding for Processing or Arduino!

This image is the property of Interactive Matters

However, after doing another exhaustive round of internet searches, I came across this really fun example of how Twitter can talk to Arduino.  Created by Marcus Nowotny @ Interactive-Matter, the Twitballo0n 2 is an excellent approach to having Arduino respond to a specific AND steady stream of Tweets!  In its basic form, a stream of Tweets with key words is analyzed and then converted into increment values with which a stepper motor turns, raising or lowering a balloon on a string.  It’s really an elegant solution.

A brief email chat with Marcus, who was kind enough to respond to my questions, indicated that this solution, with some modifications, might work for my project.

On a different front, to clear my head of Processing and Arduino code, I jumped back into the display design with my old and trusted colleague Romelle Espiritu.  Romelle and I have been working together in the Digital Media, Film and Television industries for about 16 years now.  I asked him to help clean up my initial design, which I’ll also use as a template to build a display board.

We once did a pitch to Osprey Publishing’s Men-at-Arms to create a TV series, so I decided that stylistically we should follow the Osprey look to keep with the theme.  When I first came up with the idea, I immediately thought of an Osprey-sponsored Internet-based Flash version or even a full 12-14′ display at Niagara-on-the-Lake!

The next task is to send the image out to the printers to get a slightly larger version.  Several copies will be made to act as templates to cut out the support backing.  Given time constraints, I’m considering using a wood or foam-core solution, but I would have liked to have had Bill’s MakerBot replicate the pieces.

Given the difficulties I’ve had getting Twitter to run a flag up a pole, my fall-back position might be to use the natural querying process in Processing to drive a series of red and blue LEDs representing Pro-CAN or Pro-American Tweets in our Twitter War prototype.  That querying process, along with a shorter delay in the Arduino LED code, would give the effect of fireworks or explosions above the heads of my two soldiers.  A little like the Twitter Mood hacks, in which specific Twittered words trigger specific LED colours.
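That fall-back classification step is easy to sketch; written here in Python for brevity rather than Processing, and the keyword sets and sample tweets are entirely invented for illustration:

```python
# Sketch of the fall-back logic: classify each incoming tweet by keyword
# and decide which LED colour to flash. Keywords and tweets are invented.

PRO_CAN = {"#procan", "#canada1812", "laura secord"}
PRO_USA = {"#prousa", "#usa1812", "fort mchenry"}

def led_colour(tweet: str) -> str:
    """Red for Pro-CAN tweets, blue for Pro-American, off otherwise."""
    text = tweet.lower()
    if any(k in text for k in PRO_CAN):
        return "red"
    if any(k in text for k in PRO_USA):
        return "blue"
    return "off"   # no flash for neutral tweets

stream = [
    "Remember Laura Secord! #ProCAN",
    "Defending Fort McHenry tonight #ProUSA",
    "Reading about 1812 for class",
]
print([led_colour(t) for t in stream])  # ['red', 'blue', 'off']
```

In the real prototype, the colour decision would be serialized over the serial port to Arduino, as in the Processing hack below in Twitter to Arduino Hack!.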

Not that I want to admit defeat, but a simple solution might be the best approach!

Twitter to Arduino Hack!

(This Post is a class requirement for History 9832b Interactive Exhibit Design)

I have to say, this project has been a tough slog!  As discussed in previous blogs, there are a lot of Twitter-to-Arduino hacks out there, but each has its own very specific approach, which at times is hardware- and/or software-dependent.  Further, I’ve learned that I’m a purely visual learner when it comes to physical objects or coding, which means I need to see someone do it first before I can really pick up on the process… helpful for learning how to chop wood ; )

Luckily, I ran into a great chap who teaches at Ryerson by the name of Nick Stedman (http://www.nickstedman.com/).  Oddly enough, he was teaching a class last week on Arduino to Twitter through Processing and invited me to sit in.  Below is what I think is a very useful approach to having Twitter control Arduino, explained in a simple way.

So Nick had us work with a hack I had tried previously from Jer @ blprnt (http://blog.blprnt.com/blog/blprnt/updated-quick-tutorial-processing-twitter).  This one needs user-defined API "tokens" generated at dev.twitter.com, which allow the Processing code to access Twitter more securely.  It also requires you to import the Twitter4J Core, which you can get here (http://twitter4j.org/en/index.html).  The part that Jer didn't supply was the Arduino hack to read the Twitter feed from Processing and pass it on to Arduino.

Processing Hack

So let’s start with Jer’s modified Processing code:

//Needs the Twitter4J libraries and the Serial library for Arduino communication
import processing.serial.*;
import twitter4j.*;
import twitter4j.conf.*;
import java.util.Date;

//Build an ArrayList to hold all of the words that we get from the imported tweets
ArrayList<String> words = new ArrayList<String>();

Serial my_port;   // Create object from Serial class
int rx_byte;      // Variable for data received from the serial port

void setup() {
  //Set the size of the stage, and the background to black.
  size(200, 200);
  background(0);
  smooth();

  println(Serial.list());
  String portName = Serial.list()[0];
  my_port = new Serial(this, portName, 9600);

  //Credentials – YOU NEED TO HAVE GENERATED TWITTER API TOKENS FIRST FOR THIS TO WORK –
  ConfigurationBuilder cb = new ConfigurationBuilder();
  cb.setOAuthConsumerKey("YOUR TWITTER API CONSUMER KEY");
  cb.setOAuthConsumerSecret("YOUR TWITTER API CONSUMER SECRET");
  cb.setOAuthAccessToken("YOUR TWITTER API ACCESS TOKEN");
  cb.setOAuthAccessTokenSecret("YOUR TWITTER TOKEN SECRET");

  //Make the twitter object and prepare the query – YOU NEED TO HAVE IMPORTED THE TWITTER4J LIBRARIES FOR THIS TO WORK –
  Twitter twitter = new TwitterFactory(cb.build()).getInstance();

  Query query = new Query("Hi");
  query.setRpp(10);

  //Try making the query request.
  try {
    //Status status = twitter.updateStatus("Processing to Arduino Now"); //message needs to change per tweet

    QueryResult result = twitter.search(query);
    ArrayList tweets = (ArrayList) result.getTweets();

    for (int i = 0; i < tweets.size(); i++) {
      Tweet t = (Tweet) tweets.get(i);
      String user = t.getFromUser();
      String msg = t.getText();
      Date d = t.getCreatedAt();
      println("Tweet by " + user + " at " + d + ": " + msg);

      //Break the tweet into words
      String[] input = msg.split(" ");
      for (int j = 0; j < input.length; j++) {
        //Put each word into the words ArrayList
        words.add(input[j]);
      }
    }
  }
  catch (TwitterException te) {
    println("Couldn't connect: " + te);
  }
}

void draw() {
  //Draw a faint black rectangle over what is currently on the stage so it fades over time.
  fill(0, 25);
  rect(0, 0, width, height);

  //Nothing to do until the first query has returned some words
  if (words.size() == 0) {
    return;
  }

  //Draw a word from the list of words that we've built
  int k = (frameCount % words.size());
  String word = words.get(k);

  if (word.equals("Hi") == true) {
    my_port.write(255);
    delay(4);
    my_port.write(0);
  }

  if (k == words.size()-1) {
    println("new query");
    delay(1000);

    //Credentials – YOU NEED TO HAVE GENERATED TWITTER API TOKENS FIRST FOR THIS TO WORK –
    ConfigurationBuilder cb = new ConfigurationBuilder();
    cb.setOAuthConsumerKey("YOUR TWITTER API CONSUMER KEY");
    cb.setOAuthConsumerSecret("YOUR TWITTER API CONSUMER SECRET");
    cb.setOAuthAccessToken("YOUR TWITTER API ACCESS TOKEN");
    cb.setOAuthAccessTokenSecret("YOUR TWITTER TOKEN SECRET");

    //Make the twitter object and prepare the query
    Twitter twitter = new TwitterFactory(cb.build()).getInstance();

    Query query = new Query("Hi");
    query.setRpp(10);

    //Try making the query request.
    try {
      //Status status = twitter.updateStatus("Processing to Arduino Now"); //message needs to change per tweet

      QueryResult result = twitter.search(query);
      ArrayList tweets = (ArrayList) result.getTweets();

      for (int i = 0; i < tweets.size(); i++) {
        Tweet t = (Tweet) tweets.get(i);
        String user = t.getFromUser();
        String msg = t.getText();
        Date d = t.getCreatedAt();
        println("Tweet by " + user + " at " + d + ": " + msg);

        //Break the tweet into words
        String[] input = msg.split(" ");
        for (int j = 0; j < input.length; j++) {
          //Put each word into the words ArrayList
          words.add(input[j]);
        }
      }
    }
    catch (TwitterException te) {
      println("Couldn't connect: " + te);
    }
  }
}

With the Twitter4J libraries installed in your Processing Sketch, you should be able to run this and get a constant stream of tweets printed in the Sketch's console.

This code sets the query word and limits the search to 10 returned results per page (Rpp = results per page):

Query query = new Query("Hi");
query.setRpp(10);

This code checks each word pulled from the tweets against the query word; if it matches (== true), Processing writes a value of 255, or fully "on", to the serial port connected to the Arduino, waits briefly, then writes 0 to turn it back off.  Basically, if "Hi" is Tweeted, Processing tells the Arduino to turn the LED fully on, then off again, and waits for the next value.  Note that delay(4) pauses for 4 milliseconds, not seconds; raise the value if you want a longer flash.

  if (word.equals("Hi") == true) {
    my_port.write(255);
    delay(4);
    my_port.write(0);
  }

This code, and everything repeated below it, asks the Processing Sketch to run another query for "Hi" every time it reaches the end of the word list:

if (k == words.size()-1) {
    println("new query");
    delay(1000);

The hack that Nick suggested is that to initialize the query, we have to set it first in setup() and return the initial results, and then the same code has to be added again inside draw() to ensure that the query runs continuously, always looking for the query word "Hi".
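Stripped of the Twitter4J and serial calls, the cycle-then-requery pattern can be sketched in plain Java.  This is my own illustration, not Nick's code: fetchTweets() below is a stand-in for the real twitter.search() call, and step() plays the role of one draw() frame.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class QueryCycle {
    static List<String> words = new ArrayList<>();

    // Stand-in for the Twitter4J search: returns a fresh batch of tweets.
    static List<String> fetchTweets() {
        return Arrays.asList("Hi there", "Hi again");
    }

    // Split each tweet into words and append them to the running list,
    // exactly as the Processing sketch does in setup() and again in draw().
    static void addWords(List<String> tweets) {
        for (String msg : tweets) {
            words.addAll(Arrays.asList(msg.split(" ")));
        }
    }

    // One pass of the draw() logic for a given frame count: returns true
    // when the current word matches the query term, and re-runs the query
    // when the end of the word list is reached.
    static boolean step(int frameCount, String queryWord) {
        int k = frameCount % words.size();
        boolean match = words.get(k).equals(queryWord);
        if (k == words.size() - 1) {
            addWords(fetchTweets());   // end of list: run the query again
        }
        return match;
    }

    public static void main(String[] args) {
        addWords(fetchTweets());            // initial query, as in setup()
        System.out.println(step(0, "Hi"));  // first word is "Hi"
    }
}
```

The point of the pattern is simply that the list keeps growing, so draw() never runs out of words to check.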

Arduino Sketch

Additionally, to get the Arduino LED to light, you need the Arduino Sketch, which was the missing piece in Jer's example above.

// Very basic program to try out serial communication.
// Checks for data on the serial port and dims an LED proportionally.
// Then reads a sensor, and transmits the value.
// NB. Serial is limited to one byte per packet, so constrain the data you communicate to 0-255.

int led_pin = 9;  // use "led_pin" to reference pin #
int rx_byte;      // a variable for receiving data
int sense;        // a variable for storing sensor data

void setup()
{
  Serial.begin(9600);        // start serial port at this speed (match with other software, e.g. MAX, Processing)
  pinMode(led_pin, OUTPUT);  // make pin an output – connect to LED (remember to use a >= 220 ohm resistor)
}

void loop()
{
  if (Serial.available() > 0) {     // if we receive a byte:
    rx_byte = Serial.read();        //   store it,
    analogWrite(led_pin, rx_byte);  //   and dim the LED according to its value
  }
  sense = analogRead(0);                // read the sensor – returns 0 to 1023
  sense = map(sense, 0, 1023, 0, 255);  // scale to 0 to 255 for transmission (dividing sense by 4 would do the same)
  Serial.write(sense);                  // send the sensor data (older Arduino versions used Serial.print(sense, BYTE))
  delay(10);
}

This Arduino Sketch really just reads the bytes that Processing sends over the serial port and turns the LED on and off each time the Twitter keyword is found.
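The map() call in the loop compresses the 10-bit analog reading (0 to 1023) into a single serial byte (0 to 255).  A quick Java equivalent of Arduino's integer map() formula shows the arithmetic:

```java
public class MapRange {
    // Integer re-mapping, same formula as Arduino's map():
    // (x - inLow) * (outHigh - outLow) / (inHigh - inLow) + outLow
    static long map(long x, long inLow, long inHigh, long outLow, long outHigh) {
        return (x - inLow) * (outHigh - outLow) / (inHigh - inLow) + outLow;
    }

    public static void main(String[] args) {
        System.out.println(map(0, 0, 1023, 0, 255));     // 0
        System.out.println(map(1023, 0, 1023, 0, 255));  // 255
        System.out.println(map(512, 0, 1023, 0, 255));   // 127 (integer division; 512 / 4 would give 128)
    }
}
```

This is why the comment notes that dividing sense by 4 does nearly the same thing: both squeeze the reading into the one-byte range a serial packet can carry.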

Conclusions

Here is yet another example of how to extract data from Twitter.  Like the previous post, we now have two methods of accessing Twitter through Processing.  The first method is a straight query, using your Twitter log-in and password.  The second, as described above, increases the security of your Twitter access by using the Twitter API function to generate secure tokens.

Although Nick's method is a great first step, we still need to regulate how the Twitter query feeds into Processing and then Arduino.  Right now it's a massive dump of info.  With the additional code to repeat the query, we're still getting the same results plus any new results on every query, so we need to ensure that each tweet returns its value to Arduino only once.  Then we can use that single value return to inch our stepper motor and flag up the pole.
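One possible fix, which is my own suggestion rather than anything from Nick's class: since Twitter4J's Tweet objects carry a getId(), the "only once" rule reduces to remembering the IDs we've already handled in a set, and only writing to the serial port when an ID is new.

```java
import java.util.HashSet;
import java.util.Set;

public class TweetDedup {
    private final Set<Long> seen = new HashSet<>();

    // Returns true only the first time a given tweet ID is offered,
    // so repeated query results don't re-trigger the Arduino.
    public boolean isNew(long tweetId) {
        return seen.add(tweetId);  // Set.add() returns false if already present
    }

    public static void main(String[] args) {
        TweetDedup dedup = new TweetDedup();
        System.out.println(dedup.isNew(101L));  // true  – first sighting, fire the LED
        System.out.println(dedup.isNew(102L));  // true  – a second, different tweet
        System.out.println(dedup.isNew(101L));  // false – already handled, skip it
    }
}
```

Dropped into the Processing sketch, the my_port.write(255) call would sit inside an if (dedup.isNew(t.getId())) block, so each tweet inches the motor exactly once.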

It Works………Kind of!

 

Okay, it's been several weeks now.  I've tried many Twitter to Arduino and Twitter to Processing Sketches, but the best one I've found is this!

Twitter to Processing Sketch:

//http://blog.blprnt.com/blog/blprnt/quick-tutorial-twitter-processing
//Needs the Twitter4J library imported
import twitter4j.*;
import java.util.Date;

Twitter myTwitter;

void setup() {
  myTwitter = new Twitter("yourTwitterUserName", "yourTwitterPassword");
  try {
    Query query = new Query("sandwich");
    query.setRpp(100);
    QueryResult result = myTwitter.search(query);

    ArrayList tweets = (ArrayList) result.getTweets();

    for (int i = 0; i < tweets.size(); i++) {
      Tweet t = (Tweet) tweets.get(i);
      String user = t.getFromUser();
      String msg = t.getText();
      Date d = t.getCreatedAt();
      println("Tweet by " + user + " at " + d + ": " + msg);
    }
  }
  catch (TwitterException te) {
    println("Couldn't connect: " + te);
  }
}

void draw() {
}

This is a simple, elegant and very easy-to-set-up Processing Sketch.  I've tested it with "Twitter1812" and it works perfectly.  Now, my assumption is that if this sketch can query or respond to specific word queries, we should be able to get it to seek two variables such as CAN1812 or USA1812.  Before I can get there, however, I want to turn the output of this Processing sketch into a function that will then turn on an LED through a Processing command.
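Tracking two sides should come down to running the keyword comparison twice per tweet.  A minimal Java sketch of the tallying logic I have in mind (the two hashtags and the sample tweets are my own placeholders, not tested queries):

```java
public class TweetTally {
    int canCount = 0;  // Pro-Canadian tweets
    int usaCount = 0;  // Pro-American tweets

    // Classify one tweet by which keyword it contains and bump that tally.
    void count(String msg) {
        if (msg.contains("CAN1812")) {
            canCount++;
        } else if (msg.contains("USA1812")) {
            usaCount++;
        }
    }

    public static void main(String[] args) {
        TweetTally tally = new TweetTally();
        tally.count("Huzzah for the defenders! #CAN1812");
        tally.count("Remember the raid! #USA1812");
        tally.count("Loving this class #CAN1812");
        System.out.println(tally.canCount + " vs " + tally.usaCount);  // prints "2 vs 1"
    }
}
```

Each counter could then drive its own LED colour or its own flag position on the pole.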

I found a great Processing to Arduino Sketch, but unfortunately I lost the URL of the original site, so I apologize for not crediting the original creator.  Again, this is a very simple pair of Sketches: the first is the Processing and the second is the Arduino.  Basically, Processing accesses Arduino through a dedicated serial port.  Bill's introduction to Firmata last week was meant to bypass the Arduino code bit entirely, but like many of my classmates, I couldn't get the libraries working on my operating system.

Here is the Processing Sketch:

import processing.serial.*; //This allows us to use serial objects

Serial port; // Create object from Serial class
int val;     // Data received from the serial port

void setup()
{
  size(200, 200);
  println(Serial.list());             //This shows the various serial port options
  String portName = Serial.list()[1]; //The serial port should match the one the Arduino is hooked to
  port = new Serial(this, portName, 9600); //Establish the connection rate
}

void draw()
{
  background(255);
  if (mouseOverRect() == true)
  {                  // If mouse is over square,
    fill(150);       // change color and
    port.write('H'); // send an H to indicate mouse is over square
  }
  else
  {                  // If mouse is not over square,
    fill(0);         // change color and
    port.write('L'); // send an L otherwise
  }
  rect(50, 50, 100, 100); // Draw a square
}

boolean mouseOverRect()
{ // Test if mouse is over square
  return ((mouseX >= 50) && (mouseX <= 150) && (mouseY >= 50) && (mouseY <= 150));
}

And here is the Arduino Sketch:

const int ledPin = 13; // the pin that the LED is attached to – change this if you have a separate LED connected to another pin
int incomingByte;      // a variable to read incoming serial data into

void setup() {
  // initialize serial communication:
  Serial.begin(9600);
  // initialize the LED pin as an output:
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // see if there's incoming serial data:
  if (Serial.available() > 0) {
    // read the oldest byte in the serial buffer:
    incomingByte = Serial.read();
    // if it's a capital H (ASCII 72), turn on the LED:
    if (incomingByte == 'H') {
      digitalWrite(ledPin, HIGH);
    }
    // if it's an L (ASCII 76), turn off the LED:
    if (incomingByte == 'L') {
      digitalWrite(ledPin, LOW);
    }
  }
}

Processing creates an interactive box; when the mouse rolls over it, the Arduino LED turns on.  So, my basic assumption is this: if Processing can receive an input from Twitter, it can write that input out as a function that can then turn an LED on and off within Arduino.  If we can accomplish that task, swapping the LED for a Motor Shield to power our flag gear and pulleys should be easy!
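The handshake between the two sketches above is really just a one-character protocol, and the decision Processing makes each frame can be written as a pure function.  This is my framing of the rollover logic, not the original author's code:

```java
public class RolloverProtocol {
    // True when the cursor is inside the 100x100 square drawn at (50, 50).
    static boolean mouseOverRect(int mouseX, int mouseY) {
        return (mouseX >= 50) && (mouseX <= 150) && (mouseY >= 50) && (mouseY <= 150);
    }

    // The byte Processing sends each frame: 'H' lights the LED, 'L' switches it off.
    static char byteToSend(int mouseX, int mouseY) {
        return mouseOverRect(mouseX, mouseY) ? 'H' : 'L';
    }

    public static void main(String[] args) {
        System.out.println(byteToSend(100, 100));  // H – cursor inside the square
        System.out.println(byteToSend(10, 10));    // L – cursor outside
    }
}
```

Seen this way, the Twitter swap is just a change of trigger: replace the mouse test with a keyword match and the same 'H'/'L' bytes drive the LED (or eventually the motor) unchanged.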

If anybody has any suggestions I’m all ears!