Mesopotamian Science and Technology - III


last update: 20 Nov. 2019


I have prepared three webpages.
The first webpage covers the history of writing and numbers (including cuneiforms) and astronomy (including lists of omens).
The second webpage covers the so-called “science of crafts” with cave painting, tool making (including stone tools), fire and pre-pottery technologies, weaving and textiles (ca. 60,000 BC), ceramic sculptures and pottery (starting ca. 29,000 BC), boats (ca. 10,000 BC), the wheel (ca. 6500 BC), and concludes with early patterns of trade and maps (ca. 12,000 BC).
This third webpage covers irrigation, and the metallurgy of the Copper, Bronze and Iron Ages.

It is worthwhile keeping in mind some of the 'periods' that make up Mesopotamian history:
Natufian Culture (ca. 13,000-7500 BC)
Neolithic "New Stone Age" (ca. 10,000-4500 BC)
Pre-Pottery Neolithic (somewhere between 10,000-5500 BC)
Pre-Pottery Neolithic A (ca. 9500-8000 BC)
Pre-Pottery Neolithic B (ca. 7600-6000 BC)
Pottery Neolithic (ca. 7000-5500 BC)
Hassuna Period with Hassuna Culture ca. 6900-6500 BC and Samarra Culture ca. 7000-4800 BC
Halaf Period (ca. 6500-5100 BC)
Ubaid Period (ca. 6200-4000 BC)

Chalcolithic "Copper Age" (ca. 5500-3000 BC)
Warka and Proto-Literate Periods (Uruk Period ca. 4100-3000 BC)
Gawra and Ninevite Periods (Tepe Gawra ca. 5000-1500 BC, Nineveh ca. 6000-612 BC)
Jemdet Nasr Period (ca. 3100-2900 BC)

Bronze Age (ca. 3300-1200 BC)

Early Bronze Age with Early Dynastic Period (ca. 2900-2350 BC), Akkadian Empire (ca. 2350-2150 BC), Third Dynasty of Ur (2112-2004 BC) and the Early Assyrian Kingdom (ca. 2600-2025 BC)

Middle Bronze Age with Early Babylonia (ca. 1900-1800 BC), First Babylonian Dynasty (ca. 1830-1531 BC), Empire of Hammurabi (ca. 1810-1750 BC) and Minoan eruption (c. 1620 BC)

Late Bronze Age with Old Assyrian Period (ca. 2025-1378 BC), Middle Assyrian Period (ca. 1392-934 BC), Kassites in Babylon (ca. 1595-1155 BC) and the Late Bronze Age collapse (ca. 1200-1150 BC)

Iron Age with Syro-Hittite States (ca. 1180-700 BC), Neo-Assyrian Empire (911-609 BC) and Neo-Babylonian Empire (626-539 BC)

As you can guess from the above list, establishing a definitive stratigraphy, i.e. the way artefacts are 'layered' and dated on a particular site and how the context between layers is interpreted, has always been an open issue in archaeology. In addition, some terms such as Sumerian, Sumero-Akkadian, Assyro-Babylonian, etc. are often not precise in terms of geographic coverage or chronology (even the spellings differ). Lists of time periods are notoriously inconsistent from one text to another, and I have simply used a range of dates drawn from a variety of sources. Individual authors are usually coherent in their own definitions and chronology, but when you move across multiple publications and websites, as I have done, you immediately realise that each uses a slightly different chronology to contextualise its research. The key point is that we are trying to discover features of Mesopotamian science and mathematics, not create a coherent description of the history of one of the world's earliest civilisations.

Below we have a simple map of the region indicating both present-day locations and some of the ancient sites that are mentioned in these three webpages.


Map of Mesopotamian Civilisation


The difference between North and South Mesopotamia


It is perhaps important here to revisit the difference between North (Upper) and South (Lower) Mesopotamia. On the first two webpages on Mesopotamia I did not spend much time looking at North-South issues, probably because they were not really an important topic until vast urban areas started to emerge in South Mesopotamia with the city-state of Uruk (from ca. 2900 BC).

Sites of Mesopotamian History

Upper Mesopotamia was the area with the earliest signs of agriculture and the domestication of animals. It is this area that is often associated with the Pre-Pottery Neolithic A, and it was continuously occupied from the time of the hunter-gatherers, ca. 9000 BC. Domestication of sheep and goats came ca. 7600 BC with the Pre-Pottery Neolithic B, and weaving and pottery came with the Halaf culture (ca. 6100-5100 BC). From ca. 2500 BC Upper Mesopotamia was the heartland of ancient Assyria (through to 612 BC).

Here are a few of the most important Upper Mesopotamian sites:
Tell Sabi Abyad (ca. 7500-5500 BC) is in the Balikh River Valley, almost on the border between Syria and Turkey. It is home to the earliest pottery in Syria (ca. 6900-6800 BC), which was being mass produced by ca. 6700 BC. The site has revealed the largest collection of clay tokens and seals yet found at any site.
Tell Halula (ca. 7750-6780 BC) is also very near the Syrian-Turkish border. Forty levels of occupation are recorded, and it is possibly the home of the oldest paintings of people in the Middle East. It was also home to a long initial stage of pottery production, pre-Halaf (pre-6100 BC).
Tell Qaramel (occupied from ca. 15,000 BC) is just north of modern-day Aleppo and again very near the Syrian-Turkish border. It is home to the first evidence of a permanent stone-built settlement, and had massive walls two millennia before the stone tower of Jericho. A large polished copper nugget dating from ca. 10,464-9246 BC is one of the earliest finds of metal. Early remains had their heads removed, indicating a 'head cult'.
Shir (ca. 7000-6100 BC) is, I think, in Hama Province about 50 km from the Mediterranean. It is home to some of the earliest ceramic finds in the Near East, and is famous for its massive, carefully created lime plaster floors. Given that it was abandoned ca. 6100 BC, it provides a unique insight into Neolithic village life untouched by later cultures.
Nineveh is very near Mosul in modern-day northern Iraq. The area was settled ca. 6000 BC, and for a period of about 50 years it was the world's largest city before being sacked in 612 BC.
Tell Hassuna is not far from Nineveh, to the west of Mosul in modern-day Iraq (occupied from ca. 6000 BC). This is the so-called 'type site' for the Neolithic Hassuna Culture.
Jarmo (ca. 7090-4950 BC) is in the north-eastern part of Iraq, in the foothills of the Zagros Mountains. It is one of the oldest sites where pottery has been found.

Here are a few of the most important Lower Mesopotamian sites:
Uruk (ca. 4000 BC to ca. 700 AD). In ca. 2900 BC this city was the largest in the world, and as such saw the emergence of urban life. It is the 'type site' for the Uruk Period (ca. 4000-3100 BC), a transitional period between the Chalcolithic period and the Early Bronze Age. Uruk was possibly the home of writing, of the cylinder seal, and of the first example of architectural work in stone.
Babylon (ca. 2800 BC to ca. 1000 AD) is just south of Baghdad in Iraq, and was for a time capital of southern Mesopotamia. Between ca. 1770 BC and ca. 1670 BC it was the largest city in the world (ca. 290,000 people).
Ur (ca. 3800-500 BC) was a Sumerian city-state, located in the Dhi Qar region in present-day Iraq. It was one of the most important trading cities in southern Mesopotamia.
Kish (ca. 3100-150 BC) was a Sumerian city, located south of Baghdad in Iraq. It is said to have been the first city to have had kings after the Deluge.


It is easy to get confused between the different Mesopotamian cultures/periods that emerged in both the North and South, and how they overlapped and mixed. The chart below tries to clarify things (dates may vary from one source to another).

Chronology of Mesopotamia


Upper Mesopotamia developed a rain-fed agriculture, whereas the South had to develop irrigation. It is in the South that we first see the mobilisation of labour for the construction and maintenance of canals, the development of urban settlements, and a centralised system of political authority. Tell al-‘Ubaid in the South, only 250 km from the Persian Gulf, gave its name to the Ubaid period (ca. 6500-3800 BC). It produced a specific type of pottery that was responsible for the gradual change in the pottery found in the northern Halaf Culture, hence the term Halaf-Ubaid Transitional Period (ca. 5600-5000 BC). The Ubaid period was followed in the South by the Uruk period (ca. 4100-2900 BC), named after the Sumerian city of Uruk. Sumer (perhaps as early as ca. 5500 BC) was the first ancient urban civilisation in southern Mesopotamia. It is with the Sumerians that we have the first examples of proto-writing, ca. 3500 BC, but they may themselves have evolved from the Samarra culture (ca. 5500-4800 BC) of northern Mesopotamia. The Uruk period of Sumerian civilisation was followed by the so-called Early Dynastic Period (ca. 2900-2500 BC) and the First Dynasty of Lagash (ca. 2500-2270 BC), which annexed Kish, Uruk, Ur and Larsa. It was Sargon of Akkad (ca. 2334-2279 BC), emperor of the Akkadian Empire (ca. 2334-2154 BC), who conquered the Sumerian city-states.

Not explicitly mentioned so far, but Mesopotamia (and Anatolia, the Near East, the Middle East, or the Fertile Crescent) was key to the domestication of both plants and animals. Here we will just list for reference what was domesticated in the region, and when. The dog (i.e. the wolf) was certainly domesticated in multiple places ca. 13,000 BC (but some experts go as far back as 33,000 BC). Domesticated lentils were being eaten ca. 11,000 BC in the Middle East, and goats were domesticated ca. 10,000 BC in the Near East. Then ca. 9000 BC we have the domestication of the fig tree, emmer wheat, flax, peas, and sheep in both Anatolia and the Zagros mountains. Einkorn wheat and barley were domesticated ca. 8500 BC in the Fertile Crescent, and the aurochs (i.e. cattle) in the Middle East ca. 8000 BC. Durum wheat was domesticated ca. 7000 BC, and the pistachio was a common food product ca. 6750 BC. Sometime between ca. 6000-4000 BC the watermelon, olives, and grapes were domesticated, and ca. 3500 BC the pomegranate and hemp. Later came oats, the quince, chestnuts, rye, and lime. Check out “Evolution, consequences and future of plant and animal domestication”, Domestication, and the List of domesticated animals and the List of domesticated plants.

North versus South


As we have said, the southern Ubaid period influenced the northern Halaf culture, but the North remained largely egalitarian, with small communities without centralised leadership and with few signs of status markings. Both house forms (open courtyards and large domed ovens) and styles of painted pottery spread from the South. But the North and South evolved more or less simultaneously a desire for monumental architecture, for long-distance trade, and for specialised craft production. Both North and South went on to develop a liking for large-scale feasting, religious institutions and specialised temples, mass production of ceramics, and urbanism, but also organised violence. Both also controlled property, as evidenced by the widespread use of clay or bone seals and stamp seals. These seals were placed on jars, containers, boxes, and even storerooms, suggesting that they were not just reserved for an elite or a centralised political authority.
There are signs both in the North and South of monumental buildings intended for the manufacture of craft items, e.g. with plastered basins and bins, pounders and grinding stones, tools, large ovens, and stocks of bitumen, marble, flint and obsidian. We know that obsidian as a raw material had a surprisingly complex trading pattern, as did the trade in pottery (and also the later exchange of metal ingots and tools). This is not to say that there were not many localised exchange networks. For example, the so-called “sprig ware” pottery vessels with a distinctive vegetal motif were made in the North, and were distributed quite extensively in the region prior to the arrival of people and ideas from southern Mesopotamia. And large settlements also existed in the North, even if most settlements remained villages.

One administrative technology conspicuous by its absence in northern Mesopotamia was writing. Pictographic writing, which later evolved into the cuneiform script, has its origins in the urbanism of southern Mesopotamia.

Things did not always happen slowly and without violence. For example, at Tell Brak (in the North-East) clusters of human skulls were found, with an almost complete absence of hand and foot bones (and no bodies of children). In addition, the pattern of animal remains suggests that feasting events were held on and over the dead bodies.

What is certain is that by ca. 3500 BC the North also had a few very large urban cities, e.g. at one point in time Tell al-Fakhar in the North was larger than Uruk in the South, and Tell Brak was almost as big. It is unclear if these were truly urban centres, since they encompassed much open space set between smaller settlement clusters, but it is clear that they did have all the indications of being economic, religious and political centres. These larger northern cities were in many ways city-states. One sure sign is that even ca. 4500 BC there was a system of seals in place, run by a centralised bureaucracy that redistributed rations to dependents. Chiefs or 'officials' (it all added up to the same thing) existed, and 'rule' was almost certainly administered by powerful households, or “households of the gods”.

This tells us that the reality was that the (southern) Uruk expansion to the North took time, and was largely driven by an asymmetric trade relationship. This asymmetry has been described as being due to the early use of irrigation in the South, which increased crop yields and reduced the risk inherent in annual climatic fluctuations. The southern rivers and large canals also provided a simple means of transport for bulk items such as harvested cereals.

In the period ca. 3000-2600 BC the Uruk colonies vanished in the North, along with almost all traces of interaction with southern Mesopotamia. Trade appears to have dried up, and ceramic traditions gave way to regional assemblages. There are signs that administrative technologies such as tokens and sealed bullae almost disappeared, and there are few signs of writing being used in the North, although cylinder seals continued to be used. Pottery was mass produced through to ca. 3000 BC, and then labour-intensive surface decoration was introduced. Southern Mesopotamia retained large temples, usually run by important economic households, whereas the North reverted to small, single-room structures with some benches and a small podium. Residential buildings in the North became smaller, with less ornamentation. A central round building appeared containing vaulted storage rooms, brick platforms, and industrial-scale ovens (a sign that the North reverted to chiefdoms). But there are also clear signs that some villages were specialised centres for the storage, processing and distribution of grain products within a regional economic system. There was also a specialised animal economy focused on sheep and goat production.

Paleo-hydrology of the Middle East

One telling measure is outlined in the above graph on the so-called 'paleo-hydrology' of the Middle East. It shows the humidity levels in Lake Van, the largest lake in Turkey (and one that is influenced by the Mediterranean climate). This endorheic lake is a saline soda lake. The Dead Sea is a salt lake that actually sits 430 m below sea level. The Soreq Cave, in Israel, has been used to reconstruct the region's climate for the past 185,000 years.
Lake Van has provided invaluable information on the pollen count and the 18O/16O ratio over time. This information suggests that around 9,000 to 10,000 years ago the weather became warmer and drier, and it would appear that near the end of the 3rd millennium BC it continued to warm up (before turning cold). The average annual precipitation declined, with a minimum at ca. 2000 BC. This appears to be linked to the decline of northern cities and the move (back) to a nomadic and semi-nomadic society ca. 1800 BC. This move is often called 'pastoral nomadism', describing a mobility based upon the seasonal availability of pasture and water for animal husbandry.
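
For reference, this oxygen-isotope ratio is usually reported as a δ18O value, i.e. the deviation of the sample's 18O/16O ratio from that of an agreed reference standard, expressed in parts per thousand (‰):

\delta^{18}\mathrm{O} = \left( \frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\text{standard}}} - 1 \right) \times 1000\ \text{‰}

Broadly speaking, in a closed evaporative lake such as Lake Van, drier conditions tend to enrich the remaining water in the heavier isotope and push δ18O towards more positive values, which is why the ratio can be read as a rough humidity proxy (the detailed interpretation is, of course, more subtle).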

In the period ca. 2600-2000 BC the North went through a period of rapid re-urbanisation. There was a sudden surge in the building of massive city walls, terraces, palaces and temples. Specialisation in the production of pottery, metals and other crafts increased, along with agricultural and pastoral production. The earliest cuneiform tablets in the North are found at Tell Beydar and Tell Brak. Some Tells grew by nearly a factor of 10 over 100 years, and major cities rapidly emerged, such as Tell Taya. This particular Tell had wide streets radiating outwards from a central core towards gates in the outer city wall.

Did North Mesopotamia develop independently of the South?


The above description is what might be called the 'classical' view of the South as the 'heartland of cities'; however, there is a different view on how North and South Mesopotamia evolved over time. The way the Tigris and Euphrates Rivers became highly 'braided' as they entered the Persian Gulf created lacustrine plains and an environment ready for irrigation. The 'classical story' tells us that this created a unique habitat ready for the emergence of the first cities and, as a result, some of the world's earliest 'great powers', e.g. the Sumerian (ca. 4500-1900 BC), Akkadian (ca. 2334-2154 BC) and Babylonian (ca. 1895-539 BC). The story goes on to say that North Mesopotamia developed through contact with the South.
But now many experts think that the North evolved independently as a major cultural complex. 'Tells', the early Mesopotamian cities, were just level upon level of mud-brick and pisé constructions, making them difficult to 'reconstruct' and analyse. Expert knowledge actually often comes from the study of smaller farming settlements, which are easier to 'unfold' but don't provide the same depth of information on wider social and economic activities. We mentioned Tell Brak in the North, and the evidence now suggests that it was a major settlement well before the emergence of southern city-states such as Uruk. In fact Tell Brak was nicely situated on a major route from metal-rich Anatolia to the Tigris Valley, at a river crossing and near rich agricultural land suitable for nomadic pastoralism.
Other northern sites such as Hamoukar and Tepe Gawra also present evidence of monumental buildings, industrial workshops and the making of prestige goods dating back to ca. 5000 BC.

Tell Brak

Tell Brak was already known as an important 3rd-millennium site covering 40 hectares and rising to a height of over 40 m. What happens during an excavation is that layers are removed top-down, revealing earlier and earlier occupation of the site. The top layer shows occupancy by Romans and early Islamic tribes, possibly sitting next to or on top of a late Assyrian occupancy. Below that it would appear that the Tell was occupied ca. 1950 BC by tribal Amorites (who had already occupied large parts of southern Mesopotamia, including Babylon). This is seen as a transitional period between the Early and Middle Bronze Ages. The presence of Amorites is identified through the presence of Khabur pottery, but they occupied the Tell for a limited period and only occupied one part of it. Later, but in many ways seen as alongside the Amorite occupation, the Mitanni (ca. 1500-1300 BC) occupied the site. They built a settlement suburb and a palace and adjacent temple. Before that the site was occupied by Assyrians. Going back further in time, Tell Brak was still a large urban complex ca. 3000 BC, but centred around the original Tell (or mound). Evidence was found indicating Akkadian (thus southern) control of the site. Houses, a large public building, an audience hall and temple, as well as administrative and 'industrial' areas were found. Cuneiform tablets and bullae (clay seals) confirmed their presence. There is also evidence of Hurrian occupation, possibly during the period ca. 2300-2000 BC; the Mitanni were a Late Bronze Age kingdom of the Hurrians and, as we have said, they occupied the Tell sometime later. Signs would indicate that Tell Brak was more or less permanently occupied during the third millennium BC, despite the fact that many other Tells were abandoned.

Tell Brak

Recent excavations have yielded finds now dating back to the mid-5th millennium BC (late Copper Age) showing that there were already monumental buildings on the Tell during the Ubaid Period. The evidence is clear that the buildings were there before the arrival of a Uruk colony from South Mesopotamia (it was at this time that a large 'old town' was built outside the walls of the Tell itself).

Eye Idols

The most notable 'find' was the so-called 'Eye Temple' which confirms that Tell Brak was a large religious and economic complex ca. 3800 BC, i.e. before the expansion of southern Uruk into the North ca. 3600-3200 BC. Above we have a few of the thousands of 'eye idols' found there.
However there is also plenty of evidence that some monumental structures date from the late-5th and early-4th millennia BC. These were associated with organised craft activities, the manufacturing of prestige goods, and organisation and provisioning beyond the household level. The Tell was some 40 m high, and these 5th-4th millennium excavations are 11 m under the surface of the Tell.
Experts talk of monumental structures, but what does that really mean? Let's take an example: in one corner of the Tell, near the north entrance, they found an important building consisting of a massive entrance with towers on either side. The doorsill is a single massive piece of basalt, 1.85 m by 1.52 m and 29 cm thick, yet basalt is not native to the region around the Tell. The mud-brick walls were 1.85 m high. So far the excavation suggests that the building consisted of two empty rooms. The walls were not set in foundations; instead the whole building sat on a platform of large cobbles and red clay 80 cm deep. On one side of the building there was an open area covered in white (waterproofed) lime plaster, with a wooden flooring at the entrance to the area. The area had been replastered at least three times and the main building rebuilt at least once. Additional smaller empty rooms were outside the main entrance (shops? storage? offices?). No one knows what this building was used for, except that it does not look like any known religious building. It has been dated to the late-5th millennium BC.
Another building, more modest in size, was home to basalt pounders, grinding stones, clay spindle whorls, polished stone palettes, delicate obsidian blades, neatly ground obsidian discs, large quantities of mother-of-pearl inlay cut from local mollusc shells, and unusually long flint blades. Alongside all this there were large ovens and plastered basins and bins. And there were also a large number of clay seal impressions, including door seals indicating 'official' locking.
In addition there was a building with ovens for pottery, including for new designs such as large open bowls and small mass produced bowls. Elsewhere there were piles of raw flint and obsidian, lots of jasper, and marble and serpentine used for beads. A large amount of bitumen was stored, as well as clay spindle whorls, and lots of sheep and goat remains. Most of this material would have been collected and transported some distance to the Tell.

Stone Chalice

Above we have the most extraordinary find, a 16 cm tall obsidian and white marble 'chalice' dating to ca. 4000 BC. The obsidian core has been ground out to make a drinking vessel. The two parts are held together with bitumen, and the upper rim would probably have had a gold or silver insert. This kind of vessel is clearly a prestige item and shouts 'social stratification'. It was found in the same room as a lion stamp seal, which in later periods would be considered a 'kingship' artefact. We have to remember that these items were not found together by chance; they were excavated and found at the same level, and are thus presumed to relate to the same period. Ownership seals have been identified in central and northern Mesopotamia as early as ca. 7000 BC, and were common practice in northern late Ubaid period sites (ca. 4800-4500 BC). The use of this type of seal clearly would have signified ownership or control, a major facet of South Mesopotamian administration, where ownership seals were linked with 'numerical tablets' and pictographic script (Uruk ca. 3400 BC). At Tell Brak there are also a few examples of the use of two different seals on the same item, suggesting two different but equal officials. So we have organisational complexity on an 'industrial'-scale site in the North at a time when in the South we would have found residential-based 'cottage industry'.
As we have already mentioned, this is an excavated site, and above what we have described sat a later 'ritual' building (but not a temple) with a courtyard containing a number of ovens used for large-scale cooking of meat served on mass-produced plates (judging by the serving pottery). This might have been a 'feasting hall', but given that it was near a gate it could also have been a 'travel lodge'. Everything suggests large-scale patterns of production and consumption, and thus large units of labour, and thus the cultivation of enough land to support non-agricultural administrators and craftsmen. Monumental buildings required large amounts of straw and water for making mud-bricks, and this would have required considerable investment in time, materials and labour. The 'numerical tablets' found at Tell Brak reflect the book-keeping of labour requirements and the control of manpower. On top of all that, there is a nearby boundary wall marking the limit of the so-called 'palace area' that dates back to ca. 4400 BC, suggesting that the area remained 'monumental' or 'sacred' for over 2,000 years (it includes the 'Eye Temple').
The area excavated on Tell Brak is about 30 hectares, but there was a suburban area of about 300 hectares built around the Tell. Some parts of this suburban area, all at least 300 m from the Tell, also date to the late-5th millennium BC.

For more information on the excavations and 'finds' at Tell Brak check out this website, which also has a page dedicated to the example of 'early warfare' mentioned above.

What we have seen at Tell Brak kills the idea that the poor old upstream northern 'dry-land' farmers were only stimulated into economic activity and big building plans after contact with the southern Mesopotamian 'core'. It is true that Uruk (also known as Warka) was the largest and greatest of the early cities, that the site yielded the earliest written documents (cuneiforms), and that it had the largest public buildings constructed in the late-4th millennium. But we don't know much about how Uruk evolved to become that mega-city, and experts have built their understanding through excavations of simpler small farming settlements. The excavations at Tell Brak, in contrast, clearly show a spatially significant northern 'industrial' Mesopotamian settlement with monumental buildings of a type not seen in the South. And to top it all, Brak, Tepe Gawra and Hamoukar produced an almost identical pottery across an area spanning some 300 km (a distinctive form that has also been found in south-eastern Turkey).

So in Tell Brak there was a proto-urban development up to 1,000 years earlier than originally thought, and possibly earlier than developments in the South. This does not put in doubt that (judging from pottery shards) by ca. 3400-3200 BC Tell Brak was colonised by Uruk. Some experts have suggested that this colonisation was violent, others that it was largely driven by the creation of merchant colonies. And this does not mean that in the future we may not discover that southern Mesopotamian proto-urban development dates back before the Uruk Period. What Tell Brak has done is make it far more difficult for experts to define the term 'urban' and to know how to attribute it to sites in both the North and South. At best urbanism has become a fuzzier concept, but as one expert pointed out, if you were living in Uruk or Tell Brak you would certainly have known that you were living in a place that was totally different to other, smaller city-towns in the region.


Stop and think


My initial objective with these three webpages on Mesopotamia was to create a very short history of mathematics and physics (natural sciences) as seen before the Greeks. On the first webpage I tried to cover the history of writing and numbers (including cuneiforms) and astronomy (including lists of omens). On the second webpage I tried to cover the “science of crafts” with cave painting, tool making (including stone tools), fire and pre-pottery technologies, weaving and textiles (ca. 60,000 BC), ceramic sculptures and pottery (starting ca. 29,000 BC), boats (ca. 10,000 BC), the wheel (ca. 6500 BC), and early patterns of trade and maps (ca. 12,000 BC).
With this third webpage I wanted to cover irrigation and agriculture, and the Copper, Bronze and Iron Ages with their engineering, metalwork and weapons. The question of 'what' I wanted to cover was easy; the 'how' is far more problematic.

When we kicked off our first webpage, starting at ca. 10,000 BC, the world's population was somewhere between 4-10 million people; by the time we finish this third webpage on Mesopotamia the world's population will have grown to 50-100 million. Like it or not, Man had discovered specialisation and had understood how to exploit it through markets. And what did Man specialise in? Whichever way you look at it, Man specialised in technology. One view is that he went from mastering Stone, to Bronze and then to Iron (we are now in the Silicon Age). Another view is that Man moved from Hunter-Gatherer, through Agriculture, the Metal Age and then the Machine Age (we are now in the Information Age). Others look at energy as the key definer of human development, with Muscle, Animal, Agriculture, Fossil, … Yet another view is to look at information, with Genes, Language, Signs and Logic, Writing, …

Perhaps what is common to these different views is the role of 'tools'. Oldowan chopping tools, hand axes, blades, needles, twisted cord, the awl, fire (who could forget fire), … What we usually find is that people try to outline a chronology and then fill in what fits, but I want to look at science, mathematics and engineering, and 'add a bit' to help understand the context and chronology of a discovery or invention.

Sea levels and irrigation


Despite the facts, many authoritative sources still promote the idea that complex urban city-states first emerged in South Mesopotamia with the Uruk Period (from ca. 4000 BC). No matter where the first state-level societies emerged, we still don't know why they emerged. Agriculture, bureaucracy, large-scale irrigation, centralised political and religious hierarchies, warfare, inter-regional trade, control of labour, or any combination of the above, could be responsible for the emergence of the first city-states.

We will look at these civilisations (North and South) based upon the idea that their development was certainly linked to so-called paleoenvironmental conditions. The basic idea is that the Tigris-Euphrates river system discharges into the Persian Gulf, and today covers more than 800,000 km2 (and is the 12th largest drainage basin in the world). Whichever way you look at it, the key climatic determinant for human life in what was and still is a semi-arid region is water availability. The river system was and still is dominated by snowmelt hydrology, which makes it susceptible to climate change impacts. For example, virtually all the waters of the Euphrates are generated in Anatolia from rain and melting snow (the Tigris drains from southeast Anatolia, and the Taurus mountains are still part of the so-called Alpide belt). The emergence of the Akkadian Empire (ca. 2334-2154 BC) has been linked with the productivity of the rain-fed agricultural lands in the North and irrigation in the South. Drought has been used to explain the Late Bronze Age Collapse (ca. 1200-1150 BC), and there are strong indications that a decrease in rainfall produced a minimum in the discharge of water into the Persian Gulf during the period ca. 1150-950 BC. This correlates with the decline of the Babylonian and Assyrian Empires in the period ca. 1200-900 BC.

In recent times the Tigris, Euphrates and Karun rivers flow along the length of the Mesopotamian lowlands into the Shatt-al-Arab estuary, a marsh area and deltaic system at the head of the Persian Gulf. Initially (in the North) the rivers are deeply incised into the plateau, whereas in the South the slope is less than 1% and they meander through the floodplain. The Persian Gulf is very shallow (average depth of 35 m) and was well above sea level during glacial times (some freshwater lakes and shallow basins would have existed). With the reduction of the ice volume during ca. 12,500-8000 BC the sea level rose by about 75 m (to about 32 m below present-day levels). This would have still meant that much of the Gulf floor would have been exposed until ca. 6000 BC, but with a broad river valley and marshes and small lakes in the flatter areas. Sea levels continued to rise through to ca. 4000 BC, reaching about 2.5 m above present-day levels. No matter how you look at it, the emergence of the southern Sumerian civilisation post ca. 4500 BC must have been substantially influenced by the marshland and lakes created in the estuary leading into the Persian Gulf. In fact experts think that marshlands would have extended inland as far as the city of Ur (the site is now quite some distance inland). Some experts have even suggested that the ancestors of the Sumerians might have arrived across the Gulf floor and not from the North. This might provide a rational basis for their legends and gods that are often set against a background of rivers and marshes (e.g. Enki, Abgal, …, or even the Epic of Gilgamesh).
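
Just to put these figures in rough perspective (a crude average, assuming the rise was spread evenly over the whole period, which it certainly was not):

\frac{75\ \text{m}}{(12{,}500 - 8{,}000)\ \text{years}} \approx \frac{75\ \text{m}}{4{,}500\ \text{years}} \approx 1.7\ \text{cm per year}

On a coastal plain as flat as the head of the Gulf, a sustained rise of even a centimetre or two per year would arguably have produced visible changes in the shoreline and marshes within a single human lifetime.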

Let us for a moment step back in time. Viewed from a satellite flying over the region, Mesopotamia (particularly the South, or Lower Mesopotamia) is like a trench sitting in front of the Zagros Mountains. It is partly occupied by the sea and partly by sediment brought down by the Tigris and Euphrates rivers. In addition, at the 'mouth' there is material infilling from Wadi Batin in the south, and from the river flowing down from the Zagros. The Lower Mesopotamian Plain had (and still has) a very low gradient (about 1%) below Baghdad, meaning that the river courses are unstable. They can frequently change because of sediment accumulation. All this has created a region of lakes and marshes. In the past the whole region has thus been shaped by (a) rising sea levels up to a maximum ca. 4000 BC, (b) sediment accumulation, and (c) a pinching effect on the upper part of the delta. Below we have two images: the first is what the delta might have looked like around 3000 BC or 4000 BC, and the second is what the delta looks like today.

Delta 3000 BC

Delta 2000 AD


The idea that the cities of Sumer and irrigation went hand-in-hand looked convincing. At the time the cities were still pre- or proto-literate, so the cuneiform texts used to justify this actually came from much later periods. In addition, early excavations logged buildings and artefacts, not environmental evidence. Luckily, winds periodically re-exposed long-buried walls, artefacts, and irrigation canals, and allowed experts to discover a pattern of urbanisation (and irrigation) that started more than 5,000 years ago. However, this 'pattern' is a mix of surveys over limited areas, presumptions about where watercourses should be, aerial photography, hypothesised linear connections where no direct evidence is visible, and a strong bias towards elite sites previously excavated and where cuneiform texts have been found. Today experts have a much more nuanced view of how the South developed.
During the Neolithic Ubaid Period (ca. 6500-4500 BC) irrigation and urban development focussed on villages built on river levees (dykes) bordering swamps and marshes, and including (ca. 4800-4500 BC) an extensive canal network between major settlements. It is certain that innumerable smaller, scattered sites are still buried beneath the sands. What we see today are some Ubaid towns that remained inhabited into the Uruk Period (ca. 4000-3100 BC) and that remain 'exposed' because they are situated on rock 'turtlebacks'. Our understanding of the irrigation network is at best partial. As an example, the main water supply into the city of Uruk (now totally desiccated) was an important waterway that could not have depended on the levees visible today. Another example is a simple line on an aerial photograph that was probably a substantial canal the size of a Tigris distributary. Between Uruk and Ur (ca. 40 km) there was a system of dykes that controlled the 'tidal flushing' that influenced cultivation regimes as far inland as Uruk, encouraging date-palms and levee garden crop production. In fact late Uruk seals and tablets depict palms and hunting scenes with pigs stalked among reeds. Proto-literate texts include dozens of ideograms for reeds, waterfowl, fish, dried fish, fish traps, and cattle and dairy products (and 58 terms for wild and domestic pigs). Dried fish was the principal food at the time, and the offices of 'fisheries governor' and 'fisheries accountant' endured for 1,500 years.

Mesopotamia

We have to remember that initially Ur was a city-port of what is called the Eridu basin, which now is a patchwork of new sediments, old surfaces, and migrating dunes (even Lagash was once a city-port). At the time this area was probably less an intensely irrigated area and more a marshland used by wetland cattle-keepers who harvested thousands of tons of reeds and rushes for mat-weaving, fodder, fuel and construction needs.
The 'heartland' of the South was north of Uruk, where the trend was to bigger settlements as the wetlands were dried out for agriculture (experts think that most of these settlements are still buried). Canals were built facilitating boat traffic from one settlement to another. As the climate dried and the sea level dropped, these canals were extended further south and east. The larger settlements certainly grew over time, but most did not have city walls and many were 'confined' by areas prone to seasonal inundations. Later (ca. 2600-2350 BC) the sea level rose again and many of the marshes were inundated, with molluscs, marine fish and waterfowl. Texts found in the household of the wife of the ruler of Lagash (at the time a city-port) list a productive household of 1,200 people, including 100 fishermen and another 125 oarsmen, pilots, longshoremen and sailors. The economic activity of the household was fresh and salt-water fishing and the sale of fish and dried fish through merchants acting on behalf of the household.
The overall idea is that a few larger settlements had palm groves, gardens, temples and kilns, and sat on rock 'turtlebacks' protected from seasonal inundations by levees (we are still in the 3rd millennium BC). It was here that you would find intensive agricultural production, and reed and other marsh products for urbanised consumption. Inundations were managed and exploited, but in most cases there were no signs of field irrigation.
We are still in the 3rd millennium BC but if we look to the north of Ur we find a series of watercourses interconnecting the Euphrates with the Tigris. However texts clearly still show the importance of marshland resources like reeds, fowl, pigs, etc. and they record bitumen, boats, mats, and standardised fish baskets.

Mesopotamian Delta Ur and 1972

Above we have, on the left, the Mesopotamian delta at the time of the city-state of Ur (ca. 3000 BC) and, on the right, the same river delta in 1972. It is worth noting that some of our ideas concerning Mesopotamia were created in the 19th century, when marshes meant disease and anyone 'sensible' (even 5,000 years ago) must have wanted to convert them into nice cultivated agricultural land. Marsh must be 'waste' and could not have economic potential. Irrigation and plow agriculture must be the only objective of a sensible administration! Today we see that marshlands can be economically viable and that the destruction of wetlands can be a major economic and environmental disaster. What we can see now is that the early Sumerian city-states were actually more like islands embedded in a marshy plain, situated in a vast deltaic marshland. Waterways were more for transport than irrigation, and reeds were the key economic resource. Irrigation canals would come later, but the builders would enter into an infernal cycle: a constant need to invest in ever-more-extensive irrigation systems in order to emulate the natural wetland productivity they were replacing.

Irrigation and agriculture


As we move into the 2nd millennium BC, and as the climate dried and became more seasonal, the Euphrates became an important source of irrigation water.

Before moving into the 2nd millennium BC, let's just review what we know. There was a slow transition from hunter-gatherer to fully developed farming between the 11th and 9th millennia BC (the Neolithic Period). This was most evident in the Levant (Jericho) and Anatolia (Çayönü). Domestic einkorn wheat, emmer wheat, hulled six-row barley, lentils, chickpeas and common vetch were the first crops domesticated. Domesticated flax for oil (linseed) and textile production appeared in the Balikh Valley in the mid-8th millennium BC. Agriculture was rain-fed and small-scale hoe farming. Tools were just stone sickles, cutters and vessels for harvesting and food processing. South Mesopotamia did not appear to play an important role at this time. For some experts Mesopotamian prehistory starts with the southern Ubaid culture (ca. 6500-3800 BC), for others it also started in the North with the Halaf culture (ca. 6100-5100 BC).

Initially everything focussed on how an 'urban revolution' emerged and how large-scale irrigation led to the "coercive control of populations by bureaucracies". Things changed when ceramics could be accurately dated using the Carbon-14 technique. Experts were now able to document long-term settlements from prehistory to the present. The result was that they actually saw that irrigation developed from a small-scale basis that did not require state control. In fact in Lower (South) Mesopotamia the Euphrates and Tigris naturally irrigate by gravity flow. And we now know that the earliest proto-urban developments in the South were based upon a wetland providing a diversified economy of hunting, fishing, construction materials and animal fodder (cereal was not a major crop). Irrigated agriculture appeared in the 6th millennium BC in central Mesopotamia (not the South). It was here that domestic animals began to be used not just for their meat and carcasses, but also for their milk and labour (transport and plowing with the scratch plow). Irrigated cereal and date-palm gardens first appeared in the 5th millennium BC in Ubaid sites, even if today they are usually associated with later southern Mesopotamian cultures.

Then came the focus on urban societies in Uruk in southern Mesopotamia. Again this was defined as an 'urban revolution', driven by furrow irrigation in narrow, long, parallel strips bordering newly dug canals. Other experts underline the importance of the seeder plow, others of animal traction, yet others of the threshing sledge pulled by a donkey, and finally other experts prefer to stress the most striking of ceramic artefacts, the low-cost, high-fired clay sickle. And we should not forget the idea that the emergence of temples provided the ideological coercion that was needed to make villagers provide the work needed to dig and maintain large-scale irrigation networks. Productivity gains were enormous, but it was the temples that benefited.

This 'Sumerian miracle' has been challenged by finds in North Mesopotamia which showed that a proto-urbanism developed earlier in the North, and that clay sickles and flint blades appeared in both the North and South.

The analysis of written texts from the late-4th millennium BC added to the debate. One of the most important 'wisdom texts' outlined what work was needed during the agricultural year: "Annual flooding in the spring, hoeing and plowing to prepare the fields, harrowing, sowing, maintaining the furrows, irrigating (three or four times), harvesting, threshing, winnowing, and finally storing the harvest". Other scholarly compositions offer glimpses of agrarian practices. One tablet listed Sumerian terms on cereal and date-palm cultivation together with their Akkadian translations. Even law codes, such as the Code of Hammurabi, contained provisions on agricultural matters, such as the leasing and cultivation of fields, maintenance of irrigation devices, grazing of sheep on fields, palm-groves, the renting of animals for agriculture, and the stealing of agricultural devices. Given the southerly geographical origin of the Sumero-Akkadian scholarly tradition, this exceptional documentary situation is, however, limited to irrigation agriculture and ignores the traditions of dry farming in the North. In this region (and, in many respects, in the South as well), everyday records, such as letters and, more importantly, economic texts recording the expenditures and deliveries of agricultural goods, offer the most complete evidence for the evolution of farming in Mesopotamia between the 3rd and 1st millennia BC.

The earliest documents (Jemdet Nasr Period, ca. 3100-2900 BC) attest to the existence of big estates with hundreds of dependents engaged in large-scale irrigation farming of barley, emmer, and probably date-palm. Other tablets mention a tripartite system of land tenure that would last until the mid-2nd millennium BC. Part of the estate’s land was directly cultivated for 'official' needs, while the rest was allocated to dependents as subsistence fields, and some parcels were leased to farmers against a fixed revenue. The Early Dynastic Period (ca. 2900-2350 BC) witnessed the appearance of the seeder plow. By adding a funnel to the ard at the time of sowing, Sumerian farmers were able to drop cereal grains at regular intervals directly into the furrow, minimising seed loss and increasing productivity. It was operated by oxen or donkeys, and the sowing season extended over several months, from late summer to early winter. Fallow rotation may have been applied. Harvest occurred in spring and was performed with a saw-like tool. Copper sickles appeared in the archaeological record in the early 3rd millennium BC and are first attested in cuneiform texts from the Sargonic Period (ca. 2300 BC). In Ur (ca. 2100 BC), they became the standard harvesting tools. Sumerian fields were large (44-46 hectares), divided into furrows of various lengths, and separated by strip-shaped zones of overgrowth protecting the soil from wind erosion. In the marginal areas grew spices such as coriander and caraway. Field crops were harvested three times a year and included flax, legumes and vegetables, especially garlic and onions, of which a dozen varieties were known.

In Upper (North) Mesopotamia, the beginning of the Early Bronze Age (ca. 3100 BC) saw the development of a material culture independent from southern traditions. This corresponds to a decline in urban life and a process of ruralisation in small centres and hamlets. Located in the dry-farming zone, they relied on rain-fed cereal farming at a time when the climate was relatively wet. The increasing presence of caprids (goat-antelope) in faunal remains and the decrease of wild species indicate specialised pastoralism in the steppe, probably linked to textile production. Mari (ca. 2500 BC) probably relied on irrigation for its subsistence, while the plains near the Khabur Triangle saw the emergence of massive cities centred on a dry-farming hinterland. Analyses of satellite images show radiating roads used by farmers for traveling to their fields, while land surveys reveal traces of manuring on the fields. Tell Beydar (ancient Nabada) yielded the earliest written evidence for agrarian management in Upper Mesopotamia (ca. 2400 BC), describing a large estate engaged in dry cereal farming and supervising sheep and goat herding.

Mari

We will spend a little time looking in more detail at the city-state of Mari (it's nice to look at one of the lesser-known sites). This city flourished as a purpose-built trading centre (ca. 2900-1759 BC), placing it in the Early and Middle Bronze Ages, and in fact it was a copper and bronze smelting centre. As a purpose-built city it was one of the earliest known planned cities. Originally the city was 4-6 km from the Euphrates, but today the river has washed away a substantial part of the old city. It is said to have once been connected to the Euphrates by a linking canal for domestic water, and it was this linking canal that eventually became the new route for the river. The Euphrates was navigable and ships could actually use the canal for transporting finished goods. They also built a 16 km long (100 m wide) irrigation canal and another 126 km long straight navigation canal which allowed boats to bypass the winding Euphrates (and on which Mari collected tolls).

Mari

The city was designed prior to construction, and consisted of two concentric rings, the outer as a protection against flooding, and the inner a defensive wall. All the streets sloped and included a complex drainage system, important since the buildings were made of mudbrick. When the city was discovered in 1933 it immediately yielded over 15,000 tablets, most dating from the last 50 years of the city's existence. The intriguing detail is that archaeologists are not sure who actually built the city; they are simply defined as an "unidentified, but well-organised, complex society". During the city's 1,200-year existence it went through three phases (Mari I, II, III). During Mari I there were no religious or palatial structures, but many houses and craft shops producing a wide variety of goods (beyond metalwork). Then it would appear that the city was abandoned ca. 2650 BC, only to be re-inhabited ca. 2550 BC. These new inhabitants levelled the old city and built a new one (Mari II). It is thought that this city controlled a vast area in North and Middle Mesopotamia. Mari II was destroyed by Naram-Sin (ca. 2254-2218 BC), grandson of Sargon, as he expanded the Akkadian Empire. He razed the city and walls, and built another city (Mari III). When the Akkadian Empire fell (ca. 2150 BC) Mari again dominated North Mesopotamia. Finally, to cut a long story short, Mari III was razed by Hammurabi in ca. 1758 BC. As was the case in many places, when he burnt the palace he inadvertently fired the tablets, preserving them for later excavation.

Mari 2014

Above we have a satellite view of Mari (a World Heritage Site) in 2014, and we can compare it with an image taken in 2011 (below). What we can see (marked in red) are pit digs (there are hundreds) made by ISIS (an unrecognised proto-state or band of terrorists). In the nearby ancient Greek-Roman city of Dura-Europos, founded ca. 300 BC, nearly 4,000 pits have been counted.

Mari 2011


Around 2200 BC, many city sites collapsed, a phenomenon that has been linked to the “4.2 kiloyear event”, a climatic anomaly visible in most palaeoclimatic records. Recent studies, however, suggest that some places were less affected than others, and that many agrarian societies in the North were capable of resilience in the face of climate change. Both the palaeobotanical and written records show a decrease (ca. 2000 BC) in drought-sensitive wheat and emmer, in favour of more stress-tolerant barley. In the Mesopotamian lowlands, these problems may also have been triggered by increasing salinisation of the soil caused by poor drainage. In any case it would appear that the crisis was far less dramatic in the South, where irrigation prevailed, than in the dry-farming North. The flourishing of Neo-Sumerian city-states such as Girsu/Lagaš, and the numerous administrative records, attest to the vitality of agriculture ca. 2100-2000 BC, especially at the time when the Third Dynasty of Ur established hegemony over Lower Mesopotamia.

After the fall of the Ur III empire (ca. 2004 BC), kings of Amorite descent took over Mesopotamia (ca. 2000-1600 BC). The Old Babylonian documentation is more detailed than any other, especially in Mari (ca. 2900-1759 BC), where the union of both sedentary agriculturalists and nomad pastoralists under a common leadership offered an exceptional overview of tribal life and culture, as well as of pastoral nomadism in the plateaux west and east of the Euphrates. The Mari archives document the cultivation of winter cereals (mostly barley) and pulses like broad bean, pea, and chickpea. Sesame and barley were the main summer field crops, and vines were grown hanging on trees in orchards that also contained fig, apple, pear, and pomegranate trees. Mari agriculture was dependent on irrigation, which required fetching the water of the Euphrates and Khabur several kilometres upstream in order to irrigate fields located on terraces above the river level. Some of the Mari fields were located directly on the valley floor and were farmed with small-scale irrigation, and they were directly exposed to the destructive spring floods of the rivers, which caused frequent harvest losses.

During ca. 2900-1700 BC Mari built two successive and independent irrigation systems, and each is said to have relied on a canal of some 20 to 30 km. In their texts such canals were called 'rakibum', or 'one which rides', suggesting that the canals ran over the land using dikes. These canals were only used in the irrigation season and they did not have any villages along them. There are mentions of a canal 120 km long dating back to ca. 2900 BC. Records also point to the existence of two canals, one for irrigation and the other a domestic water supply. There may also have been a navigation canal built more or less at the same time. This particular version of Mari was destroyed by Hammurabi in 1758 BC. This description is subtly different from the one given earlier, and I've kept it to highlight the differences in understanding, interpretation and description between academics and archaeologists. One says 16 km, another says 20 to 30 km; one says 'was', another says 'may'.

Remote Sensing

It is difficult to get a clear idea of the scale and effort that was put into building canals. But above we have a remote sensing view of about 100,000 km2. What they have done is map known archaeological features such as canals and irrigation areas dating up to ca. 570 AD (so including early Islamic features as well), and they have been able to isolate a total of 2,900 km of canals and qanats (underground channels) still visible.
A different analysis of satellite photographs shows that there were over 6,000 km of tracks used to move food and resources to the urban centres in this region. This was the time when barley was the main crop, and when there was a drastic reduction in wild species. Sheep and goats remained the most important livestock. Models show that a 'village' and its surrounding catchment area under cultivation could support around 2,500 people. A town of 6,000 to 8,000 would need six satellite villages within 5-10 km to provide food for it. So you can see the importance of irrigation, because a city with more than 15,000 people would be very susceptible to fluctuations in annual rainfall and could collapse if faced with a multi-year drought.
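
Just to make the arithmetic behind these figures explicit, here is a minimal back-of-the-envelope sketch in Python. The parameter values (a catchment supporting ~2,500 people in total, a village population of ~1,300) are illustrative assumptions chosen so that the surplus-based logic roughly reproduces the 'six satellite villages for a town of 6,000-8,000' quoted above; they are not taken from the archaeological literature.

import math

def satellite_villages_needed(town_population,
                              catchment_capacity=2500,   # people one village catchment can feed in total (figure from the text)
                              village_population=1300):  # assumed size of the village itself (purely illustrative)
    """Villages needed to feed a town from the surplus left after feeding themselves."""
    surplus_per_village = catchment_capacity - village_population
    return math.ceil(town_population / surplus_per_village)

for pop in (6000, 8000, 15000):
    print(pop, "townsfolk ->", satellite_villages_needed(pop), "satellite villages")

With these invented numbers a town of 6,000-8,000 needs roughly five to seven supporting villages, in line with the figure quoted above, while a city of 15,000 or more needs over a dozen, which is why a run of bad rain-fed harvests without irrigation could be fatal.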

Contrary to contemporary Babylonia, where royal land was leased to independent entrepreneurs, the Mari farmers were palace employees who were expected to produce a quota of crops fixed in advance. The sowing season was determined by fluctuations of the river and the rising of a star opportunely called 'the Yoke' (Arcturus). Tasks of the growing season included maintenance of canals, weeding, irrigation, and protection against locust invasions and other pests, like birds or gazelles. Harvest occurred in spring, often in haste because of the risk of flooding. Grain was then carried away, threshed by oxen on threshing floors outside of the flood’s reach, and finally stored in granaries. When the harvest was not destroyed by war, locusts, or flood, this agriculture was very productive. Although absent from the middle Euphrates, date-palm was one of the major crops of Babylonia. Palm groves were exploited by specialists who leased them from private owners or the crown. According to Hammurabi’s laws, the planting of a palm grove required a three-year investment, during which time the palm shoots grew on the mother trees before being planted in an enclosed garden, watered, and cross-pollinated. Before harvest, the gardener was assigned a quota to be delivered to the landlord, often half or two-thirds of the unripened dates.

In 1595 BC, the Old Babylonian dynasty established by Hammurabi fell to a Hittite raid, and Mesopotamian history became more obscure. It would appear that ca. 1500 BC increasing aridity affected the region, and during the Kassite Period (ca. 1600–1150 BC) settlements became fragmented and agriculture, whilst remaining extensive, became dispersed. The archives of Nippur show that irrigation was concentrated only in the south and east of the city, and 'water drawers' were in use (bucket irrigation). Horticulture was also found at Nippur and Ur, with gardeners producing spices (coriander and saffron) and dates. In the middle Euphrates, a series of large urban centres with strong communal institutions flourished using small-scale irrigation to cultivate cereal fields and orchards, especially vineyards, with a frequent mixture of crops and trees on the same parcel. Fortified farmsteads were an important feature of the rural landscape during the Mittanian and Assyrian periods. In western Upper Mesopotamia, the middle Khabur was progressively transformed (ca. 1800-1300 BC) from a zone of small-scale irrigation agriculture around isolated cities into a full-fledged settlement system articulated on a regional canal. However, judging from the archives, the Assyrians (in the North) invested in irrigation to supplement their traditional dry farming, but the results did not meet their expectations.

Hittites

The Hittite Empire started to attack Mesopotamia ca. 1700 BC, and at their height they dominated Anatolia, northern Syria and northern Mesopotamia. They are known to have adopted both the religion and the legal system of the Sumerians, and above all they were among the first peoples to produce iron tools (ca. 1400 BC).

During the early Iron Age (1150–900 BC), aridity worsened, inciting social unrest and famine, which are recalled in Assyro-Babylonian literature and royal inscriptions. By ca. 950 BC, however, improved precipitation and renewed Assyrian power allowed for the political expansion of the Neo-Assyrian Empire (911–612 BC). Regional surveys show, from the Upper Tigris valley to the Turkish Euphrates, the multiplication of dispersed, small rural sites that can be identified with the hamlets mentioned in a cuneiform tablet recording the census of the district in ca. 700 BC. This was coupled with agricultural intensification relying both on dry farming and on large-scale irrigation canals around the capitals of Nimrud, Dur-Sharrukin, and Nineveh. A spectacular feature of this irrigation system was the Jerwan Aqueduct, built under Sennacherib (705–681 BC).

Reconstruction of the Jerwan Aqueduct

This aqueduct was built with more than 2 million dressed stones and used stone arches and waterproof cement. It could be the world's oldest aqueduct, and it certainly predates Roman aqueducts by at least 500 years.

Cuneiform records from Babylonia from ca. 1000 BC suggest that changes in river courses required heavy investments in irrigation and were accompanied by a re-allocation of fields under royal order. Previously uncultivated land was reclaimed and allocated to different social groups in a form of land tenure that is attested until the early
Achaemenid times (ca. 550-330 BC). These innovations were part of a series of technological changes at the transition between the Bronze and Iron Ages, which saw the widespread adoption of iron tools, especially plowshares.


We have seen that it is quite possible that the first attempts at controlling water in the region were to mitigate the effects of destructive episodic flooding and protect already planted fields. On top of all that, the very small incline of the alluvial plain in the South and the fine texture of the soil easily gave way to waterlogging and salinisation, i.e. lack of natural drainage. But the local communities were successful in practicing what is called irrigated basin agriculture (perhaps ca. 5000 BC), simply by alternate-year fallowing to allow the water table to fall after harvest. Wild plants would take over during the fallow year and foster evapotranspiration. The process uses wild plants to draw moisture from the water table, drying out the subsoil, thus preventing the water from rising and bringing salt to the surface. These wild plants also add nitrogen, and help prevent erosion of topsoil. When the field is again planted and irrigated, the dryness of the subsoil allows the irrigation water to leach salt from the surface and carry it down to the water table. However, with time the topsoil does become increasingly saline. In fact we know that Mesopotamian farmers would abandon lands for periods of between 50 and 100 years to allow them to recover.
We also know that Mesopotamians practiced joint tenure of the land, helping both to prevent plots becoming too small to support fallowing, and also preventing plots being isolated from irrigation water. On top of all that, we also know that marriages between farmers and nomadic groups allowed farmers to revert to pastoral nomadism during hard times, i.e. livestock was an insurance against drought. Experts think that the traditions of hospitality and mutual obligations derive from these earlier traditions, e.g. cooperation to dig, clean and maintain irrigation canals. Experts also think that these traditions would have stopped people acquiring excessive wealth and power. Leaders would have been “first among equals”, but any real power would have been through traditional family links. Of course, as pastoral nomadism disappeared and villages developed into cities, kin-based communities were replaced by residentially-based forms of social identification (enabling new political, religious and economic institutions).

Early irrigation practices did not evolve substantially, but ca. 3000 BC irrigation was rapidly extended, presumably to satisfy a rapidly expanding urban society (one set of canals from ca. 2500 BC has been estimated at 35 km long). Early writings tend to focus more on navigability than on irrigation for farming. A major canal built ca. 2400 BC actually provoked flooding and vast areas became choked with salt. But it was even earlier (ca. 3500 BC) that cultivators had started to shift from salt-sensitive wheat to salt-tolerant barley, and by ca. 2100 BC wheat represented only 2% of crops (wheat had disappeared by ca. 1700 BC).

Irrigation water in the region is dominated by calcium and magnesium cations, with the addition of some sodium. As the water evaporates and transpires the calcium and magnesium tend to precipitate as carbonates, leaving the sodium ions dominant in the soil. If not washed away, these sodium ions are absorbed by colloidal clay particles, deflocculating them and leaving the resultant structureless soil almost impermeable to water, impeding future plant germination. Salt will accumulate, making the ground water very saline. New water, and poor drainage, lifts the water table and the dissolved salts rise into the root zone near or on the surface.

Salinisation meant a decline in soil fertility. Reports show that over a 300 year period (ca. 2400-2100 BC) yields dropped by 50%. Some experts have suggested that this was the reason why many cities were finally abandoned, others are not willing to commit to such a single, dramatic explanation. A few experts suggest that inefficiencies and corruption of the bureaucrats that took over and maintained the irrigation canals were to blame. What is certain is that smallish, independent principalities (the principal social unit at the time) fought over fertile border districts. Breaching and obstructing branch canals were common practices. Large irrigation canals were built to stop these conflicts, but they also produced flooding and over-irrigation, producing a rise in the ground water level, and salt pollution of the land. Sporadic salinity started to occur in ca. 2400 BC, the move to barley was completed by ca. 1700 BC, but by then yields had also dropped by more than two-thirds. During this time many of the great Sumerian cities had dwindled to villages, and the power in the region had moved to Babylon. And we should not forget that the region was an alluvial plain, so silting was a constant problem in irrigation systems, and even more so in the larger canals. We know that agricultural life settled near or on the irrigation network, starting with the Ubaid, ca. 4000 BC. The signs are that water was drawn only short distances from the main watercourses, and silt would not have been a major problem. The short branch canals could be cleaned easily or even replaced. There are also signs that these smaller settlements were occasionally abandoned, probably for socio-political reasons (a nice way of saying conflicts and wars) rather than natural ones, which actually allowed the lands to recover naturally. In a later period the land and water resources were almost completely exploited, certainly true in the Parthian Empire (ca. 247 BC to 224 AD). But even in the post-Ubaid empires, population pressure was such that long branch canals were built, but because of their small cross section and small slopes they tended to fill rapidly with silt. Such irrigation networks needed both periodic reconstruction and continuous maintenance. Clearly such an irrigation network is a fantastic asset, provided a strong central authority is committed to its maintenance. But in times of social unrest, small local communities would have been incapable of maintaining the system.
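Just to put that 50% figure into perspective, spread over 300 years it is a very slow decline year on year; a minimal bit of arithmetic using only the number quoted above:

```python
# A 50% drop in yields over roughly 300 years, expressed as an annual rate.
years = 300
remaining = 0.5                                  # yields halved (figure quoted above)
annual_decline = 1 - remaining ** (1 / years)
print(f"equivalent decline: about {annual_decline * 100:.2f}% per year")   # ~0.23%
```

At roughly a quarter of a percent per year the loss would have been almost invisible within a single farmer's lifetime, which perhaps helps explain why the response was so slow.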

We think of the Tigris and Euphrates as being massive rivers that must be viewed as a vast resource flowing into the Persian Gulf. But for people living in the delta, their whole lives depended upon an elaborate system of canals and levees (dikes). And in ca. 2600 BC this network depended on inter-community cooperation between a loose alliance of small adjoining Sumerian city-states (south of what would become Babylon). One of these cooperation agreements, between Lagash and Umma, concerned a valley irrigated by a canal. Umma cultivated some land under lease, paying an annual fee to cover the costs of canal maintenance. They refused to pay, hostilities broke out and part of the canal was destroyed. Lagash finally defeated Umma, forcing them to pay for repairs (and an extension). This was the first known conflict over water, and the treaty the oldest on record (2550 BC). Here is a quick summary.

The
Sumerians introduced irrigation and navigation canals, and by ca. 3000 BC more than 25 square kilometres were irrigated. The network included dams, canals, weirs and reservoirs, with the main canals lined with burned brick and joints sealed with bitumen (and large dykes strengthened by layers of reed mats). We have to remember that by ca. 2500 BC the population of Sumer was ca. 500,000 with about 80% living in the cities; already in ca. 2700 BC the city of Uruk had a population in excess of 50,000 people. So agriculture, and irrigation on an industrial scale, was already essential to a population's well-being nearly 5,000 years ago.

Mesopotamia is not the only place where a sophisticated water culture evolved.
Mohenjo-daro in the Indus Valley, one of the world’s earliest major urban settlements ca. 2500 BC, developed into a city of over 40,000 people. We know that dwellings had bathrooms and latrines and that street drainage channels were lined with bricks and flanged terra-cotta pipes of standardised dimensions. They also had large bathing areas, supporting the idea that ritualised cleansing was already important in their culture. Certainly between ca. 3200-2600 BC they also developed irrigation, drainage canals and embankments to control floods and protect crops and settlements. We also know that ca. 2200 BC the Chinese were building dikes, canals, and reservoirs as part of large-scale flood-control projects.

Experts think that in all cultures, irrigated agriculture leads to being ruled by an authoritarian elite, and the more they depended upon irrigation the less democratic they were. The basis is that landed elites in arid areas monopolise water and thus arable land, extracting monopolistic rents from peasants, reducing them to virtual slavery, and favouring oppressive institutions. With control over water, yields are predictable (we know the Babylonians predicted crop yields), monitoring is not needed, and punishment can be promised based upon low crop yields. The elite protected their privileges through oppressive regimes, and exerted considerable influence over the bureaucracy. They educated their children so that they could reach high ranks in the bureaucracy, leading to its domination by a landed elite. This is true today in places like Pakistan, was true in Europe in the Middle Ages, and was also true in ancient Mesopotamia.

And as the “icing on the cake” the Babylonian Creation Myth has as its highpoint a battle between Marduk, the principal god of Babylon, and Tiamat, a primordial goddess of the ocean and personification of sea salt. At least one interpretation has her as a creator goddess who joins with Abzû (the god of fresh water), in a marriage between salt and fresh water, to create the cosmos and produce the younger gods. The story goes that Marduk defeats Tiamat, and in doing so creates the Earth and the skies (including the planets, stars, the Moon, the Sun, and the weather). He also puts the other gods to work in the fields, but when he destroys Tiamat’s husband, Kingu, he can then use his blood to create humankind to do the work of the gods.

Technologies of irrigation


Firstly we must note that water is not just about irrigation. You have drinking water, you have marshes and wetlands, and rivers are also bulk transport routes. On top of that water provides food resources (fish), economic resources (reeds), as well as often being sacred (and not forgetting 'waste removal').
As is usual when looking at a 'complex' topic such as irrigation we tend to forget the even simpler and more basic technologies, such as
well digging. In Tell Seker al-Aheimar (in modern-day northeast Syria) they have already found Neolithic pottery that might date from the middle of the Pre-Pottery Neolithic B period (c. 6800-6700 BC). The settlement appeared to comprise extensive multi-room mud-brick buildings and gypsum-plastered floors, with indoor storage, processing spaces, fireplaces, and livestock enclosures. And an unusually large (14 cm high) bi-chrome painted clay seated female figurine of high artistry has also been excavated. However in our context what is important is that they also found a 4 m deep water well dating to about 7000 BC. This might be the oldest well dug to gain access to a clean water source. Experts have noted ritual traces associated with the well.
Early settlements were situated near springs and rivers, etc. and whilst they did not know about pathogens transmitted in contaminated water they certainly knew that pure water was a prerequisite for successful urbanisation and state formation.
So Man built water wells at least 7,000-8,000 years ago; by ca. 6000 BC spring water was being used to feed fields using small stone channels, and by ca. 4000 BC dikes and terra-cotta drainage channels were being used to protect crops from flooding.

Cuneiform tablets provide a lot of administrative information, and mathematical texts looked at questions such as how much land could be irrigated to a particular depth from a cistern of specific dimensions. Then there were the volumes of earth that had to be removed in making water courses, and what that earth represented when piled up. Oddly there is not much on water management, nor on irrigation accounts or calendars. Often cuneiform texts don't distinguish between natural rivers and artificial canals, and there is little information on the size and layout of the irrigation network. However there are some texts that hint at the extensive nature of irrigation networks. Tablets list the variety of classes of canals, with their different names. Starting with a major river or canal (naru) flowing down a gentle slope, you would create a secondary canal (namkaru) and give it a smaller slope than the main naru. After some distance you would dig the canal but build it up over the ground using dikes, from which you could derive a new namkaru.
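Those cistern and earth-moving exercises translate very naturally into modern terms. Here is a minimal sketch; all the dimensions are invented purely for illustration, and are not taken from any particular tablet.

```python
# Two Babylonian-style exercises restated in modern units.
# All dimensions below are invented for illustration.

# 1. How much land can be irrigated to a given depth from a cistern?
cistern_length_m, cistern_width_m, cistern_depth_m = 10.0, 8.0, 3.0
water_depth_m = 0.05                                   # 5 cm of water on the fields
cistern_volume_m3 = cistern_length_m * cistern_width_m * cistern_depth_m
irrigated_area_m2 = cistern_volume_m3 / water_depth_m
print(f"irrigated area: {irrigated_area_m2:.0f} m2")    # 4800 m2

# 2. How much earth has to be removed to dig a small branch canal
#    (and hence how big a pile it makes when heaped up)?
canal_length_m, canal_width_m, canal_depth_m = 500.0, 2.0, 1.5
earth_volume_m3 = canal_length_m * canal_width_m * canal_depth_m
print(f"earth to remove: {earth_volume_m3:.0f} m3")     # 1500 m3
```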

Today satellite images can help identify features and point to where irrigation might have been useful. And you have information on what the crops might have been, and this can suggest if irrigation would have been necessary at that place.


Basic Sedimentology

Irrigation has usually been associated with Lower (South) Mesopotamia, where the land is flat, the rivers meander, and where the most common irrigation technology was the natural or manmade levee (dyke, embankment). In fact natural dykes are created by sand and silt deposits as the river meanders down to the sea. More importantly this preferential deposition of sand created over time a situation where the rivers were actually riding 1 or 2 metres higher than the surrounding land. This made it easier to dig irrigation channels leading down the levee rather than follow the very shallow gradient of the plain. This 'ideal' situation was marred by the tendency of rivers to burst their banks and create new channels, leaving some settlements without water, but creating new opportunities for settlement.

The
annual flood cycle in Mesopotamia was poorly synchronised with the needs of cultivators. In contrast to the Nile, whose monsoon-driven annual flood well matched the needs of irrigation, the annual flood of both the Tigris and the Euphrates peaks in April and May as a result of winter rainfall and spring snowmelt on the mountains of Anatolia, Iran and Iraq. This not only threatened the ripening grain crops, but also required that considerable efforts be made to protect the fields from flooding. So the idea that the Sumerians actually expended more effort on flood control than in the distribution of water for irrigation may be correct. In fact most Ur texts dealing with earth moving refer to the construction of embankments (ég) and the need to protect the crops ready for harvest from the river’s high waters, and to prevent the flooding of towns and fields.

The mismatch between the annual flood and the eventual development of large-scale irrigation is perplexing because it makes it difficult to understand
how the early phases of irrigation developed. Some experts suggest that initial irrigation was a form of flood recession agriculture that took place in the lower flood basins following the retreat of the annual flood waters. Although simple irrigation could subsequently have developed out of small natural overflows and levee breaks, which provided the locus for more organised irrigation, these could only operate during the spring floods when the water level was at its peak, which was not the correct time for the irrigation of cereals. Moreover, by discharging excess water at weak points on the river bank, these crevasses could be enlarged by the river, thereby encouraging channel breaks and even channel shifts or avulsions. On the other hand, spring and early summer floods would have benefited the palm gardens that require copious water, especially during the hot summer months, and which must have been a very important crop during the 3rd millennium BC. Because date gardens occupy the levee crests, this cycle of summer irrigation would have been in keeping with the natural ecology of the region. This spring and early summer flood would also have provided the appropriate soil water to initiate cereal crops’ growth in the autumn. As the need for cereals grew during the later stages of the Ubaid and Uruk periods, irrigation channels could have been extended down levee to more distant fields that would progressively withdraw more water during the lower phases of the flood cycle. Such an evolutionary model of progressive development away from a riverine belt of palm gardens would fit the model where palm garden oases formed the primary focus of cultivation together with lower storeys of plants within the shade. The levee-crest garden agriculture of southern Mesopotamia would have complemented the wetland resources and provided the formative stage of Sumerian agriculture.


So we have seen the first (and second) of Mesopotamian '
technological inventions', levees and canals (and later furrow irrigation). We know that large dykes were strengthened by layers of reed mats, and that some later canals were lined with burned brick and the joints sealed with bitumen.
And there was, of course, crop domestication followed by domesticated animals (and later animal traction, not forgetting the yoke which may date back to ca. 4000 BC).
The move from stone sickles to low-cost clay sickles, and later to copper sickles, is claimed by some experts to have been the most important Mesopotamian technical evolution since it had the greatest effect on agricultural productivity, and thus on the health and well-being of millions of people. Stone vessels became a variety of pottery vessels for carrying and storing food and liquids. The
hoe was an early development, then came the ard or scratch plow (see also hoe farming), and then the mattock. Later the seeder plow arrived, and then the threshing sledge.
There are numerous reports of dams, reservoirs and aqueducts being built ca. 2000 BC (there are also reports that jump from the
Jawa Dam in Jordan (ca. 3000 BC) to Egypt without passing through Mesopotamia). People tend to think that Mesopotamians must have built dams, but at best 'earth dams' can only be detected through bank erosion from wave action and the sediment deposition (siltation) they might have left on the landscape (ancient masonry dams can last thousands of years). Also dams usually require substantial amounts of rock or earth, and there is a chance that the location from which this material was taken might still be visible. Apparently the Marduk Dam (possibly also called the Nimrud or Nimrod Dam) was built across the Tigris River in the 2nd millennium BC (or ca. 2000 BC), and was maintained until about 1400 AD (another recently built dam has the same name). Some say that it was built of earth and wood, others think this dam is a legend. We are on much more solid ground with the Karakuyu Dam built by the Hittites (ca. 1600-1178 BC). Discovered in 1931, it contained a small irrigation reservoir with a 440 m long U-shaped embankment on three sides, open to the south. In the northern embankment there is a 1.4 m wide and 8 m long barrier built of large rectangular stone blocks (typical of Hittite masonry). A large block with a two-line hieroglyphic inscription dates it to ca. 1350 BC. You will need your imagination with the photograph below.

Karakuyu Dam


Some reports claim that ca. 2200 BC (or even ca. 3000 BC) Mesopotamian farmers invented the 'shaduf', or 'well sweep', a long beam on a kind of 'seesaw' with a bag or bucket at one end and a counterweight at the other. The user would lower the bag into the water, lift it out helped by the counterweight, swing the beam around and empty the water into an irrigation furrow. Other reports suggest that this kind of device was invented, re-invented and used almost everywhere, e.g. Egypt, India, Greece, North Africa, … A series of 'shadufs' could be used, one above the other. Tablets suggest that a single 'shaduf' could move 2.5 m³ per day, which would be enough to irrigate 0.1 hectares of land in 12 hours. Some experts consider that the 'shaduf' marked the shift from horticulture to true agriculture. Some experts also believe that the pulley was invented in Mesopotamia, ca. 1500 BC, although the first documented pulley came much later with the Greeks.
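As a quick sanity check on those tablet figures, 2.5 m³ spread over 0.1 hectares is only a few millimetres of water; a minimal sketch using just the numbers quoted above:

```python
# Quick check of the shaduf figures quoted above.
volume_m3 = 2.5               # water lifted per day by one shaduf
area_ha = 0.1                 # area said to be irrigated in 12 hours
area_m2 = area_ha * 10_000
water_depth_mm = volume_m3 / area_m2 * 1000
print(f"about {water_depth_mm:.1f} mm of water over {area_m2:.0f} m2")   # 2.5 mm
```

A couple of millimetres is a very light watering, which gives a sense of how many such devices, or how much canal water, field-scale irrigation would have needed.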

I want to close this section on irrigation technology by looking in more detail at the low-cost clay sickle. Admittedly it has a tenuous link with irrigation, but many experts think that it had the greatest effect on agricultural productivity, and thus on health and well-being of millions of people. Without the clay sickle irrigation would have been far less valuable, and history might well have been written differently.

Neolithic Sickle

Above we have a reconstruction of a Neolithic sickle consisting of sharpened flints set with bitumen into a wood or bone handle. This type of tool can date back to ca. 12,000 BC, but there are recent finds that might push its use back to ca. 21,000 BC, which would also suggest that farming may have occurred earlier than originally thought.

Ubaid Sickle

This is a fired clay sickle with a painted end, attributed to the Ubaid culture, and dated to ca. 5900-4000 BC. Clay sickles from the period ca. 4500-3100 BC can be found on every type of settlement and scattered across the landscape. They disappeared at the beginning of the 3rd millennium BC. Many archaeologists just see clay sickles as another agricultural tool along with mullers, hatchets, hammers, spindle whorls, etc. However, why then are there as many sickles found in settlements as in fields? Why are they just as abundant in urban centres as in country settlements? Why were sickles found in settlements in marshland areas but almost never actually in the marshes? One explanation is that as layers get washed away or eroded durable artefacts drop to lower levels and appear to be 'concentrated' in an earlier time period. Maybe in the fields the sickles are just buried deep under layers of alluvial deposits. Why are the sickles found at settlements almost always broken?
Modern archaeology has many more 'arrows to its bow', and so-called use-wear studies show that clay sickles were used for many different tasks. Just as with flint sickles, the clay ones acquire distinctly different wear patterns when used for harvesting grain or cutting through reeds, etc. In some cases narrower clay sickles were used for aquatic plants whilst broader ones were used for grain. The clay sickle may also have been used for processing the fronds and fruit of date palms. And since the palms were used for house building, boat building, fuel for cooking and for kilns, and as a raw material for mats, basketry, and rope making, the sickle might also have been used as a tool inside settlements.
The clay sickle was usually made of sandy clay and fired at high temperatures (thus producing a lot of over-fired, fused, and pitted specimens as waste). The use of quartz sand improved the sickle's hardness and cutting efficiency. Even better if it was a calcareous clay, which reduces the need for critical control over the firing conditions. It looks as if the sickles were mass-produced by pressing the clay firmly into a mould. One side looks to have been smoothed, while the other side retains the contour of the mould. And the sickle and handle were moulded in one piece. It looks as if straw was placed inside the mould and was burned out during firing. The other surface shows marks that would indicate that the clay in the mould was turned out onto a woven reed mat to dry before firing. The sickles were fired in a kiln until vitrification (above 1000°C), making them stronger and more durable.
Clay sickles appear to be almost identical, but that's not true. The small ones measure about 15 cm, the long ones about 20 cm, and the 'large' ones up to 28 cm (smaller forms in the Ubaid Period, and larger forms in the Uruk Period). The angle between the blade and the handle also varies from 'open' to 'closed'. Some sickles were painted on the handle or along the blade edge. One analysis showed that there were six different handle types, two different blade types, and five different types of tips. Many of these variations appear to be chronological.
The mould-made clay sickle may actually be among the earliest ceramic objects to be mass produced. One suggestion is that the clay sickle was just a substitute for a flint sickle because in some places in the southern floodplain local clay resources were abundant and flint and stone were quite some distance away. Experts confirmed that clay sickles replaced the older flint sickles, however what they also found was that the two types of sickles were not spatially associated. So it's not clear if the clay sickles actually replaced the flint sickle for the same tasks; there is a suggestion that the clay sickle gradually took over as new tasks developed in the settlements. For a more in-depth analysis check out the
original article.

Mesopotamia and the 'Age of Metals'


We are going to close this third and final webpage on Mesopotamian Science and Technology with a quick review of the way Mesopotamians (and others) developed
the technologies needed to make copper, bronze and iron artefacts.

One of the first things to understand is that different 'Ages' (
Stone, Bronze, Iron) occurred in different places at different times (and not forgetting the Copper Age or Chalcolithic culture). For example the Uruk Period (ca. 4000-3100 BC) in Mesopotamia is traditionally considered one of the first Chalcolithic (Copper) cultures, whereas experts talk about a European Copper Age running from ca. 3500 BC to 1700 BC. Secondly, defining a 'Copper Age' does not mean that it covers all discovered copper artefacts, e.g. some of the earliest discoveries about copper smelting date to ca. 5500 BC. Thirdly, a 'Copper Age' does not mean that everyone in the region ran around smelting copper and making copper axes, e.g. the European Battle Axe culture (or Corded Ware culture) dates from ca. 2900 BC to ca. 2350 BC and some of them actually made stone axes that imitated copper axes right down to carving the 'moulding marks' into the stone.

There were seven metals known in antiquity, namely gold, silver, copper, tin, lead, iron, and mercury. In the text below there will be mention of other metals such as arsenic and antimony. These were known in the form of compounds, but as metals arsenic was only isolated in ca. 1300 AD and antimony ca. 1540 AD. Alloys will be mentioned; for example, arsenical bronze is known to have existed ca. 5500 BC, but no one at that time knew what arsenic was or that it was even another metal. Gold, silver and native copper exist in exploitable quantities in their elemental or native forms. According to Wikipedia the first gold and silver artefacts certainly date from the 4th millennium BC, copper from ca. 6000 BC, tin from ca. 3000 BC, lead ca. 7000-6500 BC, iron ca. 3500 BC, and mercury ca. 1500 BC.

Metallurgy started with copper and developed through a series of techniques. First came
cold working, then annealing (a form of heat treatment), then smelting (a technique certainly used ca. 6000 BC), and finally lost-wax casting (used from ca. 3700 BC). According to Wikipedia all these techniques may have first appeared ca. 7500 BC in southeastern Anatolia.

One additional point, often not mentioned, is that the development of metallurgy was slow, and that for long periods of time new innovations were inferior to what people already had, e.g. not all metals were better than stone and it took maybe 4,000 years to finally produce something definitively better in the form of bronze.

Can we rapidly build a picture of the evolution of metal working? Just grabbing some popular texts off the web, we can see that some say gold came first, then hammered copper in ca. 7000 BC, then bronze ca. 2800 BC, and finally iron ca. 1500 BC, and low-carbon steel ca. 1100 BC. Other sources start with cold working copper ca. 7500 BC (and suggest that annealing, smelting, and lost-wax casting occurred about the same time), bronze appeared ca. 4500 BC, iron ca. 2500 BC and low-carbon steel ca. 1400 BC. I think that this gives us a good idea of the considerable uncertainty still associated with the evolutionary timeline of metallurgy.

Why is early metallurgy so important to archaeologists? The traditional idea is that metallurgy accompanied the rise of urban civilisation, and thus the appearance of the world's first Empires. Empires were built by invading warrior tribes and metallurgy enabled the dominating classes to acquire 'riches' in the form of metal artefacts. Overly simple? Probably, but the fact is that rich deposits of the rarer metals are often not in the place where there is the greatest demand for them. So whichever way we look at it, metal did help drive exportation; it was perhaps the first thing traded over long distances, and many would say that it brought about (or sustained) colonisation and imperialism. Roman exploitation of Spanish lead and silver deposits produced the second largest spike in Greenland ice cores (after the Industrial Revolution). Some experts suggest that Chinese importation of African gold resulted in the exporting of gunpowder, paper and the magnetic compass to Western Europe. And Spanish colonisation of the 'New World' was driven by the desire for gold and silver.

During the 4th millennium BC in South Mesopotamia the favourable conditions ensured a constant production of large food surpluses, but concentrated in the hands of a ruling class (state and church). As the population increased there was a boom in the consumption of both raw materials, wholly lacking in the alluvial plains of Mesopotamia, and luxury goods. Metals, timber and stones were the main imports, and the striking discrepancy between purchase and sale prices led to high profits for Mesopotamian traders, and stimulated the creation of organised trading networks. This led to the formation of social classes based upon relative riches, and metal finally did what corn, cattle or hides failed to do, that is provide a basis for long-distance barter and trade. Some experts go so far as to suggest that everything was about controlling quarries, ore deposits and metal supplies. They think that early agricultural societies first took to metals because they were colourful, often had a lustre or sheen, and were rare and novel. Then they discovered that they were ductile and could be worked into 'objects of distinction'.

I prefer the idea that metallurgy is the foundation of our early sciences, by encapsulating the way Man would experiment, discriminate between one or other result, and arrange results in lists. Other experts prefer to focus on the techniques and not the individual metals, e.g. hammering, tempering, cutting, grinding, reduction, melting, casting, creating alloys, etc. In their mind the techniques led to the discoveries of lead, silver, …, and all the new metals and alloys discovered through history. This last point of view has much merit in that the so-called Copper, Bronze and Iron Ages were not about a specific type of metal but about a specific group of metallurgical techniques or processes. So the Copper Age is not about finding copper here and there but is the period in which tools and weapons were generally made of copper. That means that the Copper Age is about a period when Man knew how to reduce ores, to remelt, cast, and hammer the ingots, making something that was equal to and even occasionally better than stone. And so the 'Copper Age' was also about a period when not only the rich and the craftsman possessed metal tools and weapons. The reality is that a 'Copper Age', and even a 'Bronze Age', might never have really existed. It is not clear that copper did not remain, like gold or silver, an object of trade in luxuries. Some experts have argued that its rarity actually helped its rapid diffusion. Others have noted that copper artefacts were sufficiently valuable that they were repaired or recast and reworked. Hoards of broken pieces are found here and there, waiting to be recycled. It is perfectly possible that some Chalcolithic villages were classified as Neolithic simply because archaeologists did not happen to find any rare copper artefacts. No one apparently argues that a particular village is Chalcolithic because of its pottery or its architecture, i.e. not finding rare copper artefacts means it's not a 'Copper Age' village. And to complete the argument, in all 'Copper Age' villages you find copper, how extraordinary!
In addition there was originally too strong a focus on the metal and trading networks. More recently archaeologists have looked to the diffusion of metallurgy through the spread/migration of the craftsmen. This is at the crux of the question about the
birthplace of metallurgy. Through the 20th C suggestions included the Sumerians, the Hamites, the Armenoids from Mount Ararat, or Asians, or Egyptians, or from Cyprus, or the Syrian coast, or from Cappadocia, or Caucasia, or Turkestan, or over the Russian steppes, or even from China.
So the chronology and chorology (geographic distribution) of early copper and bronze metallurgies have been an open problem for some considerable time. The quest for the 'when', 'where' and 'why' of the world's earliest metallurgy remains an important scholarly topic. Copper-based artefacts have been found at many different sites around the world, but the problem has always been to understand if knowledge diffused from one place to another or if the techniques evolved independently in several different places. The gap between the earliest use of copper and that of bronze is widely different for different regions, yet each culture or peoples went through the same sequence of technological steps.
Did different peoples in different places independently find ways to cold shape
native copper, to extract copper from the ore by smelting, and to alloy it with tin?
Some experts are 'diffusionists', others uphold the idea of sporadic local invention. Others accept that there was one site, possibly in the Balkans or maybe on the Black Sea coast, which was the first to develop the techniques, but they do not accept that the knowledge was diffused through large-scale migration.

Metallurgical Diffusion

The above map is taken from a Wikipedia article on the Bronze Age and clearly suggests that the evidence today points to the Vinča culture as one of the most important centres for early metallurgy. Other maps highlight the deposits of lead, copper and iron in the Zagros Mountains and the total absence of metal deposits in both North and South Mesopotamia.

Hidden behind the simple idea of finding a better alternative to stone there must have been an accumulation of knowledge, built through trial and error experiments made over centuries. In archaeology invention is almost always invisible; what archaeologists see is usually a mature form of technological innovation, something already replicated within a population. Later on this webpage we will look at the excavations in Belovode, but it's worth knowing that experts have estimated that in the Balkans more or less
4.7 tonnes of copper tools and weapons circulated during the 5th millennium BC. Far more than in any other area making copper tools and weapons during that period.

Before we move on, let's throw in a bit of chemistry. If we look at the relative abundances of metals, we have at the top of the list aluminium (discovered late 19th C), iron, then titanium, manganese, vanadium, chromium (all late 19th C), zinc, then copper, lead, tin, arsenic, antimony, silver and finally gold. Why were some of the rarest metals used first? And the most abundant, excluding iron, not used until the 19th C? Oddly, this list correlates with the
free energy of formation of oxides. Gold and platinum don't form stable oxides. Silver, lead and copper bind relatively weakly to oxygen, and can sometimes be found in native form. Iron binds strongly to oxygen, titanium and aluminium even more so. The key was the development of smelting, a way to combine high temperature and a controlled partial pressure of oxygen inside a crucible or furnace. Iron smelts at a little above copper, but needs a low partial pressure of oxygen (a reducing atmosphere). That is why it took 3,000 years to go from melting copper to smelting copper, and another 3,000 years to go from smelting copper to smelting iron. Nothing, compared to the 300,000 years it took to go from iron mineral pigments to iron smelting.
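A rough way to visualise that correlation is to rank a few metals by the approximate free energy released when they bind to oxygen; the figures in the sketch below are very rough room-temperature values which I have added purely to show the ordering, and should not be read as precise data.

```python
# Very rough standard free energies of oxide formation, in kJ per mole of O2
# consumed (approximate room-temperature values, included only to show the
# ordering; an Ellingham diagram gives the proper temperature dependence).
oxide_stability_kj_per_mol_o2 = {
    "gold":       +80,   # positive: the oxide is unstable, so gold occurs native
    "silver":     -20,   # weakly bound, sometimes native, easily reduced
    "copper":    -290,
    "lead":      -380,
    "iron":      -490,   # needs a hot, strongly reducing furnace
    "titanium":  -890,
    "aluminium": -1050,  # only practical with electrolysis, hence the 19th C
}

# Printed from the weakest to the strongest grip on oxygen, which broadly
# matches the historical order in which the metals were won from their ores.
for metal, dg in sorted(oxide_stability_kj_per_mol_o2.items(),
                        key=lambda kv: kv[1], reverse=True):
    print(f"{metal:10s} {dg:6d} kJ per mol O2")
```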
Carbon-14 dating now shows us that copper metallurgy might have been invented or discovered in at least three different places, i.e. the Near East, the Balkans, and Iberia (also maybe China). Depending upon the analysis, the way metallurgy emerged in multiple places between Iran and the Balkans might suggest that it all started somewhere in Anatolia, and diffused in both directions. As more and more samples are analysed in more and more different ways, we find more and more questions. Why did the Hittites and the Chinese develop completely different routes to obtaining iron?

The rest of this webpage does not answer these questions, but it highlights the uncertainty still surrounding the topic of early metallurgy. Here and there some results are provided by archaeometallurgy, which for my part is one of the major innovations in archaeology in the last 50 years.

Archaeometallurgy


Wikipedia defines archaeometallurgy as the study of the history and pre-historic use and production of metals by humans. Others might define it as the study of the production, use, and consumption of metals from ca. 8000 BC to the present. Yet others might prefer a nice and short "study of ancient metals". It is a domain that has emerged in the last 50-60 years, in part I suspect because of the number of analytical technologies that have become available:-
Radiography - for microstructures and insights into metal fabrication techniques. This 'classical' technology can be used for investigating ceramic crucibles incorporating metallic components, failed castings, scrap from wrought working, metal spills from casting, etc., evidence of casting in finished artefacts, ferrous blooms and billets, ferrous slag and metal in solid samples, forging with hammer marks, chisel cuts, and weld lines, and artefacts with complex internal structures. (Example publication)
Optical Microscopy - for microstructures and insight into compositions and heat treatment. The microstructure can provide information on ancient manufacturing techniques as well as corrosion and degradation processes. An interesting area is slag microscopy, which looks at the many phases created by heating and cooling which are not studied in mineralogy textbooks. Metallography is about using a light microscope to look at the physical structure and components of metals (equally valuable for ceramics and polymers). Electron microscopes can provide higher magnification. (Example publication)
X-ray Fluorescence (XRF) - with Energy Dispersive-XRF (ED-XRF) for elemental analysis and chemical characterisation of both major and minor elements, or Wavelength Dispersive-XRF (WD-XRF) for more detailed and targeted chemical analysis of trace elements. XRF is a not-uncommon technique, but there are also other X-ray fluorescence tools such as Polarising Energy Dispersion, which tries to bring ED-XRF closer to the detection limits of WD-XRF (a relatively rare and thus costly technique). More and more frequently techniques such as XRF are packaged as handheld devices which can be taken into the field. When linked with chronology data from excavations this technique can determine what practices were in use and when. (Example publication)
Atomic Absorption Spectroscopy (AAS) - quantitative determination of chemical elements. This is a 'classical' technique that can determine the elemental composition of metals, pottery, and glass, with a precision of parts per million. (Example publication)
Inductively Coupled Plasma - Atomic Emission Spectroscopy (ICP-AES) - analytical technique for detecting chemical elements. Particularly useful for detecting a broad spectrum of elements in a sample, thus creating a 'fingerprint'. (Example publication)
X-Ray Crystallography (XRC, or XRD for X-Ray Diffraction) - determining the atomic and molecular structure of crystals (crystalline solids). This technique can determine the percentages of various phases present in a sample. XRD can be linked to EDS or WDS for chemical composition, or to Transmission Electron Microscopy (TEM) for very small particle sizes. (Example publication)
Scanning Electron Microscopy with Energy Dispersive Spectroscopy (SEM-EDS) - examination of micro-scale and nano-scale surface features (microstructures) and detection of chemical elemental composition of the surface. A 'classical' tool for analysing surface features, and in particular coatings, etc. (Example publication)
Electron Probe Microanalysis (EPMA) - measures chemical composition of small volumes of solid materials. (Example publication)
Secondary Ion Mass Spectrometry (SIMS) - analyses composition of solid surfaces and thin films, including depth profile, etc. (Example publication)
Particle Induced X-Ray Emission (PIXE) - determines the elemental makeup of a sample, including the impact of environmental exposure, e.g. ceramics. (Example publication)
Inductively Coupled Plasma Mass Spectrometry (ICP-MS) - detects metals and some non-metals to one part in 10¹⁵ (ultra-trace elements). (Example publication)
Thermal Ionisation Mass Spectrometry (TIMS) - determines isotope abundances. (Example publication)

Archaeometallurgy is not an end in itself, but must be meshed together with fieldwork and excavations, geophysics and archaeomagnetic dating, environmental and geochemical surveying, sampling, and collection and analysis of artefacts. Have a look at "
Methods in Historical Metallurgy" to understand more.

Copper homeostasis in bacteria


As you might have guessed there is much debate about which metal was discovered first, when, and where. And '
copper homeostasis in bacteria' might have the answer, or it might just muddy the water even more. Firstly, bacteria are single-celled organisms that were among the first life forms to appear on Earth, and they can be found everywhere, in soil, in water, and in the gut flora of everyone. We hear a lot about bacteria that acquire, through mutation or genetic exchange, resistance to antibiotics. I came across this article which, I must admit, is full of technical expressions and complex jargon. But as far as I can tell the essence of the research is that bacteria have been able to adapt and evolve their internal conditions to resist not only antibiotics but also metal poisoning. And they have done so when copper levels in the environment were high. They found a way (or several ways) to efflux (reject) copper from their cells, and the genes can be found in Enterobacteriaceae (which includes a lot of pathogens such as Salmonella). The article claims that this kind of novel copper resistance is related to the increased use of copper in agricultural and industrial applications. It claims that the so-called 'molecular clock' in the evolution of these copper efflux pathways could correspond to peaks in human copper production. The bacteria responded to this 'copper stress' by compartmentalising the potentially toxic metal. The authors go on to suggest that two copper resistance mechanisms (one also resistant to silver) came together to create a compound resistance mechanism to counteract this 'copper stress'.
As you can imagine the article describes in detail what is a complex and specialist process, but the overall message is that the molecular clock has three 'markers' that could correspond to three major peaks of copper production, i.e. the Roman Empire, the Chinese Sung Dynasty, and the post-Industrial Revolution. This would indicate that copper started to be used ca. 7750 BC, that the Bronze Age started ca. 4000 BC, that fairly predictably the Roman Empire's copper production peaked at ca. 1 AD, the Chinese production peaked at ca. 900 AD, and the post-industrial Revolution is peaking now. The suggested early contamination route is dairy products fermented in copper alloy vessels.

A complex topic, but one worth following up on!

Mesopotamian Gold


Native Gold

The question has always been asked, was gold the first metal discovered by Man? This has no clear answer. As native metals, the first discovered was either gold or copper, or just possibly the rarer silver, or even meteoric iron. Many experts said gold, but in Africa copper may have come first, and some tribes called gold 'yellow copper', others even 'yellow iron'. In Iran, Mesopotamia, and the Armenian Mountains native gold occurred fairly frequently, so there is a good chance it was discovered first. But we know that in Egypt copper was known before gold. So until recently most archaeologists guessed that gold was probably discovered before copper, but today the consensus is that the earliest exploitation and working of gold occurred in the Balkans ca. 5500 BC, and this now looks to be probably later than when copper was first smelted.

Initially gold would simply have been seen as a tough material, but
not competitive with wood or stone for toolmaking. The malleability of gold would be easy to discover: by hammering out gold nuggets they could produce small sheets or wires that could be cut and bent. It's important to look at gold and copper together. With copper Man took the native copper and developed ways to transform the ore by means of heat, air and charcoal, producing a fairly pure metal. This was smelting (extractive metallurgy) and not just liquefaction of the metal by melting it. What we see in this period is that all the gold is native (see the above photograph), and it did not go through a smelting process. As such it did not contribute to the development of metallurgy. Of course you could bend, beat, emboss, bore, weld, truncate, raise, etc. gold, but it was copper that was at the heart of early metallurgy.

Just as an additional point, charcoal was known to the Mesopotamians in prehistoric times. It was used as a pigment for pottery, however in a treeless country charcoal was expensive. It was made from something like a styrax wood. It was cut into logs, bound in bundles and stored under hide until needed for charcoal making. There are a few Babylonian tablets about the purchase of wood for metalworking. The wood would be cut in lengths of 1 m, 1.5 m, and 2 m, and was to be cut green and not to be wood that had 'died in the forest'. One order was for 7200 logs, in batches of 300. An alternative to charcoal was date kernels. Picking the right charcoal was also an art. They did not want charcoal that would produce too much ash, and generally they preferred charcoal from young trees. Old trees are too dry, and produce a charcoal that sputters as it burns. And different charcoals were preferred for copper, iron, etc. In iron smelting they preferred sweet chestnut, and in the silver mines pinewood. Pine was often preferred because it smoulders less and air blown into it creates a fiercer flame.

With both wood and stone, Man hacked away bits until finally obtaining the desired form, i.e. the final artefact was smaller than the original raw material. There is a suggestion that Man did the same with gold until he discovered that it was malleable and could be fashioned by hammering (melting gold was still in the future).

Metallic silver occurs rarely in nature, so it is usually obtained from lead-based ores. As such silver would have to wait until smelting could be used to refine lead ore. A problem gold did not have. Gold almost always occurs in its metallic state. It's usually not pure, but is alloyed with traces of silver, copper or lead, but it's 'ready to go'. There are a few naturally occurring gold compounds but even today they play no role in gold production.

Gold was probably initially found using what has been called '
placer mining', which is nothing more than mining stream bed (alluvial) deposits for minerals. Gold, gem stones, and even copper would over time be eroded (washed) out of a vein and accumulate at the base of placer deposits. Gold would appear as flecks and nuggets, and be deposited where the flow was slowest. As the river itself migrated with time, the concentrated gold deposits migrated as well, creating what is called a 'pay shoot'. This is certainly where early Man found his first gold nugget.

Gold from the Varna Necropolis

What we have above is gold from one of the graves in the Varna Necropolis in modern-day Bulgaria. Wikipedia tells us that the burial site dates from ca. 4569-4340 BC and is home to the oldest gold treasure in the world. This particular grave contained about 3 kg of gold.

One interesting perspective is that both native gold and native copper were hammered into beads well before Man developed smelting (possibly ca. 8000 BC). Both metals were relatively soft and neither was seen as a replacement for stone tools and weapons. Gold was rare but was the only metal that did not corrode and change colour. There is a suggestion that the search for alloys of copper was driven by the interest to
develop a 'fake' gold, and not the desire for a harder metal to replace stone. In fact, it looks as if many of the earliest naturally occurring alloys of copper-tin-arsenic (ca. 4500 BC) would have had a 'fake' gold colour.

Gold in Quartz Vein

There are no detailed descriptions of early Mesopotamian mining techniques, but the Egyptians do describe how they mined gold. They looked for 'dark rock' with veins of white quartz containing native gold. They would use fire to open up the vein and then break the rocks into smaller pieces using hammers. They would then reduce the pieces using a mortar, and then crush the pellets to a fine powder. This they would spread over a sloping board and run water over it to separate the gold from the 'gangue' (the worthless waste mineral that is closely mixed with the target mineral). Clotted lumps would be rubbed by hand. The gold would then be collected and weighed. It is said that the Egyptians would follow 'reefs' (veins) down to 100 metres and could create 'drifts' (horizontal shafts) of up to 500 metres following a vein. In a survey of antique gold deposits Egypt comes out top with its mines along the Red Sea coast. And for example it has been estimated that Nubia in the Upper Nile region produced as much as 250-350 kg of gold annually.

As far as I can see Mesopotamia had no important gold mines, so for example
Assyrian gold (from ca. 2500 BC) and gold found in the Early Dynastic Period tombs of Ur (ca. 2900-2350 BC) was certainly imported. Given the craftsmanship of the work the suggestion is that gold had been imported for some considerable time.

Golden Lyre from Ur

This is the 4,700 year old Golden Lyre of Ur, a highly symbolic artefact in Iraq. During the war the Baghdad Museum was looted and this instrument was found smashed and stripped of its gold in the museum car park. The instrument has been reconstructed and restored.

Below we have a pair of earrings made with 48 grams of pure gold, and dating from ca. 2093-2046 BC. They each consist of nine convex-shaped segments forming a flattened half-ball. There is a cuneiform inscription indicating that they were given by a 'shugi' (which I think might just mean 'old person') to a chief of servants of a temple. The goddess Mammetum is also mentioned; this was the Akkadian goddess of fate and destiny.

Earrings

Nearly all natural gold from placers or mines is not pure, and sometimes contains considerable quantities of silver, some copper and traces of iron (and antimony, mercury, etc.). But unfortunately these impurities don't appear to be characteristic of specific regions. Reports are that nuggets from the Caucasus region would contain 55-70% gold, from Anatolia 75-90% gold, and from Egyptian mines 70-99% gold. Tests of gold found in Mesopotamia vary from 30-48% gold, 8-60% silver, and 0-10% copper. A good point here is to note that most of the gold was a native alloy, and no refining was done until quite late in Mesopotamian history. The different alloys of gold have different colours; for example gold with more silver goes from reddish yellow through to pale yellow or white, whereas a gold-silver-copper alloy will have a more bronze colour. It is evident in both Mesopotamia and Egypt that these variations in colour were appreciated and exploited in jewellery. The Egyptians would even dip gold in an iron salt and heat it to make a purple or red gold. There are Egyptian papyri explaining how to create metal surfaces with the appearance of precious metals.

Ag-Au-Cu Colours

It is not clear when gold started to be refined (and separated from silver), but most experts place it ca. 1000-550 BC. During the Egyptian XVIII Dynasty (ca. 1549-1292 BC) gold started to be debased with up to 75% copper. From the Egyptian XX Dynasty (ca. 1189-1077 BC) they started to talk about different quality levels of gold, e.g. refined twice, or three times, etc., and in the Egyptian XXVI Dynasty (ca. 664-525 BC) the words 'pure gold' appeared for the first time.

It is certain that by ca. 600 BC the techniques for refining gold were well understood. However it is not at all certain that pure gold was preferred over native gold, since due to the impurities native gold is harder and more resistant.

Mesopotamian Lead and Silver


Silver Bead from the Death Pit

Among the earliest known examples of worked native silver are two tube beads found at Domuztepe (a late Neolithic settlement in Turkey) and dated to ca. 5500 BC (although some reports incorrectly mention ca. 6500 BC). At least one of these beads (on the right in the photograph) was found at the edge of the so-called 'Death Pit', a burial pit containing fragmented human bones that might have been related to cannibalism. These two beads were found amongst nearly 1,000 stone beads, pendants and seals.

Lead Artefact

Above we have a large lead bead found in Ashalim Cave (in the Negev), dating from ca. 4300-4000 BC. The ore came from the Taurus Mountains in Anatolia. As we see above, the bead is about 3.5 cm long and has a 22 cm long stick through the central hole. It is probably the oldest smelted lead object found so far. The bi-conical bead is almost pure lead, and because it was placed in a burial chamber it may have had some symbolic significance. Naturally some reports in the popular press called this an ancient 'wand'. A number of articles mention a lead bracelet dated to ca. 5700 BC, but I can't find any source data on this.

Native silver occurs very infrequently in workable deposits. So silver was probably only discovered when Man could refine lead ore. Native silver was collected, but its impact was insignificant. Extracting silver from
lead-ores is not an easy process, and would have required much trial and error. There is a mention of silver artefacts existing in Egypt ca. 3100 BC, but they certainly did not come from Egypt because both lead and silver only became common much later. Some experts point to tribes in Pontus on the Black Sea as being able to refine lead and silver by ca. 3000 BC.

The idea is that
metallurgy developed from the smelting of copper ores. Gold, bronze and iron completed its evolution. These metals had, and still have, a role to play in daily life. It was not clear that lead or silver had a purpose, at least from the technical viewpoint. Useful quantities of silver can only be obtained from the smelting of lead ores. And silver was almost always found after copper, gold, and bronze were discovered. The extraction of silver is an intricate process, and thus one that occurred later in the development of metallurgy. Nor would the smelting of lead ore have happened by chance. Lead itself did not play much of a role until Roman times, but the Egyptians did use a form of lead bronze for ornamental and devotional artefacts.

Galena

This is not to say that silver was unknown. Galena has a brilliant metallic appearance, a high specific gravity, and occurs frequently, so it would have been attractive to early Man (initially it was used as an eye-paint). If you put galena (the natural mineral form of lead sulphide) on a fire, it will reduce to lead. If you leave it on the fire you will finally get a small drop of silver. Gold ores almost always have silver in them, and sulphur compounds of lead, copper and zinc usually contain silver as well. But most of the world's silver comes from lead and copper ores. These ores are not rare, but the concentration of silver is minute (around 1.4 kg of silver per ton of lead). The only way to get galena is to find a vein and mine it. In Laurion in Greece there were more than 2,000 shafts, each some 70 metres deep (they were worked out and abandoned ca. 100 BC).

It was not easy to know where your silver mine should be, and where to dig the shaft. There was no glittering spark as with gold. The earth might be red in places, and ash-coloured in others. The galena would have been extracted by hand along with waste rock. On the surface it would be crushed, washed in a stream, crushed again and sieved to concentrate the ore (so-called 'jigging'). The heavier galena would go through the sieve and the light gangue would stay on the top and be thrown away. This cycle could be performed up to five times, then the remaining concentrate would be smelted and the lead poured off (by this time the lead concentration would be between 15-30%). The silver would then be recovered from this lead (by cupellation, described further below).
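
Just to give a feel for the quantities involved, here is a minimal back-of-the-envelope sketch (in Python) using only the figures quoted above: roughly 1.4 kg of silver per tonne of lead, and a jigged concentrate containing 15-30% lead. The 500 kg batch size is an arbitrary example, not a historical figure.

```
# A rough back-of-the-envelope sketch (not a historical reconstruction) using the
# figures quoted above: ca. 1.4 kg of silver per tonne of lead, and a jigged
# concentrate containing 15-30% lead. The 500 kg batch size is an arbitrary example.

AG_PER_TONNE_OF_LEAD_KG = 1.4   # figure quoted above

def silver_yield(concentrate_kg, lead_fraction):
    """Estimate the silver recoverable from a batch of jigged galena concentrate."""
    lead_kg = concentrate_kg * lead_fraction
    silver_kg = (lead_kg / 1000.0) * AG_PER_TONNE_OF_LEAD_KG
    return lead_kg, silver_kg

for fraction in (0.15, 0.30):                         # the 15-30% range mentioned above
    lead_kg, silver_kg = silver_yield(500, fraction)  # hypothetical 500 kg batch
    print(f"lead fraction {fraction:.0%}: {lead_kg:.0f} kg lead, "
          f"{silver_kg*1000:.0f} g silver at best")
```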

We should not underestimate the complexity of smelting an ore like galena to obtain silver. Man would have had to obtain lead, and then look for and find the silver. And who wanted lead? But as some experts have pointed out, the finding of silver 'hidden' in lead could be at the origin of the hunt for the power to transmute a base material into a precious metal such as gold.

Mesopotamia did not contain any galena deposits, but rich mines existed in the mountains of
Armenia and Kurdistan, and tablets suggest that Cappadocia had developed lead and silver mining ca. 3500 BC. The upper course of the Diyala River yielded lead and a good quality ore was found in the sources of the Great Zab and Khabur Rivers. On the western frontier of Assyria there were two very important mines (near Diyarbakir and Keban). In modern-day Turkey there were many important deposits, particularly near Pontus (some call it the birthplace of silver). This abundance of silver-bearing rich galena deposits in the surrounding mountains led to an early use of lead and silver in Mesopotamia.

Some silver was found in
Ubaid (ca. 6500-3800 BC) graves and although rare it was also used as a medium of exchange. Silver was used in the Royal Tombs of Ur (ca. 2300 BC), but whilst the workmanship was very good the silver was usually heavily alloyed with gold and small amounts of copper. Lead sheets in the grave at Ur were made of smelted ore, but not de-silvered.

Silver Lion from Ur

This is a silver lion's head from the tombs at Ur, and dates from ca. 2650-2550 BC. It consists of silver, lapis lazuli and shell, and is about 11 cm by 12 cm.

One intriguing view is the role of language. The
Hittites may have called the mountains that gave access to the country the 'silver mountains' and some have suggested that even the name suggests a link between the country and silver production. In Egypt silver was called both 'silver' and 'white'. Other languages' words for silver meant 'pure metal', or 'white, brilliant', or 'white, bright shining', or even 'white and shining metal'. Lead was often confused with tin, and some even called it 'a worthless thing' or 'crude lead', others called tin 'white lead' and lead itself 'black lead'.
Another aspect was that silver was the metal of the Mesopotamian moon god
Sin, and the god Nergal was associated with the underworld and fire and could purify metal. The Egyptian goddess Hathor was also the goddess of silver, and the bones of the gods were fashioned in silver. The Egyptians considered lead a 'very cold' metal, and it was often associated with magic. Different texts associate lead with a variety of gods, e.g. the Mesopotamian god Ninurta was a warrior deity, and the Egyptian Osiris was god of the underworld. Tablets of lead were used for curses and for prayers for the sick. And some priests were able to foresee the future by pouring molten lead into water and looking at the 'frozen' structure that emerged.
Gold was almost always associated with the sun god, e.g. the Mesopotamian god
Utu and the Egyptian god Ra.

Vase of Entemena

One of the most impressive examples of Mesopotamian silver work is the Vase of Entemena which is dated ca. 2400 BC, and is said to be one of the oldest surviving examples of engraving on metal. I personally like this webpage with its description and photographs. Below we have an engraving of a lion-headed eagle grasping two lions.

Lion-Headed Eagle Grasping Two Lions

This 35 cm tall vase was fashioned from one single sheet of silver, and there is an inscription on the neck which indicates that the vase was for the oil offering to Ningirsu, the god of war.

Silver as 'money'


So far we have more or less condemned silver as a decorative metal (inferior to gold) and of little importance compared to copper, bronze and iron. However we tend to forget that Mesopotamia (and Egypt) became large and complex states in the late 4th millennium BC. Each created a writing system, and each looked to record (account) for people who weren't present when goods were transferred or services rendered. The available record today is very patchy in both place and time. Silver was not native to Mesopotamia, but it is evident that
the political elite dominated the use of gold and silver. The Royal Tombs of Ur clearly show that gold and silver were used for totally non-productive purposes. Despite the elite stockpiling silver, it was also used to facilitate financial transactions. In fact from the mid-3rd millennium BC through to ca. 1600 BC value was expressed as an amount in silver (it was later replaced by a gold standard). Even gold-rich Egypt measured value in silver. There are texts from ca. 3200 BC showing that there was an ideal silver equivalent for many other commodities. The Laws of Eshnunna (ca. 1770 BC) clearly list the equivalents for 1 shekel of silver (equal to 1 gur of barley, 3 sila of fine oil, …), where 1 shekel was 8.333 grams. The Babylonians actually adjusted their volume measures so that the silver equivalence kept its value: the larger unit (the gur, counted in sila) went from about 300 sila initially to about 180 sila by the 1st millennium BC, and the equivalent volume in oil would fluctuate according to the seasons. These laws were not royal proclamations but more like statements of ideals, so there is no evidence of centralised price control. But by ca. 2100 BC there were lists of prices for foods, craft materials, metals (gold and copper) and livestock. Merchants provided these commodities to the institutions in exchange for items such as wool, metals including silver, and exchangeable staples such as cereals and fruit. Later the equivalent in silver was documented for the sale of land, wool, cattle, oil, barley, slaves, etc., even if most transactions would have been barters.

So money was used in Mesopotamia, but initially it was just a weight of gold, silver, copper, or barley. By the Akkadian Period (ca. 2334-2154 BC) weights and measures were standardised. The oldest unit appears to be the 'talent' of about 30 kg, the weight that could be carried by a worker. This was divided into 60 'minas', each of which was in turn divided into 60 'shekels' of about 8.3 grams. Below this there were 'little shekels' (1/60 shekel) and 'grains' or 'barleycorns' (1/180 shekel).
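
For readers who like to see the arithmetic, here is a minimal sketch of the weight hierarchy just described (talent, mina, shekel, little shekel, grain). It simply encodes the ratios quoted above; the unit names and gram values come from the text, everything else is illustrative.

```
# A minimal sketch of the Akkadian-period weight hierarchy described above:
# 1 talent (~30 kg) = 60 minas, 1 mina = 60 shekels (~8.33 g each),
# with the 'little shekel' at 1/60 and the 'grain' at 1/180 of a shekel.

GRAMS_PER = {
    "talent": 30_000.0,
    "mina": 30_000.0 / 60,           # 500 g
    "shekel": 30_000.0 / 60 / 60,    # ~8.33 g
    "little shekel": 30_000.0 / 60 / 60 / 60,
    "grain": 30_000.0 / 60 / 60 / 180,
}

def convert(amount, from_unit, to_unit):
    """Convert between the weight units above, via grams."""
    return amount * GRAMS_PER[from_unit] / GRAMS_PER[to_unit]

print(convert(1, "mina", "shekel"))    # 60 shekels
print(round(GRAMS_PER["shekel"], 3))   # ~8.333 g, as quoted above
print(convert(5, "shekel", "grain"))   # 900 grains
```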

What is important, and perhaps unique, for that time was that sales could include
credit sales, where manufactured goods were handed over up front and the outstanding amount (the loan) was defined in silver. There are thousands of these credit agreements running through the entire history of Mesopotamia. For example, silver would be loaned but the repayment would be in barley. In the early 2nd millennium BC the Babylonian State wanted taxes and rents paid in silver (the State was the largest landowner). The State would provide credit to merchants allowing them to delay the collection of resources until a future date, and the merchant could extend credit to the producers of resources such as barley or oil. By the mid-3rd millennium BC interest was introduced (possibly invented for the first time). Ideally it was 20% for silver and 33.3% for barley, but it was negotiable. Silver was an integral part of the Mesopotamian numerical system: one mina of silver was worth 60 shekels, so 20% yearly interest on a mina represented 1 shekel per month. For barley it represented ⅓ of the yield. Actual interest could be higher because the interest was calculated on a yearly basis, and you still had to pay the full interest even if you reimbursed the debt early. Debt was a recurring problem in Babylonia, and occasionally the King would abolish consumptive loans. One particular example showed Babylonia exporting textiles in exchange for precious metals, and selling their textiles for triple their cost in silver.
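
The 'one shekel per month' figure follows directly from the ratios above, as this small sketch shows; it assumes simple yearly interest, a loan of one mina, and the 20% (silver) and one-third (barley) rates quoted in the text. The 300-litre barley loan is a hypothetical example.

```
# A small illustration of the interest figures quoted above, assuming simple yearly
# interest as described: 20% on silver and one third (33.3%) on barley.

SHEKELS_PER_MINA = 60

def yearly_interest_silver(principal_minas, rate=0.20):
    """Yearly interest on a silver loan, returned in shekels."""
    return principal_minas * SHEKELS_PER_MINA * rate

interest_shekels = yearly_interest_silver(1)   # a loan of 1 mina
print(interest_shekels)            # 12 shekels per year
print(interest_shekels / 12)       # i.e. 1 shekel per month, as noted above

def yearly_interest_barley(principal_litres, rate=1/3):
    """Yearly interest on a barley loan, in the same unit as the principal."""
    return principal_litres * rate

print(yearly_interest_barley(300))  # a hypothetical 300-litre loan owes 100 litres
```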

Texts exist listing
payment in silver to boatmen, builders, and even physicians. Fines in silver covered physical injuries, including manslaughter. An 'eye-for-eye' was between equals, but if one of higher rank injured someone of lower rank they paid a fine in silver (although the payment might have been made in another agreed commodity). One example of a fine was 27 kg of gold, another was for 1.8 kg of gold, but gold and silver could also be paid as a ransom. At one point four Babylonian towns were captured and ransomed for about 3,600 kg of gold, and 14,400 kg of silver. I could not find out what finally happened, but it was written that the sums were 'monstrous'. However one of the towns was Sippar, where the conqueror had removed a stele bearing Hammurabi's famous Code. We know about the Code because that very same stele survived through to today thanks to our conqueror, even if he had seven of the fifty-one legal columns chiselled away.

Silver was physically kept in rings and coils, and from the mid-3rd millennium BC there are records of small pieces being snipped off for payment. Some experts call this silver 'money', others reject this definition. Silver was a measure of value, even if most people in society would never see the silver. Lower values were often expressed in copper, and higher values in gold. It was only in the 1st millennium BC that silver objects appeared bearing a mark from a temple or palace.

It must be said that fines also existed in lead, and it was not unusual to have quantities of 500 kg of lead mentioned in texts.

If we step back from the circulation of silver then we are left with the question of ancient economies. What of markets, money and the exchange of goods and services? Just how important were economic considerations? How was wealth distributed and re-distributed? Were there markets and market-places, and what role did they have? Did private economic activities exist? Were there enterprises, trade, etc.? Were silver, markets, and economic activities reserved for an elite? Was silver really just a ceremonial artefact? Was silver a type of money that operated in a market economy?
I don't think I can answer these questions, nor even attempt to include all those subjects on this webpage dedicated to science and technology. But from the 3rd millennium BC silver was not an abstract concept in Mesopotamia or more generally in the Near East. As we have already suggested silver was a standard for equivalence for and between specific materials and products. It was closely related to the standardisation of measure and weights. From ca. 2600 BC silver in Mesopotamia was just silver, with no specification of quality. There is a suggestion that it might have been defined by the process 'washed' or 'burnt' (probably meaning native or smelted), but not by its quality (purity). And again from ca. 2600 BC silver, copper and barley were used to buy and sell land. But it does not look like silver was an actual thing that a buyer would hand over to a seller. It was more like an index of value or a kind of standard of equivalence. Later it does appear that silver circulated in the form of standardised objects (bowls, bracelets, dagger, rings) and the weight of silver was noted when it was presented as a ritual or political gift.
So by the mid-3rd millennium BC silver had a specific value and had its equivalent in barley and wool (in Syria). Of course silver could be in the form of an object, or an ingot, or just scrap. And the role of a silver artefact might also depend on where it was found. For example archaeologists have found jars sealed with bitumen and containing silver ingots, coils, rings, beads, sheets, and scrap. These jars were found under floorboards and in funerary assemblages (usually of craftsmen). It looks as if domestic hoards could easily exceed 2 kilograms of silver (and gold). One hoard contained balance weights made in hematite and agate, a seal, semi-precious stone beads, gold earrings, silver fragments, and an anvil. Some hoards contained standardised metal objects (usually rings) manufactured according to standardised weights. Most of what has been found suggests that silver was stored in the form of ingots, not designed for circulation but for weighing. So on the one hand the ingots were clearly things that might be exchanged and were valuable (so a kind of money), but did they circulate and were the ingots used for payments within a market economy? By the 2nd millennium BC some people were clearly hoarding silver (wealth accumulation), but this does not mean silver circulated freely. Silver was also used to secure trade ventures. Most experts think that some type of market economy existed in parallel with 'redistribution' by public organisations and the creation and 'circulation' of ceremonial and gift goods. The guess is that a market trader would also be a public servant involved in public trade, remembering that the control of land and labour were the traditional ways to manage economic power. The conclusion of this type of analysis is that silver was a kind of money in that it was used to both make and receive payments, and as an exchange measure for value. And of course it was also a way to accumulate wealth. But silver was not something that would circulate like our coins.
Interestingly the wealthy clearly felt that taking some silver with them would facilitate a comfortable afterlife, so perhaps they considered it as 'money' after all.

Modern day silver production


Let's start with the
extraction of lead from lead ores such as galena. There are three steps, firstly smelting, secondly purification of the crude lead, and thirdly de-silvering of the soft lead.
There are three ways to
smelt galena.
The first is
roasting (air reduction) by gentle heating in a blast of air. The sulphur-lead compound is decomposed, with most of the sulphur escaping as sulphur dioxide gas and some lead sulphate forming. Some of the galena remains intact, and most of the lead is now in the form of lead oxide (litharge). After this process of 'desulphurisation' the temperature is raised and the litharge, lead sulphate and galena interact to form lead, which collects at the bottom of the furnace (the remaining sulphur escapes as a gas). The basic chemistry of this route is sketched just after this list.
The second way is to roast until practically all the galena is transformed into litharge, which is then reduced to lead by means of carbonaceous matter (charcoal, coke or wood).
The third way is the so-called matte smelting process. The galena is heated with metallic iron. The lead from the galena forms a complex of iron and lead sulphides which pass into the slag. This third technique was not known in antiquity.
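
For the first (air reduction) route, the underlying chemistry is the classic roast-reaction process. The sketch below writes out the standard textbook reactions and computes the theoretical lead content of pure galena from atomic masses; it is a chemistry sketch only, and says nothing about the efficiency of any ancient furnace.

```
# The classic 'air reduction' (roast-reaction) chemistry behind the first route above,
# written out as standard textbook reactions, plus the theoretical lead content of
# pure galena (PbS) from atomic masses.
#
#   roasting:   2 PbS + 3 O2  -> 2 PbO + 2 SO2     (and some PbS + 2 O2 -> PbSO4)
#   reaction:     PbS + 2 PbO -> 3 Pb  +   SO2
#                 PbS + PbSO4 -> 2 Pb  + 2 SO2

ATOMIC_MASS = {"Pb": 207.2, "S": 32.07}

def lead_fraction_of_galena():
    """Theoretical mass fraction of lead in pure PbS."""
    pbs = ATOMIC_MASS["Pb"] + ATOMIC_MASS["S"]
    return ATOMIC_MASS["Pb"] / pbs

print(f"pure galena is about {lead_fraction_of_galena():.1%} lead by weight")  # ~86.6%
```
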
The smelted galena is called 'crude lead' (or
base-bullion). Today this process is carried out in reverberatory furnaces or for low-lead content ores in a blast- or shaft-furnace. Roasting the ores before processing was used in Roman times, but not in Mesopotamia. The Romans found several ways to roast the ores. They would often put the ores and fuel in a hill-side trench and let the wind provide the blast. This type of wind-furnace has been found from Britain to Peru.

The second stage is to
purify the crude lead, and produce 'soft lead'. The crude lead is still contaminated with impurities such as tin, copper, etc., and is too hard for most applications. The impurities would also adversely affect the de-silvering. The crude lead is slowly melted and the molten lead flows away, leaving a 'dross', a mixture of copper, lead, antimony and arsenic. This exploits the difference in melting point between lead and copper, and as such the silver stays with the lead. The lead is then melted again in a reverberatory furnace, which has a shallow bed exposing the metal surface to a current of air. The impurities oxidise first and the dross can be skimmed off at regular intervals until samples of the lead test sufficiently pure. This technique dates back to antiquity and was also used in Japan.

The third stage is
de-silverisation, and again there are three ways. The lead can be refined electrolytically, with the precious metals collecting as an anode slime. The Parkes Process extracts the silver into a zinc-silver alloy and then distils off the zinc to leave the silver. The Pattison Process exploits a specific physical property of lead-silver alloys: if the melt is cooled, the first crystals to form are of almost pure lead, leaving a melt that is progressively richer in silver. This works up to a silver concentration of around 2.4%, at which point the molten metal sets in one go. Just before this happens the silver-rich melt can be poured off, and from there the old cupellation process can be used.
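
To make the Pattison idea concrete, here is a toy simulation of the enrichment step: each cycle removes crystals of (assumed perfectly pure) lead, so the silver concentrates in the remaining melt until the roughly 2.4% limit quoted above is reached. The starting silver content (0.5%) and the 20% of the melt removed per cycle are arbitrary assumptions, chosen only for illustration.

```
# A toy simulation of the enrichment idea behind the Pattison Process described above.
# Each cycle, crystals of (assumed perfectly pure) lead are skimmed off, so the silver
# concentrates in the remaining melt, up to the ~2.4% limit quoted above. The starting
# silver content (0.5%) and the 20% removed per cycle are arbitrary assumptions.

LIMIT = 0.024          # ~2.4% silver, the figure quoted above

melt_kg = 1000.0       # hypothetical starting charge of argentiferous lead
silver_kg = 0.005 * melt_kg
cycles = 0
while silver_kg / melt_kg < LIMIT:
    melt_kg -= 0.20 * melt_kg   # remove 20% of the melt as (pure) lead crystals
    cycles += 1

print(f"after {cycles} cycles: {melt_kg:.0f} kg of melt at "
      f"{silver_kg / melt_kg:.1%} silver, ready for cupellation")
```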

Cupellation was used in antiquity (from ca. 1500 BC), and the Romans completely de-silvered their lead (down even to 0.002%). Modern-day cupellation can achieve 0.0002% or better. So the Romans look to have been efficient; however, this is far from true. What we have is very well de-silvered lead, but a lot of their silver disappeared into the slag.

We have looked at obtaining pure or soft lead, where silver was seen as an impurity. But now let's look at
extracting silver from lead. We will not deviate from our 'three methods' model, and there are three ways to extract silver from ores.
The 'wet method' involves first roasting the ore and/or converting it into a chloride, and then leaching with a solvent such as
cyanide lye, brine, or a strong salt solution. The silver is precipitated. You can also dissolve the ore in sulphuric acid and convert the silver in solution into an insoluble form.
A different technique known to the Romans is to mix the finely powdered ore with salt, copper sulphate and mercury, and expose it to air in heaps which are constantly worked over. The silver-mercury compound is then distilled, and the mercury recovered, leaving silver as a residue. An alternative is to first roast the powdered ore with salt, mix it with metallic iron, and then use the process as described.
The easiest and oldest technique is to first smelt the ore to make crude lead, and use the Pattison Process as described above. The concentrated silver-lead alloy is cupelled. If the ore was lead-based then you will get your silver, but if the silver ore was copper-based then you need to alloy the crude copper with lead and liquate it. In antiquity they smelted the ores with lead and extracted the silver. With time they learned which ores to use so they could keep the silver in solution. It is probable that the concentrating of silver was not known in early times, so they would have relied on cupellation without concentration.

The reality was that in antiquity they were not able to separate the processes of roasting, smelting, oxidation, and liquation, so their processes were inefficient and they could not extract all the silver in the ores. Lead could be well de-silvered, but a large part (up to ⅓) of the silver remained in the slag (because a lot of lead also went to the slag).

Judging by the techniques applied it is clear that lead was the initial objective, and must therefore have been known before silver. Only later would lead be smelted for its silver content.

It's important to close with an idea of scale. A furnace in 'antique' times would have been up to about 30 cm in diameter, dug into an earth trench and filled with alternating layers of charcoal and ore; in some cases a chimney would be used to carry the sulphur gas into the air. By Roman times crucibles of about 3 m deep and 2.5 m across were used. These were sunk into the earth and had thick walls of brickdust and
clay. In both cases air would need to be blasted into the furnace.

Mesopotamian Furnaces


We have mentioned the furnace several times, but it is perhaps useful to stop here and look at what a furnace really was to a Mesopotamian. We have mentioned charcoal as an essential ingredient for a furnace, however naphtha from coal tar and varieties of coal have also been used (particularly in Roman times). Peat is often mentioned and is used in modern-day trials, but there is no evidence that it was actually used in Mesopotamia. Kindling was also needed and the preferred type was fig and olive wood, fig because it is tough and its open texture catches fire quickly, and olive because it has a close texture and is oily and will sustain the fire once started.
Experts have consistently noted that archaeologists of the past often ignored the furnace and its 'waste' contents and looked only for finished products (pottery, axes, ingots), and more often than not when they noted the existence of a furnace they did not note the details or take close-up photographs.
A 'modern' furnace consists of a separate fire-box and a hearth, but primitive furnaces were just one unit. Two additional features included a chimney and a way to introduce 'blast air'. You can find furnaces in which the ore and fuel are mixed, and those where the ore and fuel don't come into contact. When the ore/metal comes into contact with the fuel it is a shaft-furnace or hearth-furnace. The shaft-furnace is taller than its diameter, but because they generally operate at a lower temperature they are used for calcining ores, burning lime, etc. The advantage is that they can be continuously fed from the top, with the end-product withdrawn from the bottom. The shaft-furnace is also often called a kiln. You can also force or blast air into them, raising the operating temperature, and they are then often called blast-furnaces. These blast-furnaces were used for smelting lead, copper and iron. They usually had a square cross-section, but became round for smelting iron. The bottom of the furnace is conical to collect the metal. There are lots of different types of kilns and blast-furnaces. The hearth-furnace was usually wider than it was tall. The fuel and ore were in contact and it was easier to guide the smelt to an oxidising or reducing reaction. The hearth-furnace was used to produce lead from galena, or wrought iron directly from iron ore.
The furnaces in which the fuel and ore don't come into contact all derive from the crucible. The crucible is the heating chamber and can be moved in and out of the furnace. This type of furnace was almost always used for refining gold and for producing steel. Another type of furnace had a separate heating chamber in which more than one crucible could be placed.
Many of the most primitive furnaces were built and served only once, and some smelting sites can have thousands of furnaces. Once stone furnaces were built they could be used and reused, and it was this type of furnace that paved the way for continuous processes. Different types of stone were used for different types of furnaces. There were acid materials such as flint,
ganister, sand and fire clays, neutral materials such as graphite and chromite, and finally basic materials such as limestone, dolomite, magnesite, and bauxite. The choice would largely be conditioned by the gangue produced by the smelted ore and the flux used. In some cases with a siliceous gangue the lining had to be renewed each time. But the simple design and low operating temperature meant that furnace linings could last some considerable time. It is said that some Roman furnaces survived into the 18th C AD.
Another type of furnace often used was a
roasting furnace, used to drive off water in iron ore and to expel sulphur and arsenic in galena and pyrites. These could be just heaps of ore on wood. In some places long narrow buildings were used for roasting copper ore placed on a bed of slag with charcoal.
A primitive type of furnace often still found is the
bowl-furnace, basically a clay-lined hole in the ground. Air was blasted over the rim using a 'tuyère'. However, they lose a lot of heat and thus lose a lot of metal to the slag. The pot-furnace was an improvement, with the neck of the bowl-furnace contracted to form a narrow entry, and holes in the base for the air blast. The walls remained of clay, and the body of the furnace was still in a hole in the ground. This basic model was found all over Europe. The Romans developed a similar model but built into a steep bank. Wind was directed to lateral holes in the base through stone-lined channels (i.e. natural draught).
A different type of bowl-furnace was one lined with stones and where the walls were raised above ground. In some cases after filling with ore and fuel they were covered over with earth (so probably inspired by the traditional baking furnace).

Most of these furnaces had some form of chimney, or in the case of the shaft-furnace the upper part of the furnace was fashioned with a neck.

During antiquity much was trial and error, and numerous variations on these basic designs were tested, some kept and others discarded. The model that appears to have won out by the end of the Bronze Age was the shaft-furnace built of stones and lined with refractory clay.


Lead isotope geochemistry


One additional important lead-related development is the use of lead
isotope analysis, or more specifically isotope geochemistry. One feature of this technique relies on the fact that lead has four stable isotopes 204Pb, 206Pb, 207Pb, 208Pb, and one common radioactive isotope 202Pb with a half-life of ca. 53,000 years. Using stable isotope ratios is not the same as using the radioactive isotope 14C to date biological material.
The basic idea is that the lead isotopes were created in the Earth's crust by the decay of
heavy radioactive elements, primarily uranium and thorium. An isotope-ratio mass spectrometer (IRMS) is used to precisely measure very small, naturally occurring differences in the amounts of the stable isotopes in a sample. IRMS has been used for measuring stable isotopes of hydrogen, carbon, nitrogen, oxygen, chlorine and sulphur, and international standards are available, e.g. there is an international standard for 2H/1H and 18O/16O in ocean water.
Why should the abundance of one isotope be different to another isotope of the same element? There are several reasons for a particular isotope to be 'enriched' as compared to another. If for example two isotopes in a compound go through a reversible phase change, say between gas and liquid, there is a tendency for the heavier stable isotope to remain in the liquid phase and for the lighter isotope to move more easily to the gas phase (the gas phase will be 'enriched' with the lighter isotope as compared to the liquid phase). Another option is that in chemical reactions which are usually irreversible the heavier isotope will be more thermodynamically stable and slightly less reactive, e.g. leaving the product slightly 'enriched' with the lighter isotope. Finally if the isotopes are in a gas phase and are diffusing through a membrane, then the lighter isotope will tend to diffuse through the barrier more rapidly.
In a traditional mass spectrometer the magnetic field is varied and the ion species are measured by a single detector. In the IRMS the magnetic field is fixed and the machine is designed to capture the different isotopic ionic species in separate detectors at the end of the flight tube. A traditional machine can of course also resolve differences for a particular isotope ratio, but the IRMS will typically be at least 1,000 times better at doing the same measurement. These machines operate with gas samples, so solid samples need to go through some combustion process beforehand.
The idea is that samples taken from the same location will have experienced the same environmental conditions and will have similar stable isotope ratios. Chemical processing of materials can result in distinctive stable isotope ratios, and the impurities or waste left behind during the processing will also have distinctive ratios. So different samples can be grouped dependent upon the stable isotope ratio. If a database of authenticated samples exists unknown samples can be assigned an origin. Just one example to show the power of the technique. The adulteration of honey (from a so-called
C3 plant) with low-cost fructose corn syrup (from a so-called C4 plant) is very difficult to detect chemically, but IRMS can easily detect the difference in the 13C/12C ratio and determine whether a honey exceeds the internationally recognised benchmark of less than 7% C4 sugar for 'pure' honey.
So IRMS measures the ratio of the heavier to lighter isotope, and in
archaeometallurgy they typically look at 207Pb/206Pb vs. 208Pb/206Pb. These ratios can be valuable in tracking the source of melts in igneous rocks and the source of sediments.
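
In its simplest form the provenance idea can be sketched as a nearest-match search: compare an artefact's 207Pb/206Pb and 208Pb/206Pb ratios against reference values for candidate ore sources and pick the closest. The reference values and the measurement below are invented placeholders, not real ore-field data; real studies use large authenticated databases, proper statistics, and must worry about mixed and recycled metal.

```
# A minimal sketch of the provenance idea described above: compare an artefact's
# 207Pb/206Pb and 208Pb/206Pb ratios against reference values for candidate ore
# sources and report the closest match. All numbers below are invented placeholders,
# NOT real ore-field data.

import math

REFERENCE_FIELDS = {            # hypothetical (207/206, 208/206) centroids
    "ore field A": (0.846, 2.089),
    "ore field B": (0.838, 2.075),
    "ore field C": (0.852, 2.102),
}

def closest_source(ratio_207_206, ratio_208_206):
    """Return the reference field whose centroid is nearest to the measured ratios."""
    def distance(centroid):
        return math.hypot(centroid[0] - ratio_207_206, centroid[1] - ratio_208_206)
    return min(REFERENCE_FIELDS, key=lambda name: distance(REFERENCE_FIELDS[name]))

print(closest_source(0.847, 2.090))   # hypothetical artefact measurement
```
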
The technique can even be used to determine the origin of people using the isotopic fingerprinting of bones recovered from archaeological sites. Bone can provide information on the diet and migration of people. Tooth enamel and soil surrounding or clinging to remains may also be used. To understand the
Pleistocene human diet it is important to also understand the process of diagenesis. The way sediment may be altered after initial deposit can affect the interpretation of the measurements. Carbon and nitrogen isotope compositions are used to reconstruct diet, and oxygen isotopes are used to determine geographic origin. Strontium isotopes in teeth and bone can also be used to determine migration and human movement. Living organisms will collect the isotopes when they eat, drink or inhale particles, and this will stop when the organism dies. The accumulated isotopes will slowly degrade, and it would be nice to know the original levels of the isotopes in the organism at the time of death. In the analysis the experts need to evaluate the importance of diagenesis that might affect the isotope signal, and they will also need to know how isotope concentrations might vary between individuals and over time.
The isotopic composition of artefacts can be measured and compared with data for known metal ores to try to determine a possible source of the ore. Sources of metals, glass and lead-based pigments have been determined using this technique. Lead isotope analysis has been particularly useful in understanding the Bronze Age. Naturally there are problems with the interpretation. What of mixed ores? What of contamination? What of the reliability of comparative data? No matter, we will see in the sections devoted to copper and bronze that the technique is invaluable.

Inductively Coupled Plasma - Optical Emission Spectroscopy


We are going to look at (yet) another analysis method in the tool-box of modern archaeology. Firstly
Inductively Coupled Plasma (ICP) is the 'source' and Optical Emission Spectroscopy (OES) is the measurement system for the emission signals from the 'source'. This is a 'trace analysis technique' and is therefore very sensitive to the preparation of standards, blanks and samples. The technique tries to answer the question "which elements are present and at what concentration?". So the kind of thing this technique is good at is determining the trace concentration of (say) boron in drinking water.
The basic idea is to measure the unique set of
absorption and emission wavelengths of every chemical element in the sample (OES does this in the optical spectrum, so wavelengths of 390-700 nanometres or 430-770 THz, and includes part of the UV range as well). Inductively Coupled Plasma (ICP) is one of a set of techniques to force (burn or vaporise) a sample to emit its characteristic wavelengths. The temperature generated is about 6000-8000°C so that all the elements in the sample become thermally excited and emit light at their characteristic wavelengths.
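
In practice a trace concentration is usually quantified against a calibration curve: the emission intensity at an element's characteristic wavelength is measured for standards of known concentration, a straight line is fitted, and the unknown sample is read off that line. The sketch below illustrates the idea with made-up numbers; the intensities, concentrations and the unknown sample are not real instrument data.

```
# A sketch of how a trace concentration is typically quantified with an emission
# technique like ICP-OES: measure the emission intensity (at an element's
# characteristic wavelength) for standards of known concentration, fit a straight
# line, then read the unknown sample off that line. All numbers are illustrative.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

# calibration standards: concentration in µg/L vs measured intensity (arbitrary units)
standards_conc = [0.0, 10.0, 20.0, 50.0, 100.0]
standards_intensity = [120, 1150, 2180, 5230, 10320]

slope, intercept = linear_fit(standards_conc, standards_intensity)
sample_intensity = 3400                               # hypothetical unknown sample
sample_conc = (sample_intensity - intercept) / slope
print(f"estimated concentration: {sample_conc:.1f} µg/L")
```
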
So what can we do with this technique? This
article describes measurements of lead pollution going back to ca. 5500 BC. Pre-historic sites in the Balkans were active in making stone tools, pottery and metallurgy. The authors created a geochemical record for the region from a sample taken out of a peat bog in western Serbia. The Balkans could easily be the home of the intentional thermal treatment of ores to produce metals, sometime after ca. 5000 BC.

Pb in the Balkans

To cut a long story short, the results are shown above. What the authors talk about is so-called anthropogenic lead (Pb-anthro), i.e. lead pollution from human activity. So we start with high, but fluctuating, values between 1750 BC and 950 BC, a trend reflected also in the copper, zinc, and chromium records. This corresponds to the rapid expansion of metal working in the Carpathian-Balkan region during the Middle and Late Bronze Age, and the first major forest clearances attributed to human activity both locally, in the Dinaric Alps, and more regionally, over the Danube lowland basins. Across the Balkans, a large number of metal artefacts have been documented, many attributed to the Late Bronze Age. This regional pollution may have been linked to mining and associated bronze manufacturing that presumably took place at locations around Mt. Cer, which is also proposed as a source for tin found in artefacts of the Late Bronze Age. The onset of the Iron Age is characterised by very low values for all metals, clearly indicative of the disruptive socioeconomic impact of the well-known Late Bronze Age Collapse. This period represents the last interval of peat deposition without significant evidence of human-created lead pollution.
From ca. 600 BC onward the record shows a near-constant increase in the level of lead deposited through time, until ca. 1700 AD. The increase between 600 BC and 200 BC reflects the mining and smelting activities of the local material cultures.
The Romans conquered much of the Balkans between 200 BC and 100 BC. As in other regions with rich mineral deposits, the Romans systematically cataloged and mined ores. We can see that lead increases dramatically to a peak at ca. 240 AD, with particularly sharp increases ca. 50 AD and ca. 150 AD.
After a peak at ca. 240 AD, the levels of lead decline, possibly indicative of a regional slowdown in mining output, relating to repeated invasions and upheavals that affected the Balkan area, including the abandonment by the Romans of the mineral-rich provinces north of the Danube. However, subsequent increases after 400 AD indicate that mineral exploitation in the region recovered during the Byzantine Empire, which, contrary to developments in the rest of Europe, reached its peak in economic and cultural development at this time. Subsequently, the decrease in lead evident after ca. 740 AD and lasting up to 1000 AD echoes the gradual decrease in influence of the Byzantine Empire in the northern Balkans. After 1100 AD, and with the arrival of Saxon miners from 1253 AD onward, the area developed into one of Europe's leading silver producers, reflected also in significant lead increases in the record.
We will stop here, but you can see that this technique is extremely powerful, if used properly, and when associated with other records from the period.

Copper


Copper is the first useful metal that Man found, and with copper he learnt to experiment and to discover the astonishing changes that fire would work on some 'shining stones'.

So we kick off our visit to the 'Ages of Copper, Bronze and Iron', with copper, and that means the
Chalcolithic Period, which traditionally is supposed to cover about a 1,000-year period from ca. 4500-3500 BC (but different sources quote ca. 5500-3300 BC or even a little earlier). It was in this period that Man smelted copper ores and cast metallic copper artefacts. Experts now see the Chalcolithic period as an 'evolutionary' interval between two 'revolutions', the Neolithic and the Bronze Age. In fact the name 'Chalcolithic' is meant to evoke a culture that was no longer Neolithic but that still preceded the full metallurgy of the Bronze Age.

We have mentioned the question concerning the advantages of metal over stone. Some experts think that the earliest motivation was simply because gold, silver, and copper were colourful and glossy. Then Man found that the metals were malleable and more permanent than wood or bone artefacts. Maybe later Man found that some metals retained their sharp edges or points better (remembering that stone tools and weapons often needed to be replaced or reworked). Metals became malleable when hot, and subsequent cooling gave metals some of the properties of pottery. On top of that the metals could be remelted and reused.
'
Casting' was the true achievement of metallurgy. It required access to ores, fuel and fire-making, something to hold the liquid metal, and the ability to blast air into the furnace.
Alloys keep their gloss better, and the melting point of an alloy is usually lower than that of its components, thus making casting easier. Lower temperatures meant less fuel, so lower costs. Bronze was harder than copper and therefore better for cutting tools and weapons. Also the smelting of alloys was in fact easier because it was more predictable than with copper.

Modern day archaeology has a 'toolbox' of technologies to look into the detail of different copper samples. Examining the microstructures of minerals, ores,
slags, slag sherds, and copper droplets can help suggest how unique a particular technological trajectory was.

Microstructure of Copper Samples

Above we have six different microstructures with (a) leaded copper, (b) bronze, (c) brass, (d) again brass, (e) high-tin bronze, and (f) copper. Experts can see in these microstructures how metals have recrystallised after treatment, where weaknesses are, when cold processing has occurred, when cold processing has been followed by annealing and hot working, when something has been cooled quickly (heat treatment), how alloys mix, what makes something malleable or brittle, …

As an example, in one study from a specific region in the Balkans and covering the late-7th to the mid-6th millennium BC the artefacts were made using 'cold' techniques, i.e. pre-smelting. The 'cold' technique was used for processing copper minerals into pigments and beads. During excavations, higher (younger) levels revealed slag dating to ca. 4800 BC and associated with smelting (this included also slag shards and copper droplets). Slag shards are in fact ceramic pieces, probably components in the smelting process, that end up containing some residual copper. Ore samples were found at every level (representing every period). Firstly the analysis of the copper minerals (ores) covered ca. 1,600 years, and showed that over time manganese-rich mineral batches were preferred. The minerals have a different colour from sulphur-rich minerals and it looks as if they focussed on brightly coloured ores. As far as I can tell the alternative sulphide-rich copper mineral implied a more complex process including crushing and grinding the ore to liberate the valuable mineral from the waste, then the smelting produces a
matte which must then be converted and refined to produce the copper ingot. I've seen comments concerning other sites where archaeologists noted that both types of ore were used indiscriminately.
The analysis of slag from different sites in the region showed that many used a variation of slightly oxidising/moderately reducing gas atmosphere, but all produced a good copper metal. The slag was heterogeneous in appearance and composition, but the microstructures remained remarkably similar across sites and over more than 600 years!
The overall result of this type of analysis is that (a) most of the ores came from the same nearby site (about 50 km away), (b) they probably selected the ores based upon colour after learning the need to avoid the predominately sulphur-rich ores (more on that later), and (c) that the constancy of the process lasted in excess of 500 years.

In fact copper is very widely distributed in nature and is found in soils, waters and ores. Native, malleable or virgin copper occurs as a mineral, especially in the surface strata of deposits of copper and iron ores. In the upper workings of these deposits it is found in arborescent, dendritic, filiform, moss-like or laminar forms.


Basic Types of Copper Ore


Above we have some of the different copper-rich ores. Nearly every copper ore has a bright colour which would have attracted the eye of ancient miners, and naturally they would have started with the ores they could find on the surface. We will start with the ores that occur in weathered surface outcrops:-
Azurite is a soft, deep-blue copper carbonate mineral, with a copper content of ca. 55%. This ore occurs naturally in the Sinai and the eastern desert of Egypt. It was also used as one of the earliest blue pigments, but it is unstable in air. It is used in jewellery as an ornamental stone, and can be confused with lapis lazuli or lazulite.
Chrysocolla is a hydrated copper silicate mineral with a blue-green colour, with a copper content of ca. 36%. Although a minor ore it was used in jewellery.
Malachite is a copper carbonate mineral with a green colour, and contains ca. 57% copper. It is often found with Azurite and was also used as a pigment. Azurite and Malachite are alkaline copper carbonates.
Cuprite is a copper oxide mineral, with a ruby colour, and rich in copper (ca. 89%). It only occurs in small quantities, and although the crystals are usually small they can be used as gemstones.
Then there are the ores that are found in deeper strata:-
Chalcopyrite is a copper-iron sulphide mineral and when fresh can have a yellow, brassy colour. It is the most important copper ore and usually contains about 35% copper.
Bornite is a sulphide mineral and when fresh has a copper-red-purple colour and when tarnished has shades of blue to purple (often called peacock ore). It is one of the more important copper ore minerals, contains ca. 55% copper and is a rarer relative of Chalcopyrite.
Wikipedia has a list of copper ores.
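
Using the approximate copper contents quoted above, a small lookup gives a feel for how much metal an idealised, gangue-free batch of each mineral could yield. Real mined ore is mostly waste rock, so actual yields were far lower; the 100 kg batch is just an example.

```
# A small lookup using the approximate copper contents quoted above. It shows the
# metal an idealised, gangue-free batch of each mineral could yield; in practice the
# mined ore is mostly waste rock, so real yields were far lower.

COPPER_FRACTION = {       # approximate values from the descriptions above
    "azurite": 0.55,
    "chrysocolla": 0.36,
    "malachite": 0.57,
    "cuprite": 0.89,
    "chalcopyrite": 0.35,
    "bornite": 0.55,
}

def copper_from(mineral, mass_kg):
    """Idealised copper yield (kg) from a given mass of pure mineral."""
    return mass_kg * COPPER_FRACTION[mineral]

for mineral in COPPER_FRACTION:
    print(f"100 kg of pure {mineral:12s} -> about {copper_from(mineral, 100):.0f} kg copper")
```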

The Chalcolithic (Copper) Period


The first thing is to note that there are reports that the earliest copper ornament dates to ca. 8700 BC (some texts mention that it is a pendant found in northern Iraq). Whilst this statement is often reported in different texts, I've not found any details concerning this particular item.

What we do know is that
hammered objects of copper were made in Çayönü Tepesi in Anatolia dating from the 8th millennium BC, and in Çatalhöyük, again in Anatolia, from the 7th millennium BC. These must be some of the oldest copper artefacts created by Man. Similar artefacts were not produced or used in Scandinavia or China before the 4th millennium BC.
Why did a copper producing culture evolve first in one area and not another? Firstly, the place must have been fertile and inhabitable. Wikipedia tells us that
Çayönü was the first place emmer wheat was domesticated and could also be the home to the domesticated pig. Secondly, the highlands must have been rich in minerals. Thirdly, fuel in the form of bush and shrub wood would be needed to anneal native copper and extract copper from the ore.

In addition, an added advantage would be the possibility to evolve and produce
copper-arsenic alloys, making a soft pliable copper into a hard tensile product (suitable for tools and weapons). Many native copper ores contain 5-7% arsenic, and a low temperature smelting of arsenic-rich ores would produce good quality copper ingots (so-called arsenical bronze). As a counter example, tin-bronze contains more than 2% tin, and would be clearly an intentionally created alloy (tin seldom occurs in native copper).

Melting point of metals and alloys


Most texts discussing the stone-bronze-iron sequence do not mention copper-arsenic alloys, and even those that do don't highlight the fact that the melting point of an alloy is usually significantly lower than that of the pure metals. This has a significant impact on the temperature needed to make an alloy liquid. For example the melting point of copper is 1085°C, arsenic 817°C, tin 231.9°C, gold 1064°C, silver 961.8°C, and iron 1538°C. The temperature needed to melt rock depends upon the type of rock, but it is between 600°C and 1300°C to obtain magma. The melting point of copper(II) oxide (tenorite, CuO) is 1326°C, arsenic oxide 312.2°C, tin oxide 1630°C, silver oxide 280°C, and iron oxide 1565°C. The melting point of cuprite (Cu2O) is 1232°C and of chalcocite (Cu2S) is 1030°C.

Cu-Sn High Temperature Phase Diagram

Above we can see just a small part of the phase diagram of the copper-tin alloy. We can see that for 100% pure copper the melting point is 1085°C, but for an alloy of 80% copper and 20% tin the melting point drops to below 900°C.
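
To keep the numbers in one place, the sketch below collects the melting points quoted above and the 80/20 copper-tin example read off the phase diagram. The only point being made is that an alloy can melt well below the pure metals it contains.

```
# The melting points quoted above, collected in one place, plus the copper-tin
# example read off the phase diagram (an 80% Cu / 20% Sn alloy melts below ~900°C).

MELTING_POINT_C = {     # pure metals, values as quoted above
    "copper": 1085,
    "gold": 1064,
    "silver": 961.8,
    "arsenic": 817,
    "tin": 231.9,
    "iron": 1538,
}

BRONZE_80_20_LIQUIDUS_C = 900   # approximate, read off the Cu-Sn diagram above

for metal, mp in sorted(MELTING_POINT_C.items(), key=lambda kv: kv[1]):
    print(f"{metal:7s} melts at about {mp}°C")

saving = MELTING_POINT_C["copper"] - BRONZE_80_20_LIQUIDUS_C
print(f"an 80/20 tin-bronze melts roughly {saving}°C below pure copper")
```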

An important conceptual problem is that gold is in fact found in a rock as metal 'nuggets' and by melting the gold-rock, the gold will melt and ooze out. Some other metals such as silver or copper exist in a 'natural' state, but they are rare. So most copper and silver and most other metals are held in the form of an
ore. For copper this means as the cuprite or chalcocite mentioned above, or chalcopyrite (CuFeS2) or malachite (CuCO3·Cu(OH)2).

So
smelting an ore has almost nothing to do with melting the metal (pyrometallurgy is extractive metallurgy based upon thermal treatment of the ores). In a smelting furnace you need to first 'reduce' the metal to its metallic state (e.g. break the chemical bonds that hold Cu to O or S), and then the furnace will liquefy the metal so that it can trickle out and be collected. Also the temperature needed for a reduction process is not directly related to the melting point. But in most early examples the ancients combined the reduction process and the melting of the resultant metal.
You normally need at least 1200°C or better 1300°C to smelt ore, and you do this with a bed of burning charcoal (a normal wood fire will burn at 600-800°C, and will not be hot enough to smelt copper ore). This burning produces a lot of free energy (
exothermic reaction) and carbon dioxide (2CuO + C > CO2 + 2Cu for copper oxide, and 2Cu2O + C > CO2 + 4Cu for cuprite). Streaming or 'blasting' air through the burning charcoal using a 'tuyère' consumes the oxygen in burning the charcoal, with carbon dioxide as the by-product. As far as I can see the tuyère is used to ensure that the oxygen goes to the charcoal fire and not into the contents of the furnace. As the tuyère pushes oxygen into the burning charcoal, the CO2 rises through the charcoal that sits just above the tuyère nozzle. As the CO2 meets this layer of hot charcoal, which is not burning because it lacks sufficient oxygen, carbon monoxide is produced (CO2 + C > 2CO), but this charcoal must be at least 1200°C in order to ensure that the carbon in the furnace is always at least 1000°C everywhere (carbon dioxide becomes increasingly unstable above 800°C).
So the 'burden' or contents of the furnace (ore, charcoal, and a flux) gets hot but does not burn because of the lack of oxygen. So keeping the heat in the furnace will break up the carbon dioxide into carbon monoxide (as in any poorly ventilated or oxygen-starved combustion), which streams through the contents of the furnace in direct contact with the ore and charcoal. We must remember that whilst carbon dioxide does not support combustion, carbon monoxide is itself combustible and burns with a pale blue flame. This allows the carbon monoxide to reduce the metal oxide (CuO + CO > Cu + CO2). The carbon monoxide is consumed in the process, producing fresh CO2 and elemental metal. Mixing the charcoal with the ore in layers, and adding more charcoal during the smelting, means that the O2 continues to be used by the charcoal instead of getting to the metal. The flux tends to make the slag more fluid so that it covers the metal better and stops the copper from oxidising. The later use of copper sulphides required an initial roasting (2CuS + 3O2 > 2SO2 + 2CuO), and then a subsequent reduction process to obtain copper metal.
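
The reaction chain just described can be summarised as follows; the sketch lists the standard reactions and computes the theoretical copper content of the two oxides from atomic masses. It is a chemistry summary only, and says nothing about how efficient an ancient furnace actually was.

```
# The reduction chain described above, written out as standard reactions, plus the
# theoretical copper content of the two oxides from atomic masses.
#
#   at the tuyère:        C + O2    -> CO2
#   in the hot charcoal:  CO2 + C   -> 2 CO            (the Boudouard reaction)
#   in the burden:        CuO + CO  -> Cu + CO2
#                         Cu2O + CO -> 2 Cu + CO2
#   sulphide ores first:  2 CuS + 3 O2 -> 2 CuO + 2 SO2   (roasting)

ATOMIC_MASS = {"Cu": 63.55, "O": 16.00}

def copper_fraction(cu_atoms, o_atoms):
    """Theoretical mass fraction of copper in a Cu/O compound."""
    total = cu_atoms * ATOMIC_MASS["Cu"] + o_atoms * ATOMIC_MASS["O"]
    return cu_atoms * ATOMIC_MASS["Cu"] / total

print(f"tenorite CuO  : {copper_fraction(1, 1):.1%} copper")   # ~79.9%
print(f"cuprite  Cu2O : {copper_fraction(2, 1):.1%} copper")   # ~88.8%, cf. ~89% above
```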

The documents I used as resources for the above were "
Primary Metal Production" and "Smelting Copper" by H. Föll.

It is interesting to note that for smelting copper the role of carbon monoxide is important. As we have seen, the carbon monoxide is used to reduce copper oxide to metallic copper, and at the same time the carbon monoxide is oxidised to carbon dioxide. This is directly related to the way precious metal catalysts work in oxidising carbon monoxide. And because of the high cost of those precious metals, copper oxide is increasingly used as a catalyst. Thermodynamically, the oxidation state of copper changes between CuO, Cu2O and Cu as a function of temperature and oxygen partial pressure. And there are three possible pathways from CuO to Cu: direct reduction, a reduction mechanism involving the intermediate Cu2O, and a pathway involving two intermediates, Cu4O3 and then Cu2O (very infrequent). There is a technique called time-resolved X-ray diffraction (XRD) that can isolate chemical transformations. This showed that CuO started to reduce to Cu at ca. 200°C and was completely transformed at 236°C, and CO was completely oxidised to CO2. The starting point for CO2 was ca. 210°C, reaching a maximum at 236°C and tailing off to 585°C. The time it takes to reduce CuO and Cu2O depends upon the temperature, the higher the temperature the faster the reduction process, with Cu2O taking more time than CuO at the same temperature. We are talking about reduction times of about 25 min. at 220°C. However, with a limited supply of CO the reduction of Cu2O will only start after about 35 min. One important element here is the time it takes to generate sites for the adsorption of CO. Once a few exist the removal of O starts, which creates more adsorption sites for CO, and the reaction becomes autocatalytic. Why does it take time before adsorption sites are created? Essentially the oxide needs time for its geometry to relax and for surface defects to form. Then much depends upon the amount of CO available, with the reduction going faster the more CO is available. If there is not enough CO then free oxygen moves around the free spaces in the oxide lattice, and more CO is needed to continue the reduction process. Finally the metallic copper starts to form on the surface of the oxide, and will gradually become thicker. But as the layer becomes thicker it becomes more difficult to reduce the oxide. Again, another reason to operate at a higher temperature is to ensure that the metal formed does not create too tight a barrier to the reduction of the remaining oxide.
In fact there are a number of interesting practical aspects to what looks to be a rather academic analysis. Firstly, whilst you might well have 1200°C in the charcoal fire, and about 1000°C in the hot charcoal above the tuyère, at the top of the furnace it could be only about 250°C. And this is what you want, and thus it conditions the design of the smelter. Secondly, if you don't want the process to last forever it's best to ensure an efficient reduction process by breaking up the ore into small pieces. But not too small, otherwise the air flow is cut off. Better still if the surface of the pieces is 'big', i.e. with lots of micro-cracks, etc., so crushing is good. Another important point is that you are not trying to melt the ore, otherwise it may 'run' through the charcoal without being reduced.
The next problem is after the reduction process has completed, you now have in the upper part of the furnace some small solid metal pieces. But they will start to drift down the furnace, getting hotter, and speeding up. But surely the metal will get to the place where the charcoal is burning (i.e. with oxygen) and it will oxidise again? You could imagine that the metal melts in the reduction zone, flows quickly through the smelter, with limited re-oxidation. Or slag can be produced and this can help sweep the metal fragments through the smelter. Or you can try to have both, liquid slag and liquid metal. On top of all this the drops of liquid metal that go through the smelter must enter a pool of molten metal, another reason for the charcoal burning zone to be as hot as possible.



Since we are on the topic of heating up rocks, ores, metals, etc. it is worthwhile mentioning the
difference between melting and smelting. Melting is about raising the temperature to the melting point, whereas smelting uses heat and chemistry to drive off other elements as gases or slag, leaving behind only the metal (so smelting in chemical terms is a reduction process). With impure ores you can use a flux to separate the metal from the slag (a flux is an additive that combines with the impurities so they can be removed as slag). The idea is to choose a flux that changes the impurities from a form that is inseparable from the metal to one that is separable. For example, adding iron oxide as a flux during the smelting of copper can transform the impurity silicon dioxide into an iron-silicon oxide. The silicon dioxide would have remained in the liquid copper, but the iron-silicon oxide floats to the top and can be skimmed off. A very simple example is that if you have ice and melt it you have water (H2O), but if you 'smelt' the ice you will have hydrogen and oxygen.


What is also certain is that a piece of
rolled native copper found in Ali Kosh (in present-day Iran) dates to ca. 7000 BC. Similar pieces have been found at other sites, and they all point to native copper being heat treated (annealing) between different 'deformation steps' to avoid work-hardening.

The first clear evidence of
copper casting dates from ca. 4000 BC, nearly 3,000 years later. This involved smelting of local copper ores, which in most cases were arsenic rich. Smelting took place in the early 5th millennium BC, and the manufacturing of arsenical copper artefacts came quickly with the 4th millennium BC. Many experts suggest that the accidental use of arsenic-enriched copper could have led to the development of intentional arsenic-copper (arsenical bronze) alloys, which could then have led to the copper-tin alloy of bronze (the double axe below is made of arsenical bronze and dates from ca. 4500 BC).

Double Axe of Arsenical Copper


According to Wikipedia the
first evidence of human metallurgy dates from between ca. 6000-5000 BC at three sites in modern-day Serbia (Majdanpek, Jarmovac and Pločnik). It is my understanding that this references the world's oldest securely dated evidence of copper smelting at high temperatures (i.e. extractive metallurgy). The example that is often given is a copper axe which is said to have been found at the Belovode site in Pločnik, and attributed to the Vinča culture (covering a period ca. 5700-4500 BC).

Copper Axe

Technical reports on Belovode actually talk of only finding five pieces of copper slag, the residue of an intense heating process used to separate copper from other ore elements. The different concentrations of elements in the samples show that they were produced in different smelting events. Traces of ash indicated a wood-fuelled fire, but one that attained about 1100°C (a temperature normally only reached with charcoal). The microstructure shows that the pieces were heated until liquid and then cooled. They also found one drop of pure copper in one of the houses. No ceramic bowls or containers for the molten copper were found, so it is suggested that the smelting might have been done in an earth pit. Animal bones found on the site suggest that it was occupied between ca. 5350-4650 BC. The ore came from a single copper deposit somewhere in Serbia or Bulgaria (possibly just 10 km away), and the suggestion is now that copper smelting might have evolved independently at different sites and did not originate in and spread out from the Fertile Crescent. It is important to understand that what was discovered was a smelting process applied to a copper ore. The working of copper minerals might actually date back nearly 10,000 years, with people in Anatolia making beads and ornaments from copper ore and just heating it to make it more pliable. What is missing is solid archaeological evidence, but many experts believe that it is quite possible that other sites will be discovered that pre-date the one in Serbia.
Other reports mention that on the site there was also a cold processing of copper minerals for beads, and both the minerals and ores were
malachite. It would also appear that later another 3 slag items were found (in total weighing about 4 gm). The cold processing of copper mineral usually involved taking a raw nodule of mineral and chipping away at it to create a blank. The blank would then be drilled and worked into its final shape (diameter 4 mm to 1.5 cm).
On the site there were two early furnaces, so-called ceramic 'chimneys', each with a diameter of about 20 cm, and probably linked to two rimmed earth pits. Tests on the slag suggest that smelting started at the site ca. 5000 BC.
A copper chisel and a bun-shaped metal ingot were found near the site, but their date is uncertain (Vinča or later). It is true that the site was first discovered in 1927, when twenty "massive copper implements" were also found. At the time they were thought to have been trade artefacts, or possibly to have come from a post-Vinča culture. And when two other hoards of artefacts were found, the same assumption was made. It was only later that copper smelting was associated with the Vinča culture. The copper chisel found in 1978 could be well contextualised as a Vinča artefact and has been dated to ca. 5040-4860 BC.
A second site, excavated in 1998-2009, was Pločnik, which I think is a different site very near Belovode. This site yielded more evidence of copper smelting by the Vinča culture. Here also they found a massive copper chisel, fragments of tools and ornaments, and copper minerals. They also found stone tools, large ceramic vessels, etc. Later they found four groups with a total of 34 copper implements (e.g. hammer axes, chisels, bracelets, ingots, and some stone axes). Analyses of 17 of the artefacts show that the ores came from east Serbia, Macedonia and Bulgaria, and all were found in the same 'industry zone'. The best estimate of the date of the artefacts was again ca. 5040-4860 BC.

In addition a metal foil was found in one dwelling near several late Vinča pottery vessels, dated to ca. 4650 BC. It turned out to be an alloy of copper and tin, i.e. bronze. This is the earliest known tin bronze artefact.

The key differences between Belovode and Pločnik are subtle; for example, copper slag has only been found at Belovode. Both places appear to have had workshops, but Pločnik provided the vast majority of artefacts. It looks as if the workshop in Pločnik was used only for casting and/or repair of copper tools. So it may be that Belovode was for production and Pločnik for casting and finishing. Some of the lead isotope analyses give the same signatures at the two sites, supporting the idea of a collaboration.
My guess is that the copper axe shown above actually comes from the later excavation of the Pločnik site, and the best date for the artefact is ca. 5040-4860 BC.


Copper Awl

Above we have a copper awl discovered at Tel Tsaf in the Jordan Valley in Israel. This 4 cm long copper awl has been dated to ca. 5200-4700 BC (a period often known as the Middle Chalcolithic). This awl would probably have been set in a wooden handle. It was buried with a woman of about 40 years of age, who also had a belt around her waist made of 1,668 ostrich-egg shell beads.

Mesopotamian Copper Age


The Uruk Period (ca. 4000-3100 BC) is often associated with the Chalcolithic period and the introduction of the Early Bronze Age in Mesopotamia. Named after the Sumerian city of Uruk, the Uruk Period saw the rise of both urban living in Mesopotamia and the arrival of the Sumerian civilisation. The earliest forms of cuneiform script date to the later part of this period.

The traditional view is that in Mesopotamia copper artefacts were found in the earliest stages of the development of their complex society. So the assumption was that copper was responsible for the emergence of that early civilisation. Copper tools improved agriculture, in turn leading to a rapid population increase, which led to the emergence of elites who controlled copper production and trade. Demand for copper tools drove inventiveness and technical progress. So metallurgy and civilisation progressed together.

But the reality might be different.
Metallurgy might have been discovered in different places, without any local diffusion from one place to another. There is proof that metallurgy often remained a secondary activity, mainly for ornaments. In some cases metallurgy may not have had any substantial cultural or social influence for over 1,000 years. In other places metallurgy did result in the production of utilitarian artefacts, but not in the emergence of a centralisation or concentration of power. For example, the Nahal Mishmar hoard (ca. 3500 BC in the Levant) is a unique collection of prestige artefacts involving both the use of the lost-wax casting technique and a surprisingly practical application of advanced alloying processes. Yet there appears to be no link between this hoard and any advanced social hierarchy or concentration of power.

Levant

Mesopotamia was poor in metal ores, and would have looked around for sources of copper. One of those sources was the sedimentary ore deposit at Faynan in the southern Levant (Timna was another mine). It has been exploited since the 7th millennium BC, and has a competitive claim to have been the first place to smelt copper ore during the Chalcolithic period. Examining this particular site will provide us with a better view of mining in the Copper and Bronze Ages. The main ore body consisted of a 1-2 m thick layer of copper silicates and malachite, mixed with black manganese oxides (the site did not process sulfidic ores). Overall this vein was not particularly rich, about 1% lead and a low copper content. There was also a secondary vein with malachite. Most texts, if they mention manganese oxides, simply note that manganese-rich ores act as a fluxing agent. A fluxing agent is, at its simplest, a reducing agent preventing oxides from forming on the surface of a molten metal. Flux can also act as a cleaning agent by oxidising impurities which then form part of the slag. I think it's useful to also remember that flux is also used to lower the melting or softening temperature of glazes used on pottery. The flux is mixed with a high melting point material used for a glaze. It 'swims around' in the glaze melt during firing, making it more fluid. The right flux with the right glaze moves the melting temperature way down and allows the creation of a broader range of effects. Manganese oxide is a common flux/colourant for stoneware glazes, and magnesium oxide (introduced as dolomite or talc) is another common flux that works well against crazing. Copper was also used as a pottery colorant and melts very actively in oxidation. I deviated from our theme just to point out that manganese oxide ores would have been known and used at that time as components of pottery glazes.
Concerning Faynan the
206Pb (lead-206) isotope abundance was high enough to distinguish it from other mines, and the copper prills in the ancient slag show that what was produced there was very pure.
This particular 'mine' covered an area of 20 km by 25 km and the smelting sites produced something between 150,000 and 200,000 tons of slag. The site was the largest in the region. Exploited since the 7th millennium BC, the site was particularly active in the 4th and again in the 2nd millennium BC. From the 8th/7th millennium it is evident that malachite copper ore was collected at the site, and pieces can still be found along with flint borers and charcoal samples. In the region lime plaster is often associated with greenstone beads and powdered copper minerals. The first true evidence of metallurgical activity dates from the end of the 5th millennium BC, e.g. ore, smelted slag, copper prills, and crucible fragments were found. Other nearby mines provided the usual pottery and grooved hammer stones that helped date the site. Everything points to smelting taking place ca. 4500-3500 BC. The use of mixed copper-manganese ores was a major technological breakthrough in that they provided a self-fluxing raw material. The easy formation of liquid slag increased the yield of metallic copper from a smelt. And it extended considerably the range of different ores that could be included in a smelt. Experts have estimated that the site could have produced 100 tons of metal during the Early Bronze Age. The area almost ceased production during the Middle Bronze Age. It would appear that production started again ca. 1750 BC, stimulated by Egyptian demand. However it was only in the period 900-500 BC that production was again ramped up to an industrial scale. It is estimated that the slag from this period exceeded 100,000 tons. At this time there were more than 100 mines, some 50-60 meters deep. The site closed in the Iron Age, ca. 400 BC (although later the Romans opened the mines again, and some 50,000 to 70,000 tons of slag date from that period). An estimate has been made of the total amount of slag created and copper produced over the total lifetime of the site. The conservative estimate is that the site generated between 140,000 and 200,000 tons of slag, and between 10,000 and 21,000 tons of copper. This site produced copper with a very low content of arsenic, antimony and silver, so the copper from this site can be easily distinguished from that of numerous other Chalcolithic sites.

Naturally, metallurgy was just one factor in the
evolution of society. Migration, ecological change, domestication, innovation, etc. were said to have worked together to allow a diversity of types of highly organised societies to emerge. Yet Bronze Age societies all look very similar. Today experts believe that this is largely due to the crucible and furnace processes of copper smelting. The smelting of copper appears to have emerged in at least seven different areas between ca. 5000-2000 BC, notably the Iranian plateau, the northern Euphrates, and the Iberian peninsula. So the emergence of copper smelting does not appear to have been a unique or exceptional event.

For the purposes of investigating
metallurgy technologies we will look at copper smelting in the Anarak region in the Iranian plateau (the region still has the world’s second largest lode of copper ore). It is probable that copper minerals were mined from ca. 9000 BC as semi-precious stones and pigments. Native copper (copper found naturally in its metallic form) was worked from ca. 7000 BC, firstly by cold hammering, then by heat hammering and annealing. By ca. 5000 BC copper was melted and cast in open moulds. Crucible smelting would have been discovered/developed when trying to extract native copper from melted ore. The crucible was probably just a ceramic bowl. Examples of crucibles dating to ca. 5000-4000 BC have been found in the Iranian plateau, the northern Euphrates, the Balkans (Gumelnița–Karanovo culture), Central Europe (Inn Valley), and the Iberian peninsula.

Unfortunately copper ores such as azurite are not the best pigments; they tend to thicken when mixed with oils or fats, and are not permanent on contact with light and air. Azurite and malachite are just two of many natural mineral (and synthetic) pigments used in art, but the great thing is that low-wavelength Raman spectroscopy can be used to identify different pigments, including modern synthetic pigments (i.e. to identify fakes).

In fact, as we have already mentioned, the earliest appearance of metals and metal artefacts preceded the Copper Age proper. Neolithic Man began collecting nuggets of
malachite-azurite and native copper, presumably for their attractive appearance and colour. The first examples of native copper and copper-bearing nuggets would have been found along with gold nuggets in river beds ('placer mining'). At the time Man manufactured simple prestige items, such as small beads, pins, rings, and arm-rings. By 5000 BC they were imitating stone, shell and bone ornaments using native copper (usually worked by cold hammering). Also at the same time the earliest gold artefacts were small discs imitating perforated shell ornaments. Native copper artefacts have been found in Egypt, in Vinča, in the Danube valley, in Asia Minor, in Palestine, and with the pre-Sumerian marsh-dwellers in Mesopotamia. These first workers who hammered, cut, and formed artefacts from native copper were Neolithic artisans, rather than smiths heralding the dawn of the Metal Age.

Then sometime between 5000-4500 BC these artisans learned to anneal native copper by alternately heating and hammering the metal. Until then native copper had no advantage over gold, but could be used for smallish ornaments. Copper metal was simply too brittle for larger artefacts. We have mentioned that perhaps, by heating a copper awl so that it would pierce hides or wood more easily, they also discovered that copper became more malleable and less prone to fracturing. We know that Man dabbled in the sciences by 'heat testing' everything. Who knows how they discovered annealing, but the first skill of a smith was to be able to use fire and hammering to stop copper becoming brittle. Larger lumps and nuggets could then be worked and larger, more complex artefacts made. Smaller lumps of copper metal could be joined into one single mass. Well before the ability to reduce ores by smelting, the smith had been born with the discovery of annealing.

Once Man had seen what heating could do to native copper, the argument is that copper artefacts increased in value, motivating people to look for new sources of the metal, and this meant 'heat testing' new ores. And some copper-bearing ores (azurite and malachite) were already being used for pigments, eye paint, and for pottery glazes. Using the pottery kiln for these 'heat tests' would have been logical. Initially many experts assumed that the next step was heating ores in camp fires, but the temperature is usually too low (600-700°C) to reduce copper oxide and carbonate ores. A pottery kiln can easily achieve temperatures in excess of 800°C, and in some cases can even well exceed the melting point of copper at 1085°C. The 'hole in the ground' pot-furnace can't achieve these high temperatures without forced air, but forced air was a feature already known in pottery kilns.

The logic goes that initially some form of heating would have been done in a crucible in a pottery kiln. The pot-furnace would have been modelled on the pottery kiln and early ingots clearly suggest that at best they could produce 140-160 gm of copper, sufficient for one small tool.

You can imagine the amount of experimentation needed for a smith to finally hit on the idea of filling up the furnace with alternating layers of charcoal and ore. And what would be the result of this reduction experiment? A spongy mass of partially fused metal mixed together with cinders and 'matte', totally unlike the liquid form of native copper melted in a pottery kiln.

Melted Copper Metal

Of course reheating and hammering liberates the cinders and bits of unreduced ore, and a lump of unrefined 'black copper' is obtained which can be re-smelted. With time ingots took on the shape of the hearth in the furnace, so a v-shaped block about 20 cm across and about 4 cm high. Lining the furnace with clay was a next step towards making purer copper.
It would appear that very early on a distinction was made between the smelting of the ore in a furnace and the reduction and refining of the 'black copper' in crucibles. The problem with early metallurgical crucibles was that the heat came only from the side and rim. This points to the heaping of charcoal and black copper around the crucible and heating it in a kiln, or alternatively heaping a fire around it and increasing the temperature with the aid of blast air from a blow pipe. We know that blow pipes were used by early prehistoric metallurgists because clay nozzles have been found which were used as protection against the effects of the fire. The earliest known use of bellows dates from ca. 1580 BC.

The argument goes on to suggest that once they knew how to melt copper, they would rapidly discover casting. It is probable that both techniques, melting and casting copper, spread rapidly in the years from the Ubaid Period to the Uruk Period, so in the period 4000-3500 BC. Experts think that the arrival of these new techniques would have meant that many older copper tools, etc. would have been melted down and recast (copper was still an expensive and valued material). This would account for the fact that older copper tools are comparatively rare, and also explain the stockpiles of old and broken copper tools often found at smelting/casting sites.

Casting is not as simple as it sounds. There are several different methods for forming (or pre-forming) metals, e.g. open moulds, closed moulds or valve moulds, and the more complicated forms such as core-casting and the 'cire-perdue' (lost-wax).

With the advent of crucible and furnace smelting came both utilitarian and prestige items, the most obvious prestige artefact being the large hand axe (signalling power and status). Along with copper came also (more) gold. Their melting points were more or less the same (1085°C for copper, and 1064°C for gold) and the processing techniques were identical (the ores were often found together).

One suggestion was that Man first made cold hammered copper borers (a reamer or type of rotary cutting tool to make holes in things), which they might have heated in a fire to facilitate penetration, and these early 'smiths' would have seen that heated copper was more malleable and easier to shape. Comparing hammering and heating, followed by tempering or annealing, with simple cold hammering, they would have found that the heated-and-worked copper could be made hard without becoming brittle. Experts think that this discovery was made ca. 5000 BC, because it appears to have become 'common knowledge' by ca. 4500 BC.

Some experts have suggested that with the advent of copper the 'smith' became the earliest craft in human history and was the first 'full time job'.

Oddly, early archaeologists thought that if Man knew that copper 'melted' then naturally they would have discovered casting at the same time. But no, the reality is that a wood fire will not attain more than 600-700°C, which is not enough to create liquid copper for casting. But it is enough to make copper very malleable for hammering and for forging pieces together to create a lump of metal. Many experts feel that a pottery pit-fired kiln (900°C) would have been used to produce a so-called copper frit, but only a primitive forced-air furnace would have been able to smelt copper.

Green Copper Pigment

Above we can see what initially was considered to be the earliest known example of slag from copper smelting, dated to ca. 6500 BC. This sample from Çatalhöyük looked only semi-baked, and thus an accidental copper firing event. But why was it deposited in a grave? It actually turned out to be an accidental firing of copper pigment. Copper pigments were placed in graves, and there were firing events that charred bones in shallow graves, so this sample was entirely unintentional. It's a reminder that not all semi-molten black and green 'stuff' is metallurgical slag resulting from intentional smelting.

Pit Furnaces

Smelting copper must have been done in a pit furnace. It was the only way to attain a temperature sufficient to smelt copper ore. Above we can see two 'small-scale' pit furnaces, one with an intact collar and slot. These examples are two Late Bronze Age pits found in Wales (ca. 900 BC). Feature 109 was about 10 cm wide and 10 cm deep, and Feature 111 about 9 cm wide and 15 cm deep.

The first challenge would have been to find the copper ore, and decide if it was a carbonate ore or a sulphide ore. Carbonates (or silicates) can be reduced by heating with charcoal or wood and a little help from a blast of air. However the sulphides must be roasted first to burn off impurities (mainly sulphur, arsenic and antimony), and a flux such as quartzite, feldspar or limestone must be used in the smelting process. Carbonate ores did not require pre-smelting roasting or a flux. In the earliest furnaces the metallic copper was probably only partially smelted and would have been found as small prills, not as a large conjoined mass. They would have had to crush the slag to free the prills, and then re-smelt them to make purer, larger fragments. These fragments would be cold or hot hammered (with a rock hammer). The copper would be quite soft, but when cold hammered it would harden and become stronger. If the copper was repeatedly softened by heating between bouts of hammering (forging), the heating step is called annealing. The heating prepares the copper for hot hammering, and the artefact can then be hardened by a final cold hammering. During the period ca. 4000-3500 BC improvements in the process were made. Firstly, there are signs that the copper ores were placed in a crucible inside the pit furnace. Secondly a flux was added; this could be charcoal, or for copper they could have added iron ore. A flux can do many things. The simplest is to act as a reducing agent to prevent oxides forming on the surface of the molten metal, while others absorb impurities into the slag. In smelting, adding a fluoride (fluorite) or limestone purges the metal of chemical impurities and renders the slag more liquid (less viscous). This produces a purer copper. It is possible that with a crucible and the use of a flux, the copper could be removed from the pit furnace as a liquid and run into an ingot mould. Running the liquid copper into a hollow clay or sand shape was probably the first step towards developing the lost-wax casting technique. So by ca. 3500 BC the pit furnace technique would involve spooning out the slag from the copper in a crucible, and the addition of a flux, before reheating. Then came either cooling and forging, or pouring the liquid copper into a mould.
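To make the carbonate route a little more concrete, a hedged and much-simplified version of what happens to a carbonate ore like malachite in the charcoal fire is:
    Cu2CO3(OH)2 → 2CuO + CO2 + H2O (the carbonate breaks down on heating)
    2CuO + C → 2Cu + CO2 (the charcoal, and the carbon monoxide it generates, then reduce the oxide to metal)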


Whetstone

They would also have started the entire process by crushing the original copper ore; this process is called comminution, the reduction of solid material by crushing, grinding, cutting and vibrating. Above we can see a Bronze Age 'whetstone' actually used with a stone hammer to crush the ore. The first problem was finding the right ore and extracting it from the waste rock, etc. Even a rich mine will only have at best about 4% copper ores. Today it is between 1% and 0.8%, which is why enormous tonnages of ore have to be moved and processed to recover comparatively small amounts of copper metal. The modern way to separate the copper ore from the waste is by “froth flotation” (or beneficiation), a process depending upon the fact that grains of different minerals “wet” in different ways. I have not found any mention of how this separation process happened in Mesopotamia, but what is certain is that it was all done by hand.
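As a rough worked example of the yield arithmetic (the recovery figure here is an illustrative assumption, not data from any particular site): at an ore grade of 1% copper and a recovery of, say, 50%, winning a single 150 gm ingot of the size mentioned earlier would require roughly 0.15 / (0.01 × 0.5) = 30 kg of ore to be mined, crushed and smelted.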

With the sulphides the ore would first need to be roasted. Today there are special roasting furnaces that recover the sulphur for the production of sulphuric acid. The next stage can be viewed as two steps. Firstly the ore could be smelted to what is called a copper 'matte', a mix of copper and iron sulphides, but with limited arsenic, antimony and silica content. Then this 'matte' could be smelted with charcoal and a siliceous flux. In ancient times these two stages would have been merged together and seen as one process. The flux used would have been quartz in the form of lumps or pebbles, and it served to remove the iron oxide. In covering the smelting product it also helped reduce the risk of oxidation of the pure copper metal produced. This smelting process purifies the copper sulphide but does not produce pure copper. The copper 'matte' probably contained 30-40% copper, and after the second smelt the resultant metal was probably 60-75% copper. In the jargon the 'matte' is called 'blue metal' if it still has some iron in it, 'pimple metal' if it is richer in copper, and 'fine metal' if it contains comparatively pure copper sulphide. It was possible to shorten the cycle if the ores were very rich pyrites with high copper content, or if the sulphides were mixed with carbonate or oxide ores.
In any case the 'blue metal' could be economically transported and traded, but usually it would be refined on the same site. The next stage would be to
re-smelt the 'blue metal' again with charcoal, flux, and an air blast. The objective here is to remove the remaining sulphides and combine them with a flux to form the slag. The blast of air oxidises the sulphides and the sulphur escapes as sulphurous gases. The charcoal reduces the copper oxide to copper metal. What you have at the end is copper metal, a slag, and some remaining unreduced 'matte' which can still be recycled. This copper is called 'blister copper' (or occasionally 'black copper') and contains about 95-97% copper, with some impurities of gold, silver, iron, lead, arsenic, antimony, nickel, cobalt, zinc and tin (most of these impurities would have remained unknown in ancient times).
It is certain that some people would have gone through this process in order to finally extract the gold or silver. However this 'black copper' was an easily traded item. In fact these slabs were found on almost every site that smelted or worked copper. It was this type of basic copper product that would be refined again and worked by your 'local' smith.
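A hedged, much-simplified way of writing the roasting and 'matte' conversion just described (real ores and mattes are far messier mixtures) is:
    2Cu2S + 3O2 → 2Cu2O + 2SO2 (the air blast roasts the sulphide, driving off sulphur dioxide)
    Cu2S + 2Cu2O → 6Cu + SO2 (the remaining sulphide then reacts with the oxide to give metallic copper)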

The last stage would be the final refining of the 'black copper', which is a delicate and complex process. The idea is to remove the final impurities by combining oxidation, 'slagging', and volatilisation. First the 'black copper' is heated and oxidised by blast air, and the molten metal is continuously agitated. This 'flapping' is continued until a thin layer of copper oxide is formed in the molten metal. At that moment the impurities of iron, zinc, tin and cobalt and the traces of sulphur are almost completely removed in the slag. The copper metal still contains the silver and gold present in the ore, as well as traces of arsenic and nickel. It is now necessary to reduce the copper oxide present (otherwise the copper would be brittle and useless). This is done by 'poling', that is forcing green logs or trees under the molten metal. The wood reduces the oxide formed and the product is called 'tough-pitch copper'. This last step is very delicate as it is easy to refine the copper for too long, causing it to absorb the gases from the wood and producing 'over-poled' copper. The refined copper is now from 98% to better than 99.5% purity, depending on the percentage of precious metals present.
It is quite possible that the Romans understood the principle of refining by 'poling', but in Mesopotamia refining was more likely conducted by simply smelting the 'black copper' again with charcoal over a crucible or in a charcoal fire with blast air. The purities mentioned above would not be achieved.

We have already mentioned the possibility of extracting precious metals from copper (if it was worth it) by 'liquation'. First the copper is alloyed with lead. Then the alloy is heated to just above the melting point of lead (ca. 327°C, well below that of copper). The lead then 'sweats' out of the alloy and at the same time carries away those impurities that are very soluble in lead, among which the precious metals are prominent.

In antique times this description of the stages in the refinement of copper would certainly have been far more complicated, since each stage would actually have been seen as a multitude of individual steps conditioned entirely by experience built up over decades of trial and error. The smith of the day would have had to follow a complex 'flow sheet' or, as most did, sacrifice efficiency and extract less copper from the ores. The key message is that, no matter how you look at it, a smith wanting to produce high-quality copper was obliged to go through these steps; a simpler process either would not work or was simply far less efficient.

And inefficiency was often the name of the game. It is true that the Romans more often than not worked with and traded sulphides, although smooth, sulphur-free cakes of refined copper can also be found. However, some analyses of Spanish-Roman slag show that it contained up to 50% wasted copper.

Modern copper smelting


As an aside, the modern view of copper smelting involves the following processes. At the formation of the Earth, copper concentrations were around 0.0058%, and it is through a process of hydrothermal alteration (also known as wallrock alteration), in which mineral concentrations are altered by reactions accompanying the flow of heated aqueous fluids along fractures and grain boundaries, that richer deposits were formed. In the case of copper this alteration created porphyry copper deposits, in which copper concentrations can be between 0.2% and 6% (574 known deposits are at the surface). We learn that large porphyry deposits are associated with calc-alkaline intrusions, as is the case for gold-rich deposits as well. So the first task is to find deposits of copper ore, and the second task is to mine the deposit to extract the ore (mining aims at copper concentrations of 0.5% to 6%). The next step is comminution, or pulverising the ore. This prepares the ore for beneficiation, or the removal of the gangue (worthless) minerals. The technique used today is froth flotation. The idea is to reduce the amount of ore that needs to be handled, transported and smelted. The next step is smelting, and the product is copper matte, a molten metal sulfide. Smelting is the heat-induced separation of complex sulfides into copper sulfides (and some iron sulfides), and the removal of sulfur as an off-gas. The gangue is removed as slag. The copper concentration can be between 30% and in excess of 50%, depending upon the quality of the ore and the efficiency of the process. The next process is converting, to produce a crude copper metal and more slag. This removes most of the remaining iron and sulfur, through a process of oxidation of copper sulfide to elemental copper. This involves adding a flux (mainly silica) which reacts with the iron oxide to produce a light slag which can be poured off the top of the molten copper. The process, matte-making plus converting, is repeated several times in order to obtain a purified copper sulfide. Then oxygen is added to oxidise the copper sulfide and separate the copper from the sulfur, resulting in a blister copper with better than 95% purity. Additional refining can be performed by further removal of oxygen through the introduction of carbon, or the removal of sulfur using injected air (removal of the blisters). The final step would be electro-refining, which removes metallic impurities, resulting in 99.99% pure copper. See copper extraction techniques for more information. There is also a hydro-metallurgical process that involves leaching, the dissolution of copper from the ore in a sulfuric acid solvent (in-situ leaching can also be used for copper). This creates copper concentrates of between 20% and 50%. The next step is cementation, which is the precipitation of copper using iron. This can result in copper concentrations of 85-90%. This is followed by a process of solvent extraction, and then on to electro-refining. For an interesting overview, check out “Copper Production Technology”.
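As a minimal sketch of the pipeline just described, the short fragment below (Python) tracks how much copper survives each stage for a notional 1,000 tonnes of ore. The 1% grade sits inside the range quoted above, but the stage recoveries are purely illustrative assumptions, not industry figures.

    # Track the copper through the stages described above (illustrative only).
    ore_tonnes = 1000.0
    grade = 0.01                       # 1% Cu in the mined ore (the text quotes 0.5%-6%)
    stage_recoveries = [
        ("froth flotation (beneficiation)", 0.90),   # assumed recovery
        ("smelting to matte",               0.96),   # assumed recovery
        ("converting to blister copper",    0.98),   # assumed recovery
        ("electro-refining",                0.995),  # assumed recovery
    ]
    copper_t = ore_tonnes * grade      # copper contained in the ore: 10 t
    print(f"contained copper: {copper_t:.2f} t")
    for stage, recovery in stage_recoveries:
        copper_t *= recovery
        print(f"after {stage:33s}: {copper_t:.2f} t")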


As far as I can understand it, just putting copper ore in a fire pit might partially melt the copper, but what would be left would be difficult to exploit properly. Man innovated by trying to find ways to increase the temperature in the pit furnace. So next they would make a 'tuyère', a kind of tube through which air could be blown into the furnace (we can see a reconstruction below, along with the turf cap). They could make this with a mix of clay and chopped plant fibre (or dung). Dried, or just partially fired, it serves well enough as a ceramic. You would be looking for an air channel of about 15 mm diameter. Air could be blasted into the pit using a bellows or a blowpipe.

Pit, Collar and Tuyère

A pit of about one cubic metre would be sufficient (the demonstration above is much smaller). The tuyère was placed into one side of the pit, entering at a 60° angle, but not too deep. It is important that a collar was created for the tuyère, so that it could be sealed later in the process. Once working, the whole furnace, including the tuyère, had to work as a pressurised system in order to attain the temperature necessary to properly smelt copper. For a small furnace of the type shown above a wooden blow pipe (made of elder) would have been connected to the tuyère.
The ore would have been crushed, and the furnace charged with kindling and charcoal. It would appear that the early furnaces were covered with turf, which has excellent refractory characteristics. The fire would have been started, and the crushed ore added by hand. The furnace would be capped with turf, and the smith would build up and control the pressure inside using the blowpipe. Larger furnaces would have required several blowpipes or bellows. The charcoal would have been regularly compacted, and additional wet charcoal and charcoal dust would be added. This helps create the reducing (carbon monoxide rich) atmosphere needed to properly smelt copper ore. This process creates 'sick flames', a purple/oily flame rather than the usual yellow/orange. The whole process would take in excess of 90 minutes. You would find copper prills adhering to slag. If the ore was only partially smelted you would have reddish 'blister copper' which could be broken with a stone; you just needed another 30 minutes in the furnace. If what you had did not crush easily, you had your copper metal. These pit furnaces appear very rudimentary, and just look like holes in the ground, but they represented quite a complex and optimised metallurgical process.
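As a hedged aside on what a 'reducing atmosphere' means here: deep in the charcoal bed the oxygen is consumed and the carbon dioxide formed reacts with the hot charcoal to regenerate carbon monoxide, e.g.
    2C + O2 → 2CO (incomplete combustion where the air enters)
    C + CO2 → 2CO (the hot charcoal converts CO2 back into CO)
and it is this CO-rich gas that does the actual reduction of the ore higher up in the furnace.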

The Egyptians added a slight modification to the same technique. The pit would be lined with stones and built on a slope. They would also blow air into the furnace using a foot pump. And because they were on a slope they would have a small channel in the base of the furnace so that the liquid copper could flow out into a small collection dip. The slag would remain in the furnace, although some would also collect in the copper puddle in the collection dip.

The making of stone beads, amulets, figurines, small vessels and cylinder seals is often called the art of
lapidary (see also hardstone carving). One of the most important tools was the drill. The earliest were made of flint, which were later replaced by copper or bronze drills. Flint drills used for making stone beads date from ca. 7000 BC, and have been found well into the period ca. 2800-2200 BC. It is almost impossible to find copper or bronze drills, because they could be repaired or recycled (the only find so far dates from ca. 2350 BC and was a pack of sealed cutting blanks). However, an analysis of the cutting patterns on seals suggests that copper and bronze drills were regularly used ca. 2000 BC. The abrasive used was probably crushed quartz (sand) or emery, and a lubricant such as water or oil was used. There is evidence to suggest that the harder copper or bronze tips were also used earlier for very hard stone such as hematite.

The transition from crucible smelting to
pit furnace smelting could have taken up to 1,500 years. But the “jump” was quite pronounced. Pit furnace smelting, when it became available, rapidly replaced crucible smelting. The crucibles always remained less than 15 cm in diameter (usually around 10 cm in diameter), whereas full-sized pit furnaces were at least 30 cm in diameter. Almost certainly one reason involved the temperature attained in the crucible as compared to a pit furnace. In very simple terms, the crucible was heated by a fire outside the crucible, whereas the pit furnace is actually heated from within, by mixing the charcoal and ore. Burning charcoal with copper ore produces carbon monoxide, which is a reducing agent for copper smelting (see copper extraction techniques). So the story goes that the crucible technique would normally have produced only a few grams of copper at a time. Enough motivation for early metallurgists to hunt for improvements. One major improvement could be so-called co-smelting. By mixing copper ores, and including copper sulphide ores, the sulphide can be used as the reducing agent. This would allow the crucible to be filled completely with copper oxide ore, thereby improving the yield. In fact copper found in many places shows a high sulphide content, because of the inclusion of sulphide ores. This evidence clearly shows that prehistoric smelters intentionally added sulphide ores as a reducing agent. They would not have understood the meaning of a 'reducing agent', but they would have developed the technique by trial and error (this is another example of the 'science of crafts'). An important additional point is that sulphide ores (and not oxide ores) frequently include arsenic (and arsenical bronze is an alloy of arsenic and copper). We also have to note that trying to obtain copper exclusively from sulphide ores in a crucible does not work, since you would need to roast the ore first at around 800°C (dehydrating the ore and igniting the sulphur, removing it prior to the oxidation of the copper salts), a process that would not have been known at the beginning.
So crucible smelting of ores would probably have remained a limited activity compared to just using native copper. You needed high-grade oxide ores, and the yields were limited. This is why experts believe that furnace smelting was not an extension or evolution of crucible smelting, but a totally alternative technology.
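One idealised (and heavily simplified) way of writing the co-smelting idea described above, with the sulphide ore acting as the reducing agent for the oxidic copper, is:
    Cu2S + 2CuO → 4Cu + SO2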

The first evidence of copper smelting dates from ca. 5000 BC in the Levant. But there is one problem: native copper can't be found in the Levant (remember that native copper is metallic copper found in its natural metallic form, as opposed to being in its oxidised state and mixed with other elements). Some experts suggest that the technology arrived from the Caucasus or Anatolia; others note that there is no evidence of this. The earliest archaic furnace comes from
Timna (an area rich in copper ores), and dates from ca. 5000 BC. Slags have been analysed over the period ca. 5000-4200 BC, and show three distinct development phases.

The most ancient slags are extremely heterogeneous, and contain about 10% unsmelted
cuprite. The mix would have been very viscous and the slag would have had to be crushed to remove the copper prills. Later, at the same sites, the slag became more homogeneous, and less viscous, suggesting an improved smelting process with furnaces reaching higher temperatures. A small copper ingot would have been created in the bottom of the furnace. The third phase was a true smelting process. Here the copper would have been liquid, and there are signs that the mineral matrix included manganese, magnesium and calcium. Here we see sulphide ore being used as a reducing agent. There are clear signs that an arsenic-copper alloy was intentionally produced (arsenical sulphide ores were being imported).

So copper ore was smelted in a furnace, but crucibles were used to remelt, purify and cast the copper. By ca. 4800 BC copper production started to be separated into specialised stages, i.e. ore preparation, smelting, copper purification, and ingot production. The argument is that, in the Levant, furnace smelting developed independently of crucible smelting. Rather than furnace smelting developing from crucible smelting, it actually migrated from the Levant, and simply replaced the less efficient technique. By ca. 4500 BC furnace smelting became fairly common, and it had replaced crucible smelting by ca. 4200 BC. The migration of furnace smelting outside the Levant has been linked with sudden and extensive metallurgical activity elsewhere, and with signs of the emergence of a proto-industry with production processes divided into specialist stages.

The same hearth-draft furnaces (with our tuyère) as used in the Levant have been found as far away as Anatolia, and the use of iron oxides as fluxing agents became a commonly understood technique (and one not associated with crucible smelting). Experts have seen that arsenical sulphide ore was imported from the northern Euphrates. The many stamp seals found in these proto-industrial centres, and the lack of moulds, suggest that the copper produced was not just for local use. Similar seals have been found in places known for the production of copper artefacts, but which did not have sources of copper ore. However the story does not stop there, since other sites (devoid of copper ore resources) became proto-industrial centres for copper purification and casting.
So by ca. 4200 BC we had places in the northern Euphrates area specialised in smelting, purifying and working arsenical copper, and a 'copper road' for exporting to the southern Levant. Everything points to furnace technology, developed in the Levant, being exported to the Euphrates area and rapidly replacing their less effective crucible smelting.
From ca. 4000 BC the Levant was exporting copper products into the Nile Delta and Valley. And at the same time furnace technology started to appear in the Caucasus, the Iranian plateau, and the Aegean.

It took time to move from cold hammering native copper to smelting copper ores in a crucible. But in many ways it was just a new way of obtaining a substance that was already known, and at least initially they made artefacts that imitated stone tools. But with furnace smelting and casting the 'smith' could make completely new forms, and become completely separated from the traditions of past flint workers and stone cutters. Some expert texts focus on the reduction of sulphide ores as a separate, and possibly later, step than the smelting of carbonate ores. As we have already mentioned, both azurite and malachite are so-called copper carbonate ores (see below for the two ores found together).

Azurite Malachite Crystal

The 'mining' of blue and green ores would have been the first step, but they would have quickly encountered the yellow, grey and black sulphides.

Bornite

These sulphides generally occurred in deeper strata in the same mines. We know that Mesopotamian 'heating texts' existed as a kind of list of 'chemical tests'. They did 'fire tests' on everything by heating things to see what happened. Heating sulphides would have yielded a black glassy 'matte' that contained small particles of copper and would turn green when attacked by humidity. A second smelting would yield copper. They would be forced to develop a two stage process, first roasting then smelting, using two different types of furnaces. Finds of specialised furnaces and lumps of semi-refined and pure copper throughout the Near East and Europe prove this.

Through ca. 3000-2000 BC
furnace metallurgy appeared in areas rich in tin ore. And it is clear that the extensive facilities for tin smelting greatly exceeded local needs, suggesting a large network for tin production and trade. Furnace metallurgy was taken up and developed by numerous Bronze Age cultures, e.g. the Maykop culture (ca. 3700-3000 BC), the Afanasevo culture (ca. 3500-2500 BC), the Poltavka culture (ca. 2700-2100 BC), the Sintashta culture (ca. 2100-1800 BC), and the Andronovo culture (ca. 2000-900 BC). The technology migrated to China, and through the Iranian plateau into the Indus Valley. In the same period furnace smelting appeared also in Crete and the Cyclades, then on to Sardinia, Italy, southern France and North Africa (following the same spread pattern as the Bell Beaker culture who are known to have used furnace technology). Around 2300-2200 BC furnace smelting arrived in Rio Tinto in the southern Iberian peninsula, and with the same proto-industrial complex (smelting, purification, and casting). Between ca. 2000-1000 BC furnace smelting arrived in the British Isles, Scandinavia, and Finland, as well as in sub-Saharan Africa, and it also spread throughout China, into Korea, and on to Japan.

So initially
furnace technology moved from the Levant to places with new ore resources. Transport and trade links were created back to the Levant, and in northern Mesopotamia small fortified settlements grew up around furnace smelters. Experts feel that these were small colonies of migrant smelters. Some centres in this metallurgical network linking the Levant with the northern Euphrates were devoid of copper mining, but were already important trading points for obsidian. In some mining areas fortified settlements were created exclusively for metallurgy specialists, and all the copper produced was sent away to other sites for casting and manufacturing. It was the Canaanites (from the Levant) who, throughout the Copper Age, Bronze Age and Iron Age, specialised in mining and the production of metals (copper, silver, gold, and tin), and who diffused the furnace technologies throughout the Mediterranean basin, and on to the Atlantic coast. In particular they controlled the trade of tin right from the Iberian peninsula to the Indus Valley (the Canaanite port towns are often called by the classical Greek name, Phoenicia).

We should not forget that copper was used to produce a vast array of artefacts, e.g. in agriculture, in the so-called
secondary products revolution, and in burial customs. Many of these artefacts saw the light of day for the first time in the Levant, showing their intimate relationship with furnace metallurgy. Copper tools replaced tools made of flint. Just think of the impact of copper on agriculture, the wheel, boats, and stone cutting. The smiths who smelted copper in crucibles had a prestigious status, and the arrival of furnace smelting reinforced that status (which lasted well into the Middle Ages and even into the Industrial Revolution). That status was more associated with smelting than with the actual process of metalworking. Those who could create a new material from a simple oxide ore were attributed magical powers. Some experts suggest that the smelting of copper ore in a furnace was the key civilising dimension of metallurgy, not the artefacts made from the copper.

For a far more detailed analysis check out “
From Metallurgy to Bronze Age Civilisations: The Synthetic Theory”.


Just for fun, possibly the first complaint ever written was on a Babylonian clay tablet of ca. 1750 BC, in which a merchant complains about the poor quality of some copper ingots he was buying.


It is impossible to give a reliable estimate of the production of the most ancient copper mines; however, two tentative estimates do exist. The total copper output of Egypt (including Sinai) over a 1,400 year period has been estimated at 10,000 tons (ca. 7.7 tons/year). The total copper production of a Bronze Age culture around Mitterberg in modern-day Austria has been estimated over a 1,200 year period as 20,000 tons (ca. 17 tons/year).

Copper was not found in the Merimde Culture (ca. 4800-4300 BC) of Lower Egypt, but some copper objects (beads of narrow ribbon copper and a stout pin or borer, along with needles and a fishhook) have been attributed to the Badarian Culture (ca. 5000-4000 BC) in Upper Egypt, and dated before the gold and silver artefacts from the same region. This native copper was fashioned by cold working and in imitation of flint and bone implements. During the later Amratian Culture (ca. 4400-3500 BC), also in Upper Egypt, they made more complex copper artefacts (adzes, bracelets, rings, chisels, harpoon heads, needles and tweezers). The Gerzean Culture (ca. 3500-3200 BC) was known to have had contact with Mesopotamia, and they made practical copper weapons and tools, such as flat axes, double-edged knives, harpoons, rhomboid daggers, and flat tanged spearheads. One of the axes was analysed and found to have been cast of impure copper, then hammered and annealed by heat treatment under 800°C. Afterwards its cutting edge was hammered to increase its hardness. Where did the early Egyptians learn about copper metallurgy? Experts have suggested Asiatic invaders, the Caucasus, North Syria, Cyprus, or simply a domestic development of the Gerzean (who are said to have come from the East). Copper was not universally adopted in Egypt. By Dynastic times daggers were made of copper but arrowheads were still made of flint. When copper arrowheads finally appear they are quickly replaced by bronze arrowheads (yet it is the flint arrowhead that had the most penetrating power). In the Old Kingdom the King’s weapons were very often still made of stone, and so we find the usual stone mace-head side by side with the copper battle-axe (obviously for traditional reasons).
The Egyptian copper ores were easily reducible ores, and ferruginous and siliceous sands for fluxes abounded. The main ore of the Sinai was a friable sandstone with nodules of malachite and chrysocolla, which by crushing and sieving could easily be concentrated. Azurite was mined in Egypt for copper and pigments, and malachite for eye-paint, mural painting, and glazes.
Copper was smelted as early as the Middle Pre-Dynastic period, as is proved by the manganese content of a copper axe-head of that date. This smelting in a charcoal fire with the blowpipe is pictured in some tombs. Casting, followed by cold hammering of the cutting edge, seems to have been the standard method of manufacturing tools and weapons. Annealing was not too easy in those days. Anvils were just flat pieces of diorite, basalt or granite, and the hammer was usually a piece of stone without a shaft. Only in the Iron Age did the shafted hammer come into use in metallurgy. This is strange and somehow significant, because stone-cutters and miners almost immediately developed shafted hammers, but somehow it took centuries for them to arrive in the hands of the smith! Cold hammered copper sheet (around a wooden core) was often employed for water pipes, ewers and basins. Spouts and other parts were often cast and inserted with rivets or nails. Open mould casting was soon practised with stone (steatite or serpentine) moulds if the objects were meant as mass products. Core-casting, using a core of clay and charcoal, came to the fore with bronze. Egyptian wall-paintings showed new copper as red, and copper tools in blue.
Turning for a moment to the Sinai mines: by the Egyptian First Dynasty they had become an important source of copper (it is said they gave out ca. 1175 BC). In the Sinai the copper ores extend to a depth of 90-100 meters and the veins of ore varied from 60 cm to 150 cm thick (they were extremely rich and assayed at 5-15% copper). The galleries were excavated leaving rock columns to support the roofs. Most were horizontal and ventilated with vertical air-shafts.
I found a reference to the Egyptian mining season lasting from mid-January to mid-May, thus avoiding the hot season. This appeared to be partly to do with the work itself being difficult in the summer heat, and partly to do with a superstition that copper, reduced in the heat of the summer sun, would have a 'bad skin' and not the usual hard skin (a thin oxide layer on the surface of the cast copper).

Enormous quantities of copper are mentioned in the Papyrus Harris amongst Ramses III’s benefactions to the gods. Over a period of 31 years the god
Amun received no less than 2.4 tons of copper, and all the gods received 15.6 tons (total 18.5 tons) as “black copper, copper in vessels and scraps, black copper for balances, copper in beaten work, copper in vessels or simply copper", apart from the usual statues in copper, doors of cedar mounted with copper and other gifts.


Texts belonging to the period throw some light on the organisation of metallurgy in the
Sumerian cities. The metal was delivered to a central storehouse “where it is kept”. The copper-smiths got their orders and their material, for which a receipt was written and kept. It would seem that quantities of 10 kg of copper were still worth mentioning, and this tells us that we should not overestimate copper production in Mesopotamia. Trade connections with Cappadocia and Syria were firmly established, and in that period Mesopotamia was already a flourishing and well-known centre of the copper and bronze industry. Sumerian craftsmen were famous and copper was now much cheaper than in Egypt. Copper also served as a medium of exchange, either in bars or as weights of metal in animal form. Imports of refined Cyprian copper became very common ca. 1500 BC, and exports of copper and bronze to Syria were common in Mesopotamian texts from ca. 1600 BC. There are suggestions that smiths from the foothills of the Armenian Mountains worked as metallurgists in Assyrian and Babylonian towns.

Copper of the North and West was often brought back as booty by the Assyrian armies. The Assyrian records state that one leader took 180 vessels of bronze and 5 bowls of copper from one tribe, and 60 vessels of bronze, bowls of copper and great cauldrons of copper from another. Another Assyrian leader received 130 talents of copper as a tribute from one town, 30 copper pans from another, 40 copper pans from yet another and 50 copper vessels from a larger city. The list goes on; copper was a very desirable form of plunder or tribute. Sargon II took from the temple in Musasir no less than “3,600 talents of crude copper, 25,212 shields of bronze, great and small, 1,514 lances, 305,412 daggers, 607 basins, all of bronze”.

Oddly, metallurgy left few traces in Mesopotamian religious texts. Apart from such poetic expressions as "Your fame may shine like glistening copper” or "Your sorrow may flow like molten copper” we do not find direct associations between metals, colours and planets as often supposed even in Babylonian times!
A special god of the copper-smiths as an emanation of
Ea (Enki) is mentioned, but 'he' was the god of water, knowledge, crafts, creation and mischief. The old fire-god Girru (or Gibil) is mentioned as "the refiner of gold and silver the mixer of copper and siparru” (siparru may just mean 'metal'). There is a reference to Nindara, who shone like copper and who came out of the earth where the metal is found "covered with solid copper like a skin”.

Mesopotamian Tin


Tin (and antimony and arsenic) were found in alloys. Tin appears to have had little importance until it was alloyed with copper to make bronze. Some inlays were made of tin, and it might have had a role as a poor man's silver (but only post ca. 1500 BC). Despite tin being essential to the bronze alloy, it had to wait at least 1,000 years before tin metal was identified as such. Tin had no specific role, to the point where it did not even appear in magical texts (until the Greeks). Tin was simply confused with lead, or occasionally called 'solid mercury'.
In Mesopotamia 'true' bronze had appeared by 3000 BC, 'true' in the sense of being an alloy that was created intentionally by intelligently mixing the ores and managing the smelting process, etc. The
Royal Cemetery of Ur is often mentioned when displaying fine pieces made in gold, but there are also some magnificent pieces made in bronze, e.g. the bull's head below.

Bronze Bulls Head

Around 1500 BC cassiterite was finally reduced and a tin-metal produced industrially to be mixed with copper to form bronze. Finally the dose of tin could be controlled and alloys created for different purposes, e.g. weapons, mirrors, bells, statues, etc. By this time there was a very active and profitable trade in bronze items. For example Cyprus would import tin and export bronze as a valuable trading artefact. Later the Assyrian kings would take 'white bronze' as a tribute from their northern territories.

Hardness of tools and weapons


Before we enter the Bronze Age and then the Iron Age, it is perhaps useful to look at why Man actually 'evolved' from stone through to mild-carbon steel (as a follow-on to wrought iron). Numerous experts highlight economic constraints, such as the availability and price of tin for making copper-tin (bronze) alloys. But the general consensus is that tools and weapons made with low-carbon steel were better than bronze tools and weapons, and so on, down to stone.

Man finally started to use 'native' copper ca. 8000 BC, and the first artefacts were made by hammering, followed by annealing at high temperature. Being able to heat the metal to ca. 1100°C in a charcoal fire would have melted the metal and permitted casting as an alternative shaping method. This led to smelting, and today it is often difficult to distinguish between native copper and smelted copper, and to identify which ores were used first. Copper metal production really started in the 4th millennium BC, using copper oxide deposits. Later in the Bronze Age copper sulphide ores were used, but that involved introducing an additional stage, roasting in an oxidising atmosphere. Then the product would be reduced with charcoal in a two-stage process, reduction of the raw ore to 'matte', followed by refining. During this period copper-arsenic alloys were produced and used throughout the Near East. Tin bronzes were first introduced ca. 2600-2300 BC, and by ca. 1500 BC they had replaced copper-arsenic alloys. It is unclear if copper-arsenic alloys were intentionally created, but it is clear that an alloy of 2% tin in copper does not occur in nature and was deliberately created. The Iron Age is said to have started ca. 1200 BC with wrought iron, and to have evolved to quench-hardened low-carbon (or mild) steel possibly as early as ca. 400 BC.

What we are interested in is the idea that copper was better than stone, but bronze was better than copper, and of course iron (or mildly-carbonised steel) was better than bronze. Tools and weapons were judged by their hardness or the way they withstood the work demanded of them. They were expected to be strong and resist deformation, and be tough and resist breaking.

Defining what we mean by 'hardness' is harder than you think!

Wikipedia defines
hardness as resistance to plastic deformation, i.e. irreversible deformation. And immediately we learn that hardness is dependent on ductility, elasticity, stiffness, plasticity, strain, strength, toughness, viscoelasticity, and viscosity. We generally think of hardness as being able to resist penetration or indentation, and also an ability to resist wear, abrasion, scratching or cutting. It's difficult to know where plasticity or strain might fit in with wear and tear, or scratches.

Fortunately we just want to look at hardness.


The 'classical' diagram used to present the
strength of materials plots stress (the internal force per unit area within a material) against strain (another word for mechanical deformation). Strength is the ability of a material to resist deformation under the action of a tensile, compressive or shear force, and is usually quoted as the maximum load that can be borne before permanent deformation or failure (the yield or ultimate strength). Deformation can involve the application of a tensile (pulling) force, a compressive (pushing) force, shear, bending or torsion (twisting).
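To make these terms concrete, here is a minimal sketch (my own illustration, not from the source) of the standard definitions: stress is force per unit cross-sectional area, strain is the relative change in length, and in the linear region the two are related by Young's modulus. The bar dimensions, load and extension below are invented numbers, chosen only so the result lands near a copper-like stiffness.

```python
# Illustration only: standard definitions of stress, strain and Young's modulus.
# All the input numbers are assumed for the example, not measurements from the text.

force = 5_000.0          # N, tensile load applied to a bar (assumed)
area = 1e-4              # m^2, cross-section of a 10 mm x 10 mm bar (assumed)
length = 1.0             # m, original length of the bar (assumed)
extension = 0.00043      # m, stretch measured under that load (assumed)

stress = force / area            # Pa, internal force per unit area
strain = extension / length      # dimensionless, relative deformation
E = stress / strain              # Pa, Young's modulus (slope of the linear region)

print(f"stress ~ {stress/1e6:.0f} MPa, strain ~ {strain:.2%}, E ~ {E/1e9:.0f} GPa")
```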

A Typical Stress Strain Curve

When applying a load to a material (object, artefact, …), initially the relationship between stress and strain is linear (zone A). While the relationship remains linear, we are in the elastic region of the material. In this region, when the stress is removed, the material returns to its original shape. Elasticity is thus the ability of a material to recover its shape once a pressure or load is removed (it is often loosely equated with ductility, but see below). As the load is increased we have elastic deformation, which just means that the original shape is recovered once the applied force is removed (this is often called linear elasticity). This elastic deformation is also often called resilience, and pure annealed copper is not resilient whereas spring steel is highly resilient. Ductile metals such as copper, silver, and gold have a large deformation range. Mild-steel is also quite ductile, but cast iron, like stone, glass and ceramics, is brittle and subject to rupture (in simple terms it breaks easily). Phosphor bronze is often cited as one of the most resilient of the traditional metals.

A popular misconception is that all materials that bend are 'weak' and those that don't are 'strong'. In reality, many materials that undergo large elastic and plastic deformations, such as steel, can absorb stresses that would break brittle materials such as glass, which have minimal plastic deformation ranges.

If you look up malleability on Wikipedia you are redirected to ductility, but the two are in fact subtly different. Specialists tend to think of ductility as the property of a material to stretch without being damaged (as in being 'drawn' when making wires). This is related to tensile strength, the strength of something when you pull on it. Malleability is the property of deforming under compressive stress (pressure), as in being cold rolled or beaten into sheets. So malleability involves a material supporting a large deformation when subjected to a compressive force (push), whereas ductility is deformation under a tensile force (pull). Both of these are features of plasticity (see non-reversible change below), i.e. if you have drawn a wire or beaten a panel the material does not return to its original shape. Elasticity is the opposite of plasticity, in that whatever force you have applied to a material, when it is removed the material regains its original shape (suffers no permanent damage).

One specific form of plasticity is '
creep' (or cold flow), a progressive deformation of a material under a constant static load maintained over a long period of time. So the material slowly deforms, the deformation being time-dependent and possibly temperature-assisted.

Ductility (how much a material can deform without breaking) is one of the two features that make up an object's
toughness, the other being the strength of the material. Toughness usually stands in opposition to hardness, and materials that are very hard (e.g. diamonds) are usually quite brittle.
Alongside elasticity we also have
stiffness, the ability to resist deformation, so the opposite of flexibility or pliability (easily bent). There are several measures of stiffness, including shear stiffness, and torsional stiffness. Ceramics are usually very stiff. Often stiff materials are hard, but more fragile or brittle. Also a stiffer material tends to be more resilient.

With a high enough load, the relationship between stress and strain becomes non-linear, i.e. the extension of the material increases more rapidly than in the elastic region. Linear elastic behaviour becomes non-linear 'plastic' behaviour (between points B and C). The point beyond which the relationship between stress and strain becomes non-linear is called the yield strength (point B). Applying loads beyond the yield strength results in plastic deformation of the material (when the load or strain is removed some degree of extension remains). While the yield strength is thought of as a single number or point on the curve, in reality there is a small transition zone between the elastic and plastic regions, i.e. it is not an instantaneous transition. In the plastic region of the material, the part deforms permanently (non-reversibly), and will not return to its original shape when the stress is removed.
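Because that transition zone means there is no single sharp point to read off, engineers usually quote an 'offset' yield strength. The sketch below is my own illustration of the common 0.2% offset construction; the stress-strain data points are invented and the stiffness is simply a copper-like value, so none of the numbers come from the source.

```python
# Hypothetical illustration of the 0.2% offset method for estimating yield strength
# (point B in the curve described above). Data points are invented for the example.

def offset_yield(strain, stress, E, offset=0.002):
    """Stress where the measured curve meets the offset elastic line sigma = E*(eps - offset)."""
    for i in range(1, len(strain)):
        d0 = stress[i - 1] - E * (strain[i - 1] - offset)
        d1 = stress[i] - E * (strain[i] - offset)
        if d0 >= 0 > d1:                          # the curve crosses the offset line here
            t = d0 / (d0 - d1)                    # linear interpolation between the two points
            return stress[i - 1] + t * (stress[i] - stress[i - 1])
    return None                                   # no yield within the data range

E = 117e3                                         # MPa, a copper-like Young's modulus (assumed)
strain = [0.000, 0.0005, 0.001, 0.002, 0.004, 0.008, 0.016]
stress = [0.0,   58.0,   70.0,  78.0,  85.0,  92.0,  100.0]   # MPa, invented data
print(f"offset yield strength ~ {offset_yield(strain, stress, E):.0f} MPa")
```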

Yield strength (and strain hardening) sounds a little academic, but it is exploited by metal working techniques such as cutting, forging, rolling, pressing, bending, shearing, and extruding. If we take a 'perfect' material then we will have its theoretical maximum strength. But no material is perfect, all materials have inherent defects and dislocations. For example, the practical yield strength for copper is some 2,500 times less than its theoretical maximum strength, and for iron it is nearly 1,000 times weaker than the theoretical maximum limit (some types of nanofibres can approach their theoretical limit). This difference between theoretical and practical limits is not unusual and might still appear academic, but there are several ways to engineer an increase in the yield strength since it is very sensitive to the way the material has been processed.

Dislocation Movement

Above we have a slow motion view of a dislocation moving in a lattice (this is a so-called edge or 'T' dislocation). We can see the dislocation in A, and the shear stress being applied from the left at the top and from the right at the bottom. The dislocation is slipping or moving one plane at a time (the 'slip plane' is moving A > B > C > D). When the stress force is removed the dislocation has moved but only a few bonds were broken at any time, and viewed from the outside the array (and surface) may still appear un-distorted. Another way of moving in the lattice is a screw dislocation, and you can also mix edge and screw dislocations inside the same lattice. So you can imagine that there are a lot of ways these dislocations can move around depending upon how the forces are applied, and the result may still look unaffected from the outside.

Slip Planes

Now if you can imagine a nice regular array with these dislocations located in different places, you should be able to imagine that under force the dislocations can move easily in some directions, and less easily in others. Each way of 'slipping' is called a 'slip system' and each one can be a mix of edge and screw dislocations (this just depends on the way the atoms are 'packed' into the lattice). The more 'slip systems' a material has the more ductile it is, because extensive deformation is possible in many different ways. For example copper can have 12 different slip systems, whereas iron with specific impurities can have 24 different slip systems, but not all of them are available at room temperature.

We saw above that the
elastic deformation allowed the object to change shape under pressure, but to return to its original shape once the pressure is removed. This shape change is due to the stretching of the internal bonds within the material, but does not involve atoms moving in the material lattice. Plastic deformation involves the breaking of a limited number of atomic bonds by the movement of 'dislocations'. Breaking a large number of bonds simultaneously in a material (creating a fracture or crack) requires considerable force, but letting atoms slip past each other requires far less force. These dislocations have a preferred direction in a material (usually along the densest planes of atoms), and several parallel planes dislocating together can form 'slip bands' (as seen above).
Plastic deformation corresponds to the movement of a large number of dislocations. If we restrict or hinder the movement of dislocations we make the material harder (less ductile) and stronger, i.e. we extend the material's elastic deformation range.
One key way to increase the yield strength involves so-called '
work hardening', also called cold working and strain hardening. To make a material stronger and harder we need to stop dislocations moving, i.e. stop the material becoming plastic too quickly. One way is to introduce additional dislocations into the crystal structure of the material. These additional dislocations hinder the motion of other dislocations, thus increasing the elastic range of the material and thus its strength. Work hardening is just what it means, we have to add work to the material to make it harder. There are many ways to 'work' the material, e.g. forging by forcing a material into a die, rolling a sheet to make it thinner, drawing (pulling) a material through a die, and extruding or forcing a material through a die. Each process deforms the original material and increases the 'dislocation density', or the number of dislocations in a unit volume of the material. If we consider that a cubic millimetre of material might have 1,000 dislocations, then strongly deforming the material could increase the number of dislocations to 10 billion per cubic millimetre.
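As a rough way to see why more dislocations mean a stronger metal, materials textbooks use the Taylor hardening relation, in which the extra flow stress grows with the square root of the dislocation density. The relation and the copper-like constants below are general textbook values I am assuming for illustration; they are not taken from the source.

```python
# Sketch of the Taylor hardening relation: delta_sigma ~ alpha * G * b * sqrt(rho).
# G, b and alpha below are typical copper-like values, assumed for illustration only.
import math

G = 48e9         # Pa, shear modulus (assumed typical value for copper)
b = 2.56e-10     # m, Burgers vector (assumed typical value for copper)
alpha = 0.3      # dimensionless constant, typically quoted as 0.2-0.5

def taylor_strengthening(rho):
    """Extra flow stress (MPa) from a dislocation density rho (in 1/m^2, i.e. length per unit volume)."""
    return alpha * G * b * math.sqrt(rho) / 1e6

for rho in (1e10, 1e12, 1e14, 1e15):    # well annealed ... heavily cold-worked
    print(f"rho = {rho:.0e} per m^2  ->  ~{taylor_strengthening(rho):.1f} MPa extra strength")
```

The exact numbers matter less than the trend: multiplying the dislocation density by a large factor, as cold working does, raises the strength substantially.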

Another way to stop dislocations slipping is with so-called '
grain boundaries'. The 'grains' are in fact crystallites or almost microscopic crystals which are formed during the cooling of many types of materials (e.g. polycrystals). All the common metals and many rocks are polycrystals (see some examples below).

Polycrystalline Structures

Obviously the smaller the 'grains' the more interfaces there are and the more barriers there are to slip. So fine 'grain' materials are stronger, harder and tougher. And you can affect the grain sizes by the way the material is cooled (solidified) from its liquid phase, and also by plastic deformation followed by the right kind of heat treatment.
When you work harden a material and then reheat it, this is called
annealing. The basic idea is that this process increases a material's ductility and reduces its hardness, thus making it more workable. During the cold working the number of dislocations was increased, but a certain amount of internal energy was also stored inside as 'strain energy'. When the material (let's now talk specifically about a metal) is heated up and held at a high enough temperature, some of this internal energy is released as dislocated atoms move ('diffuse') and settle into lower 'strain energy' configurations (a phase called 'recovery'), resulting in a reduction in the number of dislocations (i.e. less hard and less strong, but more ductile). Held above its recrystallisation temperature, the metal then recrystallises into a new 'strain-free' state with 'grains' of lower dislocation density. The 'recrystallisation temperature' of a metal or alloy is typically between ⅓ and ½ of its melting point. Much depends upon the amount of prior work hardening, the 'grain' size, and the purity of the metal or composition of the alloy. Increasing the cold working will increase the rate of recrystallisation (reducing the time it takes to recrystallise), but there is a limit above which more cold working does not add anything. The recrystallisation temperature for pure copper is ca. 120°C (melting point 1085°C) and for iron is 450°C (melting point 1538°C). After recrystallisation, while the metal is still hot, strain-free 'grains' will continue to grow, consuming smaller 'grains'. So as the atoms in the grains migrate some 'grains' grow, others shrink and some disappear, and the number of 'grain' boundaries is reduced.
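As a quick sanity check of that '⅓ to ½ of the melting point' rule of thumb (my own arithmetic, not from the source): the fraction is normally taken on the absolute (Kelvin) scale, and very pure, heavily worked metals can recrystallise below the low end of the window, which is consistent with the 120°C quoted above for pure copper.

```python
# Rule-of-thumb check: recrystallisation temperature ~ 0.3-0.5 of the melting point,
# with the melting point taken on the absolute (Kelvin) scale.

def recrystallisation_window(melting_point_c):
    """Return the (0.3*Tm, 0.5*Tm) window converted back to Celsius."""
    tm_k = melting_point_c + 273.15
    return 0.3 * tm_k - 273.15, 0.5 * tm_k - 273.15

for metal, mp_c in (("copper", 1085), ("iron", 1538)):
    lo, hi = recrystallisation_window(mp_c)
    print(f"{metal}: roughly {lo:.0f}-{hi:.0f} degC")
```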

Work hardening happens every day. As an example, if we take a copper tube it will be quite ductile. If we slowly bend it (avoiding crimping) we are in fact work hardening it. Bending introduces strain into the copper, which introduces more dislocations, which makes the copper harder and stronger. That is why it is difficult to bend it back straight, or even to bend it further after stopping. If we then heat the copper to about 400°C, this annealing helps the copper release the internal strain created originally, new larger 'grains' will 'grow', and the still-bent copper will be easier to re-bend or straighten. This annealing is not the same as heat treating, which actually changes the crystal structure and is typically used with steel.

Yet another way to strengthen a metal is to
alloy it with impurity atoms. The impurities are forced into the lattice and impose a strain on the bonds between the 'host' atoms. This lattice strain between dislocations in the host and the impurity atoms restricts the movement of dislocations, making the alloy stronger and harder.

Returning to our stress-strain graph, as the load/stress continues to increase,
the material starts to fail, or “neck”. This is the ultimate strength of the material (point C). With a high enough load/stress applied, the part will eventually pull apart, fracture or fail (point D). See structural integrity and failure for a more complete description.
In science there is a constant desire to measure and compare, and in the field of hardness and strength there are a multitude of measurement systems and units.

Below we have just listed a few of the better known tests, but there is a multitude of specialist tests, and many of the other parameters are defined by equations.

For
hardness the most common measurement is indentation hardness, and the common indentation hardness scales are Rockwell, Vickers, Shore, and Brinell. For example, the Brinell hardness scale defines how resistant a sample is to deformation under a semi-static load from a hard indenter. The Vickers hardness test has one of the widest scales, and is used to test the hardness of all kinds of metallic materials (steel, non-ferrous metals, cemented carbide, sheet metal, etc.). The Knoop hardness test is widely used to test glass or ceramic materials. The Janka hardness test is used for wood. The Shore hardness test is used for polymers, including rubber. The Barcol hardness test is used for composite materials. There are also the Meyer hardness test and the Rockwell hardness test.
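To give one concrete example of how such a scale is defined, the Brinell number is calculated from the load and the size of the spherical indentation it leaves. This is the standard published formula (general engineering knowledge, not taken from the source), and the example values are invented.

```python
# Standard Brinell hardness formula: a ball of diameter D (mm) pressed with load P (kgf)
# leaves an indentation of diameter d (mm). Example values below are invented.
import math

def brinell_hardness(P_kgf, D_mm, d_mm):
    """Brinell hardness number (kgf/mm^2)."""
    return (2 * P_kgf) / (math.pi * D_mm * (D_mm - math.sqrt(D_mm**2 - d_mm**2)))

# A 10 mm ball, a 500 kgf load and a 3.2 mm indentation give a BHN of about 60.
print(f"BHN ~ {brinell_hardness(500, 10, 3.2):.0f}")
```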
But 'hardness' might also be defined as
scratch resistance (Mohs scale), or maybe as how elastic a material might be (the Leeb test measures the coefficient of restitution of a surface by looking at the speed of rebound of an impact body on a test surface).
Young's Modulus is a common measure of the stiffness or elasticity of a solid material, in the so-called linear elasticity region. Stiffness is not the same as strength (maximum elastic deformation), hardness (resistance) or toughness (energy absorption before fracture).
The
Charpy impact test determines the amount of energy absorbed by a material during fracture.

On the
Mohs scale the hardest of 'our Mesopotamian metals' are white gold (an alloy with silver, nickel, and lead), steel (alloyed with carbon) and iron, all at 4.0, compared with 9.3 for boron. Then we have arsenic (3.5), brass (3.0), bronze (3.0), copper (3.0), silver (2.5), gold (2.5), and then tin and lead both at 1.5. Hardness does not mean toughness: what may be hard to scratch could be very brittle and crack under pressure. Rock is not a single material, so the hardness of different rocks usually ranges from 3 through to 8 or 9 (with 1 for talc and 10 for diamond). For comparison, a steel blade is about 5.1 and petrified wood about 6.5.

Young's modulus describes how elastic or stiff something is, and goes from wood (along the grain) at 11 gigapascals (GPa), through hemp fibre (35 GPa), bronze (96-120 GPa), copper (117 GPa), wrought iron (190-210 GPa), and mild-steel (200 GPa). Something that was perfectly stiff would have an infinite Young's modulus (i.e. beyond diamond), and something that was very elastic would have a very low Young's modulus, e.g. rubber or polyethylene.

The
yield strength of some of 'our Mesopotamian metals' is measured in megapascals (MPa). A limb bone has a yield strength of ca. 104-121 MPa, whereas pure copper only has ca. 70 MPa, so your leg bone is in many ways stronger than copper. Brass is ca. 200 MPa, whereas cast iron is at best ca. 170 MPa. Steel is anywhere between 20 and 1650 MPa depending upon the type and treatment.
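To pull the last three paragraphs together, here is the same information gathered into one small structure so the metals can be compared side by side; only values actually quoted above are filled in, and anything not quoted is left as None rather than guessed.

```python
# Figures as quoted in the text above (Mohs hardness, Young's modulus, yield strength).
# None means the text does not give a value; ranges are kept as (low, high) tuples.
properties = {
    "copper":       {"mohs": 3.0,  "E_GPa": 117,        "yield_MPa": 70},
    "bronze":       {"mohs": 3.0,  "E_GPa": (96, 120),  "yield_MPa": None},
    "brass":        {"mohs": 3.0,  "E_GPa": None,       "yield_MPa": 200},
    "cast iron":    {"mohs": None, "E_GPa": None,       "yield_MPa": 170},
    "wrought iron": {"mohs": 4.0,  "E_GPa": (190, 210), "yield_MPa": None},
    "mild steel":   {"mohs": 4.0,  "E_GPa": 200,        "yield_MPa": (20, 1650)},
    "tin":          {"mohs": 1.5,  "E_GPa": None,       "yield_MPa": None},
    "lead":         {"mohs": 1.5,  "E_GPa": None,       "yield_MPa": None},
}
for name, p in properties.items():
    print(f"{name:12s} Mohs={p['mohs']}  E={p['E_GPa']} GPa  yield={p['yield_MPa']} MPa")
```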

But first let's look at what was possible. The most brilliant source of information I found is a webpage on
selection charts for materials and processes. These charts are material property charts used when designing projects, etc., and are quite complex, but we can isolate the information we need.

This first chart is for strength against density. This is all about identifying light, strong structural materials. For different classes of material 'strength' can mean different things, e.g. crushing strength for glass and ceramics (which is much higher than fracture or tensile strength), while for metals it is the
yield strength. Generally we can see that some ceramics can be stronger and lighter than steel, but that all the metals are stronger and heavier than natural materials. The key message is that properly treated copper alloys can be just as strong as steel, but are just a bit heavier.
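As a back-of-the-envelope illustration of what 'strength against density' means in practice, the sketch below computes the specific strength (yield strength divided by density). The copper, brass and cast iron yield strengths are the ones quoted earlier; the densities, and the bronze and steel figures, are typical values I have assumed for the example rather than numbers read off the chart.

```python
# Illustration of the "strength against density" idea: specific strength = yield
# strength / density. Copper, brass and cast iron yield strengths are the values
# quoted earlier in the text; the densities, and the bronze and steel figures, are
# typical values assumed for this example only.
materials = {
    #  name                yield strength (MPa)   density (kg/m^3, assumed)
    "pure copper":         (70,   8960),
    "brass":               (200,  8500),
    "cast iron":           (170,  7200),
    "worked tin bronze":   (350,  8800),   # assumed mid-range value
    "carbon steel":        (400,  7850),   # assumed mid-range of the 20-1650 MPa quoted above
}
for name, (sy, rho) in sorted(materials.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name:18s} specific strength ~ {sy * 1000 / rho:5.1f} kJ/kg")
```

The ordering echoes the chart's message: a well-worked bronze sits much closer to steel than soft copper does, at the price of a little extra weight.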

Strength against Density

The next chart shows the so-called fracture toughness against strength. This basically describes whether materials yield before breaking. What we can see is that copper alloys occupy a large potential space depending upon the type of alloy and how they are made. Copper alloys generally make better tools and weapons than cast iron, because cast iron fractures more easily even if it can be somewhat stronger. However both copper alloys and cast iron are clearly inferior to carbon-steels. And this chart highlights the enormous leap that Man made in moving from stone (non-technical ceramics) to metal tools and weapons.

As an aside, you can see three dotted-lines, the first (least slope) is for optimal elastic range, the second is for optimal strength, and the third line is for optimal light/strong. You 'slide' the line around, and all materials above the line meet your criteria. If you are interested in how these charts fit into the way engineers select appropriate materials for particular tasks, check out this website.

Fracture Toughness against Strength

One of the other charts in the collection (detail shown below) looks at wear rate (vertical axis) against hardness (horizontal axis). Here we can immediately see the downside in using copper (and the softer copper alloys) for tools and weapons: they wear out quickly. And you can also see the advantage of bronze and even cast iron: they are harder (and certainly more brittle) but they keep their shape and sharp edges much better. And on top of all that you can see that low-carbon steels have no substantial advantage over bronze in terms of wear resistance or hardness.

Wear Rate against Hardness


We have to be realistic, and look not at what might or should have been possible, but at what actually happened in the transition from the Stone Age to the Iron Age.

During the
Stone Age obsidian was used some 700,000 years ago, and Anatolian obsidian was used in the Levant and Mesopotamia perhaps as early as 12,500 BC (suggesting some kind of migration-trade-transport link). It may have been a prestige artefact because it was relatively rare and because, along with flint, it fractures to produce sharp blades. In fact obsidian requires less force than other lithologies to detach flakes, but it has limited strength. Heat treating fine-grained siliceous rock can improve its 'flakeability', but it still remained second to obsidian. Other types of rock were better suited to grinding and pecking (hundreds of small sharp blows) than to flaking. For blade manufacture, the lithic raw materials normally favoured were homogeneous and isotropic microcrystalline siliceous lithologies. In practical terms this meant that the knapper was able to detach flakes from any direction. One key feature of these artefacts is their brittleness (lack of plastic deformation) and the ease with which they would crack or break under compression or tension. Intentional heating of the stone improved its knappability (the rock became 'stiffer' and harder, but easier to fracture as in knapping). But the key was to heat slowly to about 200°C; rapid heating reduced knappability.

Ancient copper alloys, such as bronze, showed what is called remnant 'coring', which occurs when the cooling is not well managed and the exterior solidifies before the interior. This means that the percentage mix in the alloy is different in the centre and on the surface, i.e. they have different compositions. The result is poor mechanical performance. As a molten copper alloy cools the crystals grow in a kind of tree-like structure called 'dendrites', and their centres, which cool first, are usually richer in copper, whilst the spaces between are richer in the alloying element.

Dendrites

The microstructure is thermodynamically unstable and the alloys will homogenise over time. But at room temperature this can take thousands of years. In fact most specialists concur that bronze cooled at ambient temperature under normal conditions will rarely exhibit the best structural properties, and will be far from ideal.
Once cast the alloy was then work hardened, or beaten into a permanent new shape (what is called plastic deformation). Ancient smiths knew that heating up to about ⅓ of the melting temperature of the alloy would remove some of its hardness, i.e. internal stress is removed and new, larger 'grains' are promoted, making the alloy more ductile again. The microstructure of many Bronze Age artefacts still retains the 'coring', and the hammering and annealing can also be seen. In other words the work the ancients did was not perfect, and the only ones having a homogeneous distribution of tin within the copper were those recrystallised at 500°C for 20 minutes, and then annealed at 700°C for 5 to 10 minutes. Better still if the process cycle was repeated many times with a series of small thickness reductions, each followed by a period of annealing. The reality is that most smiths of the time annealed at low temperatures, probably ca. 500°C. There is an important additional point here. The idea was that the ancient smiths were an elite in possession of specialist knowledge (and specialist furnaces, etc.). But we can see that they were annealing in what might have been just a simple home hearth. It is certain that smelting was a specialised practice, but artefact production (including maintenance, repairs and repurposing) might as easily have been done in a domestic context.
Copper alloys exist with tin, antimony, arsenic and mixes of two or all three. In many cases there was a primary alloying component, and a mix of the others. Antimony appears more often in pendants, etc., possibly because it was easier for casting small items and it had a silver colour. Arsenical copper was better for dagger blades with hard cutting edges. Things changed with time, and in some areas the focus moved to tools and weapons. Some artefacts with multiple parts included different alloys suggesting a deliberate process, but some identical artefacts were made with different types of alloys, so who knows. On top of that many of the artefacts we have in our museums, etc. are funerary items so it is not clear if they were meant to be fully functional. Arsenic in copper increases its hardness and acts as an antioxidant, and at higher concentrations imparts a silver colour. About 5.5% arsenic gives the best compromise between hardness and ductility, and most arsenical copper artefacts fall well below 4%. On top of that up to 25% of the arsenic is in the form of mineral inclusions and does not contribute to the properties of the alloy. In some cases it could have been simply added as a colorant. It is quite possible that the use of arsenic was not deliberate. Or at least not initially deliberate.
In the early days of small-scale smelting they would use azurite and malachite taken from near-surface deposits. The temperature in the smelter was not that hot, and they did not use fluxes, so the very early copper was quite pure (but a lot was lost to waste). As demand picked up they would have had to mine copper-rich ores and that meant dealing with all sorts of mixtures, including the sulphides. That meant arsenic, antimony, nickel, iron, … might turn up in your copper purely by accident. Not tin, which does not occur naturally in useful concentrations with copper. It may well have taken them 1,000 years to work it all out, e.g. arsenic, antimony and nickel are fine, iron is not; you may get some iron with the sulphides, but you must avoid finding arsenic and iron together. So initially variations were large but the overall concentration of impurities was low. Later, with larger furnaces and a need to produce lots of copper, purity went down. So over time, everyone found out how to intentionally make arsenical copper. It made casting easier, the product was harder (up to a 30% improvement), work hardening worked better, and it looked nice with a silver sheen. Not everything was perfect, Ötzi the Iceman (ca. 3300 BC) had an axe made of 99.7% pure copper, but a considerable concentration of arsenic in his body and hair.
Tin increases hardness and the effect of work hardening, and reduces the melting point. Tin can be smelted as a metal and added to copper in a controlled way. And if you melt them together you have a tin-bronze. The amount of tin in the alloy was about 7% and rarely exceeded 10%, and it was often found with both antimony and arsenic. It looks as if the smiths added less tin when they knew that antimony was present. When antimony occurs along with nickel and silver it is possible that this occurred simply because the smiths were using the Fahlore ore. In some cases it looks as if the addition of antimony was deliberate and the tin was there because they smelted a tin-rich copper ore (but of course they did not know what antimony was). There are pointers to the fact that antimony and arsenic might have been added separately and intentionally. It looks as if over time the smiths intentionally created their alloys, and picked their impurities as a function of the artefact needed, i.e. arsenical copper and tin bronze for blades and tools, antimony copper for cast objects. Antimony was also used to add a silver look to the artefact. There is even the suggestion that alloys were appreciated as such, and not simply as a harder version of copper.

The key question in this particular section is about
hardness. This fantastic webpage mentions the work of Heather Lechtman who estimated the hardness of different alloys. I've copied her graphic below.

Heather Lechtman

We can see the difference between annealed and work hardened tin-copper, but we can also see that there is virtually no difference between work hardened tin-copper, work hardened arsenical-bronze and air-cooled mild-steel. Tin-copper wins out because it is slightly harder and it has some other superior properties that make it more practical and useful (and arsenic could well kill you). However, despite iron tools being sold to everyone as superior to bronze tools, in fact good old tin-bronze is harder than air-cooled mild-steel (and wrought iron was even worse).

There is a branch of archaeology which involves imitating or replicating ancient processes experimentally. One such experiment compared the efficiency (measured as time) of felling trees using stone, bronze and steel axes (replicas of the types used by ancient Man). Firstly, you can cut down large trees with a stone axe (chipped or ground). Initial tests showed that a man using a steel axe was between two and four times as efficient as when using a stone
adze, but the advantage could be much higher for bigger trees (or harder woods). A key point was that a man would expend 5 times more energy cutting down a tree with a stone axe than with a steel axe, and more than 3 times as much energy in clearing land. The polished flint axe was far superior to a ground stone axe, but both compared poorly to a mild-steel axe. Different studies showed that using a bronze or iron blade did not change the results much. This is interesting because the steel axes were significantly harder than the bronze axes, but it did not make much difference when cutting down trees or clearing land. A real improvement occurred only when blacksmiths mastered the art of quenching steel to harden it. Variables such as haft length, blade width and axe weight were also tested, and a longer haft was better for larger trees.

One report noted that the key is
quenching. You harden copper by cold hammering, but you harden iron (mild-steel) by heating, hammering and quenching. Doing the same with copper does not harden it. But there are experiments that show that properly hammering copper can substantially improve its hardness, by as much as 55%. This would make it stronger than bronze (10% tin) and almost as hard as cold-hardened mild-steel.

We tend to imagine these
swords as being big, heavy weapons, but this is far from the case. A 70 cm long copper or bronze sword, weighing 700-800 grams, would be a big sword (and an adze would be bigger and heavier). The overall shape would come from a casting, and the chances of getting a perfect casting were no better than 10-30%. Evolution went from maces/axes to flat axes, then to small daggers. It was only ca. 3300 BC that swords appeared in Anatolia, and only ca. 2500 BC that swords started to appear as a kind of long dagger. Casting copper is tricky because it does not flow easily into narrow moulds, so getting a good edge would require a lot of hammering to bring the thickness down from 5 mm or more to a hard cutting edge. Part of the problem is that casting a small copper-bronze object is one thing, but it does not scale easily to something much bigger.
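A little arithmetic (mine, using only the figures just quoted plus an assumed reduction per hammering pass) gives a feel for the workload behind one finished blade.

```python
# Rough arithmetic from the figures above: a 10-30% chance of a good casting, and an
# edge hammered down from ~5 mm. The 1 mm target and the 10% thickness reduction per
# hammering pass are assumed values for illustration only.
import math

for p_success in (0.10, 0.30):
    print(f"success rate {p_success:.0%} -> on average ~{1 / p_success:.0f} casting attempts per good blank")

start_mm, target_mm, reduction_per_pass = 5.0, 1.0, 0.10
passes = math.log(target_mm / start_mm) / math.log(1 - reduction_per_pass)
print(f"~{passes:.0f} hammering passes to thin a 5 mm edge down to about 1 mm")
```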

With iron the techniques are quite different. The best way is to beat the blade from an ingot or block of iron. The idea of just hardening the edge of an iron blade does not work, it will break unless the entire blade is worked.

Heating and hammering iron (mild-steel) is not the same as quenching. Iron/mild-steel might date from ca. 1400 BC or earlier, but the first mention of quenching dates from ca. 800 BC. There are even stories about great swords being quenched by thrusting them into the body of a slave. But the reality is that quality steel was almost non-existent in Europe through to the 12th C AD (see the later section on iron).

What is important is that Man quite quickly found out that by hammering the edge of a metal tool or weapon he could make it sharper. The difficulty was that it would not stay sharp. So they would re-work the tool, but this has its own problems. Studies on copper tools show that not only did they cold-harden their tools but they also re-heated and re-worked them. This causes the formation of copper oxide, which might initially make the edge a bit harder, but when the oxide content increases the edge becomes brittle. Then it was back to square one - smelt, cast and rework.


The Bronze Age


Now we turn to the
Bronze Age, but where to start? Let's set the scene by first stepping outside Mesopotamia to visit two lesser known Bronze Age cultures, one called the Leyla-Tepe culture (ca. 4350-4000 BC), the other Kura Araxes (ca. 4000-2000 BC). The Leyla-Tepe culture was situated in the Central Caucasus and their bronze metalwork tradition was very sophisticated right from the beginning. Experts think this is due to the arrival of migrants from the Ubaid-Uruk period ca. 4500 BC. The Kura Araxes was a so-called trans-Caucasian culture that covered parts of northwestern Iran, eastern Turkey and even went as far as Syria. According to Wikipedia initially metal was scarce, but following on from the traditions of Leyla-Tepe they later displayed a "precocious metallurgical development", and their work with copper, silver, gold, tin and bronze was widely distributed into north Syria and southwest Anatolia.
We are going to start by looking at a unique piece of Kura Araxes bronze work, through which we will try to understand the evolution
of bronze metallurgy.

The Necklace of Gegharot

The artefact is the necklace of Gegharot (a possible reconstruction is shown above), found on the Early-Late Bronze Age settlement of Gegharot in present-day Armenia. The necklace consists of 99 metal beads, 88 chalcedony beads, and 217 talc beads. In terms of the metal beads there are double-voluted beads, cylindrical beads with raised transversal rims, cylindrical beads with parallel oblique cuttings, barrel-shaped beads, conical and spherical beads, and two types of teardrop-shaped beads. My understanding is the artefact dates to ca. 3800 BC, but what is more interesting is the variation in types of copper-arsenic alloys. Copper with up to 0.5% arsenic is called arsenical copper (and would correspond to a naturally occurring contamination), and above 1% it becomes arsenical bronze, as opposed to a copper-tin alloy. Arsenic in copper imparts a higher tensile strength and helps prevent embrittlement (thus better for casting).
Some pieces contain 4.6% to 6.1% arsenic, and are typical of the type of alloy found in the Caucasus and Anatolia. Some pieces contained 15.8% to 19.4% arsenic, and the teardrop pieces were made of a copper-arsenic-tin alloy. The lead concentrations ranged from 3.7% to 9.1% and certainly result from intentional alloying, presumably to obtain a different colour and a more 'precious' appearance. Alloying with lead in this period is quite rare and more often found in the Aegean and Mediterranean regions (although Aegean alloys had low arsenic content). Items with very high arsenic content don't tend to oxidise so much and keep a 'fresh' colour, but actually making copper alloys with high arsenic content was not easy and it is unclear how they achieved it. The way the different alloys were used for the different components in the necklace clearly shows an intent to demonstrate craftsmanship and produce a prestige artefact.
The inclusion of lead means there was a chance to relate the copper used to parent ore sources, using lead isotope analysis. The results show that some of the pieces relate to ores found in the general region of Armenia and Anatolia, but other pieces relate to ores from elsewhere, implying imported ores or alloys (possibly from Jordan).
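To make the terminology used above concrete, here is a tiny sketch (mine) that applies the stated thresholds to a few of the arsenic contents measured in the necklace; the 'borderline' label for the 0.5-1% gap is simply my own reading of the two conventional cut-offs.

```python
# Thresholds as stated above: up to ~0.5% arsenic is 'arsenical copper' (plausibly just
# natural contamination), above ~1% it counts as 'arsenical bronze'.
def classify(arsenic_pct):
    if arsenic_pct <= 0.5:
        return "arsenical copper (could be natural contamination)"
    if arsenic_pct <= 1.0:
        return "borderline between the two conventional labels"
    return "arsenical bronze (deliberate alloying or deliberately selected ores)"

for pct in (0.3, 0.8, 4.6, 15.8, 19.4):   # the last three are values reported for the Gegharot beads
    print(f"{pct:5.1f}% As -> {classify(pct)}")
```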

Copper and Tin

What we see above is a modern-day experiment in Spain showing how it is possible to produce bronze using local ores and ancient smelting techniques. The exact process used for bronze is still debated. Did early Man alloy copper metal and tin metal in a simple melt? Did Man add one of the metals to the ore of the other, in what is called a cementation process? Did Man smelt copper ores that naturally contained tin? Or did Man co-smelt copper ores with tin ores? Most experts assume that this last option was the most likely, given that slag samples often contain both copper and tin and, for example, in Spain they have found bronze prills in various compositions of slag. What we see above is the co-reduction of copper and tin ores found in the same place in Spain. The smelting setup consisted of a shallow hole surrounded by stones and fed by bellows through two separate nozzles. The ores were ground to a small grain size (1-3 mm diameter), and gangue (waste rock) was removed by hand. A small amount of charcoal was mixed with the ores. The hole was about 30 cm by 30 cm, and 40 cm deep, and lined with two rows of stones. The two tuyères each had a pig-leather bellows, and were fixed in the stone wall of the pit with unbaked clay. The fuel was charcoal, and the smelting vessel was made of fireproof clay. The product was ash, unburned charcoal, slag, and 5 small metallic prills (about 0.3 g each). Three of the prills had about 8-9% tin, one had 4% tin and the last 69% tin. Impurities included iron and lead. So it was very easy to produce bronze through co-smelting in an open fire, but the results were quite heterogeneous (even from one smelt).

The Mesopotamian Bronze Age


The 'classical' Bronze Age in Mesopotamia (ca. 3800-1200 BC) is often divided into Early (ca. 3800-2100 BC), Middle (ca. 2100-1550 BC) and Late (ca. 1550-1200 BC), despite the fact that Wikipedia says that the period began in ca. 3500 BC and ended in ca. 1155 BC and that Early/Middle/Late is not used. Wikipedia goes on to say that the cities of Ur, Kish, Isin, Larsa and Nippur were 'Middle' and Babylon, Nimrud and Assur were 'Late'. In Anatolia the Bronze Age is often divided into Early (ca. 3000-2500 BC), Middle (ca. 2500-2000 BC), and Late (ca. 2000-1200 BC). In Europe the Bronze Age ran from ca. 3200 BC with the Aegean civilisation through to the Urnfield culture ca. 750 BC ('Europe' is quite vast in this definition and extends as far as the Caspian Sea).

You may be surprised to find this section on the Mesopotamian Bronze Age rather shorter than expected. Firstly, we are looking at the science and technology of Mesopotamian bronze, and not the artistic, social and economic development of Mesopotamia during the period (which was truly incredible). Secondly, Mesopotamia was not the only source of innovations in bronze technology (some would say it only played a minor role). Thirdly, we have already covered the basic technologies quite extensively in the sections on copper and tin.

Let's just summarise the evolution of ancient metallurgy leading up to the creation of bronze alloys:-
Shaping native copper (hammering, cutting, bending, grinding, polishing)
Annealing native copper (heating and hammering)
Smelting oxide and carbonate ores
Smelting ore (in wood- or charcoal-fire over clay-lined pit with air)
Melting native copper in furnace or fire over crucible and casting into stone, clay, or sand moulds
Fashioning or cold hammering, finishing by grinding and polishing
Smelting sulphide ores
Roasting the sulphide ore to remove majority of sulphur
Smelting roasted ore with charcoal in shaft-furnace producing a copper 'matte'
Roasting copper 'matte'
Smelting roasted 'matte' with charcoal and silica flux
Roasting 'blue metal'
Smelting roasted 'blue metal' with charcoal
Producing ingots of 'black copper' (a copper-rich but still impure metal)
Melting 'black copper' with blast air in charcoal fire to obtain refined copper
Fashioning refined copper by casting, etc.

How did Man move to bronze? There are at least two views, one a Middle European centric view, and the other we will call a Mesopotamian view.

Firstly, the simplest view starts with the idea that ores of copper almost always contain impurities including antimony, arsenic, and tin. And the argument went that veins of copper ore are probably found alongside veins of tin. So bronze must have just happened naturally from mixes of these ores. To refute this, one must show that tin was added to a copper ore that did not have tin as an impurity. Sounds plausible, except that ores of tin and ores of copper only occur together in three or four places in the world, one of them in Cornwall. And the reality is that until Roman times tin was obtained from placer deposits in streams. But more telling, if tin was 'hidden' in the ores how did the ancients perceive that it was
cassiterite that 'improved' smelted copper? And how would they have made the link between the opaque cassiterite ore and 'placer' tin, since at the time everyone thought it was just another type of lead? Finally, whilst places such as Cornwall had their role to play in history, it is not clear how they would have had a link with the much earlier origins of bronze making in the Ancient Near East.

This first suggestion about how Man discovered bronze is based upon the idea that it first came from the Middle European tin and copper regions, initially with the Aegean Bronze Age (ca. 3200 BC) and then later through a whole series of Middle European cultures spanning the entire 2nd millennium BC. However there is no doubt that the metallurgy of gold, copper and lead was known in the Near East, in early sites of Mesopotamia and Iran, much earlier than any Chalcolithic site in Europe and before any connection between the Near East and Middle Europe can be proved. And as we have seen, metallurgical thinking developed over time in the Near East, from mixes of naturally occurring ores, through the heating of tin-bearing oxidic copper ores with wood or charcoal, to the later smelting of mixed sulphidic tin and copper ores. It is in this slow but natural progression that the true value of the tin ore was discovered, and from then onwards a further quick development took place along the following lines:-
Cassiterite in the form of stream tin was discovered in working gold placers.
This cassiterite was reduced by metallurgists already in possession of the fundamental knowledge necessary for the production of gold, copper and lead. The tin produced was held to be lead, as there was no practical way for them to tell the difference between tin and lead.
Tin was mixed with copper ores before smelting to produce the “improved” copper.
At a later stage stream tin is reduced with charcoal and mixed together with the crude copper already obtained by separate smelting.
At the same time certain mixed ores were worked unintentionally for copper and thus “natural” bronze was produced, generally with a small tin content which varied greatly according to the ore used, but occasionally also with a higher tin content. In the latter case a true bronze was produced (all bronzes containing over 2% of tin are most probably artificial products, and only very few mixed ores would give true bronzes on smelting).
Metallurgists of the period recognised the similarity between copper mixed with stream tin and copper coming from the smelting of cassiterite and copper ore. It is highly improbable that the cassiterite from the mixed ore was ever isolated and proved to be a tin compound. At the same time this new fact was remembered and used when the stream tin supply began to fail or became insufficient to satisfy demand.

It is almost certainly true that the production of '
natural bronze' (mix of ores) preceded that of 'artificial bronze' (copper intentionally mixed with tin) in certain regions, but it was probably not recognised as a special mix until the production of tin and bronze from cassiterite and copper ore was well established and the properties of bronze well known. This 'artificial bronze' can be found in early bronze artefacts, showing not only a higher tin content than would be expected from mixed ores, but also less variation in the tin content.
A still better bronze and a more stable composition could be obtained by reducing the
stream tin with charcoal in molten crude copper; this was an important advance in bronze technique.

Gradually the stream tin deposits in the Near East could no longer cope with the growing demand (and many small surface lodes were depleted in early times). Thus we find in the Sargon era inferior hammered axes of unalloyed copper replacing the earlier mould-cast bronze ones such as the splendid bronzes of Ur.
This “low tin content” period in Mesopotamia has been identified and probably corresponds to the depletion of the known deposits and the growing demand for tin. Prospectors, metallurgists and traders struck out West to look for further supplies. Without claiming direct contact this explains the gradual introduction of Sumerian metal types in the Danube regions and finally in Middle Europe where the tin supplies were found in Bohemia and Saxony.
Around 1500 BC, as far as present evidence goes, a further technical improvement was achieved. The cassiterite was reduced separately and tin-metal produced industrially to be mixed with copper to form bronze. This not only allowed a better dosing of the tin content but also gradually led to the production of different alloys, each specially adapted to weapons, mirrors, statues, bells, etc.
In the same period the earliest tin objects began to appear in quantity in the excavations. Archaeologists can see that Aegean traders now started to bring tin from the West to the East. This tin was passed on by many links in the chain of trade stations, as was the case with Middle European tin. Dating of Near Eastern excavations now clearly shows that the West was influenced, if only slightly, by the East.

So is it possible to determine
where tin-bronze metallurgy was born? In Egypt the earliest tin objects date from ca. 1550-1350 BC, in the form of a ring of pure tin, and one made of a gold-tin alloy. No traces of tin ore have been found in the Sinai copper ore or the slags near those mines. Bronze dating to ca. 2600-2500 BC was probably introduced from abroad, and not until ca. 1350 BC do we find undoubted bronze objects in sufficient quantities in Egypt to justify the assertion that tin bronze was then in common use. The oldest bar of tin was found on an Egyptian mummy. It is free of lead and silver and must have been manufactured from a pure cassiterite; its date is not later than 600 BC. Egyptian texts do not mention the source of tin, so it probably arrived in Egypt already in the form of bronze artefacts. The early bronzes of Anatolia date from at least 3000 BC, but copper forms were still used for a long period and, as tin seemed scarce, the tin content was hardly ever outside the 2-10% range. From 2200 BC true bronze forms were used, and higher percentages of tin appeared more regularly and not only for intricate forms as in earlier periods. Before the Iron Age, tin was still more or less a precious metal and was often reserved for inlay-work in bronze (as a substitute for silver).

True bronzes (copper-tin) occurred in Mesopotamia (Middle Bronze Age) during the Djemdet Nasr Period (ca. 3100-2900 BC) and on Early Dynastic sites (ca. 2900 BC). The bronzes of the Royal Cemetery at Ur are true bronzes, which show that the Ur metallurgists were fully acquainted with this branch of their craft. In Mesopotamia many types of lead and antimony bronzes preceded true bronzes (often called the Early Bronze Age). The earliest direct allusion to tin occurs in the annals of the Assyrian kings, who took 'white bronze' as a tribute from the Northern and Eastern border regions. The most favoured source of Mesopotamian tin supplies was the south-eastern Caspian region. In later periods tin came from the West, from the Phoenicians. In Neo-Babylonian times the price of tin was still eight times the price of copper, and in early bilingual texts it is mentioned after silver but before bronze and iron.

A scientific view of copper and bronze


The widespread use and preservation of copper and bronze, together with the long traditions of scholarship surrounding archaeological metal objects, and latterly metal production, make it a fundamental material for understanding societies in Bronze Age Europe (more so than other artefacts such as pottery). Numerous projects over the last half-century have applied science-based approaches to the study of metalwork to address archaeological questions of alloy selection, development, distribution, and provenance, the latter long considered the “Holy Grail” of the discipline.
But there are problems of data compatibility, issues of sample selection, lack of documentation, and an over-focus on isolated (regional) case studies. Responses to the questions mentioned above are not straightforward, even when cutting-edge analytical tools are being used. Chemical analyses can exhibit meaningful patterning for classification and provenance as well as for models of metal circulation. However, caution is needed about the high-resolution detail of this patterning, and about the fact that many people see what they want to see, i.e. what fits well with existing or preferred narratives.
A single perspective or single strand of evidence is never sufficient for building explanatory models of the past. Models must show statistical significance, be based on open data, and ideally on archived sample material. Any data analysis must be replicable and samples available for re-analysis, making it possible for future generations of archaeologists to (re)address them with more advanced scientific methods. The legacy of data must be as thoroughly documented as possible. Carefully reconstructing past smelting/melting practices, both experimentally and theoretically, can provide useful models for understanding the whole production chain. This knowledge
of metal making is also important in understanding the connectivity in the European Bronze Age, in particular the modes and extent of the transmission of ideas and/or skills over long distances.
At present, we see a rather dynamic 'metallurgical landscape' during the European Bronze Age, with numerous local and regional metal producers feeding the demand for metal, mostly copper and copper alloys. Eventually, some of these producers gained super-regional importance, possibly together with (or aided by) a recognisable “brand value,” such as the neck-ring and rib ingots of central Europe in the first half of the 2nd millennium BC or the oxhide ingots of the Mediterranean during the later Bronze Age (from ca. 1600/1500 BC). Some of the large centres produced relatively low-impurity copper from large and consistent ore deposits, such as the chalcopyrite veins in the eastern Alps or the ophiolitic copper deposits in Cyprus.

The recycling of metal is closely related to the notion of metal value, both as a material and for functional or ideological purposes. However, details of how, when, at what rate, and the degree to which it occurred are still obscure. While metal consumption in the Early Bronze Age seems to have been restricted to the elites, metal objects seemingly had a much wider application in the Late Bronze Age (producing another boost in metal production and consumption).

From the mid-2nd millennium BC onward, hoards appeared that contain broken metal objects. This has been traditionally interpreted as stored scrap metal for future recycling. However, their ritual connotation remains a potential explanation, too. It would appear that weapons, such as swords, were probably not produced from scrap metal, since their composition and hence the properties of the resulting alloy could not have been controlled. For instance, some scrap components such as lead would have had a detrimental effect on the fracture strength and hence weaken weapons made from such recycled lead-rich metal. In fact Late Bronze Age swords can usually be traced back to their source regions. But socketed axes were often made of highly recycled metal.
Ingot torques (I presume a type of ring ingot) were made mainly of two materials, low-impurity copper and metal containing appreciable amounts of arsenic, antimony, and silver. Since the two materials have very different compositions, which can be related to the smelting of different ore types, it is presumed that the two copper types could be distinguished by the ancient metal-worker.

The compositional variation of bronze alloys, dating to the 3rd millennium BC, excavated at various Mesopotamian archaeological sites is quite similar. At the end of the 4th millennium BC and at the beginning of the 3rd millennium BC copper-arsenic alloys with an arsenic concentration of up to 5% were generally available. Tin bronzes were introduced during the middle of the 3rd millennium BC (ca. 2600–2300 BC). This introduction appears almost synchronously over the entire region of Mesopotamia, although there is an indication that the tin bronze introduction was slightly later in southern Mesopotamia. The concentration of tin ranges from low (ca. 2%) to high (>10%) contents. During the Akkadian period in Northern Mesopotamia the use of tin bronzes ceased, but then later they reappeared. In contrast, tin bronzes were continuously used in the southern cities (Susa, Ur) of Mesopotamia.

A remaining point of discussion in the literature is the origin of the tin used for the production of tin bronzes in Mesopotamia. There is a strong indication that the first discovery of tin bronze technology should be situated in the Taurus Mountains in Anatolia. The most ancient tin bronzes date to the end of the 4th millennium BC and the beginning of the 3rd millennium BC, but only appear at Mesopotamian sites about 500 years later.

We have focused on the copper-tin alloy, but we should not ignore the role of copper-arsenic. Metallic arsenic is a relatively volatile substance with a boiling point of 613°C, and arsenic trioxide (As2O3), boiling at 457°C, is even more so. However, if arsenical oxidised copper minerals are smelted, and this can only occur under reducing conditions, one would expect the product to retain some of the arsenic. In fact, calculation shows that most of the arsenic present below 7% will be retained. Even if the arsenical copper, after smelting, is held in a deep crucible under reducing conditions arsenic is only slowly lost from the surface. In fact, arsenic is lost at appreciable rates only when arsenical copper is hot forged.

Sulphide ores, on the other hand, must be roasted at some stage during the extraction process, and one would expect a considerable loss of arsenic during this stage. Therefore, you don't expect as much arsenic to be retained as in the case of an oxide ore.


Were arsenical coppers so desirable, or was it simply that the more easily mined copper ores contain arsenic? The mechanical properties of arsenical copper in the cast condition are not much better than those of pure copper. There is, however, a great difference in the wrought condition owing to the more rapid work hardening of copper-arsenic alloys. This effect is also seen with tin bronzes. Most pure and arsenical copper artefacts show considerable amounts of working, mainly hot but some cold, and this can be seen in the elongation of the slag and oxide particles present in the metal. It is probable that the preference for working rather than casting developed out of the working of native copper. It was a good technique for the material available and clearly persisted until the development of tin bronzes.

The second part of the question is best answered by describing the nature of copper deposits. Most, perhaps all, copper ores started as sulphides. In a typical sulphide deposit the surface minerals consist of iron oxides which represent the oxidised ferrous component of the sulphide deposit. In these near-surface layers you can also find precious metals, native copper and some oxidised copper minerals, but much of the copper will have been washed down to enrich the zone below, i.e. the
secondary enrichment zone. This is the zone which provides copper in the highest concentrations and, as the arsenical and antimonial minerals are relatively soluble, it usually also contains these in a high concentration. So you find copper-arsenic-antimony sulphides. The lowest zone represents the original deposit and contains copper sulphides in low concentration, usually about 1-4%. This is the grade of ore most worked today.

So one can see that the surface deposit would be the first to be used and would give native copper, which could be lightly cold worked to give a good hardness in the centre and quite a high hardness at the edges. This is confirmed by the fact that some of the earliest copper artefacts found in Mesopotamia were free of arsenic. Later the Ubaid culture at Ur yielded axeheads with between 8.1% and 11.1% tin, which are undoubtedly the earliest tin bronzes known.


The fact that tin conferred on cast copper objects considerable additional strength in the cast state without needing cold working was, without doubt, a great discovery. But the idea probably developed very slowly, and in the Near East we have a period when arsenic and small amounts of tin were used together. While the deposits of arsenical and, to a lesser extent, antimonial copper ores are comparatively common, as shown by their use in early copper objects, tin is not often found together with copper. In fact, tin deposits are something of a rarity.

In all these places tin is found mainly as the mineral cassiterite (SnO2), which is white in the pure state, but is more often contaminated with greater or lesser amounts of iron which render it brown or black. Today, the greater proportion of all tin comes, as in early times, from alluvial or mined deposits of cassiterite. This oxide is relatively stable and has a high specific gravity, so that it collects like gold in the beds of streams or in gravels and sands. For this reason it has probably been recovered for as long as gold, although originally it would have been discarded as worthless.

Many early copper-base alloys from widely separated areas of Eurasia contain small amounts of tin, often together with arsenic. There is little doubt that most of the tin content is the result of smelting copper ore contaminated with tin minerals, although later some of this contamination could have been caused by the addition of tin bronze as scrap.

It is unlikely that the real tin bronzes originated from a mix of ores usually containing a limited tin concentration. It is far more likely that the tin needed came from well-known deposits in Italy, Bohemia, Saxony, Asia Minor, or elsewhere. It must be remembered that high-tin bronzes did not enter the archaeological arena until after ca. 3000 BC. For this reason it is highly probable that the earliest date for the use of the standard 7-10% tin bronze in the Near East coincides with the ability of these civilisations to trade over considerable distances. Trade was certainly coming through the Strait of Hormuz and up the Persian Gulf by ca. 2500 BC to supply the needs of the cities of Mesopotamia, and tin bronzes with 10% tin did not appear much before this.
We tend to ignore the fact that during this period long-distance trade took place, and metals were a prime cargo. A wreck found off the southern Turkish coast has been dated to ca. 1200 BC. It carried tin, copper and bronze ingots and scrap metal. Experts think it was a Syrian ship taking copper from Cyprus to the Mycenaean civilisation in Crete or Greece. If this was the case the tin was not originally obtained in Cyprus and must have been traded from elsewhere.


For ores already containing arsenic and tin contamination, the effect on the mechanical properties is more or less additive, and cold-worked alloys of this type are a good deal stronger than pure or slightly impure coppers. In fact the addition of 1% arsenic in solid solution would confer about the same increase in hardness upon cold working as 1% tin. As an example, a 2.4% tin alloy blade from the Royal Graves at Ur (ca. 2800 BC) was of wrought and annealed metal, showing that an arsenical-copper tradition of forging (as opposed to casting) remained active.
The next stage was the addition of tin, either as oxide or metal, to arsenical copper. This type of mix would have produced something like 2% arsenic and 7.5% tin, and would be about as hard and strong as a 9.5% tin bronze. In order to retain the arsenic, care would have to be taken to avoid oxidation and it is probable that the ingot material had a slightly higher arsenic content. Usually, much of the arsenic and some tin will be lost in the slag, and therefore not available for strengthening.
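
To make the additive rule above concrete, here is a minimal sketch in Python. It simply assumes, as the text suggests, that 1% arsenic in solid solution hardens cold-worked copper roughly as much as 1% tin; the function name and figures are illustrative only.

    def tin_equivalent(tin_pct: float, arsenic_pct: float) -> float:
        """Rough 'tin-equivalent' of a copper alloy, treating arsenic as
        interchangeable with tin for work-hardening purposes."""
        return tin_pct + arsenic_pct

    # The example from the text: ~2% arsenic with ~7.5% tin behaves roughly
    # like a 9.5% tin bronze once cold worked.
    print(tin_equivalent(7.5, 2.0))  # -> 9.5
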
A pattern was then established: from the Early Bronze Age through the Middle Bronze Age there was a steady reduction in the arsenic content of copper-base artefacts. This may have been due to a change in smelting technique, whereby the metal was held longer under oxidising conditions, or to changes in the nature of the ore body mined at different depths. Another view is that this occurred because steady trade connections brought Mesopotamian cities a consistent supply of tin, allowing them to establish the standard 10% tin bronze. Once they had adequate supplies of tin there was no longer any need to smelt in such a way as to retain the arsenic in the ore. No doubt, in many cases, this stage coincided with decreasing amounts of arsenic in the ore as the primary sulphide deposit was reached at lower levels.

The main body of evidence for understanding the techniques used lies in the furnaces and slag heaps. Unfortunately, it is often extremely difficult to date the slag heaps and sometimes even the furnaces. Copper smelting slag remained fairly uniform in composition throughout metallurgical history, i.e.
fayalite with greater or lesser amounts of alumina and lime. The copper content was usually quite low, i.e. 2-4%. One would be lucky to find a stratified heap containing pottery, but most metalworkers seem to have been poor people, and only in civilisations where pottery was extremely plentiful are we likely to find sherds in the slag heap. However, since most slag has entrapped pieces of charcoal, a carbon-14 date can often be obtained. While it has not been easy to find evidence for very early smelting in some countries, the mines and their waste heaps have often given very early carbon-14 dates. With the furnaces themselves archaeologists may be luckier, since some of them are in settlements or attached to workshops within palaces.

The transition from bronze to iron


So we have this idea that North Mesopotamia was invaded and came under Akkadian political control. We know this because royal inscriptions (propaganda) tell us so. But oddly enough they do not say how the North was controlled or administered. What little writing has been found in the North is identical to Akkadian administrative documents from the South, but no documents mention the Akkadian State. So we are only left with the massive “Naram-Sin” palace at Tell Brak (the mud bricks are stamped with the name of this Akkadian ruler), yet architecture and other aspects of material culture evolved independently in the North from the forms and styles of Akkadian southern Mesopotamia.
Palace building became increasingly popular, with massive structures of mud-brick walls set on stone foundations, all organised around large open courtyards (some paved with fired bricks). Temples also became bigger, but varied in style (a good example is at
Tell Beydar, with its walls decorated with elaborate niches and plastering).
Craft production became highly standardised and specialised, with elaborately decorated fine wares giving way to mass-produced undecorated open bowl forms (a tool for ration distribution, but also a product easy to stack in large kilns). Precious metals and fine textiles were only for the elite (as seen in their burial rituals).
With the rapid expansion of the urban economy, the state controlled processed cereals via special storerooms, and the state also controlled the inter-regional exchange of mineral resources, even though there is no word for “State” in either Sumerian or Akkadian. The most obvious sign of the dominance of an elite was the construction of walls protecting them from the low-status residents in the hinterlands. The distribution of small settlements (locations optimised for agricultural production) clearly shows a planned centralised economy typical of the Akkadian dynasty. As the large urban centres evolved they used a radial pattern of streets from a central high mound, with circular outer city walls. Street frontage and cross-street alignments of walls were regular, suggesting central planning. The North did not appear to adopt the monumental palace and temple (propagandistic) designs and art of the South, and it appeared to retain a 'household' metaphor for its palaces and temples. This does not mean that everyone lived in walled cities; in fact mobile pastoral groups were both economically and politically important well into the late Bronze Age (ca. 1500 BC). Nor does it mean that the centralised control was extremely oppressive, since most families would have retained the option of transitioning to a mobile pastoral existence if needed.

In the period ca. 2200-2000 BC cities in northern Mesopotamia were rapidly depopulated, in some cases permanently (within the span of a single ceramic phase). When these sites were re-populated, they represented a clear break from the earlier period. There are suggestions that every documented Old World civilisation either collapsed or suffered severely during this period, and the Akkadian Empire was no exception. The larger urban settlements operated closer to the limit of sustainability, and collapsed when faced with a run of dry years.

So in the North, the period ca. 4400-3400 BC was geographically uneven and characterised by gradual development. There were only a few population agglomerations, but they were still able to develop sophisticated productive technologies, particularly in the area of metallurgy. These proto-urban settlements lacked social institutions which made them vulnerable to the Uruk expansion from the South. There were a small number of large settlements, but nothing between them and the usual small villages or hamlets.
Tell Brak and Tell al-Fakhar certainly grew steadily over 700-800 years before the colonists from southern Mesopotamia arrived. During the period ca. 3000-2500 BC the North went through a period of de-urbanisation, before cities re-emerged in a very different form (ca. 2600-2500 BC). They were densely populated, walled, and grew rapidly (less than 200 years). In addition they did not just re-appear in the alluvial plains, but also in the valleys and on the steppes (albeit smaller in scale). The earlier urbanisations were almost accidental, but the later rapid phase of re-urbanisation clearly was based on a general template of what a city and its institutions should look like (almost certainly based upon a model from southern Mesopotamia). This urban resurgence in the North occurred at a time when southern cities had been flourishing for centuries, and clearly they did not each evolve in isolation. But with the exception of the time of the Uruk expansion, the culture of the North was not a simple wholesale adoption of the culture of the South.

The Iron Age


In the beginning everything was about the 'strange stones' of native copper, gold, silver and meteoric iron. You could heat them, shape them, and they kept that shape when cooled. Man then developed the skills of hammering,
tempering, cutting, and grinding.
Then Man discovered how to 'reduce' ores (
smelt) using carbon and the ability to melt and cast metals, and how to make alloys. New metals were discovered in the ores (copper, lead, silver, …), and alloys were created first by mixing ores then later by mixing metals.
Bronze was such a major improvement over copper that initially there was no attempt to control the contents of the alloy. Experts look at this stage in the development of metallurgy as one dominated by ores.

Mesopotamia 1200 BC

The next stage people tend to simply call the 'Iron Age'. If we look beyond cast iron, wrought iron, and steel, what we see is a major shift from developments around alloys of naturally occurring ores to the controlled treatment of a specific ore. Hammering, tempering, quenching and annealing became far more important than the variations in the composition of a mix of ores. So the key to the 'Iron Age' is not iron, but the discovery and mastery of a complex and new metallurgical process.

Steel is an alloy of iron with 0.02% to 1.7% carbon. Carbon acts as a hardening agent, preventing dislocations in the iron crystal structure from sliding past each other. Varying the amount of carbon affects hardness, ductility, elasticity, and tensile strength. Increasing the carbon content increases hardness and strength, but the artefacts become more brittle. The maximum solubility of carbon in iron is about 1.7% at 1130°C. At around 6.7% carbon the alloy becomes cementite (iron carbide), which is hard and brittle, a bit like a ceramic. Alloys with more carbon than steel (and a lower melting point) are called 'cast iron', and those with less than 0.035% carbon are wrought iron.
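
As a concrete summary of these carbon-content boundaries, here is a minimal sketch in Python. The thresholds simply follow the figures quoted in the paragraph above; real classifications are more nuanced, so treat this as illustrative only.

    def classify_iron_alloy(carbon_pct: float) -> str:
        """Classify an iron-carbon alloy by its carbon content (weight %)."""
        if carbon_pct < 0.035:
            return "wrought iron"
        elif carbon_pct <= 1.7:
            return "steel"
        else:
            return "cast iron"

    for c in (0.01, 0.4, 1.2, 3.5):
        print(c, classify_iron_alloy(c))
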
Most people will know that iron is extracted from the ore by removing the oxygen and combining it with something like carbon (this process is called smelting). However the oxidation rate for iron increases rapidly above 800°C, so it is important to smelt in a low-oxygen atmosphere. Unlike copper or tin, iron dissolves carbon very quickly, so smelts can result in alloys with too much carbon.
Most people will not know that there are several different phases of steel, based on the so-called
allotropes of iron, each with a different structure and different properties. Ferrite (alpha-iron) is the most stable at room temperature. This iron is fairly soft and cannot dissolve more than about 0.021% carbon at 910°C. Above that temperature ferrite undergoes a phase change to austenite (gamma-iron), which is also soft but can absorb more than 2% carbon at 1154°C. As austenite cools it tends to revert to ferrite by leaving its excess carbon in something called cementite, resulting in a cementite-ferrite mix. Each of these 'types' of steel has a different crystal structure. Martensite is a metastable phase with four to five times the strength of ferrite, but more than 0.4% carbon is needed to form it.
So when austenite is formed the key is to
quench the steel to form martensite. To do this the carbon must be 'frozen' in place as the crystal cell structure changes, resulting in a different structure with the carbon locked in place. Austenite and martensite have the same chemical composition, so the transition requires little additional thermal energy. The key is to quench (cool) the hot metal in water or oil so rapidly that the transition to ferrite can't take place. However, austenite is denser than martensite, and thus the steel expands slightly, which can build up internal stresses and produce cracks. If a significant concentration of martensite is formed, the steel will be very hard but also very brittle.
Now the next step is to heat treat the metal at a lower temperature, enabling some of the martensite to be converted into cementite. This releases internal stresses and defects and softens the steel, making it more ductile and fracture-resistant (this is
tempering).
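
The quench-and-temper logic just described can be summarised in a minimal, purely illustrative sketch. The phases, the need for rapid cooling, and the ~0.4% carbon threshold come from the paragraphs above; the function names are hypothetical and the model is deliberately simplistic.

    def quench_outcome(carbon_pct: float, cooled_rapidly: bool) -> str:
        """Very rough outcome of cooling austenite, following the description above."""
        if cooled_rapidly and carbon_pct >= 0.4:
            return "martensite (very hard, brittle, internally stressed)"
        return "ferrite-cementite mix (softer)"

    def temper(phase: str) -> str:
        """Tempering converts some martensite to cementite, trading hardness for toughness."""
        if phase.startswith("martensite"):
            return "tempered martensite (a little softer, more ductile, fewer internal stresses)"
        return phase

    blade = quench_outcome(carbon_pct=0.6, cooled_rapidly=True)
    print(temper(blade))
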

By ca. 3000 BC Sumerians were regular users of copper, bronze, gold and silver utensils and ornaments. They made intricate castings of animals, they made seals in lead and copper, and copper picks, double-axes, bowls, rings, tubes, mirrors, etc. Northern copper was mostly pure whilst southern copper was often an alloy with about 10% of lead. In the South we also find more artefacts that are forged, cast, and soldered. They also made copper cups, axes, fishhooks, forks, and socketed axes.
The Sumerians also knew the
filigree, granulation and incision techniques; they used forging, engraving and inlay techniques, they soldered, and they used core casting, 'lost-wax' casting, and open and closed mould casting.

By the Iron Age mining and smelting had become separate 'industries' and there was an active trade in ores and cake (oxides), in metal ingots, and in finished metal objects.

Mesopotamian Networks

A recent report (2016) noted that glass rods found in the ancient Egyptian city of Akhetaten and glass beads found in graves in Scandinavia, northern Germany and Romania, all originated in Mesopotamia, ca. 1400 BC. This was not a 'one-off' since a total of 271 glass beads have been found in 51 burial sites in Denmark, and the majority of the beads came from Nippur in Mesopotamia. The glass bead below has amber embedded in it, and came from ancient Egypt. In fact the 23 blue glass beads came from the same workshop in Amarna that made the cobalt blue glass decoration that was used in the headdress of the death mask of King Tutankhamun (who died in 1323 BC).

Glass Bead

The trade route for these beads was the same as that used to move amber south. Around ca. 1200 BC these trade routes collapsed, possibly due to a conflict with the 'sea peoples'.

We should not underestimate the time and effort it took to move from native copper to easily reducible
oxides and carbonate ores, and then to the more complex sulphide ores. Then to mixing of ores which contained 'impurities' of iron, antimony, arsenic, etc., and then to the intentional creation of alloys.

And we should not forget that copper metal has a melting point of 1085°C whereas pure iron has a melting point of 1530°C, so logic would tell us that iron would be discovered after copper. However this logic is false, since the reduction temperature of ores is a different matter, and
iron ores have a reduction temperature lower than that of copper ores. So in a primitive smelter with charcoal it would have been easier to produce iron than to produce copper. On top of that, iron ores are far more abundant than copper or tin ores. Some archaeologists defend the idea that iron was in fact discovered before copper, but copper quickly became the more popular. And in fact in Africa iron appeared before both copper and bronze, but on the other hand iron never independently appeared in the 'New World' or Oceania.

If iron is more abundant, easier to produce, and better than bronze, why did we have to wait for 'steely' artefacts to appear? Part of the answer must be that working iron was simply more complicated for those who traditionally worked with copper.

Early copper workers produced in their furnaces a flow of reddish metal. They certainly would have experimented with different types of ores, including iron. But iron would not have 'flowed' and so would probably have looked like just another failed experiment. The result in the furnace would have been a so-called '
bloom', a spongy mass of fused stone full of air holes and looking about as un-metallic as possible. Any small globules of iron would have been hidden in the cinders and slag. A more persistent copper-worker might have tried to cold hammer the bloom, and when that did not work hot hammer it, but the result would have been no different.

Bloom

It must be said that very few iron artefacts dating from the Early and Middle Bronze Ages exist, and they are all ornamental and most if not all are of meteoric iron.

Meteoric Iron

These are meteoric iron beads found in an Egyptian tomb and dated to ca. 3500 BC. They are possibly the oldest iron artefacts found so far.

So what changed? We know more or less when iron artefacts started to appear in significant quantities, and of course we also know what techniques were needed. Iron appears to have arrived ca. 1400 BC, and by ca. 1200 BC prices for iron tools were dropping quickly. We assume that this happened when iron tools started to be seen as equal to, or better than, bronze tools. We also know that bronze agricultural tools were very rapidly replaced by iron, suggesting that the production techniques were rapidly diffused and 'industry' centres rapidly established.

Most experts think that the Iron Age started in Anatolia, where iron-using people had occupied the area from about 2000 BC. During the Bronze Age copper ores would have been smelted with the aid of iron fluxes, with a distinct possibility of iron being reduced in the bottom of the furnace. This would have caused the furnace bottoms to contain much slag and ductile iron. This could have occurred anywhere in the Late Bronze Age, and there is no reason why the peoples of Anatolia should not have made use of it before anyone else, given their long acquaintance with copper smelting. Anatolia is also home to one of the earliest man-made iron daggers.

Initially the supply of man-made iron in the 2nd millennium BC was spasmodic, and because of its rarity it was first used for jewellery and small dagger blades. But as we have indicated, by ca. 1200 BC larger weapons were being made (below we have a 54 cm long Mesopotamian sword dated to ca. 1000-800 BC). But iron did not simply replace bronze; the use of iron developed slowly and bronze continued to be used. In fact by ca. 1100 BC smiths were making bronze-hilted iron blades. These blades look to have been slightly more resistant to breaking than tin bronze weapons. The real breakthrough was
low-carbon steel (mild steel).

Iron Sword


What new techniques were needed? Firstly a larger furnace providing a larger body of heat was needed (including a more powerful blast of air into the fire to maintain the smelt). Secondly, a suitable flux was needed. Thirdly, the product must be subjected to a much more prolonged hammering at red-heat, removing the slag and cinders and creating a metallic mass (this is called hot forging or hot working).

Initially Man would have worked very small amounts of meteoric iron, and possibly a small amount of iron sponge that might be found on exposed veins of ore that had been subject to bush fires, etc. This ore would not have been melted in the fire, but the removal of oxygen in the fire would have produced an iron sponge that could have been hammered into shape (
directly reduced iron). Hot forging of this type of iron might well have been known before cold forging, but there is no evidence for this since all the working of meteoric iron has been done by cold hammering.

Wherever iron working was first learnt, by ca. 1200 BC local smelting was operational in Anatolia, Phrygia, Syria, and perhaps Cyprus. By 800 BC iron smelting had reached Assyria, Persia, India, Egypt, Crete, Greece, Italy and Central Europe.

The objective was to produce an iron tool or weapon that was superior to the bronze equivalent. And the key was a hot furnace and a cycle of forging and reheating which folded into the iron some carbon from the charcoal. This made a low-carbon steel that could then be shaped and quenched in water to harden it. Quenching un-carburised iron produced a poor-quality product (typical of so-called
wrought iron). We know that ca. 1400 BC some of this poor-quality iron was in circulation. Well into the 10th century AD the technique for carburising iron was still poorly practiced. Stories abound of warriors in battle having to straighten their bent swords underfoot.

The alternative to carburising and quenching was to find an ore that included just the right impurities, a manganiferous ore (
manganese-rich iron ore). This was the key to Noricum, which yielded a 'natural steel' for the Roman armies, and thus made the Hallstatt culture (ca. 800-500 BC) so famous.

So what is the first step? If we want to cast a metal we want to melt it so that we can pour it. But if we have an ore we want to smelt it. We need to separate the metal from the worthless gangue. It is the gangue that affects the efficiency of the smelt and the purity of the iron. Pounding and washing the ore can help, but usually the mixture is too intimate. Sometimes the gangue separates easily from the metal. It liquifies and flows out of the metal-slag-cinders mix (called a 'bloom'). But usually the molten slag is too viscous and does not separate readily from the metal, so a flux is needed.

The main problem in deciding what sort of furnace was used in any period is the fact that, in most cases, only the base of the furnace remains. This has meant that many talk of 'bowl' furnaces, in which the diameter is much the same as the height when excavated. The reality is that we do not often know the original height, but it is clear from experiments that the height/diameter ratio does not need to exceed 2:1 to obtain iron-smelting conditions and that, with proper manipulation, smelting can be carried out with ratios less than this. The bowl furnace is usually thought of as the simplest type of furnace. It is often no more than a hole in the ground or rock into which air from bellows can be directed through a 'tuyere', with a short, probably cylindrical, superstructure of clay above. The broken ore and the charcoal are mixed together or charged in layers on to a hot charcoal fire. The maximum temperature should be at least 1150°C. This type of furnace has no outlet for slag, and the slag runs down to the bottom forming a cake or furnace bottom or, in some cases, just small rounded particles or 'prills' of slag. The 'bloom' remains above the slag, and after the process was completed the clay superstructure was broken away, the bloom removed, and the furnace cleaned out.

Some experts suggest that iron smelting was not initially invented by advanced copper smelters but by more primitive copper smelters, or even by a completely new group who did not even know the technique of smelting copper. Whatever the explanation, the Early Iron Age in Europe was typified by the shaft furnace, while the art of tapping slag used in the developed bowl furnace was not introduced into Europe until Roman times. The shaft helps maintain the reducing conditions which are even more important for iron than for copper. The smithing furnace does not need reducing conditions to work the iron. All that is needed is a tuyere held down by a stone and long enough to keep the bellows from scorching. A pile of charcoal is then ignited with air from the goatskin bellows. The smith places his piece of iron in the charcoal near the mouth of the tuyere and a good heat can easily be obtained (1200°C). With wrought iron, most of the work can be done by cold hammering and annealing at 700°C, and even today many smiths still work this way. There are hardly any remains of furnaces known from Anatolia or Persia.

The product of the 'bloomery process' can be very heterogeneous, with areas of high and low carbon and variable amounts of such elements as arsenic and phosphorus. This makes it difficult to determine the level of skill applied at a particular time in the past. If there are relict areas of steel in a corroded iron object, was the whole artefact intentionally made of steel? If martensite is found in the structure, was the quenching necessary to produce it intentional or merely done to cool the object?

Martensite is a hard crystalline form of steel that is formed in carbon steel by rapid cooling (quenching). The quenching results in strain being retained in the structure, and thus a large number of dislocations, producing a stronger and harder steel.

If the ratio of fuel to ore is large and the bellows are efficient, the iron can be made to absorb so much carbon that it forms an alloy of iron and carbon or 'cast iron', which melts at 1150°C and forms pools at the bottom of the furnace (pure iron melts at 1540°C). These liquated lumps could have been broken up and remelted in a crucible in a hot smithing fire, and cast like bronze.

Wrought iron had to be made either by conversion of cast iron in a smith's fire or directly using a lower fuel/ore ratio. There is no doubt that the heterogeneity of bloomery iron was well known to early smiths, and that the high-carbon areas could be separated from the softer iron. Also they would have noted that an increase in the fuel/ore ratio produced more of the harder iron.

To give you a clearer picture of this process have a look at this webpage on "
The Ancient Art of Smelting Iron".

Then there is the question of flux. The idea is to pick a flux that, when added to the smelt, helps the gangue fuse and separate from the metal. Lime is added to iron ores containing siliceous gangue, but each type of ore has a specific flux. Given the amount of iron left in slag and gangue by early metalworkers, it is clear that some did not use a flux (or the correct flux for that particular ore). However, the majority did use a flux, the selection of which required much skill and experience.

The 'bloom' would be re-heated and re-hammered to remove the slag and cinders. This was both time consuming and costly in terms of fuel for the furnace. The tools used were completely different from those used for copper and bronze casting. Iron smelting involves handling large, heavy, red-hot matter.

Once they had a 'bloom' they would need to carburise it, temper it, and quench it. This would turn a ductile wrought iron into a hard, tough
low-carbon steel. After the first round of hammering, the iron would be re-heated and re-forged, leading (perhaps unintentionally) to carburising. It is still not clear today if they really understood this process, or had just found experimentally that it worked.

And they discovered that it was important to rapidly cool the metal by
quenching after carburising at high temperature. A slow cooling in air does not harden low-carbon steel.

The Greeks and Romans would later discover the effect of
tempering or annealing, which softens the hardening effect of quenching, and makes the iron-steel less brittle (and a little less hard as well).

One Egyptian lugged axehead, dated to 900 BC, had never been used, as it was covered with a thin layer of magnetite from the last heating. The carbon content varied from zero in the centre to 0.9% at the blade edge. The whole axe had been quenched from a temperature of 800-900°C, giving a hard martensitic cutting edge. The really interesting thing is a kind of auto-tempering: the edge had been tempered by the conduction of heat from the thicker parts of the axe, which had not been cooled to ambient temperature before removal from the quenching liquid. The final hardness therefore varied from 70 HB away from the edge to 444 HB at the edge itself. The result was a first-rate axe, correctly heat-treated to the hardness one would expect from an axehead today. A second axe of the same period was also found, but it had been used and was corroded. The edge showed that this blade had also been quench-hardened, although the hardness was not high owing to the low carbon content of the remains. It is therefore clear that some Egyptian smiths knew the art of quench-hardening, but that the technique was probably not well practiced. Equally, it is not clear that they knew about tempering, but experience might have suggested to them that it was a good idea. However the first axe was never used, so no one would have known about its 'unique' qualities.

So without complex equipment and with no temperature control of the furnace, the ancient smiths found the right mix of smelting, carburising, and quenching to create a high-quality, low-carbon steel. It is not surprising that archaeologists spend a lot of time analysing failures. The important point here is that the skills and experience of an iron worker were different from those of a copper/bronze worker.

As such
iron brought a new metallurgical revolution. It was no longer about creating alloys and exploiting impurities in the ores. Iron was about handling the metal itself, using heat to carburise it, and about fast quenching and annealing. In many ways this became the true age of the 'smith'.

Radio-carbon dating of iron


Over the three webpages we have mentioned several times
carbon-14 dating, so now we will look at this technique in a little bit more detail. Looking through these three webpages on Mesopotamian Science and Technology we must be struck by the constant desire to put a date to something. When? How old? Which came first? Man spends considerable time and effort trying to determine the age of everything from the universe down to a simple dairy product in a supermarket. So it's not surprising that archaeologists love to use carbon-14 dating whenever they can. We are going to look at carbon-14 dating for iron.

Natural carbon exists as three isotopes, the stable carbon-12 and carbon-13, and the radioactive isotope carbon-14 (often called radiocarbon). The abundance of carbon-12 is 98.9% and of carbon-13 1.1%, whereas the abundance of carbon-14 is ca. 1.2 x 10^-12. Radiocarbon undergoes a beta decay to the stable nitrogen-14, releasing 156 keV. Carbon-12 and carbon-13 were and still are created in red giant stars through what is called the triple-alpha process. Carbon-14 is created by cosmic rays in the upper atmosphere. Technically speaking there are 15 different isotopes of carbon, but only three are naturally occurring; the others are all man-made isotopes (carbon-11 has a half-life of about 20 minutes, and all the others have half-lives of seconds or milliseconds).
Cosmic rays are mostly high-energy protons (ca. 90%), free electrons and some hydrogen and helium ions. When they hit the atmosphere they also create secondary cosmic rays which then collide with oxygen and nitrogen in the atmosphere (a process called spallation). One of these secondary particles is the neutron. So carbon-14 is continuously formed in the stratosphere and upper troposphere by the interaction of these neutrons with nitrogen atoms (14N + n > 14C + p). There are several other reactions that also result in the production of carbon-14, but they are of minor importance. On average about 2 carbon-14 atoms are produced every second for every cm2 of the Earth's surface.

Plants fix radiocarbon via photosynthesis and animals consume it in their food intake (it becomes part of their carbon skeleton). Equilibrium between the specific activity of atmospheric carbon and that of organic material is then reached and maintained by carbon recycling. When the plant or animal dies no new carbon is absorbed, and what has been absorbed starts to decay (carbon-14 has a half-life of 5,730 years). Carbon-14 dating is presented in a 'Before Present' (BP) format using 1950 AD as zero (the term 'radiocarbon age' is also used). Carbon-14 production fluctuates depending upon geographical location, because the neutron intensity (flux) at the geomagnetic poles is about 5 times higher than at the magnetic equator. So latitude and altitude can both affect carbon-14 dating (+/- 40 years). The results can also be affected by a number of natural phenomena, such as variations in the intensity of cosmic radiation (e.g. the 11-year solar cycle). Marine carbonates and marine mammal remains cannot be dated accurately due to fossil carbon reservoirs and the slow cycling of carbon in the oceans. Also, volcanic gases include carbon dioxide which will dilute local carbon-14 concentrations. There are a number of additional problems with dating artefacts less than 300 years old, but this does not concern us here. And nuclear weapons testing between 1955 and 1963 almost doubled the uptake of carbon-14 by terrestrial organics (including us). Having noted all this, we must also note that tropospheric radiocarbon abundances in CO2 were almost constant (within 5%) during the last 5,000 years.
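
To make the decay arithmetic concrete, here is a minimal sketch in Python using the 5,730-year half-life quoted above. Note that, by convention, laboratories actually report 'radiocarbon ages' using the older Libby half-life of 5,568 years; the figures below are purely illustrative.

    import math

    HALF_LIFE_C14 = 5730.0  # years, the value quoted in the text

    def fraction_remaining(age_years: float) -> float:
        """Fraction of the original carbon-14 still present after a given time."""
        return 0.5 ** (age_years / HALF_LIFE_C14)

    def age_from_fraction(fraction: float) -> float:
        """Invert the decay law: the age implied by a surviving carbon-14 fraction."""
        return -HALF_LIFE_C14 * math.log(fraction) / math.log(2.0)

    # After one half-life (5,730 years) half the carbon-14 remains.
    print(fraction_remaining(5730.0))      # -> 0.5
    # A sample retaining 60% of its original carbon-14 is roughly 4,200 years old.
    print(round(age_from_fraction(0.60)))  # -> ~4223
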

When the general technique was first tested it required about 1 gram of carbon, and with a 2% carbon steel that meant a 50 gram sample, or for iron with 0.1% carbon a full kilogram (provided you could extract the carbon at 100%). Not many museums were willing to see someone cut 50 grams out of one of their precious ancient iron artefacts. The 'general technique' was a measurement with a so-called
Geiger-Müller counter, and later a liquid scintillation counter was used. Now with Accelerator Mass Spectrometry (AMS) only a few milligrams are needed, and the sample preparation has been simplified. AMS counts individual rare-isotope atoms using an ion beam, a process that does not rely on radioactive decay. So the efficiency of the system depends upon the ion beam and the transport of the beam through the detection system. Thus with AMS the time to obtain the results is measured in minutes and not days as with other techniques.
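
The sample-size arithmetic above is simply the mass of carbon needed divided by the carbon fraction of the alloy, assuming (optimistically) perfect extraction. A minimal sketch:

    def sample_mass_needed(carbon_needed_g: float, carbon_fraction: float) -> float:
        """Mass of metal (grams) needed to yield a given mass of carbon, assuming 100% extraction."""
        return carbon_needed_g / carbon_fraction

    # Older counting techniques needed about 1 g of carbon:
    print(sample_mass_needed(1.0, 0.02))     # 2% carbon steel   -> 50 g
    print(sample_mass_needed(1.0, 0.001))    # 0.1% carbon iron  -> 1000 g (1 kg)
    # AMS needs only a few milligrams of carbon:
    print(sample_mass_needed(0.003, 0.001))  # -> 3 g of the same low-carbon iron
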

Reading the above, someone might ask what was wrong with the older techniques, which certainly cost much less. Firstly, there is the sample size already mentioned. Secondly, the older techniques measure the 'passive' emission of the sample as it decays, which in many cases will be very, very low, of the order of a few counts per day, whereas AMS is a kind of sample interrogation method with ca. 4 orders of magnitude improvement in signal. Using the same sample, to get the same measurement accuracy as an AMS measurement taking 10-20 minutes (including standards and repeat measurements), it would take 180 years using a 'passive' radiation counter system.
Someone with a technical bent might ask the question about using just a mass spectrometer. The simple answer is that the carbon-14 signal is quite weak and with a normal mass spectrometer it would be drowned in a background of interfering isobars, i.e. other chemical elements or compounds that have the same mass number. AMS is selective to
a specific isotope through momentum and energy selection.

The most important feature is that
when iron is smelted it incorporates carbon from the fuel used. If the carbon is from charcoal made from freshly cut trees, the carbon-14 inside the metal will reflect the time of manufacture of the artefact. Luckily charcoal was the most common and most effective metallurgical fuel in antiquity, and because it has a high affinity for oxygen it can reduce iron ore with ease. Charcoal was usually made from young trees, 10-25 years old, and this does not significantly affect the radiocarbon dating. Again luckily, ancient Man would strip off the bark to allow the cut wood to dry more quickly (bark contains a lot of phosphorus, which would dilute the quality of the iron). On top of that, the trees would be cut in summer when they contained less sap and by the same token less phosphorus. The ancient smelting process would require more than 500 m2 of forest for just five swords or ten axes.

Various forms of manufactured iron can exist (wrought iron, steel, and cast iron), and the incorporated carbon can be present in
ferrite (max. 0.035%), in cementite (up to 6.7%), as discrete graphite flakes or chunks, or as precipitates of carbides. The key is to be sure that the carbon that is present relates directly to the date of manufacture of the artefact.

Samples need to be prepared so that they can be used in an ion source to make an ion beam. The
sample preparation is a significant component in the overall analysis scheme, starting with an exacting acid cleaning and washing. Then the sample is placed in a quartz tube and outgassed at 900°C for 3 hours, followed by combustion with ultra-pure oxygen. As the sample is heated in a rich O2 atmosphere, carbon diffuses to the surface of the molten metal and both it and the iron are oxidised. The carbon is converted into CO2 and the sample gases are transferred from the combustion vial to a reduction vial. The condensable gases CO2 and H2O are separated from the non-condensing gases such as N2 and SO2, which are then removed. This phase is performed at 1000°C for at least 10 hours. The resultant CO2 is then 'graphitised' using a hydrogen reduction. The carbon is deposited on powdered iron and the mixture pressed into a small target.
This sample preparation process destroys all chemical information present in the sample, and only permits measurement of the isotopic ratios of carbon, i.e.
14C/12C.

Now the sample can be measured in the Accelerator Mass Spectrometer. But what does this machine look like?

Accelerator Mass Spectrometer

As you can see it is a substantial and complex piece of equipment, costly to buy, costly to run, and costly to maintain. It's difficult to put a price on a measurement, but many laboratories will quote $300-500 per sample (excluding the sample preparation, etc.). One report put the cost of an AMS at $10 million, as compared with a 'passive' radiation measurement system costing about $100,000.

Vienna AMS

So the graphitised sample is placed in a sample wheel with other samples, and they become the cathode for the ion source. The ion source produces a beam of carbon C- ions by bombarding the surface of the graphite sample with cesium Cs+ ions, so the AMS spectrometer works by forming an ion beam in what is called a cesium-sputter ion source. The C- ion beam is accelerated through two magnetic fields that select for the masses of interest, and the instrument then counts the ions at those masses of interest. For example, during radiocarbon analysis the ion source produces a beam of both elemental and molecular ions (e.g. negative ions of 12CH2 and 13CH) that are first passed through a magnetic field which bends the beam through 90°, and only carbon ions of mass 13 or 14 are selected for injection into the particle accelerator. Ions of mass 12 are quantified in an off-axis Faraday cup.

The so-called 'injector magnet' bends the beam of ions, but heavier ions are bent less than lighter ones because of their higher momentum, and a slit is calibrated to allow only ions of a certain mass to pass. This injection process first injects mass 14 ions and then a 'bouncer' changes the energy of the ion beam to inject mass 13 ions, a process repeated many times per second. The tandem accelerator (the long grey tube in the photograph) accelerates the ions to high velocity (~1% of the speed of light) before collisions with a diffuse gas or a thin foil, in a molecular ion dissociator, eliminate any molecular ions from the beam and strip off one or more electrons. If an ion is a molecule it breaks apart and is eliminated; if it is an atom it becomes positively charged and is collected at ground potential. The fragment of the initial ion beam that exits the accelerator is then passed through another magnetic field to separate the ion of interest from the rest of the beam. The remaining mass 13 ions are quantified in a Faraday cup and, after a final energy analysis, the mass 14 ions are detected at the end of the beam path.

This process demonstrates the “bag of tricks” used in performing AMS: isotopes are selected by dispersing the ion beam according to mass in a magnetic field, interfering molecular isobars are destroyed, and the surviving atoms are counted highly efficiently by the carbon-14 detector at the end of the beam path. Absolute quantification is performed by taking the output of the spectrometer, which is counts of carbon-14 per unit of carbon-13 in the sample, normalising that measurement against a similar measurement of a standard material, and expressing the measured ratio as the amount of carbon-14 per amount of carbon in the sample.
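
As a minimal sketch of that final quantification step, the raw carbon-14 counts per unit of carbon-13 current are simply normalised against the same ratio measured for a standard material. The function and numbers below are illustrative only, not a real instrument interface.

    def normalised_c14(sample_c14_counts: float, sample_c13_current: float,
                       standard_c14_counts: float, standard_c13_current: float) -> float:
        """Express the sample's carbon-14 content relative to a measured standard,
        by comparing the 14C-per-13C ratios of sample and standard."""
        sample_ratio = sample_c14_counts / sample_c13_current
        standard_ratio = standard_c14_counts / standard_c13_current
        return sample_ratio / standard_ratio

    # e.g. a sample whose 14C/13C ratio is 70% of the standard's:
    print(normalised_c14(700, 1.0, 1000, 1.0))  # -> 0.7
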

The results of an AMS measurement are expressed as an isotope ratio, i.e. the isotope of interest per common isotope for a given element. However, since this isotope ratio is a dimensionless number, it can be converted into many useful units.

The only measured parameter of the sample is the isotope ratio. AMS does not provide molecular identification, the only information provided is a ratio of rare to abundant isotope in a sample. All other information about a sample must be obtained using other methods.

The technique is not infallible, and some types of contamination can introduce carbon-14-depleted material, which will give the impression that the artefact is older than it really is. This can be particularly important when artefacts are made from recycled material. Often recycled material is inhomogeneous, and the tell-tale sign is when different samples from the same artefact produce differing results.

This measurement system produces a ratio such as
14C/12C, but an archaeologist generally would like an age in years. The relationship between an isotope ratio and an age is not so simple (we have mentioned a few of the problems above). However there are 'natural archives' that can help, foremost among them tree rings. There is a continuous annual record going back 12,000 years, and the calibration curve for this period is very precise.
But the first step is to determine the 'radiocarbon age' by comparison with the activity present in modern samples (including a background correction using a geological sample of 'infinite' age). One commonly used standard is 'Oxalic Acid I' (HOx1), an International Radiocarbon Dating Standard. Ninety-five percent of the activity of Oxalic Acid I is equal to the measured activity of the absolute radiocarbon standard, which is wood from 1890. This 'wood standard' was grown before the effects of fossil fuels were felt, and the activity of the 1890 wood is corrected for radioactive decay to 1950. Some people mention the use of 1950 (set to 0 BP, 'Before Present') as being linked to nuclear testing, etc., but in fact the year 1950 was simply taken to honour the first publication on the topic of radiocarbon dating, published in December 1949. The Oxalic Acid I standard was made from a crop of 1955 sugar beet. This standard has been used up and has been replaced by Oxalic Acid II, made from 1977 French beet molasses. An inter-laboratory comparison was made between the two standards, which also permitted some laboratories to make secondary radiocarbon standards.
All this boils down to comparing the net activity of the modern standard against the residual normalised activity of the sample, i.e. the proportion of radiocarbon atoms in the sample compared to that present in the year 1950 AD.
This allows experts to attribute a radiocarbon date to the artefact. Now the last step is to relate the radiocarbon date to a real date. This is really just an 'offset' to take into consideration that the carbon-14 production rate has not been constant over time.
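
As a minimal sketch of how a 'radiocarbon age' is conventionally derived from such a comparison, before any calibration: by convention the calculation uses the Libby half-life of 5,568 years, and the sample activity is expressed as a fraction of 95% of the oxalic acid standard (the 'fraction modern'). The function below is illustrative only; real laboratories apply further corrections (isotopic fractionation, backgrounds, etc.).

    import math

    LIBBY_MEAN_LIFE = 8033.0  # years: the 5,568-year Libby half-life divided by ln(2)

    def radiocarbon_age(fraction_modern: float) -> float:
        """Conventional radiocarbon age (years BP, i.e. before 1950) from the sample's
        carbon-14 content expressed as a fraction of the modern standard."""
        return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

    # A sample with 70% of the modern carbon-14 level gives a radiocarbon age of ~2,865 BP,
    # which must then be converted to a calendar date using a calibration curve.
    print(round(radiocarbon_age(0.70)))
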

Radiocarbon Calibration Data

Finally, the only remaining challenge is the attribution of an error to the estimated date. The technique is usually considered to be limited to ca. 50,000 years ago. Often laboratories will just attribute a counting error to the measurement, which certainly underestimates the true error. So, as a word of warning, look at the estimated age of the artefact, look to see if the experts have given an error range, and view both with some circumspection if the experts do not discuss in detail the potential limits of the estimate and place it in a coherent archaeological context.

References


Archaeology in Practice, ed. Jane Balme and Alistair Paterson, Blackwell Publishing (2006)
Metallurgy in Antiquity, R.J. Forbes (1950)
A History of Metallurgy, R.F. Tylecote, Institute of Materials (1992)
Material and Process Selection Charts, Cambridge University (2010)
Materials Selection In Mechanical Design, Michael F. Ashby, Butterworth Heinemann (1999)
Comparing Axe Heads of Stone, Bronze, and Steel: Studies in Experimental Archaeology, James R. Mathieu and Daniel A. Meyer, Journal of Field Archaeology, Vol. 24, No. 3 (1997)
Copper and the Copper-Based Alloys (chapter in an unknown book, pages 374-405)
The Provenance, Use, and Circulation of Metals in the European Bronze Age: The State of Debate, Miljana Radivojević, et al., Journal of Archaeological Research (2018)
The Material of Early Bronze Age Ingot Torques, dissertation from Dipl.-Ing. Margrit Junk (1973)
An Overview of Mesopotamian Bronze Metallurgy during the 3rd Millennium BC, I. De Rock, et. al., Journal of Cultural Heritage 6 (2005)
A History of Materials and Technologies Development, A.V. Valiulis, VGTU Press TECHNIKA (2014)
Bronze in Archaeology: A Review of the Archaeometallurgy of Bronze in Ancient Iran, Omid Oudbashi, et. al., pre-print (2018)
The Composition and Technology of Copper Artefacts from Jericho and Some Related Sites, Lutfi A.H. Khalil, dissertation (1980)
A Study of Diet in Mesopotamia (ca. 3000-600 BC) and Associated Agricultural Techniques and Methods of Food Preparation, Elizabeth Rosemary Ellison, dissertation (1978)
The Sumerians, C. Leonard Woolley, The Norton Library (1965)
Handbook to Life in Ancient Mesopotamia, Stephen Bertman, Facts on File Books (2003)
A History of Babylonia & Assyria, Leonard W. King, Chatto & Windus (1915)
Investigating Agricultural Sustainability and Strategies in Northern Mesopotamia, Mark Altaweel, Journal of Archaeological Science 35 (2008)
Ancient Mesopotamia, A. Leo Oppenheim, The University of Chicago Press (1977)
Letters from Mesopotamia, A. Leo Oppenheim, The University of Chicago Press (1967)
They Wrote on Clay, Edward Chiera, The University of Chicago Press (1966)
Beyond the Ubaid, Robert A. Carter and Graham Philip, The Oriental Institute of the University of Chicago (2006)
Bricks and Clay Tablets, P. Delougaz, The University of Chicago Press (1933)
Dating the Fall of Babylon, H. Gasche, et.al., The Oriental Institute of the University of Chicago (1998)
Art as a Source of Information on Horticultural Technology, Jules Janick, Proc. XXVII IHC on Global Hort.: Diversity and Harmony (2007)
Astronomy and the Fall of Babylon, V.G. Gurzadyan, Sky & Telescope, Vol.100, No.1 (2000)
Exploration of Animals Resources from the Pre-Pottery Neolithic Tell AbuSuwwan Site in Jordan - an Archaeozoological Perspective, Abuhelaleh Bellal, dissertation (2010)
Re-Modelling Political Economy in Early 3rd Millennium BC Mesopotamia, Giacomo Benati, Cuneiform Digital Library Journal (2015)
An Interdisciplinary Overview of a Mesopotamian City and its Hinterland, Robert McC. Adams, Cuneiform Digital Library Journal (2008)
Sumerian Beer - The Origins of Brewing Technology in Ancient Mesopotamia, Peter Damerow, Cuneiform Digital Library Journal (2012)
Cycles of Civilisation in Northern Mesopotamia 4400-2000 BC, Jason Ur, Journal of Archaeological Research (2009)
Early Mesopotamian Urbanism, Joan Oates et.al., Antiquity 81 (2007)
Medicine and Doctoring in Ancient Mesopotamia, Emily K. Teall, Grand Valley Journal of History, Vol.3, Issue 1 (2014)
Early Humans and the Prehistoric Record - Human-Plant Interaction (a lecture)
Irrigation and Autocracy, Jeanet Sinding Bentzen, et.al., pre-print (2015)
Progress of Building Materials and Foundation Engineering in Ancient Iraq, Entidhar Al-Taie, et.al, Advanced Materials Research, Vol. 446-449 (2012)
Lost World of Elam, Walther Hinz, Sidgwick & Jackson (1972)
Inventing Metallurgy in Western Eurasia, M. Radivojević, Cambridge Archaeological Journal, Vol. 25 (2014)
Development of Metallurgy in Eurasia, Benjamin W. Roberts, et.al., Antiquity 83 (2009)
Ancient Farming, Hervé Reculeau, Members' Magazine of
The Oriental Institute of the University of Chicago (2017)
Insights into the Production Technology of North-Mesopotamian Bronze Age Pottery, T. Broekmans, et.al., Appl. Phys. A 90 (2008)
Salt and Silt in Ancient Mesopotamian Agriculture, Thorkild Jacobsen and Robert M. Adams, Science, Vol. 128, No. 3334 (1958)
First Steps into the Ancient World (Mesopotamia)
The Architecture and Pottery of a Late 3rd Millennium BC Residential Quarter at Tell Hamoukar, C. Colantoni and J.A. Ur, IRAQ Vol. LXXIII (2011)
The Emergence of Ceramic Technology and its Evolution as Revealed with the use of Scientific Techniques, Y. Maniatis, Chapter 2 in "From Mine to Microscope", Oxbow Books (2009)
Mesopotamian Medicine in Practice, Van Laere Emmy, dissertation (2018)
Households and the Emergence of Cities in Ancient Mesopotamia, Jason Ur, Cambridge Archaeological Journal 26 (2014)
Terrestrial Cartography in Ancient Mesopotamia, Elizabeth Ruth Josie Wheat, dissertation (2012)
Money, Prices and Market in the Ancient Near East, Bert van Der Spek, pre-print (2015)
Ancient Near Eastern Economics, Luca Peyronel, 6th International Congress on the Archaeology of the Ancient Near East (2008)
Oxhide Ingots, Copper Production, and the Mediterranean Trade on Copper and Other Metals in the Bronze Age, Michael Rice Jones, dissertation (2007)
Discovering Metals - A Historical Overview, ed. A.C. Reardon, ASM International (2011)
Domination and Resilience in Bronze Age Mesopotamia, Paulette Tate, University of Colorado Press (2012)
History of Mining and Metallurgy in Anatolia, Ünsal Yalçin and Hadi Özbal (pre-2008)